You have already seen that using the Coherence cache is as simple as obtaining a reference to a NamedCache instance and using a Map-like
API to get data from it (and put data into it). While the simplicity of
the API is a great thing, it is important to understand what's going on
under the hood so that you can configure Coherence in the most
effective way, based on your application's data structures and data
access patterns.
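As a reminder of just how simple that access is, here is a minimal sketch of the Map-style usage pattern. In real Coherence code you would obtain the cache with something like CacheFactory.getCache("countries"); because NamedCache extends java.util.Map, a plain map serves as a stand-in here to keep the sketch self-contained, and the cache name and keys are invented for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NamedCacheSketch {
    public static void main(String[] args) {
        // In a real application this would be:
        //   NamedCache countries = CacheFactory.getCache("countries");
        // NamedCache extends java.util.Map, so a plain map stands in
        // for it here to keep the sketch runnable on its own.
        Map<String, String> countries = new ConcurrentHashMap<>();

        // put data into the cache...
        countries.put("UK", "United Kingdom");
        countries.put("RS", "Serbia");

        // ...and get data from it, exactly like any other Map
        System.out.println(countries.get("UK")); // prints "United Kingdom"
        System.out.println(countries.size());    // prints "2"
    }
}
```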
The simplicity of the API might lead you to picture the named cache as in the following diagram:
Basically, your
application sees the Coherence named cache as a cloud-like structure
that is spread across the cluster of nodes, but accessible locally using
a very simple API. While this is a correct view from the client
application's perspective, it does not fully reflect what goes on behind
the scenes.
So let's clear the cloud and see what's hidden behind it.
Whenever you invoke a method on a named cache instance, the call is delegated to the clustered cache service
that the named cache belongs to. The preceding image depicts a single
named cache managed by a single cache service, but the relationship
between named cache instances and the cache service is many to
one: each named cache belongs to exactly one cache service, while a single
cache service is typically responsible for more than one named cache.
The cache service is
responsible for the distribution of cache data to appropriate members on
cache writes, as well as for the retrieval of the data on cache reads.
However, the cache service is not responsible for the actual storage of the cached data. Instead, it delegates this responsibility to a backing map. There is one instance of the backing map 'per named cache per node', and this is where the cached data is actually stored.
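This division of labor can be sketched with plain Java. The class and method names below are invented for illustration, and a real cache service also handles partitioning, networking, and failover, none of which is modeled here; the sketch only shows one service managing several named caches, with a separate backing map per named cache holding the actual data.

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of a cache service on a single node: one service
// instance manages many named caches, and keeps a separate backing
// map per named cache, which is where the data is actually stored.
// All names here are invented for illustration.
public class ToyCacheService {
    // one backing map per named cache (per node)
    private final Map<String, Map<Object, Object>> backingMaps = new HashMap<>();

    // the service creates a backing map lazily when a named cache is first used
    public Map<Object, Object> ensureBackingMap(String cacheName) {
        return backingMaps.computeIfAbsent(cacheName, name -> new HashMap<>());
    }

    // a cache write is delegated to the appropriate backing map
    public void put(String cacheName, Object key, Object value) {
        ensureBackingMap(cacheName).put(key, value);
    }

    // a cache read is served from the same backing map
    public Object get(String cacheName, Object key) {
        return ensureBackingMap(cacheName).get(key);
    }

    public int cacheCount() {
        return backingMaps.size();
    }

    public static void main(String[] args) {
        ToyCacheService service = new ToyCacheService();

        // two named caches, one cache service: a many-to-one relationship
        service.put("countries", "UK", "United Kingdom");
        service.put("accounts", 1001, "Savings");

        System.out.println(service.get("countries", "UK")); // prints "United Kingdom"
        System.out.println(service.cacheCount());           // prints "2"
    }
}
```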
There
are many backing map implementations available out of the box, and we
will discuss them shortly. For now, let's focus on the clustered cache
services and the cache topologies they enable.