This site hosts historical documentation. Visit www.terracotta.org for recent product information.
The BigMemory Go Technical FAQ answers frequently asked questions on how to use BigMemory Go, integration with other products, and solving issues. Other resources for resolving issues include:
Yes.
It's easy. To expand your license with a software subscription for more capacity, contact Terracotta.
Terracotta provides enterprise support for BigMemory Go as part of software subscription. To get enterprise support, contact Terracotta.
Yes.
No. Deploy BigMemory Go with as many applications and on as many servers as you like.
BigMemory Go is for in-memory data management on a single JVM (in-process) and comes with 32GB free. BigMemory Max is for distributed in-memory management across an array of servers. For more on Go vs. Max, see BigMemory Overview.
BigMemory Go is free to use, but it is not an open-source product. See the Ehcache website for an open-source caching project.
Yes. Create a CacheManager using new CacheManager(...) and keep hold of the reference. The singleton approach, accessible with the getInstance(...) method, is still available too. However, one CacheManager can support hundreds of caches, so use separate CacheManagers only where different configurations are needed. The Hibernate provider has also been updated to support this behavior.
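As a sketch, using two independently configured CacheManager instances alongside the singleton might look like the following (the configuration file names are hypothetical examples):

```java
import net.sf.ehcache.CacheManager;

public class MultipleManagers {
    public static void main(String[] args) {
        // Instance mode: each CacheManager gets its own configuration file.
        // The file names here are hypothetical.
        CacheManager managerA = new CacheManager("ehcache-a.xml");
        CacheManager managerB = new CacheManager("ehcache-b.xml");

        // Singleton mode is still available; it uses ehcache.xml on the classpath.
        CacheManager singleton = CacheManager.getInstance();

        // Keep hold of the references and shut down when done.
        managerA.shutdown();
        managerB.shutdown();
        singleton.shutdown();
    }
}
```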
See the file ehcache.xsd in the BigMemory Go kit for the latest information on the configuration elements required in ehcache.xml.
Automatic element versioning works only with memory-store (heap) caches. Caches that use the BigMemory Go off-heap or disk stores do not use auto-versioning.
To enable auto-versioning, set the system property net.sf.ehcache.element.version.auto to true (it is false by default). Manual (user-provided) versioning of cache elements is ignored when auto-versioning is in effect. Note that if this property is turned on for an ineligible cache, auto-versioning will silently fail.
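The property can be supplied on the command line with -Dnet.sf.ehcache.element.version.auto=true, or set programmatically. A minimal sketch, assuming it is set before the CacheManager is created:

```java
public class EnableAutoVersioning {
    public static void main(String[] args) {
        // Must be set before BigMemory Go reads the property,
        // i.e. before the CacheManager is created.
        System.setProperty("net.sf.ehcache.element.version.auto", "true");
        System.out.println(System.getProperty("net.sf.ehcache.element.version.auto"));
        // prints "true"
    }
}
```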
BigMemory Go offers fast, robust disk persistence set through configuration.
There are two patterns available: write-through and write-behind caching. In write-through caching, writes to the cache cause writes to an underlying resource. The cache acts as a facade to the underlying resource. With this pattern, it often makes sense to read through the cache too. Write-behind caching uses the same client API; however, the write happens asynchronously.
While file systems or web-service clients can underlie the facade of a write-through cache, the most common underlying resource is a database.
Yes. Just set the persistence strategy (in the <cache> configuration element) to "none":
<cache>
...
<persistence strategy="none"/>
...
</cache>
No. However, you can minimize the usage of memory using sizing configuration.
Remember that a value in an element is globally accessible from multiple threads. It is inherently not thread-safe to modify the value. It is safer to retrieve a value, delete the element and then reinsert the value.
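A minimal sketch of the retrieve-delete-reinsert pattern described above (the helper and its names are illustrative, not part of the BigMemory Go API):

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class SafeUpdate {
    // Replace a cached value without mutating the shared value in place.
    static void replaceValue(Cache cache, String key, Object newValue) {
        Element old = cache.get(key);   // retrieve
        if (old != null) {
            cache.remove(key);          // delete the element
        }
        cache.put(new Element(key, newValue)); // reinsert a fresh value
    }
}
```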
The UpdatingCacheEntryFactory works by modifying the contents of values in place in the cache. This is outside the core of BigMemory Go and is targeted at high-performance CacheEntryFactories for SelfPopulatingCaches.
Non-serializable objects can be stored only in the BigMemory Go memory store (heap). If an attempt is made to overflow a non-serializable element to the BigMemory Go off-heap or disk stores, the element is removed and a warning is logged.
These three configuration attributes can be used to design effective data lifetimes. Their assigned values should be tested and tuned to help optimize performance. timeToIdleSeconds (TTI) is the maximum number of seconds that an element can exist in the store without being accessed, while timeToLiveSeconds (TTL) is the maximum number of seconds that an element can exist in the store whether or not it has been accessed. If the eternal flag is set, elements are allowed to exist in the store eternally and none are evicted. The eternal setting overrides any TTI or TTL settings.
These attributes are set in the configuration file per cache. To set them per element, you must do so programmatically.
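Programmatic per-element settings might look like the following sketch (the helper name and values are illustrative):

```java
import net.sf.ehcache.Element;

public class PerElementLifetimes {
    // Build an element whose lifetimes override the cache-wide settings.
    static Element shortLived(String key, Object value) {
        Element element = new Element(key, value);
        element.setTimeToIdle(300); // TTI in seconds for this element only
        element.setTimeToLive(600); // TTL in seconds for this element only
        return element;
    }
}
```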
Your application is querying the database excessively only to find that there is no result. Since there is no result, there is nothing to cache. To prevent the query from being executed unnecessarily, cache a null value, signalling that a particular key doesn't exist.
In code, checking for intentional nulls versus non-existent cache entries may look like:
// cache an explicit null value:
cache.put(new Element("key", null));
Element element = cache.get("key");
if (element == null) {
// nothing in the cache for "key" (or expired) ...
} else {
// there is a valid element in the cache, however getObjectValue() may be null:
Object value = element.getObjectValue();
if (value == null) {
// a null value is in the cache ...
} else {
// a non-null value is in the cache ...
}
}
The cache configuration in ehcache.xml may look similar to the following:
<cache
name="some.cache.name"
maxEntriesLocalHeap="10000"
eternal="false"
timeToIdleSeconds="300"
timeToLiveSeconds="600"
/>
Use a finite timeToLiveSeconds setting to force an occasional update.
The amount of memory consumed per thread is determined by the Stack Size. This is set using -Xss.
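For example, a smaller per-thread stack can be requested at JVM startup. The value shown is only illustrative; the right size depends on your application:

```shell
# Request a 512 KB stack per thread (value is illustrative).
java -Xss512k -version
```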
When the maximum number of elements in memory is reached, the least recently used (LRU) element is removed. "Used" in this case means inserted with a put or accessed with a get. The LRU element is flushed asynchronously to the off-heap store.
Because the in-memory data is limited to a fixed maximum number of elements or bytes, its maximum memory use equals the number of elements multiplied by the average element size. When an element is added beyond the maximum size, the LRU element gets flushed to the disk store. Running an expiry thread in memory turns out to be a very expensive and potentially contentious operation, so it is far more efficient to check expiry only when needed rather than explicitly search for expired elements. The tradeoff is higher average memory use.
The disk-store expiry thread keeps the disk store clean. There is typically less contention for the disk store's locks because commonly used values are in memory. If you are concerned about CPU utilization and locking in the disk store, you can effectively turn the expiry thread off by setting diskExpiryThreadIntervalSeconds to a very large value, such as the number of seconds in a day.
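For example, the interval is set as an attribute on the cache element in ehcache.xml (the cache name and values are illustrative):

```xml
<cache name="some.cache.name"
       maxEntriesLocalHeap="10000"
       diskExpiryThreadIntervalSeconds="86400">
  <persistence strategy="localTempSwap"/>
</cache>
```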
LRU, LFU and FIFO eviction strategies are supported.
An element (key and value) in BigMemory is guaranteed to .equals() another as it moves between stores.
Yes. You use one instance of BigMemory Go with one ehcache.xml. You configure your caches with Hibernate names for use by Hibernate. You can have other caches which you interact with directly, outside of Hibernate.
Use the Cache.getQuiet() method. It returns an element without updating statistics.
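A minimal sketch (the helper name is illustrative):

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;

public class QuietRead {
    // Peek at a value without updating last-access time or hit/miss statistics.
    static Object peek(Cache cache, String key) {
        Element element = cache.getQuiet(key);
        return element == null ? null : element.getObjectValue();
    }
}
```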
Set the system property net.sf.ehcache.disabled=true to disable BigMemory Go. This can easily be done using -Dnet.sf.ehcache.disabled=true on the command line. If BigMemory Go is disabled, no elements will be added to the stores.
This is not possible. However, you can achieve the same result as follows:
Create a new cache:
Cache cache = new Cache("test2", 1, true, true, 0, 0, true, 120, ...);
cacheManager.addCache(cache);
See the BigMemory API documentation for the full parameters.
Get a list of keys using cache.getKeys(), then get each element and put it in the new cache.
None of this will use much memory because the new cache elements have values that reference the same data as the original cache.
Use cacheManager.removeCache("oldcachename") to remove the original cache.
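Putting the steps above together, a hypothetical rename-by-copy helper might look like this (it assumes the new cache has already been configured and added to the CacheManager):

```java
import java.util.List;
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class RenameCache {
    // Copies every element from oldName into the already-configured newName
    // cache, then removes the original. Element values are shared by
    // reference, so this uses little additional memory.
    static void rename(CacheManager manager, String oldName, String newName) {
        Cache oldCache = manager.getCache(oldName);
        Cache newCache = manager.getCache(newName);
        List<?> keys = oldCache.getKeys();
        for (Object key : keys) {
            Element element = oldCache.get(key);
            if (element != null) {
                newCache.put(new Element(key, element.getObjectValue()));
            }
        }
        manager.removeCache(oldName);
    }
}
```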
There is a shutdown hook which calls shutdown on JVM exit. If the JVM keeps running after you stop using BigMemory Go, you should call CacheManager.getInstance().shutdown() so that the threads are stopped and cache memory is released back to the JVM.
When you call CacheManager.shutdown(), it sets the singleton in CacheManager to null. Using a cache after this generates a CacheException. However, if you call CacheManager.create() to instantiate a new CacheManager, then you can still use BigMemory Go. Internally, the CacheManager singleton gets set to the new one, allowing you to create and shut down any number of times.
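The create/shutdown cycle described above can be sketched as:

```java
import net.sf.ehcache.CacheManager;

public class Lifecycle {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.create(); // first singleton
        manager.shutdown();                           // singleton set to null

        // Using a cache now would throw CacheException, but a fresh
        // singleton can be created and used again:
        CacheManager fresh = CacheManager.create();
        fresh.shutdown();
    }
}
```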
Statistics gathering is disabled by default in order to optimize performance. To enable statistics gathering for a cache, add statistics="true" to its <cache> element. Note that certain features in the Terracotta Management Console require statistics to be enabled in order to function.
BigMemory Go does not experience deadlocks. However, deadlocks in your application code can be detected with certain tools, such as the JDK tool JConsole.
You need to add a newly created cache to a CacheManager before it gets initialized. Use code like the following:
CacheManager manager = CacheManager.create();
Cache myCache = new Cache("testDiskOnly", 0, true, false, 5, 2);
manager.addCache(myCache);
Persistence was not configured or not configured correctly on the node.
BigMemory Go does not distribute data. See BigMemory Max.
There are a few ways to try to solve this, in order of preference:
The backport-concurrent library is used in BigMemory Go to provide java.util.concurrent facilities for Java 4 through Java 6. Use either the Java 4 version, which is compatible with Java 4 through 6, or the version for your JDK.
If you use this default implementation, the cache name is called "SimplePageCachingFilter". You need to define a cache with that name in ehcache.xml. If you override CachingFilter, you are required to set your own cache name.
WARN CacheManager ... Creating a new instance of CacheManager using the diskStorePath "C:\temp\tempcache" which is already used by an existing CacheManager.
This means that, for some reason, your application is trying to create one or more additional instances of CacheManager with the same configuration. Depending upon your persistence strategy, BigMemory Go will automatically resolve the disk-path conflict, or it will let you know that you must explicitly configure the diskStorePath.
To eliminate the warning, use the singleton CacheManager.getInstance(). In Hibernate, there is a special provider for this called net.sf.ehcache.hibernate.SingletonEhCacheProvider. See Hibernate.

The defaultCache is optional. When you try to programmatically add a cache by name with CacheManager.addCache(String name), a default cache is expected to exist in the CacheManager configuration. To fix this error, add a defaultCache to the CacheManager's configuration.
Errors could occur if BigMemory Go runs with a web application that has been redeployed, causing BigMemory Go to not start properly or at all. If the web application is redeployed, be sure to restart BigMemory Go.