Enterprise Ehcache Configuration Reference

Enterprise Ehcache uses the standard Ehcache configuration file to set clustering and consistency behavior, optimize cached data, integrate with Java Transaction API (JTA) and OSGi, and more.

Offloading Large Caches

Storing a distributed cache’s entire key set on each Terracotta client provides high locality-of-reference, reducing latency at the cost of using more client memory. It also allows for certain cache-management optimizations on each client that improve the overall performance of the cache. This works well for smaller key sets which can easily fit into the JVM.

However, for caches with elements numbering in the millions or greater, performance begins to deteriorate when every client must store the entire key set. Clusters with a large number of clients require even more overhead to manage those key sets. If the cache is also heavy on writes, that overhead can cause a considerable performance bottleneck.

In addition to making it more difficult to scale a cluster, larger caches can cause other serious performance issues:

  • Cache-loading slowdown – The cache’s entire key set must be fully present in the client before the cache is available.
  • Reduction in free client memory – Less available memory may cause more flushing and faulting.
  • More garbage collection – Larger heaps (to accommodate larger key sets) and more objects in memory mean more garbage created and more Java garbage collection cycles.

The DCV2 mode of managing Terracotta clustered caches avoids these issues by offloading cache entries to the Terracotta server array, allowing clients to fault in only required keys. Some of the advantages of the DCV2, which is used by default, include server-side eviction, automatic and flexible hot-set caching by clients, and cluster-wide consistency without cluster-wide delta broadcasts.

Note the following about the DCV2 mode:

  • Under certain circumstances, unexpired elements evicted from Terracotta clients to meet the limit set by maxElementsInMemory or to free up memory may also be evicted from the Terracotta server array. The client cannot fault such elements back from the server. See How Configuration Affects Element Eviction for more information on how DCV2, element expiration, and element eviction are related.
  • UnlockedReadsView and bulk-load mode are not optimized for DCV2 with strong consistency. Elements populated through bulk load expire according to a set timeout and may persist in the cache even after being evicted by the server array (see How Configuration Affects Element Eviction for more information). You can bypass this issue by using "eventual" consistency mode (see Understanding Performance and Cache Consistency for more information).
  • The entire cache’s key set must fit into the server array’s aggregate heap. The server array’s aggregate heap is equal to the sum of each active server’s heap size. BigMemory allows you to bypass this restriction. See Improving Server Performance With BigMemory for more information.

Very large key sets can be offloaded effectively to a scaled-up Terracotta server array with a sufficient number of mirror groups. See Scaling the Terracotta Server Array for more information on mirror groups.

To configure a cache to not offload its key set, set the attribute storageStrategy="classic" in that cache’s <terracotta> element.
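
For example, a minimal sketch (the cache name and sizing values are illustrative only):

<cache name="myCache" maxElementsInMemory="10000" eternal="false"
       overflowToDisk="false">
  <terracotta storageStrategy="classic" />
</cache>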

Tuning Concurrency

The server map underlying the Terracotta Server Array contains the data used by clients in the cluster and is segmented to improve performance through added concurrency. Under most circumstances, the concurrency value is optimized by the Terracotta Server Array and does not require tuning.

If an explicit and fixed segmentation value must be set, use the <terracotta> element’s concurrency attribute, making sure to set an appropriate concurrency value. A too-low concurrency value could cause unexpected eviction of elements. A too-high concurrency value may create many empty segments on the Terracotta Server Array (or many segments holding a few or just one element). In this case, maxElementsOnDisk may appear to have been exceeded, and the cluster may run low on memory as it loads all segments into RAM, even if they are empty.

The following information provides additional guidance for choosing a concurrency value:

  • With extremely large data sets, a high concurrency value can improve performance by hashing the data into more segments, which reduces lock contention.
  • If maxElementsOnDisk is not set, set to 0, or set to a value equal to or greater than 256, set concurrency equal to 256 (except for extremely large data sets). This is the default value.
  • If maxElementsOnDisk is set to a value less than 256, set concurrency to the highest power of 2 that is less than or equal to the value of maxElementsOnDisk. For example, if maxElementsOnDisk is 130, set concurrency to 128.
  • In environments with very few cache elements or a very low maxElementsOnDisk value, be sure to set concurrency to a value close to the number of expected elements.
  • In general, the concurrency value should be no less than the number of active servers in the Terracotta Server Array, and optimally at least twice the number of active Terracotta servers.

To learn how to set concurrency for a cache, see the section on the <terracotta> element.
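
For example, the following sketch (with an illustrative cache name and sizing) sets an explicit concurrency value of 256 on the <terracotta> element:

<cache name="myCache" maxElementsInMemory="10000" eternal="false"
       overflowToDisk="false">
  <terracotta concurrency="256" />
</cache>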

Non-Blocking Disconnected (Nonstop) Cache

A nonstop cache allows certain cache operations to proceed on clients that have become disconnected from the cluster or if a cache operation cannot complete by the nonstop timeout value. One way clients go into nonstop mode is when they receive a "cluster offline" event. Note that a nonstop cache can go into nonstop mode even if the node is not disconnected, such as when a cache operation is unable to complete within the timeout allotted by the nonstop configuration.

Configuring Nonstop

Nonstop is configured in a <cache> block under the <terracotta> subelement. In the following example, myCache has nonstop configuration:

<cache name="myCache" maxElementsInMemory="10000" eternal="false"
       overflowToDisk="false">
 <terracotta>
   <nonstop immediateTimeout="false" timeoutMillis="30000">
     <timeoutBehavior type="noop" />
   </nonstop>
 </terracotta>
</cache>

Nonstop is enabled by default whenever <nonstop> appears in a cache’s <terracotta> block (the enabled attribute defaults to "true").

Nonstop Timeouts and Behaviors

Nonstop caches can be configured with the following attributes:

  • enabled – Enables ("true" DEFAULT) or disables ("false") the ability of a cache to execute certain actions after a Terracotta client disconnects. This attribute is optional for enabling nonstop.
  • immediateTimeout – Enables ("true") or disables ("false" DEFAULT) an immediate timeout response if the Terracotta client detects a network interruption (the node is disconnected from the cluster). If enabled, the first request made by a client can take up to the time specified by timeoutMillis, and subsequent requests time out immediately.
  • timeoutMillis – Specifies the number of milliseconds an application waits for any cache operation to return before timing out. The default value is 30000 (thirty seconds). The behavior after the timeout occurs is determined by timeoutBehavior.

<nonstop> has one self-closing subelement, <timeoutBehavior>. This subelement determines the response after a timeout occurs (timeoutMillis expires or an immediate timeout occurs). The response is set by the <timeoutBehavior> attribute type, which can have one of the following values:

  • exception (DEFAULT) – Throw NonStopCacheException. See When is NonStopCacheException Thrown? for more information on this exception.
  • noop – Return null for gets. Ignore all other cache operations. Hibernate users may want to use this option to allow their application to continue with an alternative data source.
  • localReads – For caches with Terracotta clustering, allow inconsistent reads of cache data. Ignore all other cache operations. For caches without Terracotta clustering, throw an exception.

Tuning Nonstop Timeouts and Behaviors

You can tune the default timeout values and behaviors of nonstop caches to fit your environment.

Network Interruptions

For example, in an environment with regular network interruptions, consider disabling immediateTimeout and increasing timeoutMillis to prevent timeouts for most of the interruptions.

For a cluster that experiences regular but short network interruptions, and in which caches clustered with Terracotta carry read-mostly data or there is tolerance of potentially stale data, you may want to set timeoutBehavior to localReads.

Slow Cache Operations

In an environment where cache operations can be slow to return and data is required to always be in sync, increase timeoutMillis to prevent frequent timeouts. Set timeoutBehavior to noop to force the application to get data from another source, or to exception if the application should stop.

For example, a cache.acquireWriteLockOnKey(key) operation may exceed the nonstop timeout while waiting for a lock. This would trigger nonstop mode only because the lock couldn't be acquired in time. Using cache.tryWriteLockOnKey(key, timeout), with the method's timeout set to less than the nonstop timeout, avoids this problem.
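
The following is a minimal sketch of this approach, assuming a clustered Cache reference named cache, a hypothetical key, and the 30-second nonstop timeout shown earlier; tryWriteLockOnKey() returns false if the lock cannot be acquired within the method's timeout, letting the application back off without tripping nonstop behavior:

String key = "someKey";          // Hypothetical key.
long lockTimeoutMillis = 10000L; // Keep this below the nonstop timeoutMillis (30000 above).
try {
  if (cache.tryWriteLockOnKey(key, lockTimeoutMillis)) {
    try {
      cache.put(new Element(key, "newValue"));
    } finally {
      cache.releaseWriteLockOnKey(key);
    }
  } else {
    // Lock not acquired within lockTimeoutMillis; handle the miss instead of entering nonstop mode.
  }
} catch (InterruptedException e) {
  Thread.currentThread().interrupt(); // tryWriteLockOnKey() may be interrupted while waiting.
}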

Bulk Loading

If a nonstop cache is bulk-loaded using the Bulk-Load API, a multiplier is applied to the configured nonstop timeout whenever the method net.sf.ehcache.Ehcache.setNodeBulkLoadEnabled(boolean) is used. The default value of the multiplier is 10. You can tune the multiplier using the bulkOpsTimeoutMultiplyFactor system property:

-DbulkOpsTimeoutMultiplyFactor=10

This multiplier also affects the methods net.sf.ehcache.Ehcache.removeAll(), net.sf.ehcache.Ehcache.removeAll(boolean), and net.sf.ehcache.Ehcache.setNodeCoherent(boolean) (DEPRECATED).
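
As a rough sketch (assuming a cache named "myCache", the default multiplier of 10, and timeoutMillis="30000"), enabling bulk load raises the effective nonstop timeout to 300000 milliseconds for the duration of the load:

Cache cache = cacheManager.getCache("myCache");
cache.setNodeBulkLoadEnabled(true);    // Nonstop timeout is now timeoutMillis * bulkOpsTimeoutMultiplyFactor.
try {
  // ... load data with put() or putAll() ...
} finally {
  cache.setNodeBulkLoadEnabled(false); // Restore the configured consistency mode and the normal nonstop timeout.
}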

When is NonStopCacheException Thrown?

NonStopCacheException is usually thrown when it is the configured behavior for a nonstop cache in a client that disconnects from the cluster. In the following example, the exception would be thrown 30 seconds after the disconnection (or the "cluster offline" event is received):

<nonstop immediateTimeout="false" timeoutMillis="30000">
  <timeoutBehavior type="exception" />
</nonstop>

However, under certain circumstances a NonStopCacheException can be thrown even if a nonstop cache’s timeout behavior is not set to throw the exception. This can happen when the cache goes into nonstop mode during an attempt to acquire or release a lock. These lock operations are associated with certain lock APIs and special cache types such as Explicit Locking, BlockingCache, SelfPopulatingCache, and UpdatingSelfPopulatingCache.

A NonStopCacheException can also be thrown if the cache must fault in an element to satisfy a get() operation. If the Terracotta Server Array cannot respond within the configured nonstop timeout, the exception is thrown.

A related exception, InvalidLockAfterRejoinException, can be thrown during or after client rejoin (see Using Rejoin to Automatically Reconnect Terracotta Clients). This exception occurs when an unlock operation takes place on a lock obtained before the rejoin attempt completed.

TIP: Use try-finally Blocks
To ensure that locks are released properly, application code using Ehcache lock APIs should encapsulate lock-unlock operations with try-finally blocks:

myLock.acquireLock();
try {
  // Do some work.
} finally {
  myLock.unlock();
}

How Configuration Affects Element Eviction

Element eviction is a crucial part of keeping cluster resources operating efficiently. Element eviction and expiration are related, but an expired element is not necessarily evicted immediately and an evicted element is not necessarily an expired element. Cache elements may be evicted due to resource and configuration constraints, while expired elements are evicted from the Terracotta client when a get or put operation occurs on that element (sometimes called inline eviction).

The Terracotta server array contains the full key set (as well as all values), while clients contain a subset of keys and values based on elements they’ve faulted in from the server array. This storage approach is referred to as "DCV2" (Distributed Cache v2).

TIP: Eviction With UnlockedReadsView and Bulk Loading
Under certain circumstances, DCV2 caches may evict elements based on a configured timeout. See DCV2, Strict Consistency, UnlockedReadsView, and Bulk Loading for more information.

Typically, an expired cache element is evicted, or more accurately flushed, from a client tier to a lower tier when a get() or put() operation occurs on that element. However, a client may also flush expired elements, and then unexpired elements, whenever a cache’s sizing limit for a specific tier is reached or the client is under memory pressure. This type of eviction is intended to meet configured and real memory constraints.

Flushing from clients does not mean eviction from the server array. Elements can become candidates for eviction from the server array when disks run low on space. Servers with a disk-store limitation set by maxElementsOnDisk can come under disk-space pressure and will evict expired elements first. However, unexpired elements can also be evicted if they meet the following criteria:

  • They are in a cache with infinite TTI/TTL (Time To Idle and Time To Live), or no explicit settings for TTI/TTL. Enabling a cache’s eternal flag overrides any finite TTI/TTL values that have been set.
  • They are not resident on any Terracotta client. These elements can be said to have been "orphaned". Once evicted, they will have to be faulted back in from a system of record if requested by a client.
  • Their per-element TTI/TTL settings indicate that they’ve expired and the server array is inspecting per-element TTI/TTL. Note that per-element TTI/TTL settings are, by default, not inspected by Terracotta servers.

TIP: Forcing Terracotta Servers to Inspect Per-Element TTI/TTL
To help maintain a high level of performance, per-element TTI/TTL settings are not inspected by Terracotta servers. To force servers to inspect and honor per-element TTI/TTL settings, enable the Terracotta property ehcache.storageStrategy.dcv2.perElementTTITTL.enabled by adding the following configuration to the top of the Terracotta configuration file (tc-config.xml by default) before starting the Terracotta server:

<tc-properties>
    <property name="ehcache.storageStrategy.dcv2.perElementTTITTL.enabled" value="true" />
</tc-properties>

While this setting may prevent unexpired elements (based on per-element TTI/TTL) from being evicted, it also degrades performance by incurring processing costs.

A server array will not evict unexpired cache entries if servers are configured to have infinite store (maxElementsOnDisk is not set or is set to 0). A server may also not evict cache entries if they remain resident in any client cache. Under these conditions, the expected data set must fit in the server array or the cluster may suffer from performance degradation and errors.

To learn about eviction and controlling the size of the cache, see the Ehcache documentation on data life and sizing caches.

DCV2, Strict Consistency, UnlockedReadsView, and Bulk Loading

When a cache with strong consistency is decorated with UnlockedReadsView (see Unlocked Reads for Consistent Caches (UnlockedReadsView)), unlocked reads may cause elements to be faulted in. These elements expire based on a cluster-wide timeout controlled by the Terracotta property ehcache.storageStrategy.dcv2.localcache.incoherentReadTimeout. This timeout, which by default is set to five minutes, can be tuned in the Terracotta configuration file (tc-config.xml):

<tc-properties>
  <!-- The following timeout is set in milliseconds. -->
  <property name="ehcache.storageStrategy.dcv2.localcache.incoherentReadTimeout" value="300000" />
</tc-properties>

If the same elements are changed on a remote node, the local elements under the effect of this timeout will not expire or become invalid until the timeout is reached.

This timeout also applies to elements that are put into the cache using the bulk-load API (see Bulk-Load API).

Understanding Performance and Cache Consistency

Cache consistency modes are configuration settings and API methods that control the behavior of clustered caches with respect to balancing data consistency and application performance. A cache can be in one of the following consistency modes:

  • Eventual – This mode guarantees that data in the cache will eventually be consistent. Read/write performance is substantially boosted at the cost of potentially having an inconsistent cache for brief periods of time. This mode is set using the Ehcache configuration file and cannot be changed programmatically (see the attribute "consistency" in <terracotta>).
  • Strong – This mode ensures that data in the cache remains consistent across the cluster at all times. It guarantees that a read gets an updated value only after all write operations to that value are completed, and that each put operation is in a separate transaction. The use of locking and transaction acknowledgments maximizes consistency at a potentially substantial cost in performance. This mode is set using the Ehcache configuration file and cannot be changed programmatically (see the attribute "consistency" in <terracotta>).
  • Bulk Load – This mode is optimized for bulk-loading data into the cache without the slowness introduced by locks or regular eviction. It is similar to the eventual mode, but has batching, higher write speeds, and weaker consistency guarantees. This mode is set using the bulk-load API only (see Bulk-Load API). When turned off, allows the configured consistency mode (either strong or eventual) to take effect again.

Use configuration to set the permanent consistency mode for a cache as required for your application, and the bulk-load mode only during the time when populating (warming) or refreshing the cache.
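
For example, a cache might be configured for eventual consistency as in the following sketch (cache name and sizing are illustrative only):

<cache name="myCache" maxElementsInMemory="10000" eternal="false"
       overflowToDisk="false">
  <terracotta consistency="eventual" />
</cache>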

The following APIs and settings also affect consistency:

  • Explicit Locking – This API provides methods for cluster-wide (application-level) locking on specific elements in a cache. There is guaranteed consistency across the cluster at all times for operations on elements covered by a lock. When used with the strong consistency mode in a cache, each cache operation is committed in a single transaction. When used with the eventual consistency mode in a cache, all cache operations covered by an explicit lock are committed in a single transaction. While explicit locking of elements provides fine-grained locking, there is still the potential for contention, blocked threads, and increased performance overhead from managing clustered locks. See Explicit Locking for more information.
  • UnlockedReadsView – A cache decorator that allows dirty reads of the cache. This decorator can be used only with caches in the strong consistency mode. UnlockedReadsView raises performance for this mode by bypassing the requirement for a read lock. See Unlocked Reads for Consistent Caches (UnlockedReadsView) for more information.
  • Atomic methods – To guarantee write consistency at all times and avoid potential race conditions for put operations, use the atomic methods Cache.putIfAbsent(Element element) and Cache.replace(Element oldOne, Element newOne). However, there is no guarantee that these methods’ return value is not stale because another operation may change the element after the atomic method completes but before the return value is read. To guarantee the return value, use locks (see Explicit Locking). Note that using locks may impact performance.
  • Bulk-loading methods – Bulk-loading Cache methods putAll(), getAll(), and removeAll() provide high-performance and eventual consistency. These can also be used with strong consistency. If you can use them, it's unnecessary to use bulk-load mode. See the API documentation for details.

To optimize consistency and performance, consider using eventually consistent caches while selectively using explicit locking in your application where cluster-wide consistency is critical.
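
The following sketch illustrates this pattern, assuming an eventually consistent Cache reference named cache and a hypothetical key whose updates must be consistent cluster-wide:

String key = "criticalKey"; // Hypothetical key requiring cluster-wide consistency.
cache.acquireWriteLockOnKey(key);
try {
  Element current = cache.get(key); // Read the current value under the lock.
  // ... compute the new value from the current one ...
  cache.put(new Element(key, "updatedValue"));
} finally {
  cache.releaseWriteLockOnKey(key);
}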

Cache Events in a Terracotta Cluster

Cache events are fired for certain cache operations:

  • Evictions – An eviction on a client generates an eviction event on that client. An eviction on a Terracotta server fires an event on a random client.
  • Puts – A put() on a client generates a put event on that client.
  • Updates – If a cache uses default storage strategy (<terracotta ... storageStrategy="DCV2" ... >), then an update on a client generates a put event on that client.
  • Orphan eviction – An orphan is an element that exists only on the Terracotta Server Array. If an orphan is evicted, an eviction event is fired on a random client.

See Cache Events Configuration for more information on configuring the scope of cache events.

Handling Cache Update Events

Caches generate put events whenever elements are put or updated. If it is important for your application to distinguish between puts and updates, check for the existence of the element during put() operations:

if (cache.containsKey(key)) {
  cache.put(element);
  // Action in the event handler on replace.
} else {
  cache.put(element);
  // Action in the event handler on new puts.
}

To protect against races, wrap the if block with explicit locks (see Explicit Locking). You can also use the atomic cache methods putIfAbsent() and replace() to check for the existence of an element:

Element olde;
if ((olde = cache.putIfAbsent(element)) == null) { // Returns null if successful or returns the existing (old) element.
  // Action in the event handler on new puts.
} else {
  cache.replace(olde, element); // Returns true if successful.
  // Action in the event handler on replace.
}

If your code cannot use these approaches (or a similar workaround), you can force update events for cache updates by setting the Terracotta property ehcache.clusteredStore.checkContainsKeyOnPut at the top of the Terracotta configuration file (tc-config.xml by default) before starting the Terracotta Server Array:

<tc-properties>
 <property name="ehcache.clusteredStore.checkContainsKeyOnPut" value="true" />
</tc-properties>

Enabling this property can substantially degrade performance.

Configuring Caches for High Availability

Enterprise Ehcache caches provide a number of High Availability (HA) settings.

To learn about configuring HA in a Terracotta cluster, see Configuring Terracotta Clusters For High Availability.

Using Rejoin to Automatically Reconnect Terracotta Clients

A Terracotta client running Enterprise Ehcache may disconnect and be timed out (ejected) from the cluster. Typically, this occurs because of network communication interruptions lasting longer than the configured HA settings for the cluster. Other causes include long GC pauses and slowdowns introduced by other processes running on the client hardware.

You can configure clients to automatically rejoin a cluster after they are ejected. If the ejected client continues to run under nonstop cache settings, and then senses that it has reconnected to the cluster (receives a clusterOnline event), it can begin the rejoin process.

Note the following about using the rejoin feature:

  • Rejoin is for CacheManagers with only nonstop caches. If one or more of a CacheManager’s caches is not set to be nonstop, and rejoin is enabled, an exception is thrown at initialization. An exception is also thrown in this case if a cache is created programmatically without nonstop.
  • Clients rejoin as new members and will wipe all cached data to ensure that no pauses or inconsistencies are introduced into the cluster.
  • Any nonstop-related operations that begin (and do not complete) before the rejoin operation completes may be unsuccessful and may generate a NonStopCacheException.
  • If an Enterprise Ehcache client with rejoin enabled is running in a JVM alongside Terracotta clients that do not have rejoin enabled, then only that client will rejoin after a disconnection. The remaining clients cannot rejoin and may cause the application to behave unpredictably.
  • Once a client rejoins, the clusterRejoined event is fired on that client only.

Configuring Rejoin

The rejoin feature is disabled by default. To enable the rejoin feature in an Enterprise Ehcache client, follow these steps:

  1. Ensure that all of the caches in the Ehcache configuration file where rejoin is enabled have nonstop enabled.
  2. Ensure that your application does not create caches on the client without nonstop enabled.
  3. Enable the rejoin attribute in the client’s <terracottaConfig> element:

    <terracottaConfig url="myHost:9510" rejoin="true" />
    

For more options on configuring <terracottaConfig>, see the configuration reference.

Avoiding OOME From Multiple Rejoins

Each time a client rejoins a cluster, it reloads all class definitions into the heap’s Permanent Generation (PermGen) space. If a number of rejoins happen before Java garbage collection (GC) is able to free up enough PermGen, an OutOfMemory error (OOME) can occur. Allocating a larger PermGen space can make an OOME less likely under these conditions.

The default PermGen size on the Oracle JVM is 64MB. You can tune this value using the Java options -XX:PermSize (starting value) and -XX:MaxPermSize (maximum allowed value). For example:

-XX:PermSize=<value>m -XX:MaxPermSize=<value>m

If your cluster experiences regular node disconnections that trigger many rejoins, and OOMEs are occurring, investigate your application’s usage of the PermGen space and how well GC is keeping up with reclaiming that space. Then test lower and higher values for PermGen with the aim of eliminating the OOMEs.

TIP: Use the Most Current Supported Version of the JDK
Rejoin operations are known to be more stable on JDK versions greater than 1.5.

Exception During Rejoin

Under certain circumstances, if one of the Ehcache locking APIs is being used by your application, an InvalidLockAfterRejoinException could be thrown. See When is NonStopCacheException Thrown? for more information.

Working With Transactional Caches

Transactional caches add a level of safety to cached data and ensure that the cached data and external data stores are in sync. Enterprise Ehcache caches can participate in JTA transactions as an XA resource. This is useful in JTA applications requiring caching, or where cached data is critical and must be persisted and remain consistent with System of Record data.

However, transactional caches are slower than non-transactional caches due to the overhead from having to write transactionally. Transactional caches also have the following restrictions:

  • Data can be accessed only transactionally, even for read-only purposes. You must encapsulate data access with begin() and commit() statements. This may not be necessary under certain circumstances (see, for example, the discussion on Spring in Transactions in Ehcache).
  • copyOnRead and copyOnWrite must be enabled. These <cache> attributes are "false" by default and must be set to "true".
  • Caches must be strongly consistent. A transactional cache’s consistency attribute must be set to "strong".
  • Nonstop caches cannot be made transactional except in strict mode (xa_strict). Transactional caches in other modes must not contain the <nonstop> subelement.
  • Decorating a transactional cache with UnlockedReadsView can return inconsistent results for data obtained through UnlockedReadsView. Puts, and gets not through UnlockedReadsView, are not affected.
  • Objects stored in a transactional cache must override equals() and hashCode(). If overriding equals() and hashCode() is not possible, see Implementing an Element Comparator.

You can choose one of three different modes for transactional caches:

  • Strict XA – Has full support for XA transactions. May not be compatible with transaction managers that do not fully support JTA.
  • XA – Has support for the most common JTA components, so likely to be compatible with most transaction managers. But unlike strict XA, may fall out of sync with a database after a failure (has no recovery). Integrity of cache data, however, is preserved.
  • Local – Local transactions written to a local store and likely to be faster than the other transaction modes. This mode does not require a transaction manager and does not synchronize with remote data sources. Integrity of cache data is preserved in case of failure.

NOTE: Deadlocks
Both the XA and local modes write to the underlying store synchronously, using pessimistic locking. Under certain circumstances, this can result in a deadlock, which generates a DeadLockException after a transaction times out and a commit fails. Your application should catch DeadLockException (or TransactionException) and call rollback().

Deadlocks can have a severe impact on performance. A high number of deadlocks indicates a need to refactor application code to prevent races between concurrent threads attempting to update the same data.
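
A minimal sketch of this handling, assuming a local transactional cache and the TransactionController API described under Local Transactions (DeadLockException and TransactionException are in the net.sf.ehcache.transaction package):

TransactionController txCtrl = cacheManager.getTransactionController();
txCtrl.begin();
try {
  cache.put(new Element("1", "Bar"));
  txCtrl.commit();
} catch (TransactionException e) {
  // DeadLockException is one of the TransactionExceptions that can surface here.
  txCtrl.rollback(); // Roll back, then optionally retry the transaction.
}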

These modes are explained in the following sections.

Strict XA (Full JTA Support)

Note that Ehcache as an XA resource:

  • Has an isolation level of ReadCommitted.
  • Updates the underlying store asynchronously, potentially creating update conflicts. With this optimistic locking approach, Ehcache may force the transaction manager to roll back the entire transaction if a commit() generates a RollbackException (indicating a conflict).
  • Can work alongside other resources such as JDBC or JMS resources.
  • Guarantees that its data is always synchronized with other XA resources.
  • Can be configured on a per-cache basis (transactional and non-transactional caches can exist in the same configuration).
  • Automatically performs enlistment.
  • Can be used standalone or integrated with frameworks such as Hibernate.
  • Is tested with the most common transaction managers, including those from Atomikos, Bitronix, JBoss, WebLogic, and others.

For more information on working with transactional caches in Enterprise Ehcache for Hibernate, see Setting Up Transactional Caches.

Configuration

To configure Enterprise Ehcache as an XA resource able to participate in transactions, the following <cache> attributes must be set as shown:

  • transactionalMode="xa_strict"
  • copyOnRead="true"
  • copyOnWrite="true"

In addition, the <cache> subelement <terracotta> must have the following attributes set as shown:

  • valueMode="serialization"
  • clustered="true"

For example, the following cache is configured for transactions with strict XA:

<cache name="com.my.package.Foo"
     maxElementsInMemory="500"
     eternal="false"
     overflowToDisk="false"
     copyOnRead="true"
     copyOnWrite="true"
     consistency="strong"
     transactionalMode="xa_strict">
   <terracotta clustered="true" valueMode="serialization" />
</cache>

Any other XA resource that could be involved in the transaction, such as a database, must also be configured to support XA.

Usage

Your application can directly use a transactional cache in transactions. This usage must occur after the transaction manager has been set to start a new transaction and before it has ended the transaction.

For example:

...
myTransactionMan.begin();
Cache fooCache = cacheManager.getCache("Foo");
fooCache.put("1", "Bar");
myTransactionMan.commit();
...

If more than one transaction writes to a cache, it is possible for an XA transaction to fail. See Avoiding XA Commit Failures With Atomic Methods for more information.

XA (Basic JTA Support)

Transactional caches set to "xa" provide support for basic JTA operations. Configuring and using XA does not differ from using local transactions (see Local Transactions), except that "xa" mode requires a transaction manager and allows the cache to participate in JTA transactions.

NOTE: Atomikos Transaction Manager
When using XA with the Atomikos transaction manager, be sure to set com.atomikos.icatch.threaded_2pc=false in the Atomikos configuration. This helps prevent unintended rollbacks due to a bug in the way Atomikos behaves under certain conditions.

For example, the following cache is configured for transactions with XA:

<cache name="com.my.package.Foo"
     maxElementsInMemory="500"
     eternal="false"
     overflowToDisk="false"
     copyOnRead="true"
     copyOnWrite="true"
     consistency="strong"
     transactionalMode="xa">
   <terracotta clustered="true" valueMode="serialization" />
</cache>

Any other XA resource that could be involved in the transaction, such as a database, must also be configured to support XA.

Local Transactions

Local transactional caches (with the transactionalMode attribute set to "local") write to a local store using an API that is part of the Enterprise Ehcache core application. Local transactions have the following characteristics:

  • Recovery occurs at the time an element is accessed.
  • Updates are written to the underlying store immediately.
  • Get operations on the underlying store may block during commit operations.

To use local transactions, instantiate a TransactionController instance instead of a transaction manager instance:

TransactionController txCtrl = myCacheManager.getTransactionController();
...
txCtrl.begin();
Cache fooCache = cacheManager.getCache("Foo");
fooCache.put("1", "Bar");
txCtrl.commit();
...

You can use rollback() to roll back the transaction bound to the current thread.

TIP: Finding the Status of a Transaction on the Current Thread
You can find out if a transaction is in process on the current thread by calling TransactionController.getCurrentTransactionContext() and checking its return value. If the value isn't null, a transaction has started on the current thread.
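
A minimal sketch of this check, assuming a CacheManager reference named cacheManager:

TransactionController txCtrl = cacheManager.getTransactionController();
if (txCtrl.getCurrentTransactionContext() != null) {
  // A transaction has already started on the current thread.
}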

Commit Failures and Timeouts

Commit operations can fail if the transaction times out. If the default timeout requires tuning, you can get and set its current value:

int currentDefaultTransactionTimeout = txCtrl.getDefaultTransactionTimeout();
...
txCtrl.setDefaultTransactionTimeout(30); // in seconds -- must be greater than zero.

You can also bypass the commit timeout using the following version of commit():

txCtrl.commit(true); // "true" forces the commit to ignore the timeout.

Avoiding XA Commit Failures With Atomic Methods

If more than one transaction writes to a cache, it is possible for an XA transaction to fail. In the following example, if a second transaction writes to the same key ("1") and completes its commit first, the commit in the example may fail:

...
myTransactionMan.begin();
Cache fooCache = cacheManager.getCache("Foo");
fooCache.put("1", "Bar");
myTransactionMan.commit();
...

One approach to prevent this type of commit failure is to use one of the atomic put methods, such as Cache.replace():

myTransactionMan.begin();
int val = (Integer) cache.get(key).getValue();  // "cache" is configured to be transactional.
Element olde = new Element(key, val);
if (cache.replace(olde, new Element(key, val + 1))) { // True only if the element was successfully replaced.
  myTransactionMan.commit();
} else {
  myTransactionMan.rollback();
}

Another useful atomic put method is Cache.putIfAbsent(Element element), which returns null on success (no previous element exists with the new element’s key) or returns the existing element (the put is not executed). Atomic methods cannot be used with null elements, or elements with null keys.

Implementing an Element Comparator

For all transactional caches, the atomic methods Cache.removeElement(Element element) and Cache.replace(Element old, Element element) must compare elements for the atomic operation to complete. This requires all objects stored in the cache to override equals() and hashCode().

If overriding these methods is not desirable for your application, a default comparator is used (net.sf.ehcache.store.DefaultElementValueComparator). You can also implement a custom comparator and specify it in the cache configuration with <elementValueComparator>:

<cache name="com.my.package.Foo"
     maxElementsInMemory="500"
     eternal="false"
     overflowToDisk="false"
     copyOnRead="true"
     copyOnWrite="true"
     consistency="strong"
     transactionalMode="xa">
   <elementValueComparator class="com.company.xyz.MyElementComparator" />
   <terracotta clustered="true" valueMode="serialization" />
</cache>

Custom comparators must implement net.sf.ehcache.store.ElementValueComparator.
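
For example, a custom comparator might look like the following sketch (the null-safe value comparison shown is illustrative, and it assumes the interface's equals(Element, Element) method):

package com.company.xyz;

import net.sf.ehcache.Element;
import net.sf.ehcache.store.ElementValueComparator;

public class MyElementComparator implements ElementValueComparator {
  // Compare two elements by their values, treating two nulls as equal.
  public boolean equals(Element e1, Element e2) {
    if (e1 == null || e2 == null) {
      return e1 == e2;
    }
    Object v1 = e1.getObjectValue();
    Object v2 = e2.getObjectValue();
    return v1 == null ? v2 == null : v1.equals(v2);
  }
}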

A comparator can also be specified programmatically.

Working With OSGi

To allow Enterprise Ehcache to behave as an OSGi component, the following attributes should be set as shown:

<cache ... copyOnRead="true" ... >
...
  <terracotta ... clustered="true" valueMode="serialization" ... />
...
</cache>

Your OSGi bundle will require the following JAR files (versions shown are from a Terracotta 3.6.2 kit):

  • ehcache-core-2.5.2.jar
  • ehcache-terracotta-2.5.2.jar
  • slf4j-api-1.6.1.jar
  • slf4j-nop-1.6.1.jar

    Or use another appropriate logger binding.

Use the following directory structure:

 -- net.sf.ehcache
          |
          |- ehcache.xml
          |- ehcache-core-2.5.2.jar
          |- ehcache-terracotta-2.5.2.jar
          |- slf4j-api-1.6.1.jar
          |- slf4j-nop-1.6.1.jar
          |- META-INF/
              |- MANIFEST.MF
The following is an example manifest file:

Manifest-Version: 1.0
Export-Package: net.sf.ehcache;version="2.5.2"
Bundle-Vendor: Terracotta
Bundle-ClassPath: .,ehcache-core-2.5.2.jar,ehcache-terracotta-2.5.2.jar,slf4j-api-1.6.1.jar,slf4j-nop-1.6.1.jar
Bundle-Version: 2.5.2
Bundle-Name: EHCache bundle
Created-By: 1.6.0_15 (Apple Inc.)
Bundle-ManifestVersion: 2
Import-Package: org.osgi.framework;version="1.3.0"
Bundle-SymbolicName: net.sf.ehcache
Bundle-RequiredExecutionEnvironment: J2SE-1.5

Use versions appropriate to your setup.

To create the bundle, execute the following command in the net.sf.ehcache directory:

jar cvfm net.sf.ehcache.jar MANIFEST.MF *