Configuring BigMemory

BigMemory is configured in the Terracotta server's environment and in its configuration file.

Configuring Direct Memory Space

Before starting a Terracotta server that uses off-heap memory, direct memory space (also called direct memory buffers) must be allocated. Direct memory space is allocated using the Java property MaxDirectMemorySize:

-XX:MaxDirectMemorySize=<amount of memory allotted>[m|g]

where "m" stands for megabytes (MB) and "g" stands for gigabytes (GB).

Note the following about allocating direct memory space:

  • MaxDirectMemorySize must be added to the Terracotta server's startup environment. For example, you can add it to the server's Java options in ${TERRACOTTA_HOME}/bin/ or %TERRACOTTA_HOME%\bin\start-tc-server.bat.
  • Direct memory space is part of the Java process's memory but separate from the object heap allocated by -Xmx. The value allocated by MaxDirectMemorySize must not exceed physical RAM, and is likely to be less than total available RAM due to other memory requirements.
  • The amount of direct memory space allocated must be within the constraints of available system memory and configured off-heap memory (see Configuring Off-Heap).
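The distinction between direct memory and the object heap can be observed from Java itself. The following sketch uses the standard NIO API; the buffer sizes are arbitrary:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // allocateDirect draws from the pool capped by MaxDirectMemorySize,
        // not from the -Xmx object heap.
        ByteBuffer direct = ByteBuffer.allocateDirect(32 * 1024 * 1024); // 32MB
        ByteBuffer heap = ByteBuffer.allocate(1024);                     // on-heap

        System.out.println(direct.isDirect()); // true
        System.out.println(heap.isDirect());   // false
    }
}
```

If MaxDirectMemorySize is set too low, the allocateDirect call above fails with an OutOfMemoryError rather than growing the heap.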

Configuring Off-Heap

BigMemory is set up by configuring off-heap memory in the Terracotta configuration file for each Terracotta server, then allocating memory at startup using MaxDirectMemorySize. For example, to allocate up to 9GB of off-heap memory, add the block as shown:

<server host="myHost" name="server1">
  <!-- Allocate 9GB of off-heap memory for clustered data. -->
  <offheap>
    <enabled>true</enabled>
    <maxDataSize>9g</maxDataSize>
  </offheap>
</server>

The amount of configured off-heap memory must be at least 128MB and at least 32MB less than the amount of memory allocated by MaxDirectMemorySize. This is because 32MB of memory is utilized by the server's communication layer.
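These constraints can be expressed as a small sizing check. The helper below is illustrative only, not part of any Terracotta API:

```java
public class DirectMemorySizing {
    // Per the rule above: the server's communication layer uses 32MB of
    // direct memory, so MaxDirectMemorySize must exceed the configured
    // off-heap size by at least that much.
    static final long COMM_LAYER_MB = 32;
    static final long MIN_OFFHEAP_MB = 128;

    static long minDirectMemoryMb(long offheapMb) {
        if (offheapMb < MIN_OFFHEAP_MB) {
            throw new IllegalArgumentException("off-heap must be at least 128MB");
        }
        return offheapMb + COMM_LAYER_MB;
    }

    public static void main(String[] args) {
        // 9GB of configured off-heap needs at least 9248MB of direct memory.
        System.out.println(minDirectMemoryMb(9 * 1024));
    }
}
```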

If, at startup, a server determines that the memory allocated by MaxDirectMemorySize is insufficient, an error similar to the following is logged:

2011-03-28 07:39:59,316 ERROR - The JVM argument -XX:MaxDirectMemorySize(128m) 
cannot be less than TC minimum Direct memory requirement: 202.22m

In this case, you must set MaxDirectMemorySize to a value equal to or greater than the minimum given in the error.

Maximum, Minimum, and Default Values

The maximum amount of direct memory space you can use depends on the process data model (32-bit or 64-bit) and the associated operating system limitations, the amount of virtual memory available on the system, and the amount of physical memory available on the system. While 32-bit systems have strict limitations on the amount of memory that can be effectively managed, 64-bit systems can allow as much memory as the hardware and operating system can handle.

The maximum amount you can allocate to off-heap memory cannot exceed the amount of direct memory space, and should likely be less because direct memory space may be shared with other Java and system processes.

The minimum off-heap you can allocate per server is 160MB.

Notes for 32-Bit Systems

In a 32-bit process model, the amount of heap-offload you can achieve is limited by the addressable memory. The maximum virtual address size of the process is typically 4GB, though most 32-bit operating systems have a 2GB limit. The maximum heap size available to Java is lower still due to particular OS limitations, other operations that may run on the machine (such as mmap operations used by certain APIs), and various JVM requirements for loading shared libraries and other code.

A useful rule to observe is to allocate no more to off-heap memory than what is left over after -Xmx is set. For example, if you set -Xmx3G, then off-heap should be no more than 1GB. Breaking this rule may not cause an OOME on startup, but one is likely to occur at some point during the JVM's life.

Default Value of Direct Memory Space

If you configure off-heap memory but do not allocate direct memory space with -XX:MaxDirectMemorySize, the default value for direct memory space depends on the version of your JVM. Oracle HotSpot has a default equal to the maximum heap size (the -Xmx value), although some early versions may default to a particular value.

Optimizing BigMemory

Note the following recommendations:

  • Thoroughly test BigMemory with your application before going to production. It is recommended that you test BigMemory with the actual amount of data you expect to use in production.
  • Be sure to allot at least 15 percent more off-heap memory to BigMemory than the size of your data set, because BigMemory reserves a portion of off-heap memory for metadata and other purposes.
  • If working with distributed cache, consider using the sizing parameters available through Ehcache configuration.
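
The 15-percent headroom rule above can be turned into a quick sizing calculation; this helper is a sketch, not a Terracotta utility:

```java
public class OffheapHeadroom {
    // Rule of thumb: allot at least 15 percent more off-heap memory
    // than the size of the data set.
    static long recommendedOffheapMb(long dataSetMb) {
        return (long) Math.ceil(dataSetMb * 1.15);
    }

    public static void main(String[] args) {
        // For an 8000MB data set, configure at least 9200MB of off-heap.
        System.out.println(recommendedOffheapMb(8000));
    }
}
```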

If performance or functional issues arise, see the suggested tuning tips in this section.

General Memory Allocation

Committing too much of a system's physical memory is likely to result in paging of virtual memory to disk, quite likely during garbage collection operations, leading to significant performance issues. On systems with multiple Java processes, or multiple processes in general, the sum of the Java heaps and off-heap stores for those processes should also not exceed the size of the physical RAM in the system. Besides memory allocated to the heap, Java processes require memory for other items, such as code (classes), stacks, and PermGen.

Note that MaxDirectMemorySize sets an upper limit for the JVM to enforce, but does not actually allocate the specified memory. Overallocation of direct memory (or buffer) space is therefore possible, and could lead to paging or even memory-related errors. The limit on direct buffer space set by MaxDirectMemorySize should take into account the total physical memory available, the amount of memory that is allotted to the JVM object heap, and the portion of direct buffer space that other Java processes may consume.

Note also that there could be other users of direct buffers (such as NIO and certain frameworks and containers). Consider allocating additional direct buffer memory to account for that additional usage.

Compressed References

For 64-bit JVMs running Java 6 Update 14 or higher, consider enabling compressed references to improve overall performance. For heaps up to 32GB, this feature causes references to be stored at half the size, as if the JVM is running in 32-bit mode, freeing substantial amounts of heap for memory-intensive applications. The JVM, however, remains in 64-bit mode, retaining the advantages of that mode.

For the Oracle HotSpot, compressed references are enabled using the option -XX:+UseCompressedOops. For IBM JVMs, use -Xcompressedrefs.

Swappiness and Huge Pages

An OS could swap data from memory to disk even if memory is not running low. For the purpose of optimization, data that appears to be unused may be a target for swapping. Because BigMemory can store substantial amounts of data in RAM, its data may be swapped by the OS. But swapping can degrade overall cluster performance by introducing thrashing, the condition where data is frequently moved back and forth between memory and disk.

To make heap memory use more efficient, Linux, Microsoft Windows, and Oracle Solaris users should review their configuration and usage of swappiness, as well as the size of the swapped memory pages. In general, BigMemory benefits from lowered swappiness and the use of huge pages (also known as big pages, large pages, and superpages). Settings for these behaviors vary by OS and JVM. For Oracle HotSpot, -XX:+UseLargePages and -XX:LargePageSizeInBytes=<size> (where <size> is a value allowed by the OS for specific CPUs) can be used to control page size. However, note that this setting does not affect how off-heap memory is allocated. Over-allocating huge pages while also configuring substantial off-heap memory can starve off-heap allocation and lead to memory and performance problems.

Maximum Serialized Size of an Element

This section applies when using BigMemory through the Ehcache API.

Unlike the memory and disk stores, the off-heap store by default has a 4MB limit for classes with high-quality hashcodes, and a 256KB limit for those with pathologically bad hashcodes. Built-in classes such as String and the java.lang.Number subclasses Long and Integer have high-quality hashcodes. This can cause issues when objects are expected to be larger than the default limits.

To override the default size limits, set the system property net.sf.ehcache.offheap.cache_name.config.idealMaxSegmentSize to the size you require.

For example, to allow serialized elements up to 30MB in a given cache, pass a JVM argument such as the following (substituting your cache's name for cache_name):

-Dnet.sf.ehcache.offheap.cache_name.config.idealMaxSegmentSize=30M
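The property can also be set programmatically. A minimal sketch, assuming a hypothetical cache named "largeObjects"; the property must be set before the CacheManager is created:

```java
public class SegmentSizeOverride {
    public static void main(String[] args) {
        // "largeObjects" is a hypothetical cache name; substitute your own.
        System.setProperty(
            "net.sf.ehcache.offheap.largeObjects.config.idealMaxSegmentSize",
            "30M");
        System.out.println(System.getProperty(
            "net.sf.ehcache.offheap.largeObjects.config.idealMaxSegmentSize"));
    }
}
```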