Pivotal GemFire® v9.1

Transactions by Region Type

A transaction is managed on a per-cache basis, so multiple regions in the cache can participate in a single transaction. The data scope of a Geode cache transaction is the cache that hosts the transactional data. For partitioned regions, this may be a member remote from the one running the transaction application. Any transaction that includes one or more partitioned regions is run on the member storing the primary copy of the partitioned region data. Otherwise, the transaction host is the member running the application.

  • The client executing the transaction code is called the transaction initiator.

  • The member contacted by the transaction initiator is called the transaction delegate.

  • The member that hosts the data—and the transaction—is called the transaction host.

The transaction host may be the same member as the transaction initiator or a different one. In either case, when the transaction commits, data distribution is done from the transaction host in the same way.

Note: If you have consistency checking enabled in your region, the transaction will generate all necessary version information for the region update when the transaction commits. See Transactions and Consistent Regions for more details.
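
Regardless of which member ends up hosting the transaction, the application code is the same. The following is a minimal sketch of running a transaction through the CacheTransactionManager API; the region name and keys are hypothetical placeholders:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;

public class BasicTransaction {
  public static void main(String[] args) {
    Cache cache = new CacheFactory().create();
    // Hypothetical partitioned region; Geode selects the transaction
    // host based on where this data's primary buckets live.
    Region<String, Integer> region =
        cache.<String, Integer>createRegionFactory(RegionShortcut.PARTITION)
             .create("example");

    CacheTransactionManager txManager = cache.getCacheTransactionManager();
    txManager.begin();
    try {
      // All operations between begin() and commit() join one transaction.
      region.put("Y", 1);
      region.put("Z", 2);
      txManager.commit();
    } catch (RuntimeException e) {
      // Clean up if the transaction is still in progress on this thread.
      if (txManager.exists()) {
        txManager.rollback();
      }
      throw e;
    } finally {
      cache.close();
    }
  }
}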

Transactions and Partitioned Regions

In partitioned regions, transaction operations are done first on the primary data store then distributed to other members from there, regardless of which member initiates the cache operation. This is the same as is done for normal cache operations on partitioned regions.

In this figure, M1 runs two transactions.

  • The first transaction, T1, works on data whose primary buckets are stored in M1, so M1 is the transaction host.
  • The second transaction, T2, works on data whose primary buckets are stored in M2, so M1 is the transaction delegate and M2 is the transaction host.

Transaction on a Partitioned Region:

The transaction is managed on the transaction host. This includes the transactional view, all operations, and all local cache event handling. In this example, when T2 is committed, the data on M2 is updated and the transaction events are distributed throughout the system, exactly as if the transaction had originated on M2.

The first region operation within the transaction determines the transaction host. All other operations must also work with that as their transaction host:

  • All partitioned region data managed inside the transaction must use the transaction host as its primary data store. In the example, if transaction T2 tried to work on entry W in addition to entries Y and Z, a TransactionDataNotColocatedException would be thrown (see the sketch after this list). For information on partitioning data so it is properly colocated for transactions, see Understanding Custom Partitioning and Data Colocation. In addition, the data must not be moved during the transaction. Design partitioned region rebalancing to avoid rebalancing while transactions are running. See Rebalancing Partitioned Region Data.
  • All non-partitioned region data managed inside the transaction must be available on the transaction host and must be distributed. Operations on regions with local scope are not allowed in transactions with partitioned regions.
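
The following sketch shows how a colocation violation surfaces to the application; the region name and keys are hypothetical, with "W" assumed to have its primary bucket on a different member than "Y" and "Z", as in the figure:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.TransactionDataNotColocatedException;

void updateEntries(Cache cache) {
  Region<String, Integer> region = cache.getRegion("example");
  CacheTransactionManager txManager = cache.getCacheTransactionManager();

  txManager.begin();
  try {
    region.put("Y", 1);
    region.put("Z", 2);
    region.put("W", 3); // primary bucket on a different member
    txManager.commit();
  } catch (TransactionDataNotColocatedException e) {
    // The operations spanned more than one transaction host; roll back
    // and restructure the data or the transaction.
    txManager.rollback();
  }
}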

The next figure shows a transaction that operates on two partitioned regions and one replicated region. As with the single region example, all local event handling is done on the transaction host.

For a transaction to work, the first operation must be on one of the partitioned regions, to establish M2 as the transaction host. Running the first operation on a key in the replicated region would instead set M1 as the transaction host, and subsequent operations on the partitioned region data would fail with a TransactionDataNotColocatedException.

Transaction on a Partitioned Region with Other Regions:

Transactions and Replicated Regions

For replicated regions, the transaction and its operations are applied to the local member and the resulting transaction state is distributed to other members according to the attributes of each region.

Note: If possible, use distributed-ack scope for your regions where you will run transactions. The REPLICATE region shortcuts use distributed-ack scope.

The region’s scope affects how data is distributed during the commit phase. Transactions are supported for these region scopes (a configuration sketch follows the list):

  • distributed-ack. Handles transactional conflicts both locally and between members. The distributed-ack scope is designed to protect data consistency. This scope provides the highest level of coordination among transactions in different members. When the commit call returns for a transaction run on all distributed-ack regions, you can be sure that the transaction’s changes have already been sent and processed. In addition, any callbacks in the remote member have been invoked.
  • distributed-no-ack. Handles transactional conflicts locally, with less coordination between members. This provides the fastest transactions with distributed regions, but it does not work for all situations. This scope is appropriate for:
    • Applications with only one writer
    • Applications with multiple writers that write to nonoverlapping data sets
  • local. No distribution; handles transactional conflicts locally. Transactions on regions with local scope have no distribution, but they perform conflict checks in the local member. Two threads in the same member can conflict when their transactions change the same entry.
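
As a sketch, the scope can be set programmatically when the region is created; the region name here is hypothetical, and the REPLICATE shortcut already defaults to distributed-ack scope:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionShortcut;
import org.apache.geode.cache.Scope;

Cache cache = new CacheFactory().create();

// REPLICATE shortcuts use distributed-ack scope already; setting it
// explicitly here just makes the choice visible.
Region<String, String> region =
    cache.<String, String>createRegionFactory(RegionShortcut.REPLICATE)
         .setScope(Scope.DISTRIBUTED_ACK)
         .create("txRegion");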

Members with non-replicated regions (regions that use the old API with DataPolicy EMPTY, NORMAL or PRELOADED) are always transaction initiators, and the transaction host is always a member that holds a replicated copy of the region. This is similar to the way transactions using the PARTITION_PROXY shortcut are forwarded to the member with the primary bucket.

Note: When you have transactions operating on EMPTY, NORMAL or PARTITION regions, make sure that the Geode property conserve-sockets is set to false to avoid distributed deadlocks. An empty region is a region created with the API RegionShortcut.REPLICATE_PROXY or a region that uses the old API with DataPolicy set to EMPTY.
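
As a sketch, conserve-sockets can be set when an embedded cache is created; it can equally be set in gemfire.properties or on the server command line with --J=-Dgemfire.conserve-sockets=false:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheFactory;

// conserve-sockets=false gives each thread its own sockets rather than
// sharing a pool, avoiding the distributed deadlocks described above.
Cache cache = new CacheFactory()
    .set("conserve-sockets", "false")
    .create();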

Conflicting Transactions in Distributed-Ack Regions

In this series of figures, even after the commit operation is launched, the transaction continues to exist during the data distribution (step 3). The commit does not complete until the changes are made in the remote caches and M1 receives the acknowledgement that verifies that the tasks are complete.

Step 1: Before commit, Transactions T1 and T2 each change the same entry in Region B within their local cache. T1 also makes a change to Region A.

Step 2: Conflict detected and eliminated. The distributed system recognizes the potential conflict from Transactions T1 and T2 using the same entry. T1 started to commit first, so it is allowed to continue. T2’s commit fails with a conflict.

Step 3: Changes are in transit. T1 commits and its changes are merged into the local cache. The commit does not complete until Geode distributes the changes to the remote regions and acknowledgment is received.

Step 4: After commit. Region A in M2 and Region B in M3 reflect the changes from transaction T1 and M1 has received acknowledgment. Results may not be identical in different members if their region attributes (such as expiration) are different.
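
On the member whose transaction loses this race, the commit throws a CommitConflictException, which the application can catch and retry. The sketch below shows a common retry pattern; runWithRetry and doWork are hypothetical names, with doWork standing in for the application's region operations:

import org.apache.geode.cache.Cache;
import org.apache.geode.cache.CacheTransactionManager;
import org.apache.geode.cache.CommitConflictException;

// Retry a transaction that fails with a commit conflict. Geode has
// already rolled the conflicting transaction back when commit() throws.
void runWithRetry(Cache cache, Runnable doWork, int maxAttempts) {
  CacheTransactionManager txManager = cache.getCacheTransactionManager();
  for (int attempt = 1; attempt <= maxAttempts; attempt++) {
    txManager.begin();
    try {
      doWork.run();
      txManager.commit();
      return; // success
    } catch (CommitConflictException e) {
      // Another transaction committed a change to the same entry first;
      // loop around and try again with fresh reads.
    }
  }
  throw new IllegalStateException("Transaction conflicted " + maxAttempts + " times");
}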

Conflicting Transactions in Distributed-No-Ack Regions

These figures show how using the no-ack scope can produce unexpected results. These two transactions are operating on the same region B entry. Since they use no-ack scope, the conflicting changes cross paths and leave the data in an inconsistent state.

Step 1: Before commit, as in the previous example, Transactions T1 and T2 each change the same entry in Region B within their local cache. T1 also makes a change to Region A. This time, neither commit will fail, and the data will become inconsistent.

Step 2: Changes are in transit. Transactions T1 and T2 commit and merge their changes into the local cache. Geode then distributes changes to the remote regions.

Step 3: Distribution is complete. The non-conflicting changes in Region A have been distributed to M2 as expected. For Region B however, T1 and T2 have traded changes, which is not the intended result.

Conflicting Transactions with Local Scope

When transactions conflict with local scope, the first transaction to start the commit process completes, and the other transaction’s commit fails with a conflict. In the diagram below, the resulting value for entry Y depends on which transaction commits first.

Transactions and Persistent Regions

By default, Geode does not allow transactions on persistent regions. You can enable the use of transactions on persistent regions by setting the property gemfire.ALLOW_PERSISTENT_TRANSACTIONS to true. This may also be accomplished at server startup using gfsh:

gfsh start server --name=server1 --dir=server1_dir \
--J=-Dgemfire.ALLOW_PERSISTENT_TRANSACTIONS=true 
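
For an embedded member, the same flag can be set as a Java system property before the cache is created; this sketch shows the equivalent configuration:

// The system property must be set before the cache is created.
System.setProperty("gemfire.ALLOW_PERSISTENT_TRANSACTIONS", "true");
Cache cache = new org.apache.geode.cache.CacheFactory().create();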

Since Geode does not provide atomic disk persistence guarantees, the default behavior is to disallow disk-persistent regions from participating in transactions. However, when choosing to enable transactions on persistent regions, consider the following:

  • Geode does ensure atomicity for in-memory updates.
  • When a failed member is unable to complete the logic triggered by a transaction (including subsequent disk writes), that member is removed from the distributed system and, if restarted, must rebuild its state from surviving nodes that successfully completed the updates.
  • The chance that multiple nodes crash for unrelated reasons and so fail to complete the disk writes resulting from a transaction commit is small. The real risk is that the file system buffers holding the persistent updates are not written to disk in the case of operating system or hardware failure. If only the Geode process crashes, atomicity still holds. The overall risk of losing disk updates can also be mitigated by enabling synchronous write mode for the disk stores, but this incurs a high performance penalty.

To mitigate the risk of data not being fully written to disk on all copies of the participating persistent disk stores:

  • Make sure you have enough redundant copies of the data. Because each distributed in-memory copy is updated atomically as part of the transaction commit sequence, redundancy helps guard against data corruption.
  • When executing transactions on persistent regions, we recommend using a TransactionWriter to log all transactions along with a time stamp, as sketched below. This allows you to recover in the event that all nodes fail simultaneously while a transaction is being committed. You can use the log to recover the data manually.
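
The following is a minimal sketch of such a writer, assuming java.util.logging as the log destination; production code would write to durable storage. The writer runs before each commit on the transaction host:

import java.util.Properties;
import java.util.logging.Logger;
import org.apache.geode.cache.CacheEvent;
import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.TransactionEvent;
import org.apache.geode.cache.TransactionWriter;
import org.apache.geode.cache.TransactionWriterException;

public class LoggingTransactionWriter implements TransactionWriter {
  private static final Logger log = Logger.getLogger("tx-log");

  @Override
  public void beforeCommit(TransactionEvent event) throws TransactionWriterException {
    // Record a time stamp, the transaction id, and each entry operation.
    StringBuilder line = new StringBuilder();
    line.append(System.currentTimeMillis()).append(' ').append(event.getTransactionId());
    for (CacheEvent<?, ?> op : event.getOperations()) {
      if (op instanceof EntryEvent) {
        EntryEvent<?, ?> entryOp = (EntryEvent<?, ?>) op;
        line.append(' ').append(entryOp.getOperation()).append(':').append(entryOp.getKey());
      }
    }
    log.info(line.toString());
  }

  @Override
  public void init(Properties props) {
    // No configuration needed for this sketch.
  }

  @Override
  public void close() {
    // No resources to release.
  }
}

To register the writer:

cache.getCacheTransactionManager().setWriter(new LoggingTransactionWriter());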