You can make configuration adjustments to improve multicast and unicast UDP performance of peer-to-peer communication.
You can tune your VMware GemFire UDP messaging to maximize throughput. There are two main tuning goals: to use the largest reasonable datagram packet sizes and to reduce retransmission rates. These actions reduce messaging overhead and overall traffic on your network while still getting your data where it needs to go. VMware GemFire also provides statistics to help you decide when to change your UDP messaging settings.
Before you begin, you should understand VMware GemFire Basic Configuration and Programming. See also the general communication tuning and multicast-specific tuning covered in Socket Communication and Multicast Communication.
You can change the UDP datagram size with the VMware GemFire property
udp-fragment-size. This is the maximum packet size for transmission over UDP unicast or multicast sockets. When possible, smaller messages are combined into batches up to the size of this setting.
Most operating systems set a maximum transmission size of 64k for UDP datagrams, so this setting should be kept under 60k to allow for communication headers. Setting the fragment size too high can result in extra network traffic if your network is subject to packet loss, as more data must be resent for each retransmission. If many UDP retransmissions appear in DistributionStats, you may achieve better throughput by lowering the fragment size.
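For example, the property can be set in gemfire.properties (the value shown is illustrative; check your release's documented default and limits):

```properties
# Keep the fragment size under the ~64k OS datagram limit,
# leaving headroom for communication headers.
udp-fragment-size=60000
```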
UDP protocols typically have a flow-control protocol built into them to keep processes from being overrun by incoming no-ack messages. The VMware GemFire UDP flow-control protocol is a credit-based system in which the sender has a maximum number of bytes it can send before getting its byte credit count replenished, or recharged, by its receivers. While its byte credits are too low, the sender waits. The receivers do their best to anticipate the sender’s recharge requirements and provide recharges before they are needed. If the sender’s credits run too low, it explicitly requests a recharge from its receivers.
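As a rough sketch of how such a credit-based scheme behaves (this is illustrative Java, not GemFire's implementation; Sender, Receiver, and all names here are hypothetical):

```java
// Illustrative model of credit-based flow control: the sender spends byte
// credits on each send, and receivers replenish (recharge) the allowance.
class Sender {
    final long byteAllowance; // maximum bytes sendable before a recharge
    long credits;

    Sender(long byteAllowance) {
        this.byteAllowance = byteAllowance;
        this.credits = byteAllowance;
    }

    // Deduct credits for a send; false means the sender must wait, and after
    // rechargeBlockMs it would explicitly request a recharge from receivers.
    boolean trySend(int messageBytes) {
        if (credits < messageBytes) {
            return false;
        }
        credits -= messageBytes;
        return true;
    }

    void recharge() {
        credits = byteAllowance; // receivers replenish the full allowance
    }
}

class Receiver {
    final double rechargeThreshold;

    Receiver(double rechargeThreshold) {
        this.rechargeThreshold = rechargeThreshold;
    }

    // Receivers anticipate the sender's needs: recharge once the remaining
    // credit ratio drops below rechargeThreshold.
    boolean shouldRecharge(Sender s) {
        return (double) s.credits / s.byteAllowance < rechargeThreshold;
    }
}

public class FlowControlSketch {
    public static void main(String[] args) {
        Sender sender = new Sender(1000);
        Receiver receiver = new Receiver(0.25);
        sender.trySend(800);                                 // credits fall to 200
        System.out.println(receiver.shouldRecharge(sender)); // true: 0.2 < 0.25
        sender.recharge();
        System.out.println(sender.trySend(900));             // true: credits replenished
    }
}
```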
This flow-control protocol, which is used for all multicast and unicast no-ack messaging, is
configured using a three-part VMware GemFire property
mcast-flow-control. This property is composed of:
- byteAllowance—Determines how many bytes (also referred to as credits) can be sent before receiving a recharge from the receiving processes.
- rechargeThreshold—Sets a lower limit on the ratio of the sender’s remaining credit to its byteAllowance. When the ratio goes below this limit, the receiver automatically sends a recharge. This reduces recharge request messaging from the sender and helps keep the sender from blocking while waiting for recharges.
- rechargeBlockMs—Tells the sender how long to wait while needing a recharge before explicitly requesting one.
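The three parts are supplied as a single comma-separated value. An illustrative gemfire.properties entry (the values shown are examples, not recommendations; consult your release's documented defaults):

```properties
# byteAllowance, rechargeThreshold, rechargeBlockMs
mcast-flow-control=1048576,0.25,5000
```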
In a well-tuned system, where consumers of cache events are keeping up with producers, the
byteAllowance can be set high to limit flow-control messaging and pauses. JVM bloat or frequent message retransmissions are an indication that cache events from producers are overrunning consumers.
VMware GemFire stores retransmission statistics for its senders and receivers. You can use these statistics to help determine whether your flow control and fragment size settings are appropriate for your system.
The retransmission rates are stored in the DistributionStats ucastRetransmits and
mcastRetransmits. For multicast, there is also a receiver-side statistic
that can be used to see which processes aren’t keeping up and are requesting retransmissions. There
is no comparable way to tell which receivers are having trouble receiving unicast UDP messages.