Hinted Handoff How it works
[Diagram (RF: 3): a write request arrives at the coordinator, which forwards it to Replicas 1–3; one replica is unreachable, so the coordinator stores a hint for it while the others acknowledge the write.]
Hinted Handoff The coordinator stores a hint when: a replica node for the row is known to be down ahead of time, OR a replica node for the row does not respond to the write request.
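The decision above can be sketched in Python (an illustrative model, not Cassandra source; the cutoff mirrors the real max_hint_window_in_ms setting, default three hours):

```python
# Illustrative sketch of the coordinator's hint decision; the function
# name is hypothetical, but the hint-window cutoff mirrors Cassandra's
# max_hint_window_in_ms setting (default 3 hours).
MAX_HINT_WINDOW_MS = 3 * 60 * 60 * 1000

def should_store_hint(replica_responded: bool, down_for_ms: int) -> bool:
    if replica_responded:
        return False  # the replica acknowledged the write, no hint needed
    # Hint for a down (or non-responding) replica, but stop accumulating
    # hints once it has been down longer than the hint window.
    return down_for_ms <= MAX_HINT_WINDOW_MS

assert should_store_hint(replica_responded=False, down_for_ms=0)
assert not should_store_hint(replica_responded=True, down_for_ms=0)
assert not should_store_hint(replica_responded=False,
                             down_for_ms=MAX_HINT_WINDOW_MS + 1)
```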
Hinted Handoff Hints are replayed when: a node is marked alive by the Gossiper, OR every ten minutes, when the node checks for hints covering writes that timed out during an outage too brief for the failure detector to notice through gossip.
Hinted Handoff system.hints
Hinted Handoff system.hints ● target_id : ID of the node the hint is destined for ● hint_id : hint ID (a time-based UUID, so it carries a timestamp) ● message_version : internal messaging service version ● mutation : the actual data being written
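As a sketch, one row of this table can be modeled like so (a hypothetical Python model for illustration, not the driver's API; the message_version value is made up):

```python
import uuid
from dataclasses import dataclass

@dataclass
class Hint:  # hypothetical model of one system.hints row
    target_id: uuid.UUID   # node the hint is destined for
    hint_id: uuid.UUID     # time-based (v1) UUID, so it embeds a timestamp
    message_version: int   # internal messaging service version
    mutation: bytes        # the serialized write to replay later

h = Hint(target_id=uuid.uuid4(), hint_id=uuid.uuid1(),
         message_version=8, mutation=b"...")
assert h.hint_id.version == 1  # v1 UUIDs carry their creation time
```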
Hinted Handoff Hint optimizations ● max_hint_delivery_threads : default 2. Needs to be increased for multi-DC deployments. ● hinted_handoff_throttle_in_kb : default 1024 (maximum throttle in KB per second, per delivery thread)
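In cassandra.yaml these settings look roughly like this (a sketch with the defaults above; max_hint_window_in_ms is the related cutoff that bounds how long hints accumulate, default three hours):

```yaml
# Hinted handoff tuning (defaults shown)
hinted_handoff_enabled: true
max_hint_delivery_threads: 2         # increase for multi-DC clusters
hinted_handoff_throttle_in_kb: 1024  # per second, per delivery thread
max_hint_window_in_ms: 10800000      # stop storing hints after 3 h of downtime
```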
Hinted Handoff Key points ● Hinted handoff is enabled by default ● Hinted handoff is an optional part of a write ● Hinted handoff is an optimization, not a guarantee ● Hints have a TTL ● Hints are independent of the consistency level
Read Repair Goals 1. Ensure that all replicas have the most recent version of frequently-read data. 2. Act as a real-time anti-entropy mechanism. read_repair_chance : default 0.1
Read Repair Global read setup ● Determine which replicas to invoke ○ ConsistencyLevel vs. read repair ● The first data node responds with the full data set; the others send a digest ● The coordinator waits for ConsistencyLevel to be met
Read Repair Consistent reads - algorithm ● Compare digests ● If any mismatch ○ re-request the full data set from the same nodes ○ compare full data sets, send updates to out-of-date replicas ○ block until out-of-date replicas respond ● Return the merged data set to the client
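The algorithm above can be sketched as follows (an illustrative Python model assuming simple last-write-wins cells; digest and merge are hypothetical stand-ins for Cassandra's internals):

```python
import hashlib

def digest(rows):
    """Stand-in for a replica's digest response."""
    return hashlib.md5(repr(sorted(rows.items())).encode()).digest()

def merge(*replicas):
    """Merge replica copies cell by cell; the newest write timestamp wins."""
    out = {}
    for rows in replicas:
        for key, (value, ts) in rows.items():
            if key not in out or ts > out[key][1]:
                out[key] = (value, ts)
    return out

fresh = {"name": ("Alice", 2)}  # cell -> (value, write timestamp)
stale = {"name": ("Alce", 1)}

assert digest(fresh) != digest(stale)  # mismatch -> re-request full data
assert merge(fresh, stale) == fresh    # merged set repairs the stale replica
```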
Read Repair How it works
[Diagram (RF: 2, ConsistencyLevel.QUORUM): a read request reaches the coordinator; Replica 1 answers with an up-to-date value while Replica 2 is out of date, so the coordinator sends a read repair request and Replica 2 ends up up-to-date.]
Read Repair The coordinator sends a read repair when: a replica node for the row has responded with an out-of-date value, OR the read repair chance declared on the column family fires.
Read Repair Read repair configuration ● read_repair_chance : default 0.1 ● dclocal_read_repair_chance : default 0.0 Configured per column family (table)
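The per-query dice roll behind these settings can be pictured like this (an illustrative sketch; the function name and the exact interaction of the two chances are simplifications of Cassandra's behavior):

```python
import random

def read_repair_scope(global_chance=0.1, dc_local_chance=0.0,
                      rng=random.random):
    """Decide, per read, whether to piggyback a repair and at what scope."""
    roll = rng()
    if roll < global_chance:
        return "GLOBAL"    # repair against all replicas, in every DC
    if roll < global_chance + dc_local_chance:
        return "DC_LOCAL"  # repair only against replicas in the local DC
    return "NONE"

assert read_repair_scope(rng=lambda: 0.05) == "GLOBAL"
assert read_repair_scope(global_chance=0.1, dc_local_chance=0.2,
                         rng=lambda: 0.15) == "DC_LOCAL"
assert read_repair_scope(rng=lambda: 0.95) == "NONE"
```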
Read Repair Key points ● A consistent read is part of the read request ● Read repair is a probabilistic mechanism to sync data ● Read repair is configured per column family ● Read repair can be local-DC or global
Anti-entropy repair node Goals 1. Ensure that all data on a replica is made consistent. 2. Repair inconsistencies on a node that has been down for a while. nodetool repair <keyspace> [table] <opts>
Anti-entropy repair node Run nodetool repair: during normal operation, as part of regular, scheduled cluster maintenance; OR during node recovery, after a failure or on a node that has been down for a while; OR on nodes that contain data that is not read frequently.
Anti-entropy repair node How it works ● Determine peer nodes with matching ranges. ● Trigger a major (validation) compaction on the peers. ○ i.e., perform the read phase of the compaction stage ● Read the data and generate a Merkle tree on each peer. ● The initiator awaits the trees from the peers. ● Compare every tree to every other tree. ● If there are any differences, the differing nodes exchange the conflicting ranges.
Anti-entropy repair node Merkle Tree
[Diagram: a binary hash tree. Each partition's data (Data 1–4) is hashed into a leaf (Hash 0-0, 0-1, 1-0, 1-1); parent nodes (Hash 0, Hash 1) hash their children, up to a single Top Hash.]
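The tree in the diagram can be sketched in a few lines (a minimal illustration assuming a power-of-two number of partition ranges, not Cassandra's actual MerkleTree implementation):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(partitions):
    """Hash each partition into a leaf, then pair-wise hash up to the top."""
    level = [h(p) for p in partitions]          # assumes len is a power of two
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

a = [b"data1", b"data2", b"data3", b"data4"]
b = [b"data1", b"data2", b"DATA3", b"data4"]    # one stale partition

assert merkle_root(a) == merkle_root(list(a))   # identical replicas agree
assert merkle_root(a) != merkle_root(b)         # any difference changes the root
```

Two replicas only exchange data for subtrees whose hashes differ, which is why comparing trees is cheap while building them (the validation compaction) is the expensive part.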
Anti-entropy repair node Caveats ● Building the Merkle tree is disk-I/O and CPU intensive ○ due to the validation compaction ● Overstreaming can occur ○ because whole partition ranges are streamed, even for small differences
Anti-entropy repair node Options ● -pr (--partitioner-range) : repairs only the primary partition range of that node. Use -pr for periodic repair maintenance, and run it on every node. Don't use -pr on a recovering node, because the other replicas of that node's data need to be repaired too.
Anti-entropy repair node Options ● -snapshot : only one replica at a time does the computation => sequential repair. Always use -snapshot to avoid overloading the targeted node and its replicas. ● Since 2.0.2, sequential repair is the default behavior ○ the opt-out flag is -par (for parallel repair)