HBASE-29039 Seek past delete markers instead of skipping one at a time (#8001)
junegunn wants to merge 2 commits into apache:master
Conversation
TestVisibilityLabelsWithDeletes is failing, which likely explains the additional changes in #6557. I'll try to fix it, but if it ends up resembling the previous approach, I'll drop this.
When a DeleteColumn or DeleteFamily marker is encountered during a normal user scan, the matcher currently returns SKIP, forcing the scanner to advance one cell at a time. This causes read latency to degrade linearly with the number of accumulated delete markers for the same row or column.

Since these are range deletes that mask all remaining versions of the column, seek past the entire column immediately via columns.getNextRowOrNextColumn(). This is safe because cells arrive in timestamp-descending order, so any puts newer than the delete have already been processed.

For DeleteFamily, also fix getKeyForNextColumn in ScanQueryMatcher to bypass the empty-qualifier guard (HBASE-18471) when the cell is a DeleteFamily marker. Without this, the seek barely advances past the current cell instead of jumping to the first real qualified column.

The optimization is only applied with the plain ScanDeleteTracker, and skipped when:
- seePastDeleteMarkers is true (KEEP_DELETED_CELLS)
- newVersionBehavior is enabled (sequence IDs determine visibility)
- visibility labels are in use (delete/put label mismatch)
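To make the cost difference concrete, here is a minimal stand-alone model of the two strategies. It is an illustrative sketch, not HBase code: the class and method names are invented, and the "cells" are plain strings standing in for KeyValues within one column.

```java
import java.util.ArrayList;
import java.util.List;

public class SkipVsSeek {
    // Build one column's cell list, newest-first: a DeleteColumn marker
    // followed by `maskedVersions` older puts that it masks.
    static List<String> column(int maskedVersions) {
        List<String> cells = new ArrayList<>();
        cells.add("DELETE_COLUMN");
        for (int i = 0; i < maskedVersions; i++) {
            cells.add("PUT");
        }
        return cells;
    }

    // SKIP strategy: the matcher returns SKIP for the marker and for each
    // masked put, so the scanner advances once per cell.
    static int advancesWithSkip(List<String> cells) {
        return cells.size();
    }

    // SEEK strategy: on the marker, a single seek to the next column
    // (columns.getNextRowOrNextColumn() in the patch) jumps past every
    // masked cell, regardless of how many there are.
    static int advancesWithSeek(List<String> cells) {
        return 1;
    }

    public static void main(String[] args) {
        List<String> cells = column(9999);
        System.out.println(advancesWithSkip(cells)); // 10000
        System.out.println(advancesWithSeek(cells)); // 1
    }
}
```

The linear-vs-constant gap is exactly why latency degrades with accumulated markers under SKIP.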
(force-pushed 018a268 to 3a87682)
Fixed by changing the guard:

- !seePastDeleteMarkers && !(deletes instanceof NewVersionBehaviorTracker)
+ !seePastDeleteMarkers && deletes.getClass() == ScanDeleteTracker.class
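The distinction matters because instanceof also matches subclasses, while an exact getClass() comparison does not (in HBase, VisibilityScanDeleteTracker extends ScanDeleteTracker, which is why the visibility case previously slipped through). A stand-alone sketch with stand-in classes mirroring that hierarchy:

```java
public class GuardCheck {
    // Local stand-ins for the real HBase tracker classes.
    static class ScanDeleteTracker {}
    static class VisibilityScanDeleteTracker extends ScanDeleteTracker {}
    static class NewVersionBehaviorTracker {}

    // Old check: only excluded NewVersionBehaviorTracker, so the
    // visibility subclass still qualified for the optimization.
    static boolean oldGuard(Object deletes) {
        return !(deletes instanceof NewVersionBehaviorTracker);
    }

    // New check: only the plain tracker qualifies; subclasses are excluded.
    static boolean newGuard(Object deletes) {
        return deletes.getClass() == ScanDeleteTracker.class;
    }

    public static void main(String[] args) {
        Object plain = new ScanDeleteTracker();
        Object visibility = new VisibilityScanDeleteTracker();
        System.out.println(oldGuard(visibility)); // true  (incorrectly allowed)
        System.out.println(newGuard(visibility)); // false (correctly excluded)
        System.out.println(newGuard(plain));      // true
    }
}
```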
I found a regression with this patch. When scanning across many rows where each row has only one delete marker, scans got slower. The optimization helps when multiple delete markers accumulate for the same row or column, but for the common case of one delete per row, the seek is wasted and the overhead adds up across many rows. Benchmark data (scan time at 300K iterations):
benchmark(:DeleteFamilyDifferentRows) do |i|
  row = i.to_s.to_java_bytes
  T.put(Put.new(row).addColumn(CF, CQ, VALUE))  # one put per row
  T.delete(Delete.new(row))                     # one DeleteFamily marker per row
end
One possible approach: only seek on the second (or n-th) delete marker for the same scope. The first one would still be skipped. Would this kind of heuristic make sense?

Note: On the threshold for switching from skip to seek: based on my benchmarks, seek is roughly 50% more expensive than skip. So the false-positive overhead (i.e. a seek that was unnecessary because there are just N deletes per put) depends on the threshold N. Higher N reduces the relative overhead of false positives, but delays the benefit when markers are truly accumulating (N-1 extra skips per row before seeking kicks in). N=2 or N=3 both seem reasonable, but since we're optimizing for the case where many delete markers accumulate, a higher N like 10 would also work. The first few extra skips are negligible when there are hundreds of markers to seek past. Happy to hear thoughts on what makes sense here.

Note: This patch does not compare qualifiers of contiguous delete markers. Doing so (e.g. exposing a method on …)
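The arithmetic behind the threshold trade-off can be sketched with a toy cost model. The 1.5x seek cost comes from the benchmark numbers quoted above; the class and helper names are invented for illustration:

```java
public class ThresholdOverhead {
    // With threshold N, a run of exactly N markers (the worst false-positive
    // case) costs (N-1) skips plus 1 seek, instead of N skips.
    // Returns the cost relative to pure skipping (1.0 = no overhead).
    static double relativeCost(int n, double seekCost) {
        return ((n - 1) + seekCost) / n;
    }

    public static void main(String[] args) {
        double seekCost = 1.5; // seek is ~50% more expensive than skip
        for (int n : new int[] {1, 2, 3, 10}) {
            System.out.printf("N=%d overhead=%.0f%%%n",
                n, (relativeCost(n, seekCost) - 1) * 100);
        }
    }
}
```

This prints overheads of 50%, 25%, 17%, and 5% for N = 1, 2, 3, and 10, matching the intuition that higher thresholds shrink the false-positive penalty at the cost of N-1 extra skips when markers really do accumulate.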
59ad767 implements the heuristic with N = 3 (i.e. seek after every 3 contiguous delete markers). The regression in the normal case (no redundant delete markers) is fixed (see …).

The performance benefit with many redundant delete markers remains.
(force-pushed 6be48a0 to e7dc782)
Seeking is ~50% more expensive than skipping. When each row has only one DeleteFamily or DeleteColumn marker (the common case), the seek overhead adds up across many rows, causing a ~50% scan regression.

Introduce a counter that tracks consecutive range delete markers per row. Only switch from SKIP to SEEK after seeing SEEK_ON_DELETE_MARKER_THRESHOLD (default 3) markers, indicating actual accumulation. This preserves skip performance for the common case while still optimizing the accumulation case.
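A minimal sketch of that counter logic. The constant name SEEK_ON_DELETE_MARKER_THRESHOLD is from the commit message; the surrounding types and the reset-after-seek behavior are simplified stand-ins, not the actual HBase implementation:

```java
public class SeekHeuristic {
    static final int SEEK_ON_DELETE_MARKER_THRESHOLD = 3;

    enum MatchCode { SKIP, SEEK_NEXT_COL }

    // Consecutive range delete markers (DeleteColumn/DeleteFamily) seen so far.
    private int consecutiveDeleteMarkers = 0;

    // Called for each range delete marker encountered during the scan.
    MatchCode onRangeDeleteMarker() {
        if (++consecutiveDeleteMarkers >= SEEK_ON_DELETE_MARKER_THRESHOLD) {
            consecutiveDeleteMarkers = 0;      // start a new run after the seek
            return MatchCode.SEEK_NEXT_COL;    // markers are accumulating: seek
        }
        return MatchCode.SKIP;                 // common case: cheap one-cell skip
    }

    // Called when a non-marker cell (or a new row) is reached: the run ends.
    void onOtherCell() {
        consecutiveDeleteMarkers = 0;
    }

    public static void main(String[] args) {
        SeekHeuristic m = new SeekHeuristic();
        System.out.println(m.onRangeDeleteMarker()); // SKIP
        System.out.println(m.onRangeDeleteMarker()); // SKIP
        System.out.println(m.onRangeDeleteMarker()); // SEEK_NEXT_COL
        m.onOtherCell();                             // run broken by a put
        System.out.println(m.onRangeDeleteMarker()); // SKIP
    }
}
```

Rows with fewer than 3 consecutive markers never pay the seek cost, while long marker runs trigger a seek for every third marker.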
(force-pushed e7dc782 to 59ad767)
Context
HBASE-30036 (#7993) consolidates redundant delete markers on flush, preventing them from growing unbounded in HFiles. However, markers still accumulate in the memstore before flush, degrading read performance. HBASE-29039 addresses this from the read-path side; both are needed for full coverage.

There is an open PR (#6557), but the review process has stalled. This is an alternative approach with fewer code changes, hopefully making it easier to reach consensus.
Test result
Using the test code in HBASE-30036.
[Benchmark results table: DeleteFamily / DeleteColumnContiguous / DeleteColumnInterleaved scenarios, with a Description column; the numeric results did not survive extraction.]