
fix(inkless): POD-1965 Use local log start in DisklessLeaderEndPoint #591

Open

viktorsomogyi wants to merge 1 commit into main from svv/ts-unification-local-lso

Conversation

@viktorsomogyi
Contributor

Fetching with FetchHandler returns the log_start_offset from the coordinator, which is actually the diskless start offset. If we pass this on, ReplicaManager will believe that log_start_offset is the first offset of the log and therefore enforce retention against it.
Instead, we should use the local log start offset to properly respect the offset boundaries and retention.
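
A minimal sketch of the intended mapping, using hypothetical types and names (FetchResult, overlayLogStartOffset, and localLogStartOffset are illustrative, not the actual inkless API): the result built from the coordinator's answer is rewritten to carry the local log's start offset, and the coordinator value is kept, with a log message, only when the local value cannot be resolved.

```scala
// Hypothetical sketch; types and names are illustrative, not the inkless API.
final case class FetchResult(records: Seq[Array[Byte]], logStartOffset: Long)

object LocalLogStartOffsetSketch {
  def overlayLogStartOffset(
      coordinatorResult: FetchResult,    // logStartOffset here is the diskless start offset
      localLogStartOffset: Option[Long]  // derived from the local UnifiedLog, if resolvable
  ): FetchResult =
    localLogStartOffset match {
      case Some(localLso) =>
        // Report the local log start so ReplicaManager sees the real retention boundary.
        coordinatorResult.copy(logStartOffset = localLso)
      case None =>
        // Local lookup failed: keep the coordinator value and surface the inconsistency.
        System.err.println(
          s"Local log start offset unavailable; keeping coordinator value ${coordinatorResult.logStartOffset}")
        coordinatorResult
    }
}
```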

Contributor

Copilot AI left a comment


Pull request overview

Adjusts consolidation fetch behavior so the DisklessLeaderEndPoint does not propagate the coordinator-provided “diskless start offset” as the fetch response logStartOffset, preventing ReplicaManager from treating that as the true log start and incorrectly enforcing retention boundaries.

Changes:

  • Update DisklessLeaderEndPoint.fetch to prefer a locally-derived log start offset (and handle lookup failures).
  • Expand DisklessLeaderEndPointTest coverage around log start offset overlaying and error precedence (see the sketch after this list).
  • Clarify offset semantics via comments in SQL migrations.
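
A rough sketch of the per-partition rule the expanded tests appear to exercise, again with hypothetical types and names rather than the actual production or test API: partitions that fetched successfully report the locally resolved log start offset, partitions without a resolvable local value keep the coordinator-provided one, and a fetch error always takes precedence over the overlay.

```scala
// Hypothetical sketch of the overlay/error-precedence rule; not the actual inkless API.
sealed trait PartitionFetch
final case class Ok(logStartOffset: Long) extends PartitionFetch
final case class Failed(error: String)    extends PartitionFetch

object LsoOverlay {
  def overlayAll(
      fetched: Map[Int, PartitionFetch],  // partition -> diskless fetch outcome
      localLso: Map[Int, Long]            // partition -> local log start offset, where resolved
  ): Map[Int, PartitionFetch] =
    fetched.map {
      case (p, Ok(coordinatorLso)) => p -> Ok(localLso.getOrElse(p, coordinatorLso))  // overlay or fall back
      case (p, failed)             => p -> failed                                     // fetch error wins
    }
}
```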

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.

  • storage/inkless/src/main/resources/db/migration/V11__Init_diskless_log.sql: Adds a clarifying comment for diskless_start_offset in the init request type.
  • storage/inkless/src/main/resources/db/migration/V1__Create_tables.sql: Adds an explanatory comment on logs.log_start_offset (needs consistency with V11 semantics).
  • core/src/main/scala/io/aiven/inkless/consolidation/DisklessLeaderEndPoint.scala: Changes the fetch response mapping to use a locally-derived LSO and adds inconsistency logging.
  • core/src/test/scala/io/aiven/inkless/consolidation/DisklessLeaderEndPointTest.scala: Updates existing fetch tests and adds new cases for LSO behavior under mixed diskless/local lookup outcomes.


Comment thread core/src/main/scala/io/aiven/inkless/consolidation/DisklessLeaderEndPoint.scala Outdated
Comment thread storage/inkless/src/main/resources/db/migration/V1__Create_tables.sql Outdated
@viktorsomogyi force-pushed the svv/ts-unification-local-lso branch from 3454f3f to 4c9d846 on May 13, 2026 13:53
@viktorsomogyi requested a review from Copilot on May 13, 2026 13:53
@viktorsomogyi force-pushed the svv/ts-unification-local-lso branch from 4c9d846 to 7e2313e on May 13, 2026 13:59
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.

Use the local log's log start offset instead of the one stored in the
diskless coordinator. This is needed for consolidating partitions, as they
expect the log to start at the beginning of the already consolidated log in
UnifiedLog.

Fetching with FetchHandler returns the log_start_offset from the coordinator,
which is actually the diskless start offset. If we pass this on,
ReplicaManager will believe that log_start_offset is the first offset of the
log and therefore enforce retention against it.
Instead, we should use the local log start offset to properly respect the
offset boundaries and retention.
