fix: Trim tar record padding to avoid broken-pipe failure on Podman (#1684)
Walkthrough: Replaced Temporalio solution project entries with Temporal equivalents.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/Testcontainers/Containers/TarOutputMemoryStream.cs`:
- Line 45: Replace the platform-dependent `Encoding.Default` passed in the
`TarOutputMemoryStream` constructor's base call with `Encoding.UTF8`, so tar
entry names are always encoded as UTF-8 and non-ASCII filenames remain
consistent across platforms.
In `@Testcontainers.slnx`:
- Line 69: This change includes an out-of-scope project rename: the solution
entries for Testcontainers.Temporalio (and its test project) were renamed to
Testcontainers.Temporal. Either revert the rename so this bugfix PR only
contains the TarOutputMemoryStream.cs fix, or split the rename into a separate
commit/PR and document it in the PR description.
By the way, I've fixed the broken reference to the Temporal projects in the …
🧹 Nitpick comments (1)
src/Testcontainers/Images/DockerfileArchive.cs (1)
195-197: LGTM — consistent record sizing with TarOutputMemoryStream.
Aligning the Dockerfile archive to a 512 B record size is the right move: the produced archive is also streamed to the build endpoint, so it benefits from the same Podman broken-pipe mitigation.
Optional nit: this is the only place DockerfileArchive (Images) reaches into TarOutputMemoryStream (Containers), creating a small cross-module coupling just to share the literal 1. If you'd like to keep namespaces independent, a private const local to this file (or a shared internal helper) would work equally well. Non-blocking.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/Testcontainers/Images/DockerfileArchive.cs`:
- Around line 195-197: The code in DockerfileArchive is referencing
TarOutputMemoryStream.TarBlockFactor from another module to set the
TarOutputStream block factor; to avoid cross-module coupling, introduce a
private const int (e.g., TarBlockFactor = 1) inside the DockerfileArchive
class/file and use that constant when constructing TarOutputStream instead of
TarOutputMemoryStream.TarBlockFactor; update the TarOutputStream(...) call site
in DockerfileArchive (where TarOutputMemoryStream.TarBlockFactor is currently
used) to use the new private const.
Merged e72bacc into testcontainers:develop
Problem

When running MongoDb replica-set tests (or any container that uses WithResourceMapping) against a Podman daemon, the container start intermittently fails with an HTTP 500 broken-pipe error. The failure is flaky: it is rare on an idle host, but becomes near-certain when other containers using the same image are already running. In the MongoDb test suite specifically, the four replica-set test classes fail while the twelve non-replica-set tests pass, because the replica-set code path is the only one that calls WithResourceMapping (to upload a keyfile init script).

Fix

Set blockFactor: 1 in TarOutputMemoryStream's base constructor call (src/Testcontainers/Containers/TarOutputMemoryStream.cs).

With blockFactor = 1, the record size equals the block size (512 bytes). Each block is flushed to the underlying MemoryStream immediately, and WriteFinalRecord finds no partial record to pad. The emitted archive is exactly the entry headers and data followed by the two 512-byte end-of-archive zero blocks, with no trailing padding. After the subprocess reads the second EOF block and exits, the HTTP sender has nothing more to write. The race window is closed.
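That layout can be sanity-checked with Python's tarfile module, used here purely as a stand-in tar writer/reader (the actual fix is in SharpZipLib's TarOutputStream on the .NET side). The sketch builds a small archive, strips the trailing record padding so only headers, data, and the two EOF zero blocks remain, and confirms a standard reader still accepts the unpadded stream:

```python
import io
import tarfile

# Build a small archive containing one ~100-byte entry.
payload = b"x" * 100
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="keyfile.sh")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

full = buf.getvalue()  # Python pads to a 10 240-byte record by default, too.

# Keep only header (512) + data (512) + two EOF zero blocks (1024).
minimal = full[:512 + 512 + 1024]
assert set(full[len(minimal):]) == {0}  # everything stripped was zero padding

# A standard reader accepts the unpadded, block-factor-1-style stream.
with tarfile.open(fileobj=io.BytesIO(minimal), mode="r") as tar:
    member = tar.getmembers()[0]
    assert tar.extractfile(member).read() == payload
```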
The on-the-wire format is unchanged in every other respect. The tar specification does not mandate any particular record size for archive consumers; all conformant readers (GNU tar, BSD tar, Go archive/tar, Docker's in-process reader, Podman's reader) accept a block-factor-1 archive. This is a strictly smaller byte stream, not a malformed one, so it is safe on Docker as well.

Impact
All WithResourceMapping callers benefit, not just MongoDb: any module that uploads bytes or files into a container on Podman was susceptible to the same race.

Related issues

References
Root cause
This is a confirmed race condition in Podman's containers/buildah copier package; see containers/buildah#6573.

Podman's PUT /containers/{id}/archive handler pipes the HTTP request body into a tar subprocess via io.Copy. The subprocess reads the body through tar.Reader; as soon as Next() returns io.EOF (after the two 512-byte end-of-archive zero blocks), the subprocess exits and closes the read end of the pipe. If the HTTP sender is still writing bytes at that moment, it gets EPIPE, which surfaces as the 500 above.

Our archive producer (TarOutputMemoryStream, backed by SharpZipLib's TarOutputStream) uses a block factor of 20 by default, which causes SharpZipLib to pad the finished archive up to the next 10 240-byte record boundary after the two EOF zero blocks. For a small file (like the ~100-byte keyfile init script) that produces roughly 8 KB of trailing zero padding that serves no purpose but gives Podman's race condition something to EPIPE on.

The race is timing-dependent: on a busy host the write is slower relative to the read, so the subprocess wins the race more often and the error appears. The upstream Buildah fix (containers/buildah#6678, merged 2026-02-11) drains the full body before exiting the subprocess, but as of this writing it is not yet included in any tagged Buildah or Podman release.
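The "roughly 8 KB" figure follows directly from the record arithmetic. A minimal sketch of that math (constants mirror the tar format and the block factors discussed above; this is illustration, not the SharpZipLib implementation):

```python
BLOCK = 512               # tar block size
FACTOR = 20               # SharpZipLib's default block factor
RECORD = BLOCK * FACTOR   # one 10 240-byte record

# A ~100-byte keyfile: one header block + one data block + two EOF zero blocks.
payload_blocks = 1                            # 100 bytes rounds up to one block
archive = BLOCK * (1 + payload_blocks + 2)    # 2 048 bytes of real content

# With blockFactor = 20 the writer pads up to the next record boundary.
padded = -(-archive // RECORD) * RECORD       # ceiling to a record multiple
trailing_zeros = padded - archive             # the useless padding Podman races on

# With blockFactor = 1 the record equals the block, so padding is always zero.
assert archive == 2048
assert padded == 10240
assert trailing_zeros == 8192
```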
The same failure has been observed in the testcontainers-java ecosystem: testcontainers/testcontainers-java#6640 (closed as "not planned"; workaround was test-level retry).