AVRO-4241: [Java] BinaryDecoder should verify available bytes before reading #3725
iemejia wants to merge 3 commits into apache:main
Conversation
Force-pushed from ef9c52a to ce192d2
…reading Add ensureAvailableBytes() pre-check in readString, readBytes, readArrayStart, arrayNext, readMapStart, and mapNext to verify the source has sufficient data before proceeding. Byte-array-backed sources return an exact remaining count. Stream-backed sources return buffered bytes plus InputStream.available(), which is reliable for the finite streams used by DataFileReader and DataFileStream. Includes regression tests and updated array/map limit tests.
Force-pushed from ce192d2 to 8471fc2
To cherry-pick to 1.12!
Pull request overview
This PR strengthens Java binary decoding against truncated/malicious inputs by adding “bytes remaining” awareness and using it to fail fast (EOF) before allocating large buffers or collection backing structures.
Changes:
- Add a `Decoder#remainingBytes()` API (default `-1`) and plumb it through `BinaryDecoder` and `ValidatingDecoder`.
- Add early "ensure available bytes" checks for `BinaryDecoder.readString`/`readBytes`, and schema-aware prechecks for array/map block counts in `GenericDatumReader`.
- Add/adjust regression tests covering string/bytes length validation and array/map byte-limit behavior.
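The general shape of such a precheck can be sketched as follows. This is a hypothetical illustration, not the PR's actual code: the `ByteSource` interface and `ensureAvailable` helper are invented names standing in for the `remainingBytes()` idea, where `-1` means "unknown" and a declared length is rejected only when it provably exceeds what the source can still supply.

```java
import java.io.EOFException;
import java.io.IOException;

class LengthPrecheck {
  /** Hypothetical stand-in for a decoder source: remaining bytes, or -1 when unknown. */
  interface ByteSource {
    long remainingBytes();
  }

  /** Fail fast with EOFException before allocating a buffer of declaredLength bytes. */
  static void ensureAvailable(ByteSource src, long declaredLength) throws IOException {
    long remaining = src.remainingBytes();
    // A negative result means "unknown"; only reject when we know for sure.
    if (remaining >= 0 && declaredLength > remaining) {
      throw new EOFException(
          "Length " + declaredLength + " exceeds remaining bytes " + remaining);
    }
  }
}
```

The key design point is that the check is conservative: when the source cannot report a remaining count, the read proceeds as before and fails later in the usual way.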
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| lang/java/avro/src/main/java/org/apache/avro/io/Decoder.java | Introduces remainingBytes() default API for decoders. |
| lang/java/avro/src/main/java/org/apache/avro/io/BinaryDecoder.java | Implements remainingBytes() and adds early byte-availability checks for string/bytes reads. |
| lang/java/avro/src/main/java/org/apache/avro/io/ValidatingDecoder.java | Delegates remainingBytes() to the underlying decoder. |
| lang/java/avro/src/main/java/org/apache/avro/generic/GenericDatumReader.java | Adds schema-aware byte validation for arrays/maps before allocating/reading blocks. |
| lang/java/avro/src/main/java/org/apache/avro/util/ByteBufferInputStream.java | Implements available() to support accurate remaining-byte reporting. |
| lang/java/avro/src/test/java/org/apache/avro/io/TestBinaryDecoder.java | Adds tests ensuring EOF is thrown before large allocations for string/bytes. |
| lang/java/avro/src/test/java/org/apache/avro/generic/TestGenericDatumReader.java | Adds unit tests for minBytesPerElement and end-to-end collection byte validation. |
int buffered = ba.getLim() - ba.getPos();
try {
  if (in.getClass() == ByteArrayInputStream.class || in.getClass() == ByteBufferInputStream.class) {
    return buffered + in.available();
int minBytesPerEntry = 1 + minBytesPerElement(valueSchema);
if (count > 0) {
  int remaining = decoder.remainingBytes();
  if (remaining >= 0 && count * (long) minBytesPerEntry > remaining) {
*/
private static void ensureAvailableMapBytes(Decoder decoder, long count, Schema valueSchema) throws EOFException {
  // Map keys are always strings: at least 1 byte for the length varint
  int minBytesPerEntry = 1 + minBytesPerElement(valueSchema);
go with copilot and use longs here to avoid problems on very large files
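The overflow concern can be illustrated with a small sketch (hypothetical names, not the PR's code): instead of multiplying `count * minBytesPerEntry`, which can wrap around for very large counts, divide the known remaining byte count by the per-entry minimum. For positive divisors, `count * min > remaining` is exactly equivalent to `count > remaining / min` under integer division, and the division can never overflow.

```java
import java.io.EOFException;

class CollectionPrecheck {
  /**
   * Overflow-safe version of "count * minBytesPerEntry > remaining".
   * Assumes minBytesPerEntry > 0 and remaining < 0 meaning "unknown".
   */
  static void ensure(long count, long minBytesPerEntry, long remaining) throws EOFException {
    // Dividing instead of multiplying avoids long overflow for huge counts.
    if (remaining >= 0 && count > remaining / minBytesPerEntry) {
      throw new EOFException("Block of " + count
          + " entries cannot fit in " + remaining + " remaining bytes");
    }
  }
}
```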
private long arrayNext(ResolvingDecoder in, Schema elementType) throws IOException {
  long l = in.arrayNext();
  if (l > 0) {
    ensureAvailableCollectionBytes(in, l, elementType);
if this is a no-op on l &lt;= 0 then you wouldn't need to guard all the uses
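The reviewer's suggestion can be sketched like this (simplified signature, not the PR's actual helper): move the `count <= 0` check inside the validation method so it becomes a no-op for empty or terminal blocks, letting every call site drop its `if (l > 0)` guard.

```java
import java.io.EOFException;

class GuardlessPrecheck {
  /** Simplified sketch: callers no longer need to guard on count > 0 themselves. */
  static void ensureAvailableCollectionBytes(long count, long minBytesPerEntry, long remaining)
      throws EOFException {
    if (count <= 0) {
      return; // no-op for empty blocks and end-of-collection markers
    }
    if (remaining >= 0 && count > remaining / minBytesPerEntry) {
      throw new EOFException("Insufficient bytes for collection block of " + count + " entries");
    }
  }
}
```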
}

@Override
public int available() throws IOException {
there's a real oddness with available(): some interpret it as "all that is left in the stream", but it can also be interpreted as "bytes you can read() before blocking for new data". that's probably the correct reading.
it does hold here; it's just important not to use available() as a measure of how much is left in a stream, which may be larger. looks like you are doing the right thing and testing it later on.
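The distinction the reviewer draws is easy to demonstrate: for a `ByteArrayInputStream`, `available()` really does return everything left in the stream, which is why the remaining-bytes estimate is exact for byte-array-backed sources. The general `InputStream` contract only promises bytes readable without blocking, so the same assumption would be unsafe for, say, a socket stream. The demo class below is illustrative, not from the PR.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;

class AvailableDemo {
  /** For byte-array-backed streams, available() is an exact remaining count. */
  static int availableAfterSkip(byte[] data, long skip) throws IOException {
    ByteArrayInputStream in = new ByteArrayInputStream(data);
    in.skip(skip);
    return in.available();
  }
}
```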
return 0; // break recursion for self-referencing schemas
}
long sum = 0;
for (Schema.Field f : schema.getFields()) {
I worry about the cost of this operation on a complex wide and recursive structure, as it'll be invoked once per record.
I added a JMH benchmark to measure the impact of this for wide and deeply nested structures. The results are promising: apparently the extra cost is negligible.
https://gist.github.com/iemejia/bae3302ec0f3d2abf92e99911ccba606
that's cool. I'm hitting serious field enum problems on variants in parquet, as shredded variants explode the schema.
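The recursion-breaking idea visible in the fragment above ("return 0 for self-referencing schemas") can be sketched generically. This is a hypothetical model, not Avro's `minBytesPerElement`: a visiting set detects cycles, and a cycle contributes 0, which stays a safe lower bound because the precheck only rejects when the minimum provably exceeds the remaining bytes.

```java
import java.util.List;
import java.util.Set;

class MinSizeEstimator {
  /** Hypothetical schema-like node: leaves report a minimum encoded size. */
  interface Node {
    List<Node> children(); // empty for leaves
    long leafMinBytes();   // e.g. 1 byte minimum per leaf value
  }

  /** Minimum bytes needed to encode one instance of this node, cycle-safe. */
  static long minBytes(Node node, Set<Node> visiting) {
    if (!visiting.add(node)) {
      return 0; // break recursion for self-referencing structures (safe lower bound)
    }
    long sum = node.leafMinBytes();
    for (Node child : node.children()) {
      sum += minBytes(child, visiting);
    }
    visiting.remove(node); // allow the same type to appear on sibling branches
    return sum;
  }
}
```

Removing the node from the set on the way out matters: it distinguishes a true cycle (node on the current path) from a type legitimately reused in two sibling fields.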
…t overflow per PR comments
All PR review comments addressed
Add ensureAvailableBytes() pre-check in readString, readBytes, readArrayStart, arrayNext, readMapStart, and mapNext to verify the source has sufficient data before proceeding.
Byte-array-backed sources return an exact remaining count. Stream-backed sources return buffered bytes plus InputStream.available(), which is reliable for the finite streams used by DataFileReader and DataFileStream.
Includes regression tests and updated array/map limit tests.
R: @RyanSkraba @martin-g
What is the purpose of the change
This pull request strengthens BinaryDecoder against truncated or malicious inputs by verifying that enough bytes remain before allocating large buffers or collection backing structures, fixing AVRO-4241.
Verifying this change
This change added tests and can be verified as follows: regression tests for string/bytes length validation and updated array/map limit tests (TestBinaryDecoder, TestGenericDatumReader).
Documentation