fix: complete native_datafusion Parquet schema-mismatch rejections #4229
Conversation
When COMET_SCHEMA_EVOLUTION_ENABLED is false, the native_datafusion scan path now rejects reading Parquet INT32 as INT64, FLOAT as DOUBLE, and INT32 as DOUBLE — matching the existing validation in native_iceberg_compat. The allow_type_promotion flag is passed from the JVM via protobuf and checked in replace_with_spark_cast() before allowing widening casts.
Closes apache#3720
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Format the SchemaColumnConvertNotSupportedException message produced by the type-promotion check so it matches Spark's vectorized reader output: column rendered as [name], expected as Spark catalog string (bigint), found as Parquet primitive name (INT32). This lets the SPARK-35640 and "row group skipping doesn't overflow" tests pass, and updates 3.4.3.diff to remove their IgnoreCometNativeDataFusion tags. The TimestampLTZ to TimestampNTZ case (SPARK-36182) and decimal precision/scale case (SPARK-34212) remain ignored, tracked under apache#4219 and apache#3720 respectively. Also reverts the cfg(test) gate on parquet/util/test_common so the parquet_read benchmark builds.
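For illustration only, here is a minimal Rust sketch of the parameter shaping the commit above describes: the column framed as `[name]`, the requested type rendered as a Spark catalog string, and the found type as a Parquet primitive name. The function and type names are hypothetical stand-ins, not Comet's actual `parquet_primitive_name` / `spark_catalog_name` code; only the output format follows the commit text.

```rust
// Illustrative sketch, not Comet's implementation.
use arrow::datatypes::DataType;

// Stand-in mapping from the requested Arrow/Spark type to a Spark catalog name
// (only a few arms shown; the real mapping is broader).
fn catalog_name(dt: &DataType) -> &'static str {
    match dt {
        DataType::Int32 => "int",
        DataType::Int64 => "bigint",
        DataType::Float32 => "float",
        DataType::Float64 => "double",
        _ => "unknown",
    }
}

// Returns (column, expected, found) in the shape described above:
// "[name]", Spark catalog string, Parquet primitive name.
fn convert_params(column: &str, parquet_primitive: &str, requested: &DataType) -> (String, String, String) {
    (
        format!("[{column}]"),
        catalog_name(requested).to_string(),
        parquet_primitive.to_string(),
    )
}

fn main() {
    let (col, expected, found) = convert_params("a", "INT32", &DataType::Int64);
    assert_eq!((col.as_str(), expected.as_str(), found.as_str()), ("[a]", "bigint", "INT32"));
}
```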
Run the 3720-tagged tests in dev/diffs/3.5.8.diff, 4.0.2.diff, and 4.1.1.diff against patched Spark trees with the type-promotion fix applied, then drop the IgnoreCometNativeDataFusion tag for tests that now pass and keep it on tests that still fail. 3.5.8: Drop tags for SPARK-35640 (int as long) and "row group skipping doesn't overflow", repoint SPARK-36182 at issue apache#4219. Same scope as 3.4.3, since the test source matches. 4.0.2 and 4.1.1: Drop tags for SPARK-47447 (TimestampLTZ as TimestampNTZ) and "row group skipping". 4.1.1 also drops the tag for SPARK-45604 (timestamp_ntz to array<timestamp_ntz>). Tests for SPARK-35640 (binary as timestamp), SPARK-34212 (decimal precision/scale), the schema-mismatch vectorized-reader test, and the parameterized ParquetTypeWideningSuite cases (unsupported parquet conversion, unsupported parquet timestamp conversion, parquet decimal precision change, parquet decimal precision and scale change) still fail and remain ignored under apache#3720.
The test reads a partitioned dataset where one partition is an empty parquet file written with INT32 schema and the other has 10 rows of INT64. Spark's vectorized reader silently skips the type check for the empty file because no row groups are scanned. The native_datafusion adapter rejects the INT32 to INT64 promotion at plan time regardless of file row count, so the test now fails when allow_type_promotion is false (Spark 3.x default). Tag the test with IgnoreCometNativeDataFusion under the existing 3720 umbrella in 3.4.3.diff and 3.5.8.diff. Spark 4.x defaults allow_type_promotion to true so its diffs are unaffected.
Nice reduction in ignored tests. One concern on scope: the three-case match in `replace_with_spark_cast` only covers the widenings this PR targets. I ran a probe against this PR on Spark 3.5 with `COMET_SCHEMA_EVOLUTION_ENABLED` both false and true.
The top three rows are what the PR fixes and look right under both settings. The bottom seven are wrong-answer paths under both settings: silent overflow on narrowing, silent precision loss on widenings Spark doesn't allow, silent raw-int-as-epoch-seconds reinterpretation for `int -> timestamp`, and so on. Not asking you to fix all of them in this PR. But I think the framing in the commit message and code comment overstates the coverage, since only the three `replace_with_spark_cast` cases are rejected. I'd either (1) extend this PR to reject the remaining conversions, or (2) file a follow-up issue for them and scope the wording here to the three cases.
Either is fine by me. I'd lean toward (2) to keep this PR scoped. Probe used (slimmed, put under the org.apache.comet.parquet test package):
package org.apache.comet.parquet
import scala.util.Try
import org.apache.spark.sql.{CometTestBase, DataFrame}
import org.apache.spark.sql.internal.SQLConf
import org.apache.comet.CometConf
class TypePromotionProbeSuite extends CometTestBase {
import testImplicits._
private def probe(label: String)(body: => Any): Unit = {
val result = Try(body)
// scalastyle:off println
println(s"[PROBE] $label -> ${result match {
case scala.util.Success(v) => s"OK value=$v"
case scala.util.Failure(e) => s"THROW ${e.getClass.getSimpleName}"
}}")
// scalastyle:on println
}
private def runAll(ev: Boolean): Unit = withSQLConf(
CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,
CometConf.COMET_SCHEMA_EVOLUTION_ENABLED.key -> ev.toString,
SQLConf.USE_V1_SOURCE_LIST.key -> "parquet") {
def run(label: String, df: DataFrame, writeType: String, readAs: String): Unit =
probe(s"$label (ev=$ev)") {
withTempPath { dir =>
df.selectExpr(s"cast(c as $writeType) as c").write.parquet(dir.getCanonicalPath)
spark.read.schema(s"c $readAs").parquet(dir.getCanonicalPath)
.collect().map(_.get(0)).toSeq
}
}
run("int->long", Seq(1, 2, 3).toDF("c"), "int", "bigint")
run("float->double", Seq(1.0f, 2.0f, 3.0f).toDF("c"), "float", "double")
run("int->double", Seq(1, 2, 3).toDF("c"), "int", "double")
run("long->int narrowing", Seq(1L, 2L, 3L, Int.MaxValue.toLong + 5L).toDF("c"), "bigint", "int")
run("double->float narrowing",Seq(1.5, 2.5, 1e40).toDF("c"), "double", "float")
run("float->long", Seq(1.5f, 2.5f).toDF("c"), "float", "bigint")
run("long->double", Seq(1L, 2L, (1L << 54) + 1L).toDF("c"), "bigint", "double")
run("int->float", Seq(1, 2, (1 << 25) + 1).toDF("c"), "int", "float")
run("int->timestamp", Seq(1, 2, 3).toDF("c"), "int", "timestamp")
run("double->long", Seq(1.0, 2.0, 3.0).toDF("c"), "double", "bigint")
}
test("probe ev=false") { runAll(ev = false) }
test("probe ev=true") { runAll(ev = true) }
}
I guess the bigger question to me becomes: why do we have `spark.comet.schemaEvolution.enabled` at all?
Thanks for the probe — that table makes the remaining surface concrete. Going with option (2): filed #4297 to track the seven unconditionally-rejected conversions, with your probe table and code copied over so the behavior is captured. Tightened the code comment and PR description here to be explicit that this PR only closes the three gated widenings. On the bigger question about deprecating `spark.comet.schemaEvolution.enabled`: that is now tracked in apache#4298.
…ion-validation
# Conflicts:
#	dev/diffs/3.4.3.diff
#	dev/diffs/3.5.8.diff
#	native/core/src/parquet/parquet_support.rs
#	native/proto/src/proto/operator.proto
#	spark/src/main/scala/org/apache/comet/serde/operator/CometNativeScan.scala
…-Spark-version default
Removes the public `spark.comet.schemaEvolution.enabled` ConfigEntry and reads
type-promotion permissiveness directly from the per-Spark-version constant in
`ShimCometConf` (false on 3.x, true on 4.x). Mirrors what Spark's vectorized
reader does without requiring a user-tunable knob that historically existed only
for the now-dead Java Iceberg-Comet integration.
- Promote `COMET_SCHEMA_EVOLUTION_ENABLED` to a public `val` in `ShimCometConf`
- Drop the `internal()` ConfigEntry from `CometConf`
- Swap Java callers in `AbstractColumnReader` and `TypeUtil` to read the constant
- Swap `CometNativeScan` to set the proto field from the constant
- Rewrite `ParquetReadSuite` tests that flipped the conf:
- `schema evolution`: drop the parametrization, branch on the constant
- `type widening` and `read byte, int, short, long together`: gate with
`assume(...COMET_SCHEMA_EVOLUTION_ENABLED)` since they only apply on 4.x
Closes apache#4298.
Also regenerates `dev/diffs/3.4.3.diff` and `dev/diffs/3.5.8.diff` to include the
SPARK-26709 IgnoreCometNativeDataFusion tag alongside main's SPARK-33084 tag
(both branches added separate hunks to SQLQuerySuite; merge needed a re-diff).
@mbutrovich I removed `spark.comet.schemaEvolution.enabled`.
…ion-validation
# Conflicts:
#	dev/diffs/4.0.2.diff
#	dev/diffs/4.1.1.diff
…-26709
The plan-time check in `replace_with_spark_cast` rejects the three widenings (INT32->INT64, FLOAT->DOUBLE, INT32->DOUBLE) regardless of file row count. Spark's vectorized reader only invokes `ParquetVectorUpdaterFactory.getUpdater` while decoding a row group, so a Parquet file with no row groups (e.g. written from an empty DataFrame) passes silently. SPARK-26709's mixed-partition case hit this: one partition is an empty INT32 file, another has 10 rows of INT64.
Replace the eager `return Err(...)` with a `RejectOnNonEmpty` PhysicalExpr that returns an empty array of the target type when the input batch has 0 rows and raises `ParquetSchemaConvert` otherwise. The JVM shim converts the error to `SchemaColumnConvertNotSupportedException` with the same Spark-compatible column-name and type formatting.
Drops the `IgnoreCometNativeDataFusion` tag for SPARK-26709 in 3.4.3.diff and 3.5.8.diff (both diffs regenerated from clean Spark trees).
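A minimal, self-contained sketch of the deferral idea described in this commit, assuming only the arrow crate. `reject_on_non_empty` is a hypothetical stand-in for the real `RejectOnNonEmpty` PhysicalExpr: an empty input batch yields an empty array of the target type, while a non-empty batch raises the schema-mismatch error, so files with no row groups pass through just like Spark's per-row-group `getUpdater` check.

```rust
use std::sync::Arc;

use arrow::array::{new_empty_array, ArrayRef, Int32Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

// Illustrative stand-in for the rejection deferral; not Comet's actual expression.
fn reject_on_non_empty(batch: &RecordBatch, target: &DataType, column: &str) -> Result<ArrayRef, String> {
    if batch.num_rows() == 0 {
        // No rows were decoded from this file, so no incompatible values can surface.
        Ok(new_empty_array(target))
    } else {
        Err(format!("Parquet column [{column}] cannot be read as {target:?}"))
    }
}

fn main() {
    let schema = Arc::new(Schema::new(vec![Field::new("c", DataType::Int32, false)]));

    // An empty file / empty batch passes through with an empty INT64 array.
    let empty = RecordBatch::new_empty(Arc::clone(&schema));
    assert_eq!(reject_on_non_empty(&empty, &DataType::Int64, "c").unwrap().len(), 0);

    // Any batch with rows triggers the rejection.
    let col: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));
    let non_empty = RecordBatch::try_new(schema, vec![col]).unwrap();
    assert!(reject_on_non_empty(&non_empty, &DataType::Int64, "c").is_err());
}
```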
Match Spark's `_LEGACY_ERROR_TEMP_2063` exactly for the two BINARY-related rejection paths in `replace_with_spark_cast`:
- Existing BINARY -> non-string/binary/decimal rejection: format column as `[name]`, emit Parquet primitive names (`BINARY`) and Spark catalog names (`int`, `timestamp`, ...) instead of Arrow datatype debug names.
- New non-BINARY -> string/binary rejection: Spark's vectorized reader has no `int -> string` or `long -> string` updater in `ParquetVectorUpdaterFactory`, so reject these to match (previously we silently produced strings via `spark_expr::Cast`).
Extends `parquet_primitive_name` / `spark_catalog_name` with Utf8, Binary, Date32, and Timestamp entries needed by the new error format.
Un-ignores tests now passing:
- `schema mismatch failure error message for parquet vectorized reader` (all four diffs, tests both directions)
- `SPARK-35640: read binary as timestamp should throw schema incompatible error` (4.0.2, 4.1.1)
- `SPARK-35640: int as long should throw schema incompatible error` (3.4.3, 3.5.8) was already enabled by the existing type-promotion rejection, but the upmerge regen had re-added the ignore tag; drop it here.
Replaces the existing `parquet_roundtrip_int_as_string` test (which was asserting silent wrong-answer behavior) with `parquet_int_read_as_string_errors` plus a companion `parquet_string_read_as_int_errors`.
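A simplified sketch of the string/binary gate implied above. This is illustrative only and ignores the decimal-annotated BINARY handling; the real check lives in `replace_with_spark_cast`, and the function names here are hypothetical.

```rust
use arrow::datatypes::DataType;

// String- or binary-like Arrow types, as in the allowed-target match.
fn is_string_or_binary(dt: &DataType) -> bool {
    matches!(
        dt,
        DataType::Utf8 | DataType::LargeUtf8 | DataType::Binary | DataType::LargeBinary
    )
}

// Reject when exactly one side of the conversion is string/binary-like:
// BINARY -> int/timestamp is rejected, and int/long -> string is rejected too.
fn string_binary_mismatch(physical: &DataType, requested: &DataType) -> bool {
    is_string_or_binary(physical) != is_string_or_binary(requested)
}

fn main() {
    assert!(string_binary_mismatch(&DataType::Int32, &DataType::Utf8)); // int read as string: reject
    assert!(string_binary_mismatch(&DataType::Utf8, &DataType::Int64)); // string read as long: reject
    assert!(!string_binary_mismatch(&DataType::Binary, &DataType::Utf8)); // binary read as string: allowed
}
```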
I iterated more on this PR and it now unignores more tests and no longer adds any new ignores - let's see if CI passes
@mbutrovich this is ready for another look
…ion gaps
- Tests pointing at apache#3720 in 4.1.1.diff now reference apache#4297 (primitive narrowing), apache#4343 (decimal-to-decimal narrowing), or apache#4344 (integer-to-decimal narrowing).
- Replace the high-level scan-compat note with three concrete entries.
…, apache#4344)
Extend the native_datafusion schema adapter to mirror the rejections in `ParquetVectorUpdaterFactory.getUpdater` / `isDecimalTypeMatched`:
- Integer -> decimal where the requested decimal cannot represent the source integer type's range (apache#4344). INT32 sources require `precision - scale >= 10`, INT64 sources `>= 20`.
- Decimal -> decimal where `scaleIncrease < 0` OR `precisionIncrease < scaleIncrease` (apache#4343). Generalises the prior scale-only narrowing check to also reject precision narrowing and scale widening that overflows the integer side.
- Primitive numeric / date / timestamp conversions Spark rejects on every supported version (apache#4297): `long -> int`, `double -> float`, `float|double -> int*`, `int|long -> float`, `int -> timestamp`, `long -> date|timestamp`, `date -> timestamp(LTZ)`, and `timestamp -> date`. Deferred to runtime via `RejectOnNonEmpty` so empty Parquet files pass through (SPARK-26709).
`spark_catalog_name` now returns String so it can format `decimal(p,s)` per Spark's `catalogString()`. `parquet_primitive_name` gains decimal / date / timestamp arms so error messages report the underlying Parquet primitive (INT32/INT64/FIXED_LEN_BYTE_ARRAY) instead of UNKNOWN.
Coverage: 18 new Rust unit tests across the three rejection classes, plus expanded JVM regression tests in ParquetReadSuite. The four Spark SQL tests pointing at apache#4343/apache#4344/apache#4297 in dev/diffs/4.1.1.diff are unignored (no umbrella-apache#3720 references remain). The compat doc collapses the type-promotion bullet, leaving only the TimestampLTZ-as-TimestampNTZ pre-Spark-4.0 caveat.
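The two decimal rules above can be stated as small predicates. This is a hedged sketch under the rules quoted in the commit message; the function names and signatures are illustrative, not Comet's actual code.

```rust
// Integer -> decimal: the requested decimal must cover the source integer range.
// Assumed rule from the commit: INT32 needs precision - scale >= 10, INT64 >= 20.
fn int_to_decimal_allowed(source_is_int64: bool, precision: u8, scale: u8) -> bool {
    let required = if source_is_int64 { 20 } else { 10 };
    precision as i32 - scale as i32 >= required
}

// Decimal -> decimal: reject if the scale shrinks, or the precision grows
// slower than the scale (i.e. the integral part shrinks).
fn decimal_to_decimal_allowed(src_p: u8, src_s: u8, dst_p: u8, dst_s: u8) -> bool {
    let scale_increase = dst_s as i32 - src_s as i32;
    let precision_increase = dst_p as i32 - src_p as i32;
    scale_increase >= 0 && precision_increase >= scale_increase
}

fn main() {
    assert!(int_to_decimal_allowed(false, 12, 2)); // decimal(12,2) holds any INT32
    assert!(!int_to_decimal_allowed(true, 20, 2)); // decimal(20,2) cannot hold every INT64
    assert!(decimal_to_decimal_allowed(5, 2, 7, 3)); // precision and scale both widen
    assert!(!decimal_to_decimal_allowed(5, 2, 5, 3)); // scale widens faster than precision
    assert!(!decimal_to_decimal_allowed(5, 2, 4, 2)); // precision narrows
}
```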
I pushed more changes and attempted to unignore all 4.1.1 tests (except for the one that depends on the error message containing the file name). If CI passes I will go ahead and unignore tests in the other diffs.
…pache#4351, apache#4352)
Two follow-ups to 80836b1, which unignored four `ParquetTypeWideningSuite` test groups in `dev/diffs/4.1.1.diff` and surfaced a remaining schema-adapter gap on top.
Schema adapter (apache#4351): a Parquet BINARY column without a `DecimalLogicalTypeAnnotation` was silently allowed through when the requested schema was `DecimalType`. The cast then fell through and Arrow's `RecordBatch::try_new` raised a generic `column types must match schema types` error as `CometNativeException` instead of the Spark-equivalent `SchemaColumnConvertNotSupportedException`. Drop `Decimal128/Decimal256` from the BINARY/STRING allowed-target match in `replace_with_spark_cast`, and add the same rejection in `wrap_all_type_mismatches` so the fallback path doesn't construct a `CometCastColumnExpr` for a cast it can't actually perform. New unit test `parquet_binary_read_as_decimal_errors`. With this fix, Spark's SPARK-34212 in `ParquetQuerySuite` passes without an ignore tag.
Test diffs (apache#4352): re-add `IgnoreCometNativeDataFusion` to the five test groups in `dev/diffs/4.1.1.diff`'s `ParquetTypeWideningSuite` whose failure isn't a rejection-logic bug but a parquet-mr-only behavior assertion (`expectError = vectorized` paths and the explicit `overflows with parquet-mr` test). The same annotation is added to the previously-unannotated overflow test in `dev/diffs/4.0.2.diff`. The schema-adapter rejection is correct; Comet just has no parquet-mr-equivalent backend that produces silent overflow on the non-vectorized path.
Docs: new `Schema Mismatch Handling` subsection in the parquet scan compat guide. Explains that these gaps only apply to explicit schemas or schema evolution (not plain `spark.read.parquet(path)`), notes cross-version differences, and lists the one remaining gap (apache#4316: missing file path in `ParquetSchemaConvert` errors).
Mirrors the BINARY -> DECIMAL(...) iteration in Spark's `SPARK-34212 Parquet should read decimals correctly`. Walks the exception cause chain rather than using `intercept[SparkException].getCause` directly because Spark 3.x's task error handling wraps the shim's `SparkException` once more on the way back to the driver, producing a two-level chain (`SparkException -> SparkException -> SchemaColumnConvertNotSupportedException`); Spark 4.0+ produces a one-level chain (`SparkException -> SchemaColumnConvertNotSupportedException`). Validated locally against Spark 3.4.3, 3.5.8, 4.0.2, and 4.1.1. The strict `intercept(...).getCause.isInstanceOf` check Spark's own SPARK-34212 uses matches Comet on 4.x but not on 3.x, so SPARK-34212 itself can be unignored on 4.0.2 / 4.1.1 but should stay ignored on 3.4.3 / 3.5.8 until the 3.x shim no longer requires the extra wrapping.
…ic issues
Migrate the remaining `IgnoreCometNativeDataFusion("…apache/issues/3720")` references:
- `dev/diffs/4.0.2.diff`:
- Unignore `SPARK-34212 Parquet should read decimals correctly` in
`ParquetQuerySuite`. The schema-adapter rejection from apache#4351 covers all
three iterations (INT32/INT64/BINARY -> DECIMAL); the cause chain on
Spark 4.x is single-layer so the test's strict
`intercept(...).getCause.isInstanceOf[SchemaColumnConvertNotSupportedException]`
assertion is satisfied.
- Move the five `ParquetTypeWideningSuite` test groups from apache#3720 to apache#4352
(parquet-mr permissive-overflow umbrella).
- `dev/diffs/3.4.3.diff` and `dev/diffs/3.5.8.diff`:
- Re-tag `SPARK-34212` and `row group skipping doesn't overflow when reading
into larger type` from apache#3720 to apache#4354 (Spark-3.x cause-chain wrapping).
These tests will start passing once the 3.x shim no longer adds the extra
`SparkException` layer; the schema-adapter rejection itself is correct.
After this change `apache#3720` has zero references in `dev/diffs/` and can be
closed as superseded by apache#4297, apache#4343, apache#4344, apache#4351, apache#4352, apache#4354.
Validated locally with the new `native_datafusion rejects BINARY (no decimal
annotation) read as DecimalType` test (added in the previous commit) on Spark
3.4.3, 3.5.8, 4.0.2, and 4.1.1.
mbutrovich left a comment
Thanks for sticking with this one @andygrove. Another round of feedback. In general, it feels like the diff could be trimmed dramatically with a simplify skill from Claude, asking it to reduce wordy comments. There are a lot of "what" not "why" comments and fluffy wording.
    child: Arc<dyn PhysicalExpr>,
    target_field: FieldRef,
    column: String,
    physical_type: String,
parquet_primitive_name returns &'static str (line 266), but the field stores String and every construction site (690, 725, 768, 801, 871) calls .to_string() on the static. Any reason not to make the field &'static str and skip the allocation? On the same struct, would column and spark_type work as Arc<str> so with_new_children (line 1123) does not have to clone three strings each time?
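For concreteness, a hedged sketch of the field layout the comment suggests (names are illustrative and the struct is a stripped-down stand-in, not the real expression): borrow the static primitive name and share the owned strings so re-wrapping the expression does not reallocate.

```rust
use std::sync::Arc;

// Hypothetical reduced struct illustrating the suggestion above.
struct RejectOnNonEmptyLite {
    physical_type: &'static str, // parquet_primitive_name already returns &'static str
    column: Arc<str>,            // cheap to clone in with_new_children
    spark_type: Arc<str>,
}

fn main() {
    let original = RejectOnNonEmptyLite {
        physical_type: "INT32",
        column: Arc::from("[a]"),
        spark_type: Arc::from("bigint"),
    };
    // Cloning Arc<str> bumps a refcount instead of copying the string bytes.
    let rewrapped = RejectOnNonEmptyLite {
        physical_type: original.physical_type,
        column: Arc::clone(&original.column),
        spark_type: Arc::clone(&original.spark_type),
    };
    assert_eq!(&*rewrapped.column, "[a]");
    assert_eq!(&*rewrapped.spark_type, "bigint");
}
```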
        DataType::Utf8 | DataType::LargeUtf8 | DataType::Binary | DataType::LargeBinary
    )
    {
        let rejection: Arc<dyn PhysicalExpr> = Arc::new(RejectOnNonEmpty {
All three sites build the same five-field struct literal with the same `column: format!("[{}]", cast.input_field().name())` shape and the same `parquet_primitive_name(...).to_string()` and `spark_catalog_name(...)` calls. Would a small constructor like

fn make_rejection(child: Arc<dyn PhysicalExpr>, cast: &CastColumnExpr,
    physical: &DataType, target: &DataType) -> Arc<dyn PhysicalExpr>

pull the duplication out? The four SparkError::ParquetSchemaConvert literals at 574, 648, 722, 765 repeat the same shape and could share a sibling helper.
        column: cast.input_field().name().to_string(),
        physical_type: physical_type.to_string(),
        spark_type: target_type.to_string(),
        column: format!("[{}]", cast.input_field().name()),
Seven call sites do format!("[{}]", cast.input_field().name()). Is there a reason the bracket framing has to live at the call site rather than inside SparkError::ParquetSchemaConvert's Display impl (or a constructor on the variant)?
    /// producing nulls (mirrors `spark.sql.parquet.fieldId.read.ignoreMissing`).
    pub ignore_missing_field_id: bool,
    /// Whether type promotion (schema evolution) is allowed, e.g. INT32 -> INT64,
    /// FLOAT -> DOUBLE. Mirrors spark.comet.schemaEvolution.enabled.
spark.comet.schemaEvolution.enabled was removed in this PR. Should this comment point at the per-version ShimCometConf.COMET_SCHEMA_EVOLUTION_ENABLED constant instead?
            DataType::Int64,
            DataType::Int8 | DataType::Int16 | DataType::Int32,
        )
        // Long -> floating point.
A lot of these inline comments say the same thing as the match arm directly below them, e.g. // Long -> narrower int. above (DataType::Int64, DataType::Int8 | DataType::Int16 | DataType::Int32). The arms read fine on their own. The ones that carry actual context (the IntegerToDoubleUpdater note at 844, the "raw INT32; DATE-annotated columns surface as Date32" parenthetical at 849, the SPARK-26709 references) seem worth keeping. Would dropping the others make this block easier to read?
        let filename = get_temp_filename();
        let filename = filename.as_path().as_os_str().to_str().unwrap().to_string();
        let file = File::create(&filename)?;
        let writer = ArrowWriter::try_new(file, Arc::clone(&file_schema), None)?;
Both tests build ArrowWriter and FileScanConfigBuilder and expr_adapter_factory from scratch, even though roundtrip at schema_adapter.rs:815 already does that setup. Could threading an expect_empty: bool (or factoring the shared setup out) let both tests go through roundtrip and drop the duplicated ~30 lines?
CI is green as of ac0e989
…pache#4352) Comet's native_datafusion scan rejects Parquet-to-Spark conversions that Spark's vectorized reader rejects, but Spark's parquet-mr (non-vectorized) path silently overflows / nulls. Disabling PARQUET_VECTORIZED_READER_ENABLED opts into parquet-mr semantics that Comet has no equivalent for, so fall back to Spark in that case. Re-enables the affected ParquetTypeWideningSuite tests in 4.0.2 and 4.1.1 diffs.
@kazuyukitanimura @parthchandra @comphead @mbutrovich This PR is now ready for review. It unignores almost all of the Spark SQL tests that were previously ignored for `native_datafusion`.
:( |
My bad. I had tried one more fix then failed to revert it correctly (forgot to re-ignore the tests). |
Commit 2ddece0 reverted the parquet-mr fallback added in apache#4352 but did not restore the IgnoreCometNativeDataFusion annotations that apache#4352 had removed. Without the fallback, those tests run against Comet's native scan and fail because Comet rejects conversions that parquet-mr would silently overflow/null.
Which issues does this PR close?
Closes #3720 — umbrella for `native_datafusion` silent acceptance of incompatible Parquet reads, now fully split into specific child issues.
Closes #4297 — primitive numeric / date / timestamp conversions Spark rejects (`long → int`, `double → float`, `float|double → int*`, `int|long → float`, `int → timestamp`, `long → date|timestamp`, `date → timestamp(LTZ)`, `timestamp → date`).
Closes #4343 — decimal-to-decimal precision/scale narrowing (`scaleIncrease < 0` OR `precisionIncrease < scaleIncrease`).
Closes #4344 — integer-to-decimal narrowing (INT32 source needs `precision − scale ≥ 10`, INT64 source `≥ 20`).
Closes #4351 — plain BINARY (no `DecimalLogicalTypeAnnotation`) read as `DecimalType`.
Closes #4298 — deprecation of `spark.comet.schemaEvolution.enabled`.
Not closed:
- #4352: `ParquetTypeWideningSuite` tests asserting parquet-mr's permissive non-vectorized behavior. Architectural difference (Comet has no parquet-mr fallback); tests stay ignored against this issue.
- #4354: the Spark 3.x shim's `ParquetSchemaConvert` translation produces an extra `SparkException` cause-chain layer. Discovered while validating SPARK-34212 on 3.x; the schema-adapter rejection itself is correct, but the existing 3.x tests assert the immediate cause type.
- #4316: missing file path in `ParquetSchemaConvert` errors. Cosmetic.
- #4219: `TimestampLTZ → TimestampNTZ` permissive read.
Rationale for this change
Under `spark.comet.scan.impl=native_datafusion`, several Spark SQL tests that expect `SchemaColumnConvertNotSupportedException` on incompatible Parquet reads were passing silently — DataFusion's reader was coercing mismatched numeric / decimal / binary types instead of erroring, producing wrong answers (silent overflow on narrowing, silent precision loss on widening, raw-byte reinterpretation for `int → timestamp`, etc.). This PR adds the rejections Spark's vectorized reader performs in `ParquetVectorUpdaterFactory.getUpdater`, formatted to match Spark's `_LEGACY_ERROR_TEMP_2063` (3.x) / `FAILED_READ_FILE.PARQUET_COLUMN_DATA_TYPE_MISMATCH` (4.x) error params byte-for-byte.
What changes are included in this PR?
Schema adapter (`native/core/src/parquet/schema_adapter.rs`)
- New `RejectOnNonEmpty` PhysicalExpr that returns an empty array for batches with zero rows and raises `SparkError::ParquetSchemaConvert` otherwise. Used to defer rejection to evaluation time so files with no row groups (e.g. an empty `DataFrame.write`) pass silently — mirrors Spark's per-row-group `getUpdater` check (SPARK-26709).
- New rejections in `replace_with_spark_cast`:
  - BINARY source columns (#4088/#4351): rejects all non-string/binary target types, including `Decimal128/256` (Arrow already exposes decimal-annotated BINARY as `Decimal128`, so observing physical `Binary` here unambiguously means a non-decimal source).
  - Decimal-to-decimal narrowing (#4343): rejects `dst_scale < src_scale` OR `dst_precision − dst_scale < src_precision − src_scale`.
  - Integer-to-decimal narrowing (#4344): rejects when the requested decimal cannot represent the source integer type's range.
  - The three gated widenings, deferred via `RejectOnNonEmpty` (`INT32 → INT64`, `FLOAT → DOUBLE`, `INT32 → DOUBLE`) when `allow_type_promotion` is false.
  - Primitive numeric / date / timestamp conversions Spark rejects on every supported version (#4297).
- The same rejections are applied on the `wrap_all_type_mismatches` fallback path, so the rejection fires whether the default adapter constructs a `CastColumnExpr` (typical) or fails (which happens for Binary → Decimal128 because DataFusion has no built-in cast for that pair).
- `parquet_primitive_name` and `spark_catalog_name` produce Spark-compatible names (`INT32`, `INT64`, `FIXED_LEN_BYTE_ARRAY`, `decimal(p,s)`, `timestamp_ntz`, …) for error params.
Configuration
- Removes the `spark.comet.schemaEvolution.enabled` conf in favor of a per-Spark-version constant in `ShimCometConf` (`false` on 3.x, `true` on 4.x) — Spark 3.x's vectorized reader rejects these widenings unconditionally, Spark 4.x always accepts them.
Spark-SQL test diffs (`dev/diffs/{3.4.3,3.5.8,4.0.2,4.1.1}.diff`)
The remaining `IgnoreCometNativeDataFusion` annotations now point at specific issues — #3720 has zero references in `dev/diffs/`. Final state per issue: #4316 (missing file path), #4352 (parquet-mr permissive), #4354 (3.x cause-chain wrapping), #4219 (LTZ → NTZ Spark 3.x), and "cannot be pushed down" (no issue); nothing remains under #3720.
Specifically:
- `SPARK-34212 Parquet should read decimals correctly` is unignored on 4.0.2 and 4.1.1 (passes thanks to the #4351 schema-adapter fix). It stays ignored against #4354 on 3.x, where the shim's extra cause-chain layer makes the strict `intercept(...).getCause.isInstanceOf` assertion fail.
- The `ParquetTypeWideningSuite` test groups in `4.1.1.diff` that commit 80836b18d had unignored prematurely are re-ignored against #4352, along with the matching tests in `4.0.2.diff`. The schema-adapter rejection is correct on the vectorized=true branch; these tests assert parquet-mr's permissive behavior on the vectorized=false branch, which Comet doesn't replicate (no parquet-mr fallback).
- The `parquet decimal type change Decimal(5, 2) → Decimal(3, 2) overflows with parquet-mr` test in `4.0.2.diff` and `4.1.1.diff` is ignored against #4352 (it had been unannotated and would fail otherwise).
Documentation (`docs/source/user-guide/latest/compatibility/scans.md`)
New `Schema Mismatch Handling` subsection in the parquet scan compat guide. It states explicitly that these gaps apply only when the requested read schema differs from the file schema (explicit user schema or schema-evolution / partitioned reads), not to plain `spark.read.parquet(path)`. It lists the only remaining user-visible gap (#4316 — missing file path in the error) and notes cross-Spark-version differences (Spark 3.x's `schemaEvolution.enabled`-gated widenings vs 4.0+ unconditional acceptance, `TimestampLTZ → TimestampNTZ`).
How are these changes tested?
Rust unit tests in `schema_adapter.rs` — 18 new tests covering each rejection class plus the empty-file pass-through:
- `parquet_empty_file_disallowed_widening`, `parquet_non_empty_file_disallowed_widening_errors`
- `parquet_int_read_as_string_errors`, `parquet_string_read_as_int_errors`
- `parquet_binary_read_as_decimal_errors` (regression for #4351)
- `parquet_int32_read_as_narrow_decimal_errors`, `parquet_int64_read_as_narrow_decimal_errors`, `parquet_int32_read_as_wide_decimal_succeeds`, `parquet_int32_read_as_decimal_with_scale_errors`
- `parquet_decimal_precision_narrowing_errors`, `parquet_decimal_int_precision_narrowing_errors`, `parquet_decimal_scale_widening_without_precision_errors`, `parquet_decimal_widening_succeeds`
- `parquet_long_read_as_int_errors`, `parquet_long_read_as_double_errors`, `parquet_double_read_as_float_errors`, `parquet_float_read_as_long_errors`, `parquet_double_read_as_long_errors`, `parquet_int_read_as_float_errors`
- `parquet_int_read_as_timestamp_ntz_errors`, `parquet_long_read_as_date_errors`, `parquet_date_read_as_ltz_timestamp_errors`, `parquet_timestamp_read_as_date_errors`
JVM regression test in `ParquetReadSuite.scala`: `native_datafusion rejects BINARY (no decimal annotation) read as DecimalType` — mirrors the BINARY iteration of SPARK-34212. It walks the cause chain because Spark 3.x produces an extra `SparkException` layer (#4354); verified locally on Spark 3.4.3, 3.5.8, 4.0.2, and 4.1.1.
Spark SQL CI — the affected Spark-SQL tests run under CI with the regenerated diffs. The previously-failing 37 tests in `4.1.1.diff` (CI logs from 80836b1) are now either unignored and passing (SPARK-34212) or re-ignored under #4352 with a clear architectural reason.