
Conversation

@Shekharrajak (Contributor)

Which issue does this PR close?

Closes #2972.

Rationale for this change

The contains expression shows poor performance in Comet (0.2X vs Spark) because DataFusion's make_scalar_function wrapper expands scalar patterns to arrays, bypassing arrow-rs's optimized scalar path.
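
The core of the fix is easy to sketch: when the pattern argument is a literal, the substring searcher can be built once per batch and reused for every row, instead of expanding the pattern into a full column. A minimal illustration of that scalar fast path, assuming a StringArray haystack and using memchr's memmem::Finder (the contains_scalar helper and its exact shape are illustrative, not the PR's actual contains.rs code):

// Minimal sketch of the scalar fast path (illustrative, not the PR's code):
// the pattern is constant for the whole batch, so the searcher is built once
// and reused for every row.
use arrow::array::{BooleanArray, StringArray};
use memchr::memmem::Finder;

fn contains_scalar(haystack: &StringArray, pattern: &str) -> BooleanArray {
    // Finder precomputes the searcher (SIMD-accelerated where the platform supports it).
    let finder = Finder::new(pattern.as_bytes());
    haystack
        .iter()
        .map(|value| value.map(|s| finder.find(s.as_bytes()).is_some()))
        .collect()
}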

What changes are included in this PR?

  • Add SparkContains UDF with optimized scalar pattern handling using memchr::memmem::Finder for SIMD-accelerated substring search
  • Register the function in comet_scalar_funcs.rs to override DataFusion's built-in contains (a registration sketch follows this list)
  • Add contains to CometStringExpressionBenchmark
  • Enhance contains test in CometExpressionSuite
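
The override mechanics are simple in principle: registering a UDF under the name contains replaces DataFusion's built-in function of the same name in the function registry. A rough sketch against a plain SessionContext (Comet's actual wiring goes through comet_scalar_funcs.rs, so this is only illustrative):

use datafusion::logical_expr::ScalarUDF;
use datafusion::prelude::SessionContext;

// Hypothetical helper: registering a UDF named "contains" shadows the
// built-in function of the same name for queries planned by this context.
fn override_contains(ctx: &SessionContext, spark_contains: ScalarUDF) {
    ctx.register_udf(spark_contains);
}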

How are these changes tested?

  • 4 new unit tests in contains.rs (array-scalar, scalar-scalar, null handling, empty pattern); a sketch of the test shape follows this list
  • Enhanced integration test in CometExpressionSuite.scala
  • All 122 spark-expr tests pass
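
For illustration only, a unit test for the scalar path could look roughly like this (it reuses the hypothetical contains_scalar helper sketched in the rationale above; the PR's real tests live in contains.rs):

use arrow::array::{BooleanArray, StringArray};

#[test]
fn scalar_pattern_respects_nulls() {
    let haystack = StringArray::from(vec![Some("spark"), None, Some("datafusion")]);
    // contains_scalar is the illustrative helper from the earlier sketch.
    let result = contains_scalar(&haystack, "fusion");
    assert_eq!(result, BooleanArray::from(vec![Some(false), None, Some(true)]));
}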

@codecov-commenter commented Dec 27, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 59.55%. Comparing base (f09f8af) to head (e50e077).
⚠️ Report is 807 commits behind head on main.

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #2991      +/-   ##
============================================
+ Coverage     56.12%   59.55%   +3.42%     
- Complexity      976     1379     +403     
============================================
  Files           119      167      +48     
  Lines         11743    15495    +3752     
  Branches       2251     2568     +317     
============================================
+ Hits           6591     9228    +2637     
- Misses         4012     4971     +959     
- Partials       1140     1296     +156     

@andygrove (Member)

I tested locally and see good performance now:

OpenJDK 64-Bit Server VM 17.0.17+10-Ubuntu-122.04 on Linux 6.8.0-90-generic
AMD Ryzen 9 7950X3D 16-Core Processor
contains:                                 Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------
Spark                                              1864           1870           9          0.6        1777.7       1.0X
Comet (Scan)                                       1986           1992           9          0.5        1894.1       0.9X
Comet (Scan + Exec)                                1328           1330           4          0.8        1266.1       1.4X


[dependencies]
arrow = { workspace = true }
arrow-string = "57.0.0"

Member:

We already depend on arrow. Is contains not re-exported in the arrow crate?

@Shekharrajak (author):

Thanks - removed it in a449297.
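
For context, arrow-rs's contains kernel is Datum-based, so it already takes its fast path when the pattern is supplied as a scalar; the review question is only about which crate the import comes from. A rough sketch, assuming the kernel is reachable through the arrow crate's re-export of arrow_string::like (the exact import path should be verified against the pinned arrow version):

use arrow::array::{BooleanArray, Scalar, StringArray};
// Assumed re-export path; the kernel itself lives in arrow_string::like.
use arrow::compute::kernels::comparison::contains;
use arrow::error::ArrowError;

fn haystack_contains_literal(
    haystack: &StringArray,
    pattern: &str,
) -> Result<BooleanArray, ArrowError> {
    // Wrapping a one-element array in Scalar tells the kernel to use its
    // scalar path instead of expanding the pattern to a full column.
    let pattern = Scalar::new(StringArray::from(vec![pattern]));
    contains(haystack, &pattern)
}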

@andygrove (Member)

Thanks @Shekharrajak, this is a really nice speedup! Could you fix the clippy errors? You can probably just run cargo clippy --fix.

@Shekharrajak force-pushed the fix/issue-2972-contains-performance branch from 54dc054 to 27929a3 on December 27, 2025 at 18:08
 * Override waitForTasksToFinish to ensure SparkContext is active before checking tasks. This
 * fixes the issue where waitForTasksToFinish returns -1 when SparkContext is not active.
 */
override protected def waitForTasksToFinish(): Unit = {

@Shekharrajak (author):

This fixes the failing PR check.

//! scalar path in arrow-rs.
use arrow::array::{Array, ArrayRef, AsArray, BooleanArray};
use arrow::datatypes::DataType;

@Shekharrajak (author):

Fixes the fmt check error.

op.cmd match {
  case cmd: InsertIntoHadoopFsRelationCommand =>
    // Skip INSERT OVERWRITE DIRECTORY operations (catalogTable is None for directory writes)
    if (cmd.catalogTable.isEmpty) {

@Shekharrajak (author):

Fix for this error:

ERROR org.apache.spark.sql.execution.command.InsertIntoDataSourceDirCommand: Failed to write to directory Some(file:/__w/datafusion-comet/datafusion-comet/apache-spark/target/tmp/spark-76b62d31-5bd6-4d4b-9770-262cb08e84f3)
org.apache.spark.sql.AnalysisException: [COLUMN_ALREADY_EXISTS] The column `id` already exists. Choose another name or rename the existing column. SQLSTATE: 42711
	at org.apache.spark.sql.errors.QueryCompilationErrors$.columnAlreadyExistsError(QueryCompilationErrors.scala:2700)
	at org.apache.spark.sql.util.SchemaUtils$.checkColumnNameDuplication(SchemaUtils.scala:151)
	at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:86)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:117)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:115)
	at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:129)
	at org.apache.spark.sql.execution.QueryExecution.$anonfun$eagerlyExecuteCommands$2(QueryExecution.scala:155)
[info] - SPARK-25389 INSERT OVERWRITE LOCAL DIRECTORY ... STORED AS with duplicated names(caseSensitivity=true, format=orc) (22 milliseconds)
18:44:25.173 ERROR org.apache.spark.sql.execution.command.InsertIntoDataSourceDirCommand: Failed to write to directory Some(file:/__w/datafusion-comet/datafusion-comet/apache-spark/target/tmp/spark-76ef391d-5d5f-4997-afb4-97ac714c1697)
