[SPARK-56424][PYTHON] Add ASV benchmark for SQL_SCALAR_PANDAS_UDF #55316
Open
Yicong-Huang wants to merge 1 commit into apache:master
### What changes were proposed in this pull request?

Add ASV microbenchmarks for the `SQL_SCALAR_PANDAS_UDF` eval type in `bench_eval_type.py`. The new `_ScalarPandasBenchMixin` follows the same pattern as the existing `_ScalarArrowBenchMixin` but uses Pandas Series operations. It measures the full Arrow-to-Pandas-to-Arrow round-trip that occurs in scalar Pandas UDFs.

Scenarios (9): `sm_batch_few_col`, `sm_batch_many_col`, `lg_batch_few_col`, `lg_batch_many_col`, `pure_ints`, `pure_floats`, `pure_strings`, `pure_ts`, `mixed_types`

UDFs (3): `identity_udf` (passthrough), `sort_udf` (`Series.sort_values`), `nullcheck_udf` (`Series.notna`)

Benchmark classes (2): `ScalarPandasUDFTimeBench`, `ScalarPandasUDFPeakmemBench`

### Why are the changes needed?
This is part of the PySpark Serializer & EvalType Refactor effort (SPARK-55724). We need baseline benchmarks for every eval type before refactoring the serialization path, so we can detect performance regressions.
`SQL_SCALAR_PANDAS_UDF` is one of the most commonly used eval types, and its Arrow-to-Pandas-to-Arrow conversion cost is a key metric for the refactor.

### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Ran all 54 benchmarks (27 time + 27 peakmem) locally with `python/asv run --python=same --quick`.

### Was this patch authored or co-authored using generative AI tooling?
No.
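For context on how the 27 time + 27 peakmem cases above arise, ASV expresses such a grid through class-level `params`/`param_names`. A hypothetical sketch (the scenario and UDF names come from this PR's description, but the data shapes, lambda bodies, and class internals are invented for illustration):

```python
import pandas as pd

SCENARIOS = [
    "sm_batch_few_col", "sm_batch_many_col",
    "lg_batch_few_col", "lg_batch_many_col",
    "pure_ints", "pure_floats", "pure_strings", "pure_ts", "mixed_types",
]

# The three UDF bodies named in the PR description.
UDFS = {
    "identity_udf": lambda s: s,             # passthrough
    "sort_udf": lambda s: s.sort_values(),   # Series.sort_values
    "nullcheck_udf": lambda s: s.notna(),    # Series.notna
}

class ScalarPandasUDFTimeBench:
    # ASV runs one timed case per (scenario, udf) combination.
    params = (SCENARIOS, list(UDFS))
    param_names = ("scenario", "udf")

    def setup(self, scenario, udf):
        # Illustrative data; the real benchmark builds scenario-specific batches.
        self.series = pd.Series(range(1_000))

    def time_udf(self, scenario, udf):
        UDFS[udf](self.series)

# 9 scenarios x 3 UDFs = 27 combinations; time + peakmem variants give 54.
assert len(SCENARIOS) * len(UDFS) == 27
```

A peakmem counterpart would follow the same pattern with a `peakmem_`-prefixed method, which is how ASV distinguishes timing from peak-memory benchmarks.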