Expecto: Extracting Formal Specifications from Natural Language Description for Trustworthy Oracles (Artifact)
This repository contains the artifact implementation for the paper Expecto: Extracting Formal Specifications from Natural Language Description for Trustworthy Oracles.
The experiments in the paper were conducted with the following setup:
- Ubuntu 22.04
- Docker 24.0.2
- 64 CPU cores (Xeon Gold 6226R)
- 512 GB RAM
- 256 GB storage
First, obtain the Docker image that includes the datasets, dependencies, and experiment code. Pulling the image is the easiest option.
docker pull prosyslab/expecto-artifact

If you downloaded expecto-artifact.tar.gz from Zenodo, you can load it instead:
gunzip -c expecto-artifact.tar.gz | docker load

You can verify that the image was pulled or loaded correctly by running:
docker images | grep expecto-artifact
> prosyslab/expecto-artifact latest ...

Start a container from the image:

docker run -it --name expecto-artifact prosyslab/expecto-artifact zsh

Expecto uses GPT-4.1-mini (OpenAI) as its LLM backend, so you need a valid OpenAI API key to run the experiments.
Inside the container, change to /workspace/expecto-artifact and create a .env file with your OpenAI API key.
Note: An OpenAI API key will be provided through a HotCRP comment.
cd /workspace/expecto-artifact
cat > .env <<'EOF'
OPENAI_API_KEY=YOUR_KEY_HERE
EOF

Run the following script to confirm that the setup completed successfully.
Make sure every item in the Summary section is marked as [PASS].
It takes about 2 minutes.
Command:
python3 scripts/test.py

Expected output:
[PASS] .env file: /workspace/expecto-artifact/.env
[PASS] OPENAI_API_KEY: OPENAI_API_KEY is defined in .env and starts with 'sk-'
[PASS] datasets directory: /workspace/expecto-artifact/datasets
[PASS] dataset file: datasets/apps.json
[PASS] dataset file: datasets/human_eval_plus.json
[PASS] dataset file: datasets/defects4j.jsonl
Running RQ1 smoke test:
/usr/bin/python3 /workspace/expecto-artifact/scripts/run_artifact.py rq1 --limit 1 --output-root /workspace/data/experiment/artifact/test-smoke --force
...
[PASS] RQ1 execution: run_artifact.py completed successfully
[PASS] Expecto markers: .../full/runs/apps/ts/evaluation_result/manifest.json
[PASS] NL2Postcond marker: .../full/runs/apps/nl2_base/.../aggregated_result.json
[PASS] RQ1 figure output: .../full/figures/rq1/evaluation.rq1.table.pdf
Summary:
[PASS] .env file
[PASS] OPENAI_API_KEY
[PASS] datasets directory
[PASS] dataset files
[PASS] RQ1 execution
[PASS] Expecto markers
[PASS] NL2Postcond markers
[PASS] RQ1 figure outputs
[PASS] sample result outputs
This test script checks the following:
- The required datasets exist.
- `OPENAI_API_KEY` is set.
- The RQ1 experiment can run on a single sample, and the final figure files are generated correctly.
Note: The figures generated by test.py are not intended to match the exact values reported in the paper. They only confirm that the workflow runs correctly before you start a full reproduction.
- LLM non-determinism: The generated specifications and their classifications may differ from those reported in the paper because of LLM non-determinism. However, the overall tendencies (e.g., Expecto producing more `S&C` specifications than NL2Postcond) should still hold.
The expecto-artifact repository has the following directory structure:
/workspace/expecto-artifact
├── README.md <- Top-level README (this file)
├── analyzer/ <- Figure and table generation scripts
├── datasets/ <- HumanEval+, APPS, and Defects4J datasets
├── expecto/ <- Core Expecto implementation
├── nl-2-postcond/ <- NL2Postcond baseline from FSE '24
└── scripts/ <- Scripts for running artifact experiments
└── run_artifact.py <- Main artifact runner
Generated outputs from run_artifact.py are stored as follows. The most important files to inspect are sample_results.json for per-run summaries and the PDFs under figures/ for the reproduced paper artifacts.
/workspace/data/experiment/artifact
├── target/ <- Outputs for targeted reproduction (Section 3)
│ └── runs/ <- Raw LLM outputs and evaluation results
│ └── <BENCHMARK>/<CONFIG>/ <- Outputs for one benchmark/configuration pair
│ └── sample_results.json <- Summary of generated sample results
├── full/ <- Outputs for full paper reproduction (Sections 4 and 5)
│ ├── runs/ <- Raw LLM outputs and evaluation results
│ └── figures/ <- Tables and figures generated from `runs/`
│ ├── rq1/ <- Figures for RQ1
│ ├── rq2/ <- Figures for RQ2
│ ├── rq3/ <- Figures for RQ3
│ └── rq4/ <- Figures for RQ4
└── mini/ <- Outputs for reduced quick-check runs (Section 6)
├── runs/
└── figures/
Use the target command when you want to reproduce individual benchmark problems instead of the full paper results. The paper covers three benchmarks, four Expecto configurations, and two NL2Postcond configurations; you can run any supported combination by specifying the benchmark, configuration, and sample ID(s).
Run the following commands to compare Expecto's tree-search configuration (ts) with the two NL2Postcond configurations (nl2_base and nl2_simple) on the motivating example in Fig. 2.
It takes about 2 minutes.
# run Expecto (`ts` means the tree search specification synthesis + specification validation with test cases)
python3 scripts/run_artifact.py target \
--benchmark apps \
--config ts \
--sample-ids 75
# run NL2Postcond base prompt configuration
python3 scripts/run_artifact.py target \
--benchmark apps \
--config nl2_base \
--sample-ids 75
# run NL2Postcond simple prompt configuration
python3 scripts/run_artifact.py target \
--benchmark apps \
--config nl2_simple \
--sample-ids 75

Each command writes its result to one of the following files:
/workspace/data/experiment/artifact/target/runs/apps/ts/sample_results.json:
[
{
"id": "75",
"classification": "S&C",
"nl_description": "You are given a description of a depot...",
"specification": "predicate spec(n: int, m: int, grid: list[string], out_status: string, out_x: int, out_y: int) { ... }"
}
]

/workspace/data/experiment/artifact/target/runs/apps/nl2_base/sample_results.json:
[
{
"id": "75",
"classification": "S",
"nl_description": "You are given a description of a depot...",
"specification": "assert ..."
}
]

/workspace/data/experiment/artifact/target/runs/apps/nl2_simple/sample_results.json:
[
{
"id": "75",
"classification": "C",
"nl_description": "You are given a description of a depot...",
"specification": "assert ..."
}
]

Each sample_results.json entry contains:

- `id`: the benchmark problem ID. In this example, it is always `75`.
- `classification`: the evaluation result for the generated specification. Here, Expecto produces a sound and complete (`S&C`) specification, while the two NL2Postcond configurations produce sound but incomplete (`S`) or complete but unsound (`C`) specifications.
- `nl_description`: the original natural-language description from the APPS benchmark prompt.
- `specification`: the generated formal specification. This field stores the specifications shown in Fig. 2. Expecto emits a DSL specification such as `predicate spec(...)`, and NL2Postcond emits a single assertion such as `assert ...`.
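If you want to check a classification programmatically, a small helper like the following can read a sample_results.json payload. Note that `classification_of` is a hypothetical convenience function for illustration, not part of the artifact:

```python
import json

def classification_of(sample_results_json: str, sample_id: str) -> str:
    """Return the classification recorded for one sample ID."""
    entries = json.loads(sample_results_json)
    by_id = {e["id"]: e["classification"] for e in entries}
    return by_id[sample_id]

# Example entries shaped like the sample_results.json files above
# (contents abbreviated; the real files also hold the description and spec).
ts_results = '[{"id": "75", "classification": "S&C"}]'
nl2_base_results = '[{"id": "75", "classification": "S"}]'

print(classification_of(ts_results, "75"))        # S&C
print(classification_of(nl2_base_results, "75"))  # S
```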
Use the following command template for any other target problem:
python3 scripts/run_artifact.py target \
--benchmark <BENCHMARK> \
--config <CONFIG> \
--sample-ids <SAMPLE-IDS>

`BENCHMARK` specifies which benchmark to use. You can choose `apps`, `humaneval_plus`, or `defects4j`.

`CONFIG` specifies the Expecto or NL2Postcond configuration used in the paper.
| Method | CONFIG value | Description | Used paper section(s) |
|---|---|---|---|
| Expecto | `mono` | Monolithic specification synthesis + specification validation with test cases | 4.3 |
| Expecto | `topdown` | Top-down specification synthesis + specification validation with test cases | 4.3 |
| Expecto | `ts` | Tree search specification synthesis + specification validation with test cases | 4.2, 4.3, 4.4, 4.5 |
| Expecto | `without_tc` | Tree search specification synthesis | 4.4 |
| NL2Postcond | `nl2_base` | NL2Postcond base prompt strategy | 4.2, 4.5 |
| NL2Postcond | `nl2_simple` | NL2Postcond simple prompt strategy | 4.2, 4.5 |
`SAMPLE-IDS` specifies the benchmark problem IDs as a comma-separated list. For example, you can provide `15,23,56`. You can find the available IDs for each benchmark in `/workspace/expecto-artifact/datasets/available_target_ids.csv`.
This command generates raw LLM outputs and evaluates the resulting specifications for soundness and completeness.
The generated outputs and evaluation results are saved in /workspace/data/experiment/artifact/target/runs/<BENCHMARK>/<CONFIG>.
After the evaluation is complete, the runner also generates /workspace/data/experiment/artifact/target/runs/<BENCHMARK>/<CONFIG>/sample_results.json, so you can immediately inspect the classification result, original natural-language description, and generated specification for each sample.
sample_results.json contains one object per sample, and each object has the following fields:
- `id`: the sample ID
- `classification`: the sample classification result. It can be one of the following values:
  - `S&C`: sound and complete
  - `S`: sound but incomplete
  - `C`: complete but unsound
  - `W`: neither sound nor complete
- `nl_description`: the original natural-language description
  - APPS and HumanEval+ use the benchmark prompt text
  - Defects4J uses the method Javadoc description
- `specification`: the generated specification
  - Expecto generates specifications in the form of `predicate spec(...) { ... }`
  - NL2Postcond generates specifications in the form of `assert ...`
This section explains the full command, which runs all experiments in the paper for RQ1 through RQ4.
It performs the generation and evaluation described in Section 3 on the full benchmarks, then produces the paper's figures and tables.
The total expected runtime of the full command is approximately 50 hours.
python3 scripts/run_artifact.py full

Running the command above executes all generation-and-evaluation settings used in the paper.
For APPS and HumanEval+, it runs 6 settings per benchmark: 4 Expecto configurations and 2 NL2Postcond configurations.
For Defects4J, it runs 3 settings: 1 Expecto configuration and 2 NL2Postcond configurations.
In total, this is 6 + 6 + 3 = 15 generation-and-evaluation runs.
It then processes the outputs into the same figure and table formats shown in the paper.
- The results for each benchmark / generation-configuration combination are stored at:
  /workspace/data/experiment/artifact/full/runs/<BENCHMARK>/<CONFIG>/sample_results.json
- The raw LLM outputs and evaluation artifacts used to build each summary are stored alongside it under
/workspace/data/experiment/artifact/full/runs/<BENCHMARK>/<CONFIG>/
- The paper's figures and tables are stored at:
  - Table 1 (RQ1 main comparison): /workspace/data/experiment/artifact/full/figures/rq1/evaluation.rq1.table.pdf
  - Fig. 8 (RQ1 threshold analysis): /workspace/data/experiment/artifact/full/figures/rq1/evaluation.thresholds.pdf
  - Fig. 9 (RQ2 generation algorithm ablation): /workspace/data/experiment/artifact/full/figures/rq2/evaluation.rq2.pdf
  - Fig. 10 (RQ3 test-case ablation): /workspace/data/experiment/artifact/full/figures/rq3/evaluation.rq3.testcase.pdf
  - Table 2 (RQ4 Defects4J comparison): /workspace/data/experiment/artifact/full/figures/rq4/evaluation.rq4.defects4j.table.pdf
For details on each RQ, see Section 5.
This section explains the commands and outputs used to reproduce each research question (RQ).
If you have already completed the full experiment with the full command, you can inspect the generated outputs instead of running the commands again.
This experiment compares Expecto against the two NL2Postcond prompt strategies (nl2_base and nl2_simple) on the APPS and HumanEval+ benchmarks.
For Expecto, the comparison uses the tree-search-with-test-cases (ts) configuration.
It takes about 12 hours from scratch, but if you have already run the full command, you can skip this step and inspect the generated outputs instead.
python3 scripts/run_artifact.py rq1

This command runs specification generation and evaluation for six experiment settings (2 benchmarks x 3 configs) and produces the main RQ1 outputs: Table 1 and Fig. 8.
Table 1 shows the number of samples in each classification category (S&C, S, C, and W) for each setting.
Fig. 8 shows how the number of samples in the S&C and W categories changes as the threshold X in the soundness criterion varies.
For example, suppose a benchmark has 10 incorrect input-output pairs, and a generated specification rejects 8 of them.
Then the specification is sound when X = 80, because it catches at least 80% of the incorrect pairs, but it is not sound when X = 90.
Fig. 8 repeats this counting for different threshold values and shows how the classification changes.
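The threshold logic in this worked example can be sketched as follows; `is_sound` is an illustrative helper, not the artifact's actual scoring code:

```python
def is_sound(num_rejected: int, num_incorrect: int, threshold_pct: int) -> bool:
    """A specification counts as sound at threshold X when it rejects
    at least X% of the incorrect input-output pairs."""
    return num_rejected * 100 >= threshold_pct * num_incorrect

# The example above: 10 incorrect pairs, 8 of them rejected.
for x in (70, 80, 90):
    print(x, is_sound(8, 10, x))
# 70 True, 80 True, 90 False
```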
The generated raw data and final outputs can be found in:
- Paths to sample_results.json:
  - /workspace/data/experiment/artifact/full/runs/apps/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/apps/nl2_base/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/apps/nl2_simple/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/nl2_base/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/nl2_simple/sample_results.json
- Table 1 (RQ1 main comparison): /workspace/data/experiment/artifact/full/figures/rq1/evaluation.rq1.table.pdf
- Fig. 8 (RQ1 threshold analysis): /workspace/data/experiment/artifact/full/figures/rq1/evaluation.thresholds.pdf
- DSL compiler bug fix: After the submission version, we found bugs in the DSL compiler that caused some specifications to be misclassified as `W` or `S` instead of `S&C`. We have fixed these bugs, so you may see more `S&C` specifications than those reported in the paper, especially for Expecto on HumanEval+. However, the overall trends and conclusions should still hold.
This experiment evaluates the two main generation components of Expecto: top-down specification synthesis and tree search.
It compares three Expecto configurations on the APPS and HumanEval+ benchmarks: monolithic generation (mono), top-down synthesis without tree search (topdown), and top-down synthesis with tree search (ts).
It takes about 24 hours from scratch, but if you have already run the full command, you can skip this step and inspect the generated outputs instead.
python3 scripts/run_artifact.py rq2

This command runs specification generation and evaluation for six Expecto ablation settings (2 benchmarks x 3 configs) and produces the main RQ2 output, Fig. 9.
Fig. 9 is a bar chart comparing the number of samples in each classification category for different Expecto generation algorithms.
It shows that introducing top-down synthesis and tree search improves the quality of the generated specifications, for example by increasing the number of S&C specifications and decreasing the number of W specifications.
This can be seen from the fact that, for each benchmark, the number of S&C specifications increases in the order mono < topdown < ts.
The generated raw data and final output can be found in:
- Paths to sample_results.json:
  - /workspace/data/experiment/artifact/full/runs/apps/mono/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/apps/topdown/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/apps/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/mono/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/topdown/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/ts/sample_results.json
- Fig. 9 (RQ2 generation algorithm ablation): /workspace/data/experiment/artifact/full/figures/rq2/evaluation.rq2.pdf
This experiment measures how much user-provided test cases help Expecto during specification generation.
It compares the full tree-search configuration with test cases (ts) against the version that does not use test cases (without_tc) on the APPS and HumanEval+ benchmarks.
It takes about 18 hours from scratch, but if you have already run the full command, you can skip this step and inspect the generated outputs instead.
python3 scripts/run_artifact.py rq3

This command runs specification generation and evaluation for four experiment settings (2 benchmarks x 2 configs) and produces the main RQ3 output, Fig. 10.
Fig. 10 is a bar chart comparing the number of samples in each classification category (S&C, S, C, W) between the ts configuration, which uses test cases, and the without_tc configuration, which does not.
It shows that introducing test cases improves the quality of the generated specifications, for example by increasing the number of S&C specifications.
The generated raw data and final output can be found in:
- Paths to sample_results.json:
  - /workspace/data/experiment/artifact/full/runs/apps/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/apps/without_tc/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/humaneval_plus/without_tc/sample_results.json
- Fig. 10 (RQ3 test-case ablation): /workspace/data/experiment/artifact/full/figures/rq3/evaluation.rq3.testcase.pdf
This experiment evaluates whether Expecto can generate bug-detecting specifications for real-world software methods.
It compares Expecto with the two NL2Postcond prompt strategies (nl2_base and nl2_simple) on the Defects4J benchmark, using the tree-search configuration with test cases (ts) for Expecto.
It takes about 12 hours from scratch, but if you have already run the full command, you can skip this step and inspect the generated outputs instead.
python3 scripts/run_artifact.py rq4

This command runs specification generation and evaluation for three Defects4J experiment settings (1 benchmark x 3 configs) and produces the main RQ4 output, Table 2.
Table 2 compares the number of specifications in each classification category (S&C, S, C, W) produced on the Defects4J benchmark by Expecto (ts) and the two NL2Postcond prompt strategies (nl2_base, nl2_simple).
The table shows that Expecto produces more specifications that are both correct and capable of detecting bugs (S&C) than NL2Postcond, while also producing fewer W specifications.
The generated raw data and final output can be found in:
- Paths to sample_results.json:
  - /workspace/data/experiment/artifact/full/runs/defects4j/ts/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/defects4j/nl2_base/sample_results.json
  - /workspace/data/experiment/artifact/full/runs/defects4j/nl2_simple/sample_results.json
- Table 2 (RQ4 Defects4J comparison): /workspace/data/experiment/artifact/full/figures/rq4/evaluation.rq4.defects4j.table.pdf
- DSL compiler bug fix: After the submission version, we found bugs in the DSL compiler that caused some specifications to be misclassified as `W` or `S` instead of `S&C`. We have fixed these bugs, so you may see more `S&C` specifications than those reported in the paper, especially for Expecto on Defects4J. However, the overall trends and conclusions should still hold.
This section explains the mini command, which runs a reduced version of the experiments on fixed subsets of the benchmarks used in the paper.
It performs the same generation, evaluation, and figure/table export steps as the full command, but uses only 20 samples from each benchmark so that the results can be produced much more quickly.
We recommend running mini first to check that your setup produces reasonable results before committing to the ~50-hour full run. It takes about 4.5 hours.
Note: mini is not meant to reproduce the exact paper numbers. It is meant for quick inspection and trend checking.
To run the reduced-profile experiments for RQ1 through RQ4, use:
python3 scripts/run_artifact.py mini

If you only want to run the reduced-profile experiment for a specific RQ, specify the RQ number as follows:

python3 scripts/run_artifact.py <rq1|rq2|rq3|rq4> --mini

Running the command above executes the same experiment families as the full profile for RQ1 through RQ4, but on fixed 20-sample subsets of APPS, HumanEval+, and Defects4J.
It writes per-run outputs to:
/workspace/data/experiment/artifact/mini/runs/<BENCHMARK>/<CONFIG>/sample_results.json
It also generates the corresponding reduced-profile figures and tables at:
- Table 1 (RQ1 main comparison): /workspace/data/experiment/artifact/mini/figures/rq1/evaluation.rq1.table.pdf
- Fig. 8 (RQ1 threshold analysis): /workspace/data/experiment/artifact/mini/figures/rq1/evaluation.thresholds.pdf
- Fig. 9 (RQ2 generation algorithm ablation): /workspace/data/experiment/artifact/mini/figures/rq2/evaluation.rq2.pdf
- Fig. 10 (RQ3 test-case ablation): /workspace/data/experiment/artifact/mini/figures/rq3/evaluation.rq3.testcase.pdf
- Table 2 (RQ4 Defects4J comparison): /workspace/data/experiment/artifact/mini/figures/rq4/evaluation.rq4.defects4j.table.pdf
The fixed sample IDs used by mini are listed below. You do not need these IDs unless you want to inspect exactly which subset is used.
- APPS IDs: 15, 57, 23, 76, 83, 39, 101, 3701, 4004, 37, 94, 16, 61, 52, 42, 47, 72, 90, 4005, 71
- HumanEval+ IDs: 22, 52, 75, 61, 83, 92, 34, 53, 73, 129, 140, 158, 54, 124, 6, 71, 32, 123, 162, 63
- Defects4J IDs:
Chart_6_workspace_objdump_d4j_full_fresh_chart_6_source_org_jfree_chart_util_ShapeList_java_boolean_equals_Object_obj, Cli_18_workspace_objdump_d4j_full_fresh_cli_18_src_java_org_apache_commons_cli_PosixParser_java_void_processOptionToken_String_token_boolean_stopAtNonOption, Compress_40_workspace_objdump_d4j_full_compress_40_src_main_java_org_apache_commons_compress_utils_BitInputStream_java_long_readBits_int_count, Jsoup_76_workspace_objdump_d4j_full_jsoup_76_src_main_java_org_jsoup_parser_HtmlTreeBuilderState_java_boolean_process_Token_t_HtmlTreeBuilder_tb, Jsoup_85_workspace_objdump_d4j_full_jsoup_85_src_main_java_org_jsoup_nodes_Attribute_java_Attribute_String_key_String_val_Attributes_parent, Lang_32_workspace_objdump_d4j_full_lang_32_src_main_java_org_apache_commons_lang3_builder_HashCodeBuilder_java_boolean_isRegistered_Object_value, Math_35_workspace_objdump_d4j_full_math_35_src_main_java_org_apache_commons_math3_genetics_ElitisticListPopulation_java_ElitisticListPopulation_int_populationLimit_double_elitismRate, Math_73_workspace_objdump_d4j_full_math_73_src_main_java_org_apache_commons_math_analysis_solvers_BrentSolver_java_double_solve_UnivariateRealFunction_f_double_min_double_max, Math_80_workspace_objdump_d4j_full_math_80_src_main_java_org_apache_commons_math_linear_EigenDecompositionImpl_java_boolean_flipIfWarranted_int_n_int_step, Math_96_workspace_objdump_d4j_full_math_96_src_java_org_apache_commons_math_complex_Complex_java_boolean_equals_Object_other, Cli_10_workspace_objdump_d4j_full_fresh_cli_10_src_java_org_apache_commons_cli_Parser_java_CommandLine_parse_Options_options_String_arguments_Properties_properties_boolean_stopAtNonOption, Cli_18_workspace_objdump_d4j_full_fresh_cli_18_src_java_org_apache_commons_cli_PosixParser_java_String_flatten_Options_options_String_arguments_boolean_stopAtNonOption, 
Cli_32_workspace_objdump_d4j_full_fresh_cli_32_src_main_java_org_apache_commons_cli_HelpFormatter_java_int_findWrapPos_String_text_int_width_int_startPos, Closure_114_workspace_objdump_d4j_full_fresh_closure_114_src_com_google_javascript_jscomp_NameAnalyzer_java_void_visit_NodeTraversal_t_Node_n_Node_parent, Closure_74_workspace_objdump_d4j_full_fresh_closure_74_src_com_google_javascript_jscomp_PeepholeFoldConstants_java_Node_tryFoldBinaryOperator_Node_subtree, Closure_78_workspace_objdump_d4j_full_fresh_closure_78_src_com_google_javascript_jscomp_PeepholeFoldConstants_java_Node_tryFoldArithmeticOp_Node_n_Node_left_Node_right, Closure_97_workspace_objdump_d4j_full_fresh_closure_97_src_com_google_javascript_jscomp_PeepholeFoldConstants_java_Node_tryFoldBinaryOperator_Node_subtree, Codec_4_workspace_objdump_d4j_full_codec_4_src_java_org_apache_commons_codec_binary_Base64_java_byte_encodeBase64_byte_binaryData_boolean_isChunked_boolean_urlSafe_int_maxResultSize, Jsoup_38_workspace_objdump_d4j_full_jsoup_38_src_main_java_org_jsoup_parser_HtmlTreeBuilderState_java_boolean_process_Token_t_HtmlTreeBuilder_tb, Jsoup_46_workspace_objdump_d4j_full_jsoup_46_src_main_java_org_jsoup_nodes_Entities_java_void_escape_StringBuilder_accum_String_string_Document_OutputSettings_out_boolean_inAttribute_boolean_normaliseWhite_boolean_stripLeadingWhite
This section explains the easiest way to add a new benchmark in the same style as APPS or HumanEval+.
Below is a minimal example of a new benchmark item named my_benchmark.
{
"problem_id": "1",
"prompt": "Write a function that returns True when a list of integers is sorted in non-decreasing order, and False otherwise.",
"input_output": "{\"inputs\": [[[1, 2, 2, 5]], [[3, 1, 4]], [[7]], [[0, 0, 0]], [[5, 4]]], \"outputs\": [true, false, true, true, false]}",
"mutated_input_output": "{\"inputs\": [[[1, 2, 2, 5]], [[3, 1, 4]], [[7]], [[0, 0, 0]], [[5, 4]]], \"outputs\": [false, true, false, false, true]}",
"signature": "def postcondition(nums: list[int], result: bool):\n pass"
}

This example works as follows:

- `problem_id = "1"` becomes the sample identifier. The loader also uses it when filtering samples and selecting validation test cases.
- `prompt` is the natural-language task description shown to the model.
- `input_output` stores correct input-output pairs. It is used to check completeness, so the generated specification should accept these examples.
- `mutated_input_output` stores incorrect input-output pairs. It is used to check soundness, so the generated specification should reject them.
- `signature` indicates that the specification is a `postcondition` function over `nums: list[int]` and `result: bool` using Python syntax. Every parameter in this signature must have an explicit type annotation.
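To make the intended semantics concrete, here is a reference postcondition consistent with the example pairs above. This is illustrative only; Expecto generates specifications automatically, and this function is not part of the dataset:

```python
def postcondition(nums: list[int], result: bool) -> bool:
    """Reference behavior for the my_benchmark example: result should be
    True exactly when nums is sorted in non-decreasing order."""
    is_sorted = all(a <= b for a, b in zip(nums, nums[1:]))
    return result == is_sorted

# Correct pairs from input_output should be accepted...
print(postcondition([1, 2, 2, 5], True))   # True
print(postcondition([3, 1, 4], False))     # True
# ...and mutated pairs from mutated_input_output should be rejected.
print(postcondition([1, 2, 2, 5], False))  # False
```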
If your new benchmark follows the same structure, each item should contain the following fields.
| Field | Required? | Where it is used |
|---|---|---|
| `problem_id` | Yes | Becomes `Sample.id` and `metadata["problem_id"]`. It is also used as the stable key for sample filtering and validation-test sampling in `record_to_sample(...)`. |
| `prompt` | Yes | Becomes the natural-language problem description seen by the solver. In APPS, the loader combines it with a few example test cases; in HumanEval+, the loader uses it directly as `Sample.input`. |
| `input_output` | Yes | A string representation of an object that contains inputs and outputs for correct input-output pairs. `record_to_sample(...)` calls `json.loads(...)` on this string, pairs each input with the corresponding output, and stores the result in `metadata["test_list"]`. This list is used for completeness evaluation. A small subset also becomes `metadata["prompt_test_list"]` to guide generation. |
| `mutated_input_output` | Yes | A string representation of an object that contains inputs and outputs for incorrect or bug-revealing input-output pairs. `record_to_sample(...)` calls `json.loads(...)` on this string, pairs each input with the corresponding output, and stores the result in `metadata["mutated_test_list"]`. This list is used for soundness evaluation. |
| `signature` | Yes | Copied to `metadata["signature"]`. Every parameter must have an explicit type annotation. Write this signature using Python type syntax such as `list[int]`, `bool`, and `tuple[int, str]`. The existing internal converters then transform those Python type annotations into the specification type system used by Expecto, so you should reuse that path instead of writing Expecto-specific types directly in the dataset. |
| `parser` | No | Copied to `metadata["parser"]`. This field is only needed when the stored input and output examples must be converted before they match the signature. APPS needs it because its examples are stored as standard input and output strings. HumanEval+ does not need it because its examples are already stored as structured values, so it uses `null`. |
If your benchmark needs extra metadata, you can add fields beyond those listed above. The fields listed here are the ones that must be handled by the current APPS- and HumanEval+-style loaders. Any other fields may be added freely for benchmark-specific use.
The input_output and mutated_input_output fields are stored as strings. Each string must contain a JSON object with two arrays named inputs and outputs.
All benchmarks in this format share the same first loading step:
- Read the string in `input_output` or `mutated_input_output`.
- Call `json.loads(...)` on that string.
- Read the `inputs` array and the `outputs` array from the decoded object.
- Pair them with `zip(inputs, outputs)`.
- Store the resulting list of `(input, output)` pairs in sample metadata.
Regardless of the benchmark, these fields must decode into an object with the exact keys inputs and outputs.
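The shared loading steps can be sketched as follows; `load_pairs` is a simplified stand-in for what `record_to_sample(...)` does, not the artifact's actual code:

```python
import json

def load_pairs(input_output: str) -> list[tuple]:
    """Decode an input_output (or mutated_input_output) string and
    return the list of (input, output) pairs stored in metadata."""
    decoded = json.loads(input_output)  # must yield keys "inputs" and "outputs"
    inputs, outputs = decoded["inputs"], decoded["outputs"]
    assert len(inputs) == len(outputs), "both arrays must have the same length"
    return list(zip(inputs, outputs))

# HumanEval+-style structured values need no extra parser.
raw = '{"inputs": [[[1, 2, 3], 2], [[5, 5], 1]], "outputs": [true, false]}'
print(load_pairs(raw))  # [([[1, 2, 3], 2], True), ([[5, 5], 1], False)]
```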
If your benchmark already stores structured values, json.loads(...) is enough, and you usually do not need a parser. HumanEval+ uses this style. After loading, the data can look like this:
{
"inputs": [[[1, 2, 3], 2], [[5, 5], 1]],
"outputs": [true, false]
}

In this case, each decoded input value already matches the structure expected by the signature. For example, if the signature is def postcondition(nums: list[int], result: bool):, an input may already be a list such as [1, 2, 3], and an output may already be true or false. When these values come from input_output, they are treated as correct input-output pairs and used for completeness evaluation. When they come from mutated_input_output, they are treated as incorrect input-output pairs and used for soundness evaluation.
If your benchmark stores raw standard input and standard output text, json.loads(...) is only the first step. APPS uses this style. After loading, the data may look like this:
{
"inputs": ["1 2 3\n", "5 4\n"],
"outputs": ["True\n", "False\n"]
}

In this case, the decoded values are still just strings. They do not yet match the structure expected by the signature, so you need a parser field that converts an input string and an output string into the argument tuple required by postcondition(...).
For example, if the signature is def postcondition(nums: list[int], result: bool):, the parser may need to:
- Split the input string.
- Convert the tokens to integers.
- Convert the output string to a Boolean value.
- Return `(nums, result)`.
The loader still stores (input, output) pairs in metadata, but later generation and evaluation steps call the parser to convert those raw strings into meaningful values. Pairs from input_output are used to check completeness, and pairs from mutated_input_output are used to check soundness.
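For the APPS-style example above, a parser along these lines would do the conversion. This is an illustrative sketch; the exact parser interface in the artifact may differ:

```python
def parser(raw_input: str, raw_output: str) -> tuple:
    """Convert raw stdin/stdout text into the argument tuple for
    postcondition(nums: list[int], result: bool)."""
    nums = [int(tok) for tok in raw_input.split()]  # "1 2 3\n" -> [1, 2, 3]
    result = raw_output.strip() == "True"           # "True\n" -> True
    return (nums, result)

print(parser("1 2 3\n", "True\n"))  # ([1, 2, 3], True)
print(parser("5 4\n", "False\n"))   # ([5, 4], False)
```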
In both cases, the loader pairs inputs[i] with outputs[i], so both arrays must have the same length.
For signature, prefer ordinary Python type annotations. For example:
def postcondition(xs: list[int], target: int, result: bool):
    pass

This project already contains converters that read these Python type annotations and turn them into the internal specification types used during template generation and evaluation. You should reuse that conversion path rather than inventing a separate type notation for the dataset.
| Python annotation | Internal specification type |
|---|---|
| `int` | `int` |
| `bool` | `bool` |
| `float` | `real` |
| `str` | `string` |
| `list[T]` | `list[T]` |
| `set[T]` | `set[T]` |
| `tuple[T1, T2, ...]` | `tuple[T1, T2, ...]` |
| `dict[K, V]` | `map[K, V]` |
| `Optional[T]` | `option[T]` |
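For intuition, the mapping in the table can be approximated by a small recursive renamer. This is a hypothetical sketch for simple annotations, not the actual converter shipped in the artifact:

```python
# Hypothetical sketch of the annotation-to-specification-type mapping;
# the real conversion logic lives in expecto/src/utils/dsl.py.
RENAMES = {"float": "real", "str": "string", "dict": "map", "Optional": "option"}

def to_spec_type(annotation: str) -> str:
    """Map a Python type annotation string to the internal type name.
    Handles simple cases only; commas inside nested tuples would need
    a real parser."""
    head, sep, rest = annotation.partition("[")
    head = RENAMES.get(head.strip(), head.strip())
    if not sep:
        return head
    args = [to_spec_type(a) for a in rest.rstrip("]").split(",")]
    return f"{head}[{', '.join(args)}]"

print(to_spec_type("float"))           # real
print(to_spec_type("dict[str, int]"))  # map[string, int]
```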
The relevant conversion logic is in /workspace/expecto-artifact/expecto/src/utils/dsl.py.
- Add a new dataset file such as `datasets/my_benchmark.json`.
- Create a new task file by copying `/workspace/expecto-artifact/expecto/src/tasks/apps.py` or `/workspace/expecto-artifact/expecto/src/tasks/humaneval_plus.py`.
- Update the task name, dataset file name, and `benchmark=` argument passed to `sample_sequence_for_validation(...)`.
- Keep the `record_to_sample(...)` contract compatible with the existing solvers and scorers.
If your new benchmark can be expressed with this schema, you usually do not need to change the solver or scorer code.
In practice, adding the dataset file and a new task module modeled after
/workspace/expecto-artifact/expecto/src/tasks/apps.py or /workspace/expecto-artifact/expecto/src/tasks/humaneval_plus.py is usually enough.