Do you have the swe-bench harness code?

#2
by aidando73 - opened

Hi Nebius team,

Thanks for open-sourcing this; I appreciate it.

I'm assuming you worked from a fork of SWE-bench to evaluate these instances. Is that fork open source as well?

I'm having trouble just using the constants.py (copying it over to the current SWE-bench) and am running into errors like:

ValueError: Could not find requirements.txt at paths ['requirements.txt', 'requirements-dev.txt', 'requirements-test.txt', 'requirements_test.txt', 'requirements_dev.txt'] for repo DataDog/integrations-core at commit 0bc38e9e4a0259e455c5d79fb798fb924a427169
Nebius org

Hi @aidando73!

Could you tell me a bit more about how you did it?
Did you just copy the constants into the latest swe-bench commit and run the validation scripts?

Hi @ibragim-bad, thanks for your amazing work. Could you clarify which commit of SWE-bench should be used to run the evaluation?

Nebius org

Hi @godcherry7 @aidando73

You can use this fork for evaluation. It was forked from the original repo on this commit.

Could you please confirm that the code can successfully run the evaluation for SWE-bench-extra? Following the documentation, I couldn't get it to produce the correct results.

Nebius org

Can you please try again?
Here is a sample script that runs the validation on the first instance of this dataset with the gold patch:

import os
import json
import subprocess
from datasets import load_dataset

def main():
    os.makedirs("tmp", exist_ok=True)
    os.makedirs("log", exist_ok=True)
    
    print("Downloading SWE-bench-extra dataset from Hugging Face...")
    dataset = load_dataset("nebius/SWE-bench-extra", split="train")
    
    first_instance = dataset[0]
    print(f"Extracted first instance: {first_instance['instance_id']}")
    
    output_file = "test_sample.json"
    with open(output_file, "w") as f:
        json.dump([first_instance], f)
    
    print(f"Saved instance to {output_file}")
    
    print("Running validation script...")
    subprocess.run(
        "PYTHONPATH=/SWE-bench-extra python /SWE-bench-extra/swebench/harness/engine_validation.py \
        --instances_path /SWE-bench-extra/test_sample.json \
        --log_dir logs \
        --temp_dir tmp \
        --num_workers 1 \
        --verbose",
        shell=True,
        check=True
    )
    
    print("Done!")

if __name__ == "__main__":
    main()

Thanks for your response! You have indeed provided an elegant, unified defaultdict approach for configuring constants across multiple repositories. However, your fork is outdated, and I need to create and upload Docker images.

Therefore, I have integrated the constant configurations from the constants.py file you provided, which correspond to SWE-bench-extra, into the latest SWE-Bench repository for testing. Specifically:

  1. I added the following code after
    https://github.com/SWE-bench/SWE-bench/blob/bb6c98dae9b9786f2a0b7ca6f6bde61ad1c0cb09/swebench/harness/constants/python.py#L1433:

    from collections import defaultdict
    
    TEST_PYTEST_WO_DEPRECATION = (
        "pytest --no-header -rA --tb=no  -p no:cacheprovider -W ignore::DeprecationWarning"
    )
    
    MAP_VERSION_TO_INSTALL_PLACEHOLDER = {
        "0.0": {
            "python": "3.9",
            "packages": "requirements.txt",
            "pip_packages": [
                "pytest",
                "cython",
                "distro",
                "pytest-cov",
                "pytest-xdist",
                "pytest-mock",
                "pytest-asyncio",
                "pytest-bdd",
                "pytest-benchmark",
                "pytest-randomly",
                "responses",
                "mock",
                "hypothesis",
                "freezegun",
                "trustme",
                "requests-mock",
                "requests",
                "tomlkit",
            ],
            "install": "pip install --force-reinstall -e .; pip install -e .[test]; pip install -e .[testing]; pip install -e .[tests]; pip install -e .[dev]",
            "pre_install": ["apt install -y make gcc g++ pkg-config"],
            "test_cmd": TEST_PYTEST_WO_DEPRECATION,
        }
    }
    
    MAP_REPO_TO_REQS_PATHS_PLACEHOLDER = [
        "requirements.txt",
        "requirements-dev.txt",
        "requirements-test.txt",
        "requirements_test.txt",
        "requirements_dev.txt",
    ]
    
    MAP_REPO_TO_REQS_PATHS = defaultdict(
        lambda: MAP_REPO_TO_REQS_PATHS_PLACEHOLDER, MAP_REPO_TO_REQS_PATHS
    )
    
  2. I added the following import to
    https://github.com/SWE-bench/SWE-bench/blob/bb6c98dae9b9786f2a0b7ca6f6bde61ad1c0cb09/swebench/harness/constants/__init__.py#L6:

    from collections import defaultdict
    

    Then, I modified
    https://github.com/SWE-bench/SWE-bench/blob/bb6c98dae9b9786f2a0b7ca6f6bde61ad1c0cb09/swebench/harness/constants/__init__.py#L129
    to:

    MAP_REPO_VERSION_TO_SPECS = defaultdict(
        lambda: MAP_VERSION_TO_INSTALL_PLACEHOLDER,
        {**MAP_REPO_VERSION_TO_SPECS_JS, **MAP_REPO_VERSION_TO_SPECS_PY}
    )
    

    And modified
    https://github.com/SWE-bench/SWE-bench/blob/bb6c98dae9b9786f2a0b7ca6f6bde61ad1c0cb09/swebench/harness/constants/__init__.py#L139
    to:

    MAP_REPO_TO_EXT = defaultdict(
        lambda: "py",
        {
            **{k: "js" for k in MAP_REPO_VERSION_TO_SPECS_JS.keys()},
            **{k: "py" for k in MAP_REPO_VERSION_TO_SPECS_PY.keys()},
        }
    )
    
  3. I updated
    https://github.com/SWE-bench/SWE-bench/blob/bb6c98dae9b9786f2a0b7ca6f6bde61ad1c0cb09/swebench/harness/log_parsers/__init__.py#L2
    to:

    from swebench.harness.log_parsers.python import MAP_REPO_TO_PARSER_PY, parse_log_pytest
    from collections import defaultdict
    
    MAP_REPO_TO_PARSER = defaultdict(
        lambda: parse_log_pytest,
        {**MAP_REPO_TO_PARSER_JS, **MAP_REPO_TO_PARSER_PY}
    )
    

By doing so, I ported your constant configuration approach to the latest SWE-Bench repository, and I believe the data format should be compatible.
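
For anyone following along, here is a minimal standalone sketch of the fallback behaviour this defaultdict pattern provides (the repo names and paths are purely illustrative, not the real config):

from collections import defaultdict

# Repos with explicit entries keep them; any unknown repo silently
# falls back to the placeholder list on lookup.
explicit = {"django/django": ["requirements/py3.txt"]}
reqs_paths = defaultdict(lambda: ["requirements.txt", "requirements-dev.txt"], explicit)

print(reqs_paths["django/django"])        # ['requirements/py3.txt']
print(reqs_paths["0b01001001/spectree"])  # ['requirements.txt', 'requirements-dev.txt']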

However, when I executed the first instance for testing and evaluation, I encountered the following error:

(SWE-bench) root@cpu01-2050-SWE-bench:~/SWE-bench# python -m swebench.harness.run_evaluation --dataset_name nebius/SWE-bench-extra --predictions_path gold --max_workers 1 --split train --instance_ids 0b01001001__spectree-64 --cache_level instance --run_id validate-gold  --namespace ""
<frozen runpy>:128: RuntimeWarning: 'swebench.harness.run_evaluation' found in sys.modules after import of package 'swebench.harness', but prior to execution of 'swebench.harness.run_evaluation'; this may result in unpredictable behaviour
Using gold predictions - ignoring predictions_path
1 instances already run, skipping...
No instances to run.
Cleaning cached images...
Removed 0 images.
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/root/SWE-bench/swebench/harness/run_evaluation.py", line 617, in <module>
    main(**vars(args))
  File "/root/SWE-bench/swebench/harness/run_evaluation.py", line 528, in main
    return make_run_report(predictions, full_dataset, run_id, client)
  File "/root/SWE-bench/swebench/harness/reporting.py", line 81, in make_run_report
    test_specs = list(map(make_test_spec, full_dataset))
  File "/root/SWE-bench/swebench/harness/test_spec/test_spec.py", line 203, in make_test_spec
    env_script_list = make_env_script_list(instance, specs, env_name)
  File "/root/SWE-bench/swebench/harness/test_spec/python.py", line 132, in get_requirements
    return get_requirements_by_commit(instance["repo"], commit)
  File "/root/SWE-bench/swebench/harness/test_spec/python.py", line 77, in get_requirements_by_commit
    raise ValueError(
ValueError: Could not find requirements.txt at paths ['requirements.txt', 'requirements-dev.txt', 'requirements-test.txt', 'requirements_test.txt', 'requirements_dev.txt'] for repo 0b01001001/spectree at commit 2401580b6f41fe72f1360493ee46e8a842bd04ba

I couldn't find commit 2401580b6f41fe72f1360493ee46e8a842bd04ba for repository 0b01001001/spectree. SWE-Bench prioritizes using environment_setup_commit to create environments, but it seems that environment_setup_commit does not exist.

1. I am not sure how environment_setup_commit is obtained. Please confirm that environment_setup_commit corresponds to a real commit.
2. Is there a difference in how environment_setup_commit is used between the old and new versions?
3. It seems that repo:environment_setup_commit cannot be found. How does the SWE-bench-extra version locate the codebase corresponding to environment_setup_commit?
4. If I want to evaluate the SWE-bench-extra dataset using the new version of SWE-Bench, what modifications are needed to adapt to environment_setup_commit?

Yes, we forked the repository a while ago, before the Docker image functionality was added.
I tried running your commit on SWE-bench.
I updated the dataset with environment_setup_commit, which now essentially duplicates base_commit_sha.
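
If you want to verify this on your side, a quick sanity check along these lines should do (assuming the standard SWE-bench column names base_commit and environment_setup_commit; adjust if the schema differs):

from datasets import load_dataset

ds = load_dataset("nebius/SWE-bench-extra", split="train")
row = ds[0]
# After the update, the two commits should match for every instance
print(row["base_commit"], row["environment_setup_commit"])
assert row["environment_setup_commit"] == row["base_commit"]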

You just need to update the following "pre_install" command in your MAP_VERSION_TO_INSTALL_PLACEHOLDER:

"pre_install": ["apt install -y make gcc g++ pkg-config"] -> "pre_install": ["apt update && apt install -y make gcc g++ pkg-config"]

I tried using your command. Can you please try again?

python -m swebench.harness.run_evaluation --dataset_name nebius/SWE-bench-extra --predictions_path gold --max_workers 1 --split train --instance_ids 0b01001001__spectree-64 --cache_level instance --run_id validate-gold --namespace ""
Using gold predictions - ignoring predictions_path
Building base image (sweb.base.py.arm64:latest)
Base images built successfully.
No environment images need to be built.
Found 1 existing instance images. Will reuse them.
Running 1 instances...
1 ran successfully, 0 failed: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:19<00:00, 19.61s/it]
All instances run.
Cleaning cached images...
Removed 0 images.
Total instances: 1
Instances submitted: 1
Instances completed: 1
Instances incomplete: 0
Instances resolved: 1
Instances unresolved: 0
Instances with empty patches: 0
Instances with errors: 0
Unstopped containers: 0
Unremoved images: 1
Report written to gold.validate-gold.json

I spent a great deal of effort trying to modify the SWE-Bench environment constants for SWE-bench-extra to create images for evaluation, but I still cannot come up with a unified environment configuration scheme that works for creating images for all instances. Here is my modified approach: https://github.com/lycfight/SWE-bench.

SWE-bench is extremely rigid in how it sets up the environment configuration: everything must be hardcoded through constants. I removed "packages": "requirements.txt" from MAP_VERSION_TO_INSTALL_PLACEHOLDER to avoid triggering an exception at this line, since most repositories do not have a requirements file, yet that still doesn't solve the issue for the ones that do have one.

Furthermore, some repositories require certain system dependencies (such as MySQL) to be installed in the pre_install step. Your unified configuration conveniently handles the environment setup for most instances, but a few repositories with special dependency installations may not be covered and still need to be added manually.
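
For example, a per-repo override along these lines could cover the MySQL case, reusing the placeholder defined earlier (default-libmysqlclient-dev is my assumption of the Debian package that provides mysql_config; untested here, and not from either fork):

# Hypothetical override for MySQL-dependent repos such as
# 3YOURMIND/django-migration-linter (illustrative only)
MAP_VERSION_TO_INSTALL_MYSQL = {
    "0.0": {
        **MAP_VERSION_TO_INSTALL_PLACEHOLDER["0.0"],
        "pre_install": [
            "apt update && apt install -y make gcc g++ pkg-config default-libmysqlclient-dev"
        ],
    }
}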

If there is a more sophisticated approach, please share your thoughts.

Can you give me some examples of instances that don't have a requirements.txt and require certain dependencies? I will check them as well and sync your fork with ours. In our fork, we have updated the get_requirements function so that it will not fail if there is no requirements.txt file in the repository; the same goes for some commands like pre-install.

There are many such errors. The simplest way to address this is to clone my fork and add "packages": "requirements.txt" after this line, keeping it consistent with your constants.

Then, run the following commands:

cd SWE-bench
pip install -e .
python -m swebench.harness.run_evaluation \
    --dataset_name nebius/SWE-bench-extra \
    --predictions_path gold \
    --max_workers 8 \
    --split train \
    --cache_level instance \
    --run_id validate-gold \
    --namespace ""

This will print out numerous errors. Logs of failed image creations are saved in the SWE-bench/logs/build_images directory.

If you use my fork without removing "packages": "requirements.txt", you may encounter commits where no requirements.txt exists. Due to the way defaultdict is set up, it assumes that any repository without explicit configuration has requirements and attempts to find a requirements.txt file, which doesn't exist. This leads to an exception being triggered at:
https://github.com/SWE-bench/SWE-bench/blob/b8084edba022be47912d2a21eba5f01ac3990da3/swebench/harness/test_spec/python.py#L218

such as:
{"instance_id": "bimpression__sosw-93", "error": "Could not find requirements.txt at paths ['requirements.txt', 'requirements-dev.txt', 'requirements-test.txt', 'requirements_test.txt', 'requirements_dev.txt'] for repo bimpression/sosw at commit c3bd62526e62090f986b24a0b88decec4644b2b9"}

To avoid this issue, my only solution is to remove "packages": "requirements.txt", which makes SWE-Bench skip the conditional branch at:
https://github.com/SWE-bench/SWE-bench/blob/b8084edba022be47912d2a21eba5f01ac3990da3/swebench/harness/test_spec/python.py#L253

By doing so, it no longer searches for requirements.txt, preventing the exception from occurring. This results in a unified environment image being created for all instances.
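
For reference, the branch being skipped is roughly of this shape (my paraphrase of the upstream logic, not the exact code; env_setup_commands is an illustrative name, and get_requirements is stubbed so the snippet is runnable):

def get_requirements(instance):
    # Stub standing in for swebench.harness.test_spec.python.get_requirements,
    # which raises ValueError when no requirements file exists in the repo
    raise ValueError("Could not find requirements.txt ...")

def env_setup_commands(instance, specs):
    # Requirements are only fetched when the spec declares them, so
    # dropping the "packages" key sidesteps get_requirements entirely
    # (and with it the ValueError above)
    cmds = []
    if specs.get("packages") == "requirements.txt":
        reqs = get_requirements(instance)
        cmds.append(f"echo '{reqs}' > requirements.txt")
        cmds.append("pip install -r requirements.txt")
    return cmds

print(env_setup_commands({}, {"python": "3.9"}))  # [] -- branch skipped, no exception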

However, this still doesn't fully resolve the issue of setting up environments for repositories that do have requirements.txt. At:
https://github.com/SWE-bench/SWE-bench/blob/b8084edba022be47912d2a21eba5f01ac3990da3/swebench/harness/test_spec/python.py#L306

SWE-Bench only runs a generic installation command, which is insufficient for setting up the required environment in some cases. For example:

Issue with 3YOURMIND__django-migration-linter-186

The build environment lacks MySQL support, leading to errors such as:

2025-03-03 20:08:31,727 - INFO - Preparing metadata (setup.py): started
2025-03-03 20:08:31,832 - INFO - Preparing metadata (setup.py): finished with status 'error'
2025-03-03 20:08:31,836 - INFO - error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [13 lines of output]
      /bin/sh: 1: mysql_config: not found
      /bin/sh: 1: mariadb_config: not found
      /bin/sh: 1: mysql_config: not found
      ...
      OSError: mysql_config not found
      [end of output]

2025-03-03 20:08:32,426 - ERROR - Error: The command '/bin/sh -c /bin/bash /root/setup_repo.sh' returned a non-zero code: 1

Issue with 3YOURMIND__django-migration-linter-258

The error occurs due to missing dependencies, such as MySQL-related libraries:

2025-03-03 20:08:28,994 - INFO - Getting requirements to build wheel: started
2025-03-03 20:08:29,112 - INFO - Getting requirements to build wheel: finished with status 'error'
2025-03-03 20:08:29,116 - INFO - error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [26 lines of output]
      Trying pkg-config --exists mysqlclient
      Command 'pkg-config --exists mysqlclient' returned non-zero exit status 1.
      Trying pkg-config --exists mariadb
      Command 'pkg-config --exists mariadb' returned non-zero exit status 1.
      ...
      Exception: Can not find valid pkg-config name.
      Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
      [end of output]

2025-03-03 20:08:30,701 - ERROR - Error: The command '/bin/sh -c /bin/bash /root/setup_repo.sh' returned a non-zero code: 1

These errors suggest that the generic environment setup is not enough to handle repositories requiring additional system-level dependencies (e.g., MySQL). A more robust solution is needed to dynamically detect and install the correct dependencies for each repository.
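
One possible direction, purely as a sketch (the package-to-library mapping is my own illustration, not something in either fork): derive extra pre_install packages from each instance's pip requirements.

SYSTEM_DEPS = {
    # pip package -> apt packages it needs at build time (illustrative)
    "mysqlclient": ["default-libmysqlclient-dev", "pkg-config"],
    "psycopg2": ["libpq-dev"],
    "lxml": ["libxml2-dev", "libxslt1-dev"],
}

def extra_pre_install(pip_requirements):
    """Return apt packages implied by an instance's pip requirements."""
    apt_pkgs = set()
    for req in pip_requirements:
        # Strip version specifiers to get the bare distribution name
        name = req.split("==")[0].split(">=")[0].strip().lower()
        apt_pkgs.update(SYSTEM_DEPS.get(name, []))
    return sorted(apt_pkgs)

print(extra_pre_install(["Django>=2.0", "mysqlclient==2.1.1"]))
# ['default-libmysqlclient-dev', 'pkg-config']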

I can provide a subset of instance IDs where image creation failed due to incomplete settings in constants.py:

[
    "3YOURMIND__django-migration-linter-186",
    "3YOURMIND__django-migration-linter-258",
    "BlueBrain__NeuroM-990",
    "CMakePP__CMinx-57",
    "CTFd__CTFd-1802",
    "CTFd__CTFd-2269",
    "CTFd__CTFd-2333",
    "CodeForPhilly__chime-190",
    "Duke-GCB__lando-169",
    "ESMCI__cime-4019",
    "ESMCI__cime-4053",
    "ESMCI__cime-4058",
    "G-Node__python-odml-304",
    "HBehrens__fixup_shaper_svg-3",
    "HEPCloud__decisionengine-391",
    "Instagram__LibCST-626",
    "Instagram__LibCST-628",
    "KATO-Hiro__Somen-Soupy-262",
    "KATO-Hiro__Somen-Soupy-300",
    "LKI__chinese-calendar-11",
    "Materials-Consortia__optimade-python-tools-903",
    "Parquery__icontract-112",
    "PrefectHQ__prefect-5237",
    "ProjectQ-Framework__ProjectQ-53",
    "PyCQA__isort-1432",
    "PyCQA__isort-1717",
    "PyCQA__isort-1727",
    "PyPSA__PyPSA-524",
    "PyPSA__linopy-249",
    "RDFLib__rdflib-1496",
    "RadioAstronomySoftwareGroup__pyuvsim-448",
    "RasaHQ__rasa-sdk-237",
    "RedHatInsights__insights-core-3193",
    "RedHatInsights__insights-core-3486",
    "RedHatInsights__insights-core-3649",
    "RedHatInsights__insights-core-3949",
    "RedHatInsights__insights-core-3984",
    "ReproNim__reproman-518",
    "ReproNim__reproman-539",
    "ReproNim__reproman-544",
    "SpikeInterface__spikeinterface-2716",
    "TTitcombe__PrivacyPanda-20",
    "TTitcombe__PrivacyPanda-23",
    "TTitcombe__PrivacyPanda-29",
    "TTitcombe__PrivacyPanda-7",
    "UFRN-URNAI__urnai-tools-78",
    "VISTAS-IVES__pyvistas-57",
    "adafruit__circup-21",
    "adamboche__python-marshmallow-union-33",
    "adamgilman__furlong-5",
    "agronholm__anyio-483",
    "agronholm__anyio-485",
    "agronholm__anyio-597",
    "aj-white__cipher_typer-5",
    "alexa__alexa-skills-kit-sdk-for-python-92",
    "alltheplaces__alltheplaces-949",
    "aws__chalice-446",
    "aws__chalice-449",
    "biotite-dev__biotite-97",
    "bird-house__birdy-116",
    "bird-house__birdy-137",
    "bird-house__birdy-138",
    "bird-house__birdy-139",
    "bird-house__birdy-66",
    "bridgecrewio__checkov-211",
    "c0fec0de__anytree-232",
    "catalyst-team__catalyst-151",
    "chakki-works__seqeval-63",
    "codalab__codalab-worksheets-2049",
    "codalab__codalab-worksheets-2239",
    "conan-io__conan-2705",
    "conan-io__conan-3100",
    "conan-io__conan-3197",
    "conan-io__conan-4299",
    "csyhuang__hn2016_falwa-28",
    "cython__cython-2237",
    "d0c-s4vage__lookatme-213",
    "darosior__python-bip32-10",
    "darosior__python-bip32-11",
    "dbekaert__RAiDER-90",
    "desihub__desitransfer-10",
    "desihub__desitransfer-21"
]

These instances failed to create their images due to missing or incorrect constant settings.

As I said above, we updated the get_requirements function so that it doesn't crash. You just need to remove the raise in this function so that SWE-bench doesn't fail.

https://github.com/lycfight/SWE-bench/blob/0d8dd295a605ad166ce21393209ad22300aae64c/swebench/harness/test_spec/python.py#L71
Just update the function like this:

def get_requirements_by_commit(repo: str, commit: str) -> str:
    lines = ""
    for req_path in MAP_REPO_TO_REQS_PATHS[repo]:
        reqs_url = posixpath.join(SWE_BENCH_URL_RAW, repo, commit, req_path)
        reqs = requests.get(reqs_url, headers=HEADERS)
        if reqs.status_code == 200:
            lines = reqs.text
            break
    else:
        # No requirements file found at any candidate path: return the
        # empty string instead of raising ValueError
        return lines
    # ... the rest of the original function (parsing `lines`) stays unchanged
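
In case the for/else reads oddly: the else clause runs only when the loop finishes without hitting break, i.e. exactly when no requirements file was found at any candidate path, so the early return takes the place of the old raise. A tiny standalone demo:

# else fires only when the loop completes without break
for path in ["requirements.txt", "requirements-dev.txt"]:
    if path == "setup.py":  # never matches in this demo
        break
else:
    print("no requirements file found")  # this prints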

As you noticed, the universal installation method, elegant as it is, makes some assumptions. One of them is that even if some of the predefined install commands are invalid, the subsequent test run can still pass. In 3YOURMIND__django-migration-linter-186 this is exactly what breaks: the sequential installation attempt (pip install -e .; pip install -e …) fails, causing an error. You can make a small patch in the command itself:
https://github.com/lycfight/SWE-bench/blob/0d8dd295a605ad166ce21393209ad22300aae64c/swebench/harness/constants/python.py#L1484

        "install": "pip install --force-reinstall -e . || true; pip install -e .[test] || true; pip install -e .[testing] || true; pip install -e .[tests] || true; pip install -e .[dev] || true",

However, 35 instances genuinely fail, so I removed them from the dataset to avoid interfering with full test runs.
