# Decompile-Eval
---
license: mit
---

This is the evaluation benchmark of the LLM4Decompile project.

## About

  • Decompile-Bench-Eval includes manually crafted binaries from the well-established HumanEval and MBPP benchmarks, alongside compiled GitHub repositories released after 2025 to mitigate data-leakage issues.
  • C and C++ support: These datasets include both C and C++ source code, whereas earlier models (LLM4Decompile-V1.5, V2) and the HumanEval-Decompile dataset were limited to C only.

## Columns

The dataset contains three splits: humaneval, mbpp, and github2025. We also provide a JSON version of the data. The splits contain the following columns:

```json
{
    "index": "index of the function",
    "func_name": "demangled name of the function",
    "func_dep": "function dependencies (includes, helper functions), or the path to the source code",
    "func": "source code",
    "test": "unit tests for the function, empty for GitHub data",
    "opt": "optimization level: O0, O1, O2, or O3",
    "language": "language, c or cpp",
    "asm": "assembly",
    "ida_asm": "assembly from IDA Pro",
    "ida_pseudo": "decompiled results (pseudocode) from IDA Pro",
    "ghidra_asm": "assembly from Ghidra",
    "ghidra_pseudo": "decompiled results (pseudocode) from Ghidra"
}
```
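As a sketch, a record from the JSON version can be checked against this schema. The field values below are placeholders, and `validate_record` is a hypothetical helper for illustration, not part of the dataset:

```python
# Illustrative record shape for the Decompile-Bench-Eval columns
# (values are placeholders, not real data from the dataset).
sample = {
    "index": 0,
    "func_name": "add",
    "func_dep": "#include <stdio.h>",
    "func": "int add(int a, int b) { return a + b; }",
    "test": "assert(add(1, 2) == 3);",
    "opt": "O0",
    "language": "c",
    "asm": "...",
    "ida_asm": "...",
    "ida_pseudo": "...",
    "ghidra_asm": "...",
    "ghidra_pseudo": "...",
}

EXPECTED_COLUMNS = {
    "index", "func_name", "func_dep", "func", "test", "opt",
    "language", "asm", "ida_asm", "ida_pseudo", "ghidra_asm", "ghidra_pseudo",
}

def validate_record(record: dict) -> bool:
    """Check that a record carries all documented columns and a valid opt/language value."""
    return (
        EXPECTED_COLUMNS <= record.keys()
        and record["opt"] in {"O0", "O1", "O2", "O3"}
        and record["language"] in {"c", "cpp"}
    )

print(validate_record(sample))  # True
```

Note that for the github2025 split the `test` field is empty, so downstream evaluation code should not assume runnable unit tests are present for every record.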

## Others

For more details, please check the LLM4Decompile project.