Tasks: Other
Formats: json
Languages: English
Size: < 1K
Tags: reasoning-datasets-competition, combinatorial optimization, LLM evaluation, google OR tools, test dataset, optimization
License: mit
Upload 2 files

- README.md (+13 -37)
- llm_combinatorial_test.json (+0 -0)
README.md
CHANGED
@@ -1,28 +1,7 @@
----
-language:
-- "en"
-pretty_name: "LLM Combinatorial Optimization Test Dataset"
-tags:
-- "reasoning-datasets-competition"
-- "combinatorial optimization"
-- "LLM evaluation"
-- "google OR tools"
-- "test dataset"
-- "optimization"
-license: "mit"
-task_categories:
-- "other"
----
-
-# LLM Combinatorial Optimization Test Dataset
+# Combined Combinatorial Optimization Dataset
 
 ## Overview
-
-This dataset is created for evaluating the effectiveness of Large Language Models (LLMs) in solving various combinatorial optimization problems. The dataset was generated by combining multiple JSON files containing problem instances from different problem domains. Each instance represents a distinct optimization problem along with its associated solution outputs generated using Google's OR Tools.
-
-## Structure and Attributes
-
-Each instance in this dataset is a JSON object that contains some common attributes along with problem-specific details. The common attributes present in every instance include:
+This dataset is created for evaluating the effectiveness of Large Language Models (LLMs) on various combinatorial optimization problems. Each instance in the dataset represents a problem instance and includes all the necessary attributes to learn and evaluate problem solutions generated by Google's OR Tools.
 
 ## Problem Types and Attributes
 Each instance in the dataset is expected to have the following attributes:
@@ -39,30 +18,27 @@ Additionally, instances are tagged with a specific **problem_type** (e.g., 'vrp'
 ### Instance Counts by Problem Type
 | Problem Type | Count |
 |--------------|-------|
+| jssp | 30 |
+| binpack | 30 |
 | vrp | 30 |
-| knapsack | 30 |
 | tsp | 30 |
-
-| jssp | 30 |
+| knapsack | 30 |
 
 ## Attribute Details
 Below are the attributes extracted from the first instance along with their inferred data types. Note that the dataset may contain additional problem-specific attributes that vary by problem type:
 ```
-{'
-'city_size': 'int',
-'demands': 'list',
-'input': 'str',
-'instance_id': 'int',
+{'input': 'str',
 'instruction': 'str',
-'
-'
-'
+'makespan': 'int',
+'matrix': 'str',
+'num_jobs': 'int',
+'num_machines': 'int',
 'output_list_of_list': 'str',
 'output_starjob': 'str',
-'
+'path': 'str',
 'problem_type': 'str',
-'
-'
+'prompt_jobs_first': 'str',
+'prompt_machines_first': 'str'}
 ```
 
 ## Conclusion
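As a quick reference, here is a minimal sketch of how the combined test file could be loaded and summarised, assuming the top-level JSON value in `llm_combinatorial_test.json` is a list of instance objects with the attributes described in the README above:

```python
import json
from collections import Counter

# Load the combined test file (assumption: the top-level JSON value is a
# list of instance dictionaries, one per problem instance).
with open("llm_combinatorial_test.json", "r", encoding="utf-8") as f:
    instances = json.load(f)

# Instance counts per problem type, mirroring the README table.
print(Counter(inst["problem_type"] for inst in instances))

# Inferred attribute types of the first instance, as in "Attribute Details".
first = instances[0]
print({key: type(value).__name__ for key, value in first.items()})
```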
llm_combinatorial_test.json
CHANGED
The diff for this file is too large to render.
See raw diff