lixin4sky committed
Commit 552874e
1 Parent(s): 311d760

change figures' path

Files changed (1): README.md +5 -8
README.md CHANGED
```diff
@@ -1,7 +1,4 @@
-<<<<<<< HEAD
-=======
 ---
->>>>>>> 2e37a45df21ce9449428b7afc0e170a6dd7042b0
 license: mit
 language:
 - en
@@ -13,7 +10,7 @@ base_model:
 - deepseek-ai/deepseek-coder-7b-instruct-v1.5
 library_name: transformers, alignment-handbook
 pipeline_tag: question-answering
-<<<<<<< HEAD
+---
 
 ### 1. Introduction of this repository
 
@@ -27,20 +24,20 @@ Official Repository of "Can Large Language Models Analyze Graphs like Profession
 
 #### The pipeline of ProGraph benchmark construction
 
-<img width="1000px" alt="" src="https://huggingface.co/spaces/lixin4sky/ProGraph/blob/main/figure_1_the_pipeline_of_ProGraph_benchmark_construction.jpg">
+<img width="1000px" alt="" src="figures/figure_1_the_pipeline_of_ProGraph_benchmark_construction.jpg">
 
 #### The pipeline of LLM4Graph dataset construction and corresponding model enhancement.
 Code datasets. We construct two code datasets in the form of QA pairs. The questions in both datasets are the same, but the answers differ. In the simpler dataset, each answer only contains Python code. Inspired by Chain of Thought (CoT) [55], each answer in the more complex dataset additionally includes relevant APIs and their documents as prefixes. This modification can facilitate open-source models to utilize document information more effectively. We name the above code datasets as Code (QA) and Doc+Code (QA), respectively. Unlike the hand-crafted benchmark, problems in the code datasets are automatically generated and each contains only one key API.
 
-<img width="1000px" alt="" src="https://huggingface.co/spaces/lixin4sky/ProGraph/blob/main/figure_2_the_pipeline_of_LLM4Graph_dataset_construction_and_corresponding_model_enhancement.jpg">
+<img width="1000px" alt="" src="figures/figure_2_the_pipeline_of_LLM4Graph_dataset_construction_and_corresponding_model_enhancement.jpg">
 
 #### The pass rate (left) and accuracy (right) of open-source models with instruction tuning.
 
-<img width="1000px" alt="" src="https://huggingface.co/spaces/lixin4sky/ProGraph/blob/main/figure_4_the_pass%20rate_and_accuracy_of_open-source_models_withe_instruction_tuning.jpg">
+<img width="1000px" alt="" src="figures/figure_4_the_pass rate_and_accuracy_of_open-source_models_withe_instruction_tuning.jpg">
 
 #### Compilation error statistics for open source models.
 
-<img width="1000px" alt="" src="https://huggingface.co/spaces/lixin4sky/ProGraph/blob/main/figure_6_compilation_error_statistics_for_open-source_models.jpg">
+<img width="1000px" alt="" src="figures/figure_6_compilation_error_statistics_for_open-source_models.jpg">
 
 #### Performance (%) of open-source models regarding different question types.
 
```
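
The "Code datasets" paragraph carried through unchanged in the diff above describes the two QA-pair formats, Code (QA) and Doc+Code (QA). As a minimal sketch of that structure, assuming a JSON-lines layout with illustrative field names (`question`, `answer`; neither is taken from the released LLM4Graph dataset), one record of each variant might look like this:

```python
import json

# A hypothetical auto-generated question targeting one key API.
question = (
    "Given an undirected graph with edges [(0, 1), (1, 2), (2, 0)], "
    "compute the PageRank score of each node."
)

# The Python-only answer used by the Code (QA) variant.
code_answer = """\
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 0)])
print(nx.pagerank(G))  # the single key API this problem targets
"""

# Code (QA): each answer contains Python code only.
code_qa = {"question": question, "answer": code_answer}

# Doc+Code (QA): same question, but the answer is prefixed with the
# relevant API and its documentation (CoT-inspired) before the code.
# The doc text here is a stand-in, not the real networkx docstring.
doc_code_qa = {
    "question": question,
    "answer": (
        "API: networkx.pagerank\n"
        "Doc: Returns the PageRank of the nodes in the graph.\n\n"
        + code_answer
    ),
}

# Serialize both records as JSON lines.
for record in (code_qa, doc_code_qa):
    print(json.dumps(record))
```

The only difference between the two records is the documentation prefix in the answer, which is what the paragraph credits with helping open-source models make better use of document information.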