Languages: Vietnamese
holylovenia committed (verified) · commit 315fd33 · 1 parent: 9ffdcd2

Upload README.md with huggingface_hub

Files changed (1): README.md (+10 -10)
README.md CHANGED
@@ -10,7 +10,7 @@ tags:
   - machine-translation
   ---
 
- MULTISPIDER, the largest multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Upon MULTISPIDER, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialect sayings) and their intensity across different languages.
+ MULTISPIDER, the largest multilingual text-to-SQL dataset which covers seven languages (English, German, French, Spanish, Japanese, Chinese, and Vietnamese). Upon MULTISPIDER, we further identify the lexical and structural challenges of text-to-SQL (caused by specific language properties and dialect sayings) and their intensity across different languages.
 
 
   ## Languages
@@ -20,25 +20,25 @@ vie
   ## Supported Tasks
 
   Machine Translation
-
+
   ## Dataset Usage
   ### Using `datasets` library
   ```
- from datasets import load_dataset
- dset = datasets.load_dataset("SEACrowd/multispider", trust_remote_code=True)
+ from datasets import load_dataset
+ dset = datasets.load_dataset("SEACrowd/multispider", trust_remote_code=True)
   ```
   ### Using `seacrowd` library
   ```import seacrowd as sc
   # Load the dataset using the default config
- dset = sc.load_dataset("multispider", schema="seacrowd")
+ dset = sc.load_dataset("multispider", schema="seacrowd")
   # Check all available subsets (config names) of the dataset
- print(sc.available_config_names("multispider"))
+ print(sc.available_config_names("multispider"))
   # Load the dataset using a specific config
- dset = sc.load_dataset_by_config_name(config_name="<config_name>")
+ dset = sc.load_dataset_by_config_name(config_name="<config_name>")
   ```
-
- More details on how to load the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
-
+
+ More details on how to load the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).
+
 
   ## Dataset Homepage
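
For quick reference, here is a minimal, self-contained sketch that combines the two loading routes shown in the README diff above. It is only a sketch, not part of the commit: split names and example fields are not specified by the README, and `load_dataset` is called without the `datasets.` prefix because that is what `from datasets import load_dataset` actually binds.

```python
# Minimal sketch (assumptions: at least one split exists and `seacrowd` is installed).
import seacrowd as sc
from datasets import load_dataset

# Route 1: Hugging Face `datasets`. The import binds `load_dataset` directly,
# so no `datasets.` prefix is needed.
dset = load_dataset("SEACrowd/multispider", trust_remote_code=True)
print(dset)                        # shows the available splits and features
first_split = next(iter(dset.values()))
print(first_split[0])              # first example of the first split

# Route 2: `seacrowd`. List the available configs instead of hard-coding one.
print(sc.available_config_names("multispider"))
sc_dset = sc.load_dataset("multispider", schema="seacrowd")
```

Listing the config names first avoids guessing a value for the `<config_name>` placeholder that the README leaves unfilled.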