Languages:
Thai
holylovenia committed (verified) · Commit 0130b7f · 1 Parent(s): 9f85c27

Upload README.md with huggingface_hub

Files changed (1): README.md (+10 -10)
README.md CHANGED

Corpus-based dictionary of the Thai and English languages. This dataset contains frequently used words drawn from trusted publications such as novels, academic documents, and newspapers. It provides both Thai-English and English-Thai lexicons. Each Thai-English entry consists of the word, its part of speech, an English translation, synonyms, and sample sentences. The Thai-to-English list contains 53,000 words and the English-to-Thai list contains 83,000 words.
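Each lexicon entry bundles a headword with its part of speech, translation, synonyms, and sample sentences. As a rough sketch of that shape (the field names and values below are hypothetical illustrations, not the dataset's actual schema — check the loaded dataset's features for the real field names), an entry can be modeled as:

```python
# Hypothetical shape of one Thai->English lexicon entry.
# Field names are illustrative only; inspect the real schema after loading.
entry = {
    "word": "บ้าน",                        # Thai headword
    "pos": "noun",                          # part of speech
    "translation": "house",                 # English translation
    "synonyms": ["เรือน"],                  # Thai synonyms
    "examples": ["บ้านหลังนี้ใหญ่มาก"],      # sample sentence(s)
}

def gloss(e: dict) -> str:
    """Format an entry as a one-line gloss: word (pos): translation."""
    return f"{e['word']} ({e['pos']}): {e['translation']}"

print(gloss(entry))  # -> บ้าน (noun): house
```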
  ## Languages

tha

## Supported Tasks

Machine Translation

## Dataset Usage
### Using `datasets` library
```
from datasets import load_dataset
dset = load_dataset("SEACrowd/lexitron", trust_remote_code=True)
```
### Using `seacrowd` library
```
import seacrowd as sc
# Load the dataset using the default config
dset = sc.load_dataset("lexitron", schema="seacrowd")
# Check all available subsets (config names) of the dataset
print(sc.available_config_names("lexitron"))
# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```

More details on how to load datasets with the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage
 
 