angel1987 committed · Commit c423d60 (verified) · 1 Parent(s): e8223d9

Add description

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
@@ -1,13 +1,14 @@
 ---
 title: LiteraLingo TopK Dev
 emoji: 💻
-colorFrom: gray
-colorTo: red
+colorFrom: blue
+colorTo: yellow
 sdk: gradio
 sdk_version: 4.41.0
 app_file: app.py
 pinned: false
 license: mit
+short_description: Convert figurative sentences into their literal meanings
 ---

 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
@@ -102,4 +103,4 @@ By producing multiple responses for a given input, this approach used various po
 2. **Model Performance**: Inference times can vary.
 The inference time for the Falcon-Instruct models are notably long. It took from 300 ~ up to 800 seconds. Using API is recommended (~0.6sec, under 1 sec in most cases). Please remember to provide the corresponding HF Token that grants access to the API for each query, otherwise only a warning message will be shown.

-3. Model Limitations: Some models may produce different results or have limitations based on their configuration and the input provided. Note that a default prefix is prepended to every incoming query based on prompt testing for a suitable choice. You can configure these default values in globals.py
+3. Model Limitations: Some models may produce different results or have limitations based on their configuration and the input provided. Note that a default prefix is prepended to every incoming query based on prompt testing for a suitable choice. You can configure these default values in globals.py
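For context on the "Using API is recommended" note in the diff above: the Space's app.py is not part of this commit, so the snippet below is only a minimal sketch, assuming huggingface_hub's InferenceClient, of what a token-authenticated Inference API call typically looks like. The model ID, function name, and parameters are illustrative rather than the repo's actual code.

```python
# Minimal sketch (not the Space's actual app.py): querying a hosted model
# through the Hugging Face Inference API with a user-supplied token.
from huggingface_hub import InferenceClient

def query_api(prompt: str, hf_token: str) -> str:
    if not hf_token:
        # Mirrors the README's behaviour: without a token, only a warning is shown.
        return "Warning: an HF Token is required to use the Inference API."
    # Model ID below is illustrative; the Space mentions Falcon-Instruct models.
    client = InferenceClient(model="tiiuae/falcon-7b-instruct", token=hf_token)
    return client.text_generation(prompt, max_new_tokens=64)
```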
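Likewise, globals.py is not shown in this commit; the sketch below is a purely hypothetical illustration of how a configurable default prefix might be defined there and prepended to each incoming query. Every name and the prompt wording are assumptions, not the repo's actual values.

```python
# Hypothetical globals.py sketch: default values the README says can be configured.
DEFAULT_PREFIX = "Rewrite the following figurative sentence in plain, literal language: "
DEFAULT_TOP_K = 3  # assumed name for the top-k setting implied by the Space's title

def build_query(user_input: str, prefix: str = DEFAULT_PREFIX) -> str:
    """Prepend the configured default prefix to an incoming query."""
    return f"{prefix}{user_input.strip()}"
```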