rathore11 committed on
Commit 1a62ccd · verified · 1 Parent(s): 0151d9e

Update README.md

Files changed (1)
  1. README.md +16 -5
README.md CHANGED
@@ -7,11 +7,22 @@ sdk: docker
  pinned: false
  license: apache-2.0
  short_description: using public LLM to create own Gen AI app
-
- app_port: 7860 # This MUST match the port your FastAPI app listens on inside the container
- image: dharmendrarathore/first-py-app:latest # Points to your Docker Hub image
- hf_token_secret: HUGGINGFACEHUB_API_TOKEN # This tells Spaces to mount the secret as an env var
+ app_port: 7860
+ image: dharmendrarathore/first-py-app:latest
+ hf_token_secret: HUGGINGFACEHUB_API_TOKEN
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # PY LLM DEMO
+
+ This Hugging Face Space hosts a FastAPI application serving a Qwen language model via a Docker container.
+ The Docker image is pulled from Docker Hub.
+
+ ## API Endpoint Usage:
+
+ To interact with the API, send a POST request with a JSON body containing a "question" field to the `/api/generate` endpoint.
+
+ **Example using `curl`:**
+
+ ```bash
+ curl -X POST -H "Content-Type: application/json" -d '{"question": "How do large language models work?"}' https://rathore11-py-llm-demo.hf.space/api/generate
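For readers who prefer Python over `curl`, here is a minimal sketch of the same request using the `requests` library. The Space URL, the `/api/generate` path, and the `question` field are taken from the diff above; the response schema is not specified there, so the sketch simply prints the raw JSON.

```python
# Minimal sketch: POST a question to the Space's /api/generate endpoint.
# The URL, endpoint path, and "question" field come from the README diff above;
# the shape of the JSON response is not documented, so we print it as-is.
import requests

url = "https://rathore11-py-llm-demo.hf.space/api/generate"
payload = {"question": "How do large language models work?"}

response = requests.post(url, json=payload, timeout=120)
response.raise_for_status()

# Print the raw JSON so the actual response schema is visible.
print(response.json())
```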