VenkateshRoshan committed
Commit 1d92a29 · 1 Parent(s): 9f203ff

app and dockerfile for hf added

Files changed (3)
  1. README.md +1 -18
  2. app_hf.py +1 -1
  3. dockerfile_hf +3 -3
README.md CHANGED
@@ -12,23 +12,6 @@ pinned: false
 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 # Real Time Customer Support Chatbot
-
-
-
-# Hugging Face Model Deployment with Docker
-
-This repository contains a Dockerized setup for deploying a Hugging Face model. The `dockerfile_hf` file defines the environment to build and run the application.
-
-## Table of Contents
-- [Project Overview](#project-overview)
-- [Setup](#setup)
-- [Docker Usage](#docker-usage)
-- [Running the Container](#running-the-container)
-- [Model Inference](#model-inference)
-- [License](#license)
-
-## Project Overview
-
 ---
 ### Developing a real-time customer support chatbot using a fine-tuned LLM to provide accurate responses. Building CI/CD pipelines for scalable deployment on AWS SageMaker and integrating MLflow for tracking model versions, experiment logging, and continuous improvements.
----
+---
app_hf.py CHANGED
@@ -192,7 +192,7 @@ if __name__ == "__main__":
     demo = create_chat_interface()
     print("Starting Gradio server...")
     demo.launch(
-        share=True,
+        share=False,
         server_name="0.0.0.0",
         server_port=7860,  # Changed to 7860 for Gradio
         debug=True,
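The `share=False` change matches how Hugging Face Spaces serves Gradio apps: the Space routes traffic straight to port 7860 inside the container, so no gradio.live share tunnel is needed. A minimal sketch of that launch configuration (the helper name `space_launch_kwargs` is illustrative, not from the repo):

```python
def space_launch_kwargs(port: int = 7860) -> dict:
    """Gradio launch() arguments suited to a Hugging Face Space.

    share=False  -- the Space serves the app itself, no gradio.live tunnel;
    0.0.0.0      -- listen on all interfaces so the container port is reachable;
    7860         -- the port Spaces routes incoming traffic to.
    """
    return {
        "share": False,
        "server_name": "0.0.0.0",
        "server_port": port,
        "debug": True,
    }
```

In `app_hf.py` this would be used as `demo.launch(**space_launch_kwargs())`.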
dockerfile_hf CHANGED
@@ -2,10 +2,10 @@ FROM python3.10-slim
 
 WORKDIR /app
 
-COPY app_hf.py /app/app_hf.py
-COPY src/ /app/src/
-
+COPY app_hf.py .
+COPY src/ .
 COPY requirements.txt .
+
 RUN pip install --no-cache-dir --upgrade pip
 RUN pip install --no-cache-dir torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
 RUN pip install --no-cache-dir -r requirements.txt
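One caveat about the new COPY lines: Docker's `COPY <dir>/ <dest>` copies a directory's *contents*, not the directory itself, so `COPY src/ .` flattens `src/` into `/app` rather than creating `/app/src/`. If `app_hf.py` imports from a `src` package (an assumption, not confirmed by this diff), a variant that preserves the package layout would be:

```dockerfile
# COPY src/ .  would spill the contents of src/ directly into /app.
# Naming the destination keeps the src/ package directory intact:
COPY app_hf.py .
COPY src/ ./src/
```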