# -*- coding: utf-8 -*-
"""Untitled6.ipynb

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1F6f_vJbssO7C2FM6FILWljFYacDmbVBY
"""
from IPython.display import display, HTML

# Inject CSS to enable wrapping of long output lines
display(HTML('''
<style>
    .output_area pre {
        white-space: pre-wrap;
    }
</style>
'''))

!pip install transformers gradio
import os
from google.colab import userdata

# Get the Hugging Face API key from Colab Secrets
api_key = userdata.get('HF_TOKEN')

# Ensure the API key is set before using it
if api_key is None:
    raise ValueError("Hugging Face API key not found. Please ensure it is set in Colab Secrets.")

# Set the Hugging Face token as an environment variable;
# Hugging Face libraries will automatically use it for authentication
os.environ["HF_TOKEN"] = api_key

print("Hugging Face API key successfully loaded! You're good to go!")

# Now you can continue with your Hugging Face-related code
# Import necessary libraries
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_name = "distilgpt2"  # A lightweight, CPU-friendly model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Define the function to generate a response
def generate_response(prompt):
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate a response
    outputs = model.generate(
        inputs.input_ids,
        attention_mask=inputs.attention_mask,  # Avoids the missing-attention-mask warning
        max_new_tokens=50,     # Cap on newly generated tokens (doesn't count the prompt)
        do_sample=True,        # Enable sampling
        temperature=0.7,       # Controls randomness
        top_p=0.9,             # Nucleus sampling
        pad_token_id=tokenizer.eos_token_id
    )

    # Decode the output; clean_up_tokenization_spaces=True avoids warnings
    response = tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    return response
# Example usage
prompt = "I went to Safeway and I bought a"
response = generate_response(prompt)
print(response)
# Prepend a persona so the same model can answer in different styles
def persona_response(prompt, persona="I am a helpful assistant"):
    full_prompt = f"{persona}. {prompt}"
    return generate_response(full_prompt)
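
# A quick smoke test of persona_response. This call is illustrative only;
# the prompt and persona strings are example values, not from the original notebook.
print(persona_response("What snacks should I buy?", persona="I am a shopping assistant"))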
# Import Gradio
import gradio as gr

# Define Gradio interface function
def chat_interface(user_input, persona="I am a helpful assistant"):
    return persona_response(user_input, persona)
# Set up Gradio interface
interface = gr.Interface(
    fn=chat_interface,
    inputs=["text", "text"],  # Allows input for both the prompt and the persona
    outputs="text",
    title="Simple Chatbot",
    description="Type something to chat with the bot! Add a persona to change its style, like 'I am a shopping assistant.'"
)

# Launch the Gradio interface in Colab
interface.launch(share=True)  # share=True creates a public link
"""### Uploading to Hugging Face Spaces | |
Now that we have our chatbot working, here’s how to upload it to Hugging Face Spaces: | |
1. Go to [Hugging Face Spaces](https://huggingface.co/spaces). | |
2. Create a new Space, choose "Gradio" as the app type, and name it (e.g., "My Simple Chatbot"). | |
3. Upload this notebook, along with any necessary files or model assets. | |
4. Set up your Space and click "Deploy." Your Gradio chatbot will now be live! | |
**Tips for Deployment:** | |
- Ensure you include any persona settings or customizations. | |
- Test the app after deployment to confirm it works as expected. | |
Once deployed, students can share the link with family and friends! | |
""" |