apetulante committed on
Commit 429b9ab · Parent(s): 3ca9df0

Upload 4_3_gradio_and_huggingface_spaces.py

Files changed (1): 4_3_gradio_and_huggingface_spaces.py (added, +144 lines)

# -*- coding: utf-8 -*-
"""4_3-gradio-and-huggingface-spaces.ipynb

Automatically generated by Colaboratory.

Original file is located at
    https://colab.research.google.com/drive/1ML3Jf1UwkDRuEPK7NoVr1Uel9tWa_oP7

# Gradio Interfaces and HuggingFace Spaces

Hugging Face [Spaces](https://huggingface.co/spaces) provide an easy-to-use way to explore and demo models. The platform is highly accessible, free to use, and lets you share models without requiring users to run any code.

The best part: you can plug in your own model from Hugging Face, build your app with [gradio](https://gradio.app/docs/), and deploy in no time!

Let's use the model that we generated in the `4_1-text-classification-finetune-solns.ipynb` notebook and create a gradio space to demonstrate it!

## Install and Import Packages
"""

# Commented out IPython magic to ensure Python compatibility.
# %%capture
# !pip install gradio transformers

# import necessary libraries
import gradio as gr
import numpy as np
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from huggingface_hub import notebook_login

# store Hub credentials so we aren't re-prompted (shell command from the notebook)
# !git config --global credential.helper store

notebook_login()

"""## Load in Your Model

Next, we'll load our model from Hugging Face. It should be in an HF repo under your name, probably formatted `your-username/model-name`.

We'll use the `Auto` classes to load it. The `Auto` classes in the Hugging Face transformers library are designed to automatically infer the correct model architecture or tokenizer from the model checkpoint you provide.

For example, below, `AutoModelForSequenceClassification` is specifically designed for sequence classification tasks, such as text classification or sentiment analysis (which is what `bert-emotion` was trained for). If you've fine-tuned a model for a different type of task, like question answering or named entity recognition, you would need to use the auto model class that corresponds to that task. For question answering, that's `AutoModelForQuestionAnswering`.

To ensure the right model class is used, pick the auto class that matches the task your model was fine-tuned for. You can look at the `config.json` file associated with a model checkpoint to see the model type. (You can also use the concrete model class directly, but the `Auto` classes give you more flexibility!)

[ See more about Auto classes [here](https://huggingface.co/docs/transformers/model_doc/auto#auto-classes). ]
"""

# specify the model name
# replace 'your-username/model-name' with the name of your custom trained model
model_name = 'apetulante/bert-emotion'

# initialize the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

"""Let's also define our labels so we know how to interpret the output from the model."""

labels = {0: 'anger', 1: 'joy', 2: 'optimism', 3: 'sadness'}
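
"""A quick alternative to hardcoding: transformers configs carry an `id2label` mapping, so if it was filled in during fine-tuning you can read the labels straight from the checkpoint. A minimal, optional sketch:"""

# inspect the mapping saved with the checkpoint; it falls back to generic
# LABEL_0..LABEL_3 names if it wasn't customized during training
print(model.config.id2label)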

"""## Define and Create the Gradio Interface

Next, we'll define a function that does the sentiment analysis task for us. A lot of this should look very similar to how we did basic inference with Hugging Face, because now that we've pushed our model there, we can grab it just like any other model!
"""

# Define the prediction function
def predict_sentiment(text):
    # Tokenize the input tweet using the tokenizer
    inputs = tokenizer.encode_plus(
        text,
        add_special_tokens=True,  # Add special tokens for BERT
        truncation=True,          # Truncate the input if it exceeds the maximum sequence length
        padding='longest',        # Pad the input sequences to the length of the longest sequence
        return_tensors='pt'       # Return PyTorch tensors
    )

    # Pass the tokenized inputs to the model
    outputs = model(**inputs)

    # Get the predicted class by finding the index of the highest logit score
    logits = outputs.logits.detach().numpy()
    predicted_class = np.argmax(logits, axis=1).item()

    # Map the predicted class index to the corresponding sentiment label using the labels dictionary
    sentiment_label = labels[predicted_class]

    # Return the predicted sentiment label
    return sentiment_label

predict_sentiment("okay, let's go!")

"""Let's define the Gradio interface with `predict_sentiment` as the function that takes user inputs and generates outputs. The `inputs` argument specifies the input component, in this case a textbox where users can enter text. The `outputs` argument specifies the type of the output, in this case simple text."""

# Define the Gradio interface
iface = gr.Interface(
    fn=predict_sentiment,
    inputs="text",
    outputs="text",
    title="Sentiment Analysis",
    description="Enter a tweet and get its sentiment prediction.",
    examples=[
        ["I'm furious right now."],
        ["I have been feeling amazing lately!"],
        ["I think that everything is going to turn out okay."],
        ["Feeling really down today."],
    ]
)

# Run the Gradio interface
iface.launch()

"""You may notice a "Flag" option here. Flagging is a default feature in Gradio: when you launch an interface, a "Flag" button appears alongside each input-output pair. Clicking it records examples where the model's output may not be correct or as expected.

We can view these flagged examples in the `log.csv` file that is saved in the `flagged` folder (visible in the file browser to the left).
"""
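
"""To sanity-check what's been collected, you can read that log back in. A minimal sketch, assuming at least one example has been flagged so that `flagged/log.csv` exists:"""

import pandas as pd  # preinstalled in Colab

# each flagged input-output pair is appended as one row of the log
flagged = pd.read_csv("flagged/log.csv")
print(flagged.head())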

"""## Turn it into a Huggingface Space!

Simply turn this code into an `app.py` file and create a Hugging Face Space. Since the model is already hosted on Hugging Face, you should be up and running in no time!
"""

"""## Optional Homework

We've just touched the surface of what gradio can do here, but there are a TON of other cool features to add or things to try with gradio. Try out a few on your own! A sketch of one possible approach follows each prompt below.

The code to create the gradio space is also fairly short. You can try giving the code for this space to ChatGPT and asking it to help you come up with additional features.
"""

#@title Add Confidence Information
#@markdown With each of these predictions, the model has some confidence
#@markdown that the given prediction is correct.
#@markdown It can be useful to display the relative prediction confidence
#@markdown for *all* classes, so we can tell when the model was less sure
#@markdown of an answer.
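
"""One possible approach, as a minimal sketch (the function name is illustrative): softmax the logits into probabilities and return a `{label: confidence}` dict, which Gradio's `"label"` output component renders as a ranked list:"""

def predict_sentiment_with_confidence(text):
    inputs = tokenizer.encode_plus(text, add_special_tokens=True, truncation=True,
                                   padding='longest', return_tensors='pt')
    logits = model(**inputs).logits.detach().numpy()[0]
    probs = np.exp(logits) / np.sum(np.exp(logits))  # softmax over the classes
    return {labels[i]: float(p) for i, p in enumerate(probs)}

# gr.Interface(fn=predict_sentiment_with_confidence,
#              inputs="text", outputs="label").launch()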

#@title Predict in Batch
#@markdown Often, it's convenient to use a gradio space to allow
#@markdown users to predict on a batch of inputs.
#@markdown Imagine you have a text file with a new tweet on each line
#@markdown whose sentiment you want to determine. How can you edit this
#@markdown gradio space to accept and return a .txt file?
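
"""One way to set this up, as a minimal sketch using Gradio's File components. It assumes one tweet per line in the uploaded file, and a Gradio version where a `File` input hands the function a path to the uploaded file (check your version's docs for the exact `type` behavior):"""

def predict_file(file_path):
    # read one tweet per line, skipping blanks
    with open(file_path, encoding="utf-8") as f:
        tweets = [line.strip() for line in f if line.strip()]
    # run the single-input predictor over the batch
    rows = [f"{tweet}\t{predict_sentiment(tweet)}" for tweet in tweets]
    out_path = "predictions.txt"  # illustrative output filename
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(rows))
    return out_path

# gr.Interface(fn=predict_file, inputs=gr.File(type="filepath"),
#              outputs=gr.File()).launch()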

#@title Try Visualizations
#@markdown With a batch prediction, there's an opportunity
#@markdown to try visualizations with the data.
#@markdown Try to show a pie or bar chart of the sentiments of a batch.
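
"""Building on the batch sketch above, a minimal sketch that counts the predicted sentiments in a batch and returns a matplotlib pie chart, which Gradio's `Plot` output component can display:"""

from collections import Counter

import matplotlib.pyplot as plt  # preinstalled in Colab

def sentiment_pie(file_path):
    # predict a sentiment for each tweet and tally the results
    with open(file_path, encoding="utf-8") as f:
        tweets = [line.strip() for line in f if line.strip()]
    counts = Counter(predict_sentiment(tweet) for tweet in tweets)
    # draw the tallies as a pie chart
    fig, ax = plt.subplots()
    ax.pie(list(counts.values()), labels=list(counts.keys()), autopct="%1.0f%%")
    ax.set_title("Sentiment distribution of the batch")
    return fig

# gr.Interface(fn=sentiment_pie, inputs=gr.File(type="filepath"),
#              outputs=gr.Plot()).launch()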