# Contribute to our TCO Calculator

## What you can contribute

The TCO Calculator’s purpose is to help users compare the deployment [Total Cost of Ownership](https://www.techtarget.com/searchdatacenter/definition/TCO?Offer=abt_pubpro_AI-Insider) (TCO) of various AI model services. To do so, it computes the cost/request of each service and adds a labor cost, giving a comprehensive estimate of how much setting up these services would cost.

Here is the formula used to compute the cost/request of an AI model service:

`CR = (CIT_1K × IT + COT_1K × OT) / 1000`

with:
- CR = Cost per Request
- CIT_1K = Cost per 1000 Input Tokens
- COT_1K = Cost per 1000 Output Tokens
- IT = Input Tokens
- OT = Output Tokens
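As an illustrative sketch (the helper function below is ours, not part of the Calculator’s code), the formula translates directly into Python:

```python
def cost_per_request(cit_1k, cot_1k, input_tokens, output_tokens):
    # Cost of one request, given per-1000-token prices and token counts
    return (cit_1k * input_tokens + cot_1k * output_tokens) / 1000

# Example: $0.1/1K input tokens, $0.2/1K output tokens,
# a request with 300 input tokens and 700 output tokens: about $0.17
print(cost_per_request(0.1, 0.2, 300, 700))
```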
To contribute, you’ll have to provide the values of the input and output cost/token.

If you want to add your own service to this [Gradio](https://www.gradio.app/) application, you’ll have to follow two main steps:
1. Create a class for your model in our `models.py` file.
2. Add the name of your model class in our `app.py` file.

## Create a class for your model

Step by step, we’ll see how you can create a model class for your own service, so that it can later become an option of our TCO Calculator.

First, you need to create a class for your model and set basic information such as the name of your service and the latency of your model.

```python
# The class of your new model service
class NewModel(BaseTCOModel):

    def __init__(self):
        # Name of the AI model service and the category it belongs to (SaaS, Open source)
        self.set_name("(Category) Service name")
        self.set_latency("The average latency of your model")
        super().__init__()
```
Then, you’ll have to create the core function of your model page, the `render` function. Its first elements are the [Gradio components](https://www.gradio.app/docs/components) you want to put on your model page.
These can be a Dropdown with multiple choices for the user to make, or a Textbox with information about the computation parameters the user has to know (for instance, the cost of the hardware set-up used). You can add as many as you need.
**All components’ visibility must be set to `False`**.

```python
def render(self):
    # Create as many Gradio components as you want to provide information or customization to the user
    # Set all their visibility to False
    # Don't forget to set the component's interactive parameter to False if its value is fixed
    self.model_parameter = gr.Dropdown(["Option 1", "Option 2"], value="Option 1", visible=False, interactive=True, label="Title for this parameter", info="Add some information to clarify specific aspects of your parameter")
```
Then, still in the `render` function, you must instantiate your input and output cost/token. These are the key values needed to compute the cost/request of your AI model service.
Note that the user can’t interact with these, since they are the values you’ll have to provide from benchmark tests on your model.
```python
    # Set the values of the input and output cost per 1000 tokens
    # These values can be updated by a function defined above, triggered by a change in the parameters
    # Set default values consistent with the default parameters
    self.input_cost_per_token = gr.Number(0.1, visible=False, label="($) Price/1K input prompt tokens", interactive=False)
    self.output_cost_per_token = gr.Number(0.2, visible=False, label="($) Price/1K output prompt tokens", interactive=False)
```
Then, if the user can modify some parameters through the Gradio components mentioned above, you’ll have to update the values they influence.
To do so, create an update function that takes the changing parameter(s) as input(s) and returns the correct value(s).
In this example, the parameter only influences the cost/token, but it could also affect another parameter whose choices depend on the value of the former.
```python
def on_model_parameter_change(model_parameter):
    if model_parameter == "Option 1":
        input_tokens_cost_per_token = 0.1
        output_tokens_cost_per_token = 0.2
    else:
        input_tokens_cost_per_token = 0.2
        output_tokens_cost_per_token = 0.4
    return input_tokens_cost_per_token, output_tokens_cost_per_token
```
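Since the mapping itself is plain Python, you can sanity-check it outside of Gradio before wiring it up (the function is reproduced standalone here for illustration):

```python
def on_model_parameter_change(model_parameter):
    # Same mapping as in the update function above
    if model_parameter == "Option 1":
        input_tokens_cost_per_token = 0.1
        output_tokens_cost_per_token = 0.2
    else:
        input_tokens_cost_per_token = 0.2
        output_tokens_cost_per_token = 0.4
    return input_tokens_cost_per_token, output_tokens_cost_per_token

print(on_model_parameter_change("Option 1"))  # (0.1, 0.2)
print(on_model_parameter_change("Option 2"))  # (0.2, 0.4)
```

The order of the returned tuple must match the `outputs` list you pass to the `.change` event below.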

Don’t forget to add the triggering event that calls the update function when a Gradio parameter is changed.
Note that the inputs and outputs can vary depending on the update function.
```python
    self.model_parameter.change(on_model_parameter_change, inputs=self.model_parameter, outputs=[self.input_cost_per_token, self.output_cost_per_token])
```
The last element of the `render` function you have to implement is the labor cost parameter. It provides an estimate of how much it would cost to have engineers deploy the model. Note that for a SaaS solution, this cost is 0 when considering the set-up of the service.
```python
    self.labor = gr.Number(0, visible=False, label="($) Labor cost per month", info="This is an estimate of the labor cost of the AI engineer in charge of deploying the model", interactive=True)
```
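To see how these pieces combine into the overall estimate, here is an illustrative sketch (the function name and the request volume are hypothetical, not part of the Calculator’s code): a service’s monthly cost is its cost/request times the monthly request volume, plus the labor cost.

```python
def monthly_tco(cost_per_request, requests_per_month, labor_cost_per_month):
    # Usage cost plus labor cost, per month (illustration only)
    return cost_per_request * requests_per_month + labor_cost_per_month

# Hypothetical figures: $0.17/request, 100,000 requests/month, $5,000 labor -> about $22,000
print(monthly_tco(0.17, 100_000, 5_000))
```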
Finally, you need to create one more function, which is necessary to compute the cost/request of your AI model service. It must have the same inputs and outputs as below; this function can also be used to convert values.
```python
def compute_cost_per_token(self, input_cost_per_token, output_cost_per_token, labor):
    # Additional computation on your cost_per_token values
    # You often need to convert some values here
    return input_cost_per_token, output_cost_per_token, labor
```
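For instance, if your benchmark prices were measured per million tokens (a hypothetical scenario), the conversion to the per-1K prices used above could look like this (written as a standalone function, without `self`, for illustration):

```python
def compute_cost_per_token(input_cost_per_token, output_cost_per_token, labor):
    # Hypothetical conversion: prices entered per 1M tokens, converted to per 1K tokens
    return input_cost_per_token / 1000, output_cost_per_token / 1000, labor

# $100/1M input tokens and $200/1M output tokens become $0.1/1K and $0.2/1K
print(compute_cost_per_token(100.0, 200.0, 5000))  # (0.1, 0.2, 5000)
```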

Once your model class is ready, you’ll have to **add it to the `models.py` file** in our TCO Calculator’s [Hugging Face repository](https://huggingface.co/spaces/mithril-security/TCO_calculator/tree/main).

## Update the app.py file

For the user to be able to select your AI model service in the Calculator, you have one last step to go.

In the following line of the `app.py` file (line 93), you’ll have to **add the name of your model class** as follows:
```python
Models: list[models.BaseTCOModel] = [models.OpenAIModelGPT4, ..., models.NewModel]
```