Update README.md
README.md
CHANGED
(Previous revision: the stock auto-generated model card template, with every section left as a `[More Information Needed]` placeholder; this commit replaces it wholesale with the card below.)
language:
- en
metrics:
- r-squared
- RMSE
- MAE
pipeline_tag: tabular-regression
library_name: sklearn
tags:
- demand-forecasting
- inventory-management
---
# Smart Inventory Advisor for Retail Sales

This repository contains a machine learning model trained to predict daily sales for retail products and provide actionable inventory recommendations. The project was developed by team NexusCrew for a datathon.

## Model Details

### Model Description

The core of this project is a **Random Forest Regressor** (`scikit-learn`) trained to forecast the number of units sold (`Sales`) for a given product based on a variety of features. The primary output is a "Store Owner's Action Plan" that identifies products at risk of stocking out and recommends a reorder quantity.

- **Developed by:** jerewy (NexusCrew)
- **Model type:** `RandomForestRegressor`
- **Language(s) (NLP):** en
- **License:** MIT
- **Finetuned from model [optional]:** This model was trained from scratch.

### Model Sources [optional]

- **Repository:** [https://github.com/jerewy/datathon_nexus_crew](https://github.com/jerewy/datathon_nexus_crew)

## Uses
|
41 |
|
|
|
|
|
42 |
### Direct Use
|
43 |
|
44 |
+
This model is intended to be used as a decision-support tool for small to medium-sized retail business owners. It helps answer two key questions:
|
45 |
+
1. **Which products should I focus on restocking right now?**
|
46 |
+
2. **How many units of each product should I order?**
|
|
|
|
|
47 |
|
48 |
+
The primary function, `get_smart_inventory_recommendations` in the notebook, automates this process.
|
|
|
|
|
49 |
|
50 |
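The notebook's implementation is not reproduced in this card. As an illustration only, a stockout check of this general shape could drive the action plan; the column names, lead time, and safety factor below are assumptions, not the actual notebook code:

```python
import pandas as pd

def recommend_reorders(forecasts: pd.DataFrame, lead_time_days: int = 7,
                       safety_factor: float = 1.2) -> pd.DataFrame:
    """Illustrative logic: flag products whose inventory will not cover
    predicted demand over the supplier lead time, and suggest a reorder qty."""
    demand = forecasts["predicted_daily_sales"] * lead_time_days * safety_factor
    out = forecasts.copy()
    out["stockout_risk"] = out["Inventory"] < demand
    out["recommended_reorder_qty"] = (
        (demand - out["Inventory"]).clip(lower=0).round().astype(int)
    )
    return out

# Toy input: product A will stock out within a week, product B will not
products = pd.DataFrame({
    "Product": ["A", "B"],
    "Inventory": [50, 500],
    "predicted_daily_sales": [20.0, 10.0],
})
print(recommend_reorders(products))
```
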
### Out-of-Scope Use

This model is not suitable for real-time stock market prediction or long-term (multi-year) financial forecasting. Its predictions are based on the patterns in the provided dataset and may not be accurate if underlying market conditions change drastically. The model should not be used for making automated financial decisions without human oversight.

## Bias, Risks, and Limitations

The main limitation of this model is the "predictability ceiling" of the source data. The final model achieves a Testing R² score of **0.34**, which means that while it successfully captures the predictable patterns, about 66% of the variance in daily sales is due to unpredictable, random factors not present in the data.

- **Data Dependency:** The model's performance is entirely dependent on the quality and patterns of the training data. It assumes future trends will resemble historical ones.
- **Synthetic Data:** The model was trained on a synthetic dataset. Performance on real-world, noisy retail data will likely differ and may require further tuning.

### Recommendations

Users should be aware that the model provides a forecast, not a guarantee. The "Recommended Reorder Qty" should be treated as a strong, data-driven suggestion, but business owners should still apply their own domain knowledge, especially when considering external factors not included in the dataset (e.g., upcoming local events, new competitors).

## How to Get Started with the Model

Use the code below to load the saved model and preprocessors and make predictions on new data.

```python
import pickle

import pandas as pd

# Load the trained model, scaler, and encoders
with open('sales_model.pkl', 'rb') as f:
    model = pickle.load(f)
with open('scaler.pkl', 'rb') as f:
    scaler = pickle.load(f)
with open('label_encoders.pkl', 'rb') as f:
    label_encoders = pickle.load(f)

# --- Example: prepare a single row of new data ---
# NOTE: column names and order must match the training data exactly
new_data = pd.DataFrame([{
    'Inventory': 200, 'Orders': 50, 'Price': 35.0, 'Discount': 10,
    'Competitor Price': 33.0, 'Promotion': 1, 'Category': 'Groceries',
    'Region': 'North', 'Weather': 'Sunny', 'Seasonality': 'Spring',
    'DayOfWeek': 2, 'Month': 4, 'Day': 15
}])

# Apply the same label encoding used at training time
for col in ['Category', 'Region', 'Weather', 'Seasonality']:
    new_data[col] = label_encoders[col].transform(new_data[col])

# Scale the features with the fitted scaler
new_data_scaled = scaler.transform(new_data)

# Make a prediction
predicted_sales = model.predict(new_data_scaled)
print(f"Predicted Sales: {predicted_sales[0]:.2f} units")
```

## Training Details

### Training Data

The model was trained on the "Retail Store Inventory" dataset, which contains over 73,000 daily records across multiple stores and products. The data is synthetic but realistically models retail sales patterns.

### Training Procedure

#### Preprocessing

The training data was preprocessed as follows:

1. **Feature Engineering:** `DayOfWeek`, `Month`, and `Day` were extracted from the `Date` column.
2. **Label Encoding:** Categorical features (`Category`, `Region`, `Weather`, `Seasonality`) were converted to numerical values.
3. **Standard Scaling:** All numerical features were scaled to have a mean of 0 and a standard deviation of 1.

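The three preprocessing steps can be sketched with scikit-learn as follows; the toy rows and the datetime format are illustrative, not taken from the dataset:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

# Toy frame with the same kinds of columns the card describes
df = pd.DataFrame({
    "Date": ["2024-04-15", "2024-04-16"],
    "Category": ["Groceries", "Toys"],
    "Region": ["North", "South"],
    "Weather": ["Sunny", "Rainy"],
    "Seasonality": ["Spring", "Spring"],
    "Inventory": [200, 150],
    "Price": [35.0, 20.0],
})

# 1. Feature engineering: derive DayOfWeek / Month / Day from Date
df["Date"] = pd.to_datetime(df["Date"])
df["DayOfWeek"] = df["Date"].dt.dayofweek
df["Month"] = df["Date"].dt.month
df["Day"] = df["Date"].dt.day
df = df.drop(columns=["Date"])

# 2. Label-encode the categorical features
label_encoders = {}
for col in ["Category", "Region", "Weather", "Seasonality"]:
    le = LabelEncoder()
    df[col] = le.fit_transform(df[col])
    label_encoders[col] = le

# 3. Standard-scale all features (per-column mean 0, std 1)
scaler = StandardScaler()
X = scaler.fit_transform(df)
```
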
#### Training Hyperparameters

The model was tuned using `RandomizedSearchCV` to find the optimal settings. The best-performing hyperparameters were:

- **n_estimators:** 200
- **min_samples_split:** 10
- **min_samples_leaf:** 4
- **max_features:** 'sqrt'
- **max_depth:** 20

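A minimal sketch of such a tuning run is below; the search space, `n_iter`, and the synthetic data are assumptions for illustration, and only the winning values listed above come from the actual project:

```python
from scipy.stats import randint
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# Toy regression data standing in for the real feature matrix
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=42)

# Illustrative search space centred on the reported best values
param_distributions = {
    "n_estimators": randint(100, 301),
    "min_samples_split": randint(2, 21),
    "min_samples_leaf": randint(1, 11),
    "max_features": ["sqrt", "log2"],
    "max_depth": [10, 20, None],
}

search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions,
    n_iter=5,         # number of sampled configurations (assumption)
    cv=3,
    scoring="r2",
    random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```
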
## Evaluation

### Testing Data, Factors & Metrics

- **Testing Data:** The model was evaluated on a 20% holdout set from the original data, created using a standard `train_test_split` with `random_state=42`.
- **Metrics:** The primary evaluation metric was **R-squared (R²)**, which measures the proportion of variance in sales that the model can explain. **Root Mean Squared Error (RMSE)** and **Mean Absolute Error (MAE)** were also used to measure prediction error in units sold.

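All three metrics correspond directly to scikit-learn functions; with toy numbers (not the card's results) they can be computed as:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Toy actual vs. predicted daily sales (illustrative values only)
y_true = np.array([120.0, 80.0, 150.0, 60.0])
y_pred = np.array([110.0, 95.0, 140.0, 70.0])

r2 = r2_score(y_true, y_pred)                      # variance explained
rmse = np.sqrt(mean_squared_error(y_true, y_pred)) # error in units sold
mae = mean_absolute_error(y_true, y_pred)          # average absolute error
print(f"R²={r2:.3f}  RMSE={rmse:.2f}  MAE={mae:.2f}")
```
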
### Results

| Metric         | Value      |
|----------------|------------|
| Training R²    | 0.6324     |
| **Testing R²** | **0.3402** |
| RMSE           | 88.39      |
| MAE            | 69.27      |

## Environmental Impact

- **Hardware Type:** Not tracked (likely trained on a Google Colab standard CPU instance).
- **Hours used:** Not tracked.
- **Cloud Provider:** Not tracked.
- **Compute Region:** Not tracked.
- **Carbon Emitted:** Not tracked.

## Model Card Authors

Hernicksen Satria, Jeremy Wijaya, Lawryan Andrew Darisang (NexusCrew)

## Model Card Contact

Please open an issue in the repository for questions or feedback.