import streamlit as st
# Page configuration
st.set_page_config(page_title="Linear Regression", page_icon="🤖", layout="wide")
# Custom CSS for styling
st.markdown("""
""", unsafe_allow_html=True)
# Title
st.markdown("
Complete Overview of Linear Regression
", unsafe_allow_html=True)
st.write("""
Linear Regression is a key **Supervised Learning** method mainly used for **regression problems**.
It predicts continuous outputs by identifying the best-fit line (or hyperplane) that minimizes the gap between actual and predicted values.
""")
# Understanding the Best Fit Line
st.markdown("What Defines the Best Fit Line?
", unsafe_allow_html=True)
st.write("""
The ideal regression line is one that:
- Gets as close as possible to all data points.
- **Minimizes the error** between actual and predicted values.
- Is calculated using optimization techniques like **Ordinary Least Squares (OLS)** or **Gradient Descent**.
""")
st.image("linearregression.png", width=900)
# Training in Simple Linear Regression
st.markdown("Training Process: Simple Linear Regression
", unsafe_allow_html=True)
st.write("""
Simple Linear Regression is typically used when there's only one or two features.
It models the relationship between \( x \) (independent variable) and \( y \) (dependent variable) as:
\[ y = w_1x + w_0 \]
Where:
- \( w_1 \) is the slope (weight)
- \( w_0 \) is the intercept (bias)
**Steps:**
- Initialize weights \( w_1 \), \( w_0 \) randomly.
- Predict output \( \hat{y} \) for each input \( x \).
- Compute **Mean Squared Error (MSE)** to assess prediction error.
- Update weights using **Gradient Descent**.
- Repeat until the error is minimized.
""")
# Testing Phase
st.markdown("Testing Phase
", unsafe_allow_html=True)
st.write("""
Once trained:
1. Feed a new input \( x \).
2. Use the model to predict \( \hat{y} \).
3. Compare predicted value with actual to evaluate model accuracy.
""")
# Multiple Linear Regression
st.markdown("Multiple Linear Regression
", unsafe_allow_html=True)
st.write("""
Multiple Linear Regression (MLR) extends simple linear regression by using multiple features to predict the output.
- Initialize all weights \( w_1, w_2, ..., w_n \) and \( w_0 \).
- Predict \( y \) for each data point.
- Calculate loss using **MSE**.
- Update weights via **Gradient Descent** to improve model accuracy.
""")
# Gradient Descent Optimization
st.markdown("Gradient Descent: Optimization Technique
", unsafe_allow_html=True)
st.write("""
Gradient Descent is used to minimize the loss function:
- Begin with random weights and bias.
- Compute gradient (derivative) of the loss function.
- Update weights using the gradient:
\[ w = w - \alpha \frac{dL}{dw} \]
- Repeat until convergence (minimum loss is achieved).
- Suggested learning rates: **0.1, 0.01** — avoid very large values.
""")
# Assumptions of Linear Regression
st.markdown("Core Assumptions in Linear Regression
", unsafe_allow_html=True)
st.write("""
1. **Linearity**: Relationship between input and output is linear.
2. **No Multicollinearity**: Features should not be highly correlated.
3. **Homoscedasticity**: Errors should have constant variance.
4. **Normality of Errors**: Residuals are normally distributed.
5. **No Autocorrelation**: Residuals are independent of each other.
""")
# Evaluation Metrics
st.markdown("Model Evaluation Metrics
", unsafe_allow_html=True)
st.write("""
- **Mean Squared Error (MSE)**: Average squared error between predicted and actual values.
- **Mean Absolute Error (MAE)**: Average absolute difference.
- **R-squared (R²)**: Proportion of variance explained by the model.
""")
# Jupyter Notebook Link
st.markdown("Practice Notebook: Linear Regression Implementation
", unsafe_allow_html=True)
st.markdown("Click here to open the Jupyter Notebook", unsafe_allow_html=True)
# Closing Message
st.write("Linear Regression remains a simple yet powerful tool. Understanding how it works under the hood—optimization, assumptions, and evaluation—helps in building better models.")