Create 100_Graduate_Machine_Learning.py
pages/100_Graduate_Machine_Learning.py
ADDED
@@ -0,0 +1,161 @@
import streamlit as st

# Set up the main title
st.title("PhD-Level Machine Learning Course")

# Overview of the Course
st.header("Course Overview")
st.write("""
This PhD-level course in machine learning is designed to cover both foundational and cutting-edge topics,
providing a comprehensive understanding of machine learning theory, advanced optimization techniques, probabilistic models, and more.
The course involves hands-on projects and research problems to solidify your knowledge of this rapidly evolving field.
""")

# Course Objectives
st.header("Course Objectives")
st.markdown("""
- **Understanding Deep Learning Theory**: Dive into the foundational theories behind deep learning models.
- **Exploration of Probabilistic Graphical Models**: Study the application of probabilistic graphical models (PGMs) in machine learning.
- **Advanced Optimization Techniques**: Explore methods such as stochastic gradient descent, variational inference, and more.
- **Cutting-edge Research Areas**: Learn about the latest research in generative models, reinforcement learning, and unsupervised learning.
- **Hands-on Research and Projects**: Implement state-of-the-art models in practical settings.
""")

# Syllabus and Modules
st.header("Course Syllabus")

# Module 1: Foundations and Theoretical Aspects
st.subheader("Module 1: Foundations and Theoretical Aspects")
st.write("""
- **Linear Models**: Linear Regression, Ridge and Lasso Regression, Regularization Techniques.
- **Convex Optimization**: Convex sets, Gradient Descent, Duality.
- **Kernel Methods**: Support Vector Machines, the Kernel Trick, Gaussian and Polynomial Kernels.
- **Readings**: "The Elements of Statistical Learning" by Hastie, Tibshirani, and Friedman.
""")
st.write("### Problems for Module 1:")
st.write("""
1. Implement simple linear regression from scratch using only NumPy and compare your results with scikit-learn (see the starter sketch below).
2. Solve for the weights of a ridge regression model by deriving its closed-form solution by hand.
3. Implement Lasso regression using coordinate descent and evaluate its performance on a synthetic dataset.
4. Prove that the cost function in linear regression is convex.
5. Solve the constrained optimization problem minimize f(x) = x^2 subject to x ≥ 1 using projected gradient descent.
""")
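
# Starter sketch for Problem 1: OLS via the normal equation, checked against scikit-learn.
st.write("#### Starter sketch (Module 1, Problem 1)")
st.write("""
A minimal sketch of ordinary least squares solved in closed form; the synthetic dataset,
seed, and coefficients below are arbitrary placeholders.
""")
st.code('''
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Closed-form OLS with a bias column: w = (X^T X)^{-1} X^T y
Xb = np.hstack([X, np.ones((100, 1))])
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

print("NumPy:       ", w[:3], w[3])
sk = LinearRegression().fit(X, y)
print("scikit-learn:", sk.coef_, sk.intercept_)  # should agree closely
''', language="python")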

# Module 2: Probabilistic Models in Machine Learning
st.subheader("Module 2: Probabilistic Models in Machine Learning")
st.write("""
- **Bayesian Networks**: Structure learning, Conditional independence, Markov Properties.
- **Hidden Markov Models (HMMs)**: Forward-backward algorithm, Viterbi algorithm, Baum-Welch for parameter estimation.
- **Gaussian Mixture Models (GMMs)**: Expectation-Maximization (EM), Clustering.
- **Readings**: "Pattern Recognition and Machine Learning" by Bishop.
""")
st.write("### Problems for Module 2:")
st.write("""
1. Build a Bayesian network for a medical diagnosis problem and perform exact inference on it.
2. Derive the posterior-update rules for a Bayesian network and implement them to compute posterior probabilities.
3. Implement the forward-backward algorithm for an HMM and apply it to a sequence prediction problem (a forward-pass sketch follows below).
4. Use Gaussian mixture models for image segmentation.
5. Implement variable elimination for a small Bayesian network and compare it with junction-tree inference.
""")
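
# Starter sketch for Problem 3: the forward pass of an HMM on a toy parameterization.
st.write("#### Starter sketch (Module 2, Problem 3)")
st.write("""
A minimal sketch of the forward recursion; the transition, emission, and initial
probabilities below are made-up toy values.
""")
st.code('''
import numpy as np

A  = np.array([[0.7, 0.3], [0.4, 0.6]])            # state-transition matrix
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])  # emission probabilities
pi = np.array([0.6, 0.4])                          # initial state distribution
obs = [0, 2, 1, 2]                                 # an observed symbol sequence

# Forward recursion: alpha[i] = P(o_1..o_t, state_t = i)
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

print("P(observation sequence) =", alpha.sum())
''', language="python")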

# Module 3: Advanced Deep Learning
st.subheader("Module 3: Advanced Deep Learning")
st.write("""
- **Generative Adversarial Networks (GANs)**: GAN formulation, Loss functions, Generator vs. Discriminator dynamics.
- **Variational Autoencoders (VAEs)**: Latent variable models, KL Divergence, Reparameterization trick.
- **Deep Reinforcement Learning**: Policy Gradients, Q-learning, Actor-Critic methods.
- **Readings**: "Deep Learning" by Goodfellow, Bengio, and Courville.
""")
st.write("### Problems for Module 3:")
st.write("""
1. Implement a basic GAN from scratch in PyTorch and train it on the MNIST dataset (a single training step is sketched below).
2. Build a CycleGAN to perform unpaired image-to-image translation (e.g., transforming horses into zebras) using a public dataset.
3. Implement a variational autoencoder (VAE) for dimensionality reduction on the MNIST dataset.
4. Train a Proximal Policy Optimization (PPO) agent for continuous control in the BipedalWalker environment.
5. Derive the loss functions for both the generator and the discriminator in a GAN and explain their interaction during training.
""")
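
# Starter sketch for Problem 1: one GAN training step on toy 2-D data.
st.write("#### Starter sketch (Module 3, Problem 1)")
st.write("""
A minimal sketch of one discriminator/generator update with the non-saturating GAN loss;
the tiny architectures and Gaussian "real" data are placeholders, not MNIST-sized.
""")
st.code('''
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(64, 2) + 3.0  # stand-in "real" samples
fake = G(torch.randn(64, 16))    # generator output from noise

# Discriminator step: push D(real) toward 1 and D(fake) toward 0.
loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step (non-saturating loss): push D(fake) toward 1.
loss_g = bce(D(fake), torch.ones(64, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
''', language="python")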

# Module 4: Reinforcement Learning
st.subheader("Module 4: Reinforcement Learning")
st.write("""
- **Value-Based Methods**: Q-learning, SARSA, Bellman equations, MDPs.
- **Policy-Based Methods**: Policy gradient methods, REINFORCE algorithm, Actor-Critic models.
- **Readings**: "Reinforcement Learning: An Introduction" by Sutton and Barto.
""")
st.write("### Problems for Module 4:")
st.write("""
1. Implement Q-learning from scratch to solve a grid-world environment (a tabular sketch appears below).
2. Apply SARSA to a stochastic grid-world environment and compare its performance with Q-learning.
3. Implement the REINFORCE algorithm for a simple policy-gradient task and analyze the variance of the gradient estimates.
4. Train an actor-critic agent using A2C in a continuous state-space environment.
5. Train an agent using PPO in the Humanoid-v2 environment from OpenAI Gym.
""")
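
# Starter sketch for Problem 1: tabular Q-learning on a toy chain world.
st.write("#### Starter sketch (Module 4, Problem 1)")
st.write("""
A minimal sketch of tabular Q-learning; the 5-state chain (reward 1 at the right end)
is a toy stand-in for a grid world, and the hyperparameters are arbitrary.
""")
st.code('''
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Move left (a=0) or right (a=1); reward 1 for reaching the last state."""
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

for _ in range(500):
    s, done = 0, False
    while not done:
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r, done = step(s, a)
        # Q-learning target: r + gamma * max_a' Q(s', a'), zero past terminal states
        Q[s, a] += alpha * (r + gamma * Q[s2].max() * (not done) - Q[s, a])
        s = s2

print(Q)  # the greedy action should be "right" in every state
''', language="python")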

# Module 5: Transfer Learning and Meta-Learning
st.subheader("Module 5: Transfer Learning and Meta-Learning")
st.write("""
- **Transfer Learning Concepts**: Fine-tuning, Feature extraction, Adapting pre-trained models.
- **Meta-Learning (Few-Shot Learning)**: Learning-to-learn, MAML, Prototypical Networks.
- **Readings**: Recent advances in transfer learning (BERT, GPT, etc.).
""")
st.write("### Problems for Module 5:")
st.write("""
1. Fine-tune BERT for a text classification task on a custom dataset using Hugging Face (a fine-tuning skeleton follows below).
2. Extract features with a pre-trained model (e.g., VGG16) and apply them to a new object detection dataset.
3. Train a Prototypical Network for few-shot learning and test it on a small dataset of handwritten characters.
4. Explore how few-shot learning techniques can be applied to reinforcement learning tasks.
5. Train a transfer learning model on medical images (e.g., X-rays) using DenseNet and analyze the results.
""")
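
# Starter sketch for Problem 1: a Hugging Face fine-tuning skeleton.
st.write("#### Starter sketch (Module 5, Problem 1)")
st.write("""
A minimal fine-tuning skeleton; IMDB stands in for the custom dataset, and the
model name, subset size, and hyperparameters are placeholders.
""")
st.code('''
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb")
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length"), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="bert-finetune", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args,
        train_dataset=ds["train"].shuffle(seed=42).select(range(2000))).train()
''', language="python")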

# Module 6: Unsupervised and Self-Supervised Learning
st.subheader("Module 6: Unsupervised and Self-Supervised Learning")
st.write("""
- **Clustering and Dimensionality Reduction**: K-means, PCA, t-SNE, UMAP.
- **Self-Supervised Learning**: SimCLR, BYOL, Denoising Autoencoders.
""")
st.write("### Problems for Module 6:")
st.write("""
1. Implement K-means clustering and apply it to customer behavior data to segment users.
2. Use t-SNE to visualize high-dimensional data (e.g., word embeddings) and explore the resulting clusters.
3. Train a self-supervised model on an image dataset and visualize the learned features.
4. Implement contrastive learning (SimCLR) to learn visual representations without labels.
5. Apply PCA for dimensionality reduction to a real-world dataset, e.g., image data (a PCA-via-SVD sketch follows below).
""")
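
# Starter sketch for Problem 5: PCA implemented via the SVD.
st.write("#### Starter sketch (Module 6, Problem 5)")
st.write("""
A minimal sketch of PCA from scratch; the correlated synthetic matrix below stands in
for a real-world dataset such as image data.
""")
st.code('''
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 10))  # correlated toy data

# PCA via SVD: center the data, decompose, project onto the top two components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T                      # 2-D projection
var_ratio = S**2 / (S**2).sum()
print("explained variance of first two PCs:", var_ratio[:2])
''', language="python")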

# Module 7: Advanced Optimization Techniques
st.subheader("Module 7: Advanced Optimization Techniques")
st.write("""
- **Stochastic Optimization**: SGD, Adam, RMSProp, Learning Rate Schedules.
- **Variational Inference**: ELBO maximization, Stochastic Variational Inference, Monte Carlo methods.
""")
st.write("### Problems for Module 7:")
st.write("""
1. Implement stochastic gradient descent (SGD) with learning-rate schedules and compare optimizers (Adam, RMSProp) on the MNIST dataset.
2. Derive the update rules for the Adam optimizer and implement it from scratch (a NumPy sketch follows below).
3. Train a deep learning model with cyclical learning rates and analyze the training dynamics.
4. Implement variational inference to fit a Bayesian neural network on a small dataset.
5. Explore the impact of weight decay and momentum when training deep networks, and visualize their effect on the loss surface.
""")
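
# Starter sketch for Problem 2: the Adam update implemented in NumPy.
st.write("#### Starter sketch (Module 7, Problem 2)")
st.write("""
A minimal sketch of the Adam update rule, smoke-tested on the quadratic f(w) = ||w||^2
(whose gradient is 2w); the hyperparameters are the usual defaults.
""")
st.code('''
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: moment estimates, bias correction, parameter step."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g**2
    m_hat = m / (1 - b1**t)
    v_hat = v / (1 - b2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

w = np.array([3.0, -2.0])
m = v = np.zeros_like(w)
for t in range(1, 2001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)  # should be close to the origin
''', language="python")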

# Module 8: Special Topics in Machine Learning
st.subheader("Module 8: Special Topics in Machine Learning")
st.write("""
- **Interpretability**: SHAP, LIME, Counterfactual Explanations, Fairness in ML.
- **Adversarial Learning**: FGSM, PGD, Robust models.
- **Causal Inference**: Structural Equation Modeling (SEM), Causal graphs, Counterfactuals.
- **Readings**: "The Book of Why" by Judea Pearl.
""")
st.write("### Problems for Module 8:")
st.write("""
1. Implement an adversarial attack (e.g., FGSM) on a deep learning model and build defenses against it (an FGSM sketch follows below).
2. Use SHAP and LIME to interpret the predictions of a complex machine learning model.
3. Develop a causal inference model for decision-making in healthcare or economics.
4. Implement counterfactual explanations to improve model interpretability.
5. Explore fairness in machine learning using real-world datasets.
""")
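
# Starter sketch for Problem 1: a one-step FGSM attack.
st.write("#### Starter sketch (Module 8, Problem 1)")
st.write("""
A minimal sketch of the fast gradient sign method; the stand-in linear "classifier",
random input, and label are placeholders for a trained model and real data.
""")
st.code('''
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input in [0, 1]
y = torch.tensor([3])                             # stand-in true label

loss_fn(model(x), y).backward()

# FGSM: perturb the input along the sign of the loss gradient, then re-clip.
eps = 0.1
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
print(model(x_adv).argmax(dim=1))  # prediction may flip under the attack
''', language="python")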

# Assessments Section
st.header("Assessments")
st.write("""
- **Midterm Project**: Design and implement a machine learning model that incorporates advanced theoretical methods and optimization techniques.
- **Final Research Paper**: Write and present a research paper on a selected topic in advanced machine learning.
""")