ProCreations committed
Commit a348bb0 · verified · 1 Parent(s): 38602b0

Upload walkthrough-enhancement.js

Files changed (1)
  1. walkthrough-enhancement.js +178 -0
walkthrough-enhancement.js ADDED
@@ -0,0 +1,178 @@
// Enhanced Walkthrough Mode - Deep AI Education
// This file provides additional educational features for the walkthrough mode

// Enhanced training speed control for walkthrough mode
const walkthroughSpeeds = {
  'ultra_slow': 2000, // 2 seconds between training steps - for detailed explanation
  'slow': 1000, // 1 second - good for following along
  'normal': 500, // 0.5 seconds - default walkthrough speed
  'fast': 200 // 0.2 seconds - faster but still educational
};

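// Illustrative usage sketch - an assumption, not part of the uploaded file.
// `trainOneStep` and `walkthroughActive` are hypothetical names standing in
// for the app's real training-step function and run flag.
function runWalkthrough(speed = 'normal') {
  const delay = walkthroughSpeeds[speed] ?? walkthroughSpeeds['normal'];
  const tick = () => {
    trainOneStep(); // advance training by a single step
    if (walkthroughActive) setTimeout(tick, delay); // pause before the next step
  };
  tick();
}
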
// Enhanced tutorial data with deeper explanations
const enhancedTutorials = {
  basics: {
    title: 'Neural Network Basics - Deep Dive',
    description: 'Understand every component of a neural network in detail',
    steps: [
      {
        title: 'Welcome to Neural Networks!',
        content: 'Neural networks are computational models inspired by biological neural networks. Each artificial neuron processes inputs, applies weights, adds bias, and produces an output through an activation function. Think of it as a simplified version of how brain neurons communicate!',
        element: null,
        position: 'center',
        duration: 5000,
        explanation: 'Neural networks revolutionized AI by mimicking how the brain processes information through interconnected neurons.'
      },
      {
        title: 'Input Layer - Data Entry Point',
        content: 'The input layer receives raw data. Each neuron holds one feature or dimension of your data. For images, this might be pixel values. For text, word embeddings. For logic gates, binary values (0 or 1). The values you see (like 0.00) represent the current activation of each input neuron.',
        element: '#networkCanvas',
        position: 'right',
        highlight: {x: 0, y: 0, width: 150, height: 300},
        duration: 8000,
        explanation: 'Input neurons don\'t perform calculations - they just hold and pass forward the data values.'
      },
      {
        title: 'Hidden Layers - The Thinking Process',
        content: 'Hidden layers are where the magic happens! Each neuron combines inputs from the previous layer using learned weights, adds a bias term, and applies an activation function (like ReLU). Multiple hidden layers allow the network to learn increasingly complex patterns and abstractions.',
        element: '#networkCanvas',
        position: 'right',
        highlight: {x: 150, y: 0, width: 200, height: 300},
        duration: 10000,
        explanation: 'Hidden layers transform input data through mathematical operations: z = Σ(weight × input) + bias, then activation = max(0, z) for ReLU.'
      },
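      // Worked example for the step above (an illustration, not in the uploaded
      // file): weights [0.5, -0.3], inputs [1, 1], bias 0.1 give
      //   z = 0.5*1 + (-0.3)*1 + 0.1 = 0.3, and ReLU(0.3) = 0.3.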
      {
        title: 'Output Layer - The Final Decision',
        content: 'The output layer produces the final result. For classification, it uses sigmoid/softmax to output probabilities. For regression, it might use linear activation for continuous values. The number here represents the network\'s confidence or prediction value.',
        element: '#networkCanvas',
        position: 'left',
        highlight: {x: 350, y: 0, width: 100, height: 300},
        duration: 8000,
        explanation: 'Output layer neurons apply specific activation functions based on the task type (sigmoid for binary classification, softmax for multi-class).'
      }
    ]
  },

  training: {
    title: 'Training Process - Step by Step',
    description: 'Learn how neural networks learn through backpropagation',
    steps: [
      {
        title: 'The Learning Cycle Overview',
        content: 'Neural network training follows a cycle: 1) Forward pass (prediction), 2) Loss calculation (how wrong we are), 3) Backward pass (find gradients), 4) Weight updates (improve the network). This cycle repeats thousands of times until the network learns the pattern.',
        element: null,
        position: 'center',
        duration: 8000,
        explanation: 'This is the fundamental learning algorithm that powers all modern deep learning.'
      },
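      // The cycle above as comment pseudocode (an illustration, not in the
      // uploaded file); forward/lossFn/backward/updateWeights are hypothetical:
      //   for (let epoch = 0; epoch < numEpochs; epoch++) {
      //     const prediction = forward(inputs);       // 1) forward pass
      //     const loss = lossFn(prediction, target);  // 2) loss calculation
      //     const grads = backward(loss);             // 3) backward pass
      //     updateWeights(grads, learningRate);       // 4) weight updates
      //   }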
      {
        title: 'Forward Propagation - Making Predictions',
        content: 'Data flows left to right through the network. Each neuron receives inputs, multiplies them by weights, adds a bias, and applies an activation function. Watch the numbers change as different training examples flow through - this is the network making predictions!',
        element: '#networkCanvas',
        position: 'bottom',
        duration: 10000,
        explanation: 'Forward pass: for each layer, output = activation_function(weights × inputs + bias)',
        action: 'highlight_forward_flow'
      },
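      // One-layer sketch of the formula above (an illustration, not in the
      // uploaded file), with `weights` as a 2-D array and `biases` as an array:
      //   const z = weights.map((row, i) =>
      //     row.reduce((sum, w, j) => sum + w * inputs[j], biases[i]));
      //   const outputs = z.map(v => Math.max(0, v)); // ReLU activation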
      {
        title: 'Loss Calculation - Measuring Mistakes',
        content: 'Loss functions measure how far the prediction is from the correct answer. Mean Squared Error for regression: (predicted - actual)². Cross-entropy for classification. Lower loss = better predictions. The goal is to minimize this number!',
        element: '#lossValue',
        position: 'bottom',
        duration: 8000,
        explanation: 'Different tasks use different loss functions, but they all measure prediction error.'
      },
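      // Numeric illustration (not in the uploaded file): prediction 0.9 vs.
      // target 1.0 gives a squared error of (0.9 - 1.0)² = 0.01; a perfect
      // prediction would give 0.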
      {
        title: 'Backpropagation - Learning from Mistakes',
        content: 'Here\'s where the magic happens! The error propagates backward through the network using calculus (chain rule). Each weight learns how much it contributed to the error and adjusts accordingly. This is why it\'s called "backpropagation".',
        element: '#networkCanvas',
        position: 'bottom',
        duration: 12000,
        explanation: 'Backprop uses gradient descent: weight_new = weight_old - learning_rate × gradient',
        action: 'highlight_backward_flow'
      },
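      // Numeric illustration of the update rule above (not in the uploaded
      // file): weight 0.8, gradient 0.5, learning rate 0.1 gives
      //   weight_new = 0.8 - 0.1 × 0.5 = 0.75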
      {
        title: 'Weight Updates - Getting Smarter',
        content: 'Weights are updated using gradients and learning rate. Learning rate controls step size - too big and we overshoot, too small and learning is slow. Watch the connection colors change as weights adjust! Green = positive weights, Red = negative weights.',
        element: '#networkCanvas',
        position: 'top',
        duration: 10000,
        explanation: 'Optimal learning rate is crucial - it\'s often found through experimentation or adaptive methods like Adam.'
      }
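      // Step-size illustration (not in the uploaded file): with gradient 2.0,
      // learning rate 0.01 takes a step of 0.02 (slow but stable), while
      // learning rate 1.0 takes a step of 2.0 and can overshoot the minimum.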
    ]
  },

  visualization: {
    title: 'Understanding Visualizations - Read the AI\'s Mind',
    description: 'Learn to interpret every visual element of the training process',
    steps: [
      {
        title: 'Network Diagram - The AI Brain Map',
        content: 'This diagram shows the current state of every neuron and connection. Circle brightness = activation level (how excited the neuron is). Line thickness = weight strength (how much influence). Colors help distinguish layers and positive/negative weights.',
        element: '#networkCanvas',
        position: 'bottom',
        duration: 10000,
        explanation: 'Real-time visualization helps you understand what the network is "thinking" at each moment.'
      },
      {
        title: 'Neuron Activations - Digital Excitement',
        content: 'Numbers inside neurons show activation values (0.00 to 1.00). Higher values mean the neuron is more "activated" or "excited" by the current input. Watch how these values change with different training examples!',
        element: '#networkCanvas',
        position: 'right',
        duration: 8000,
        explanation: 'Activation values flow through the network like electrical signals in a brain.'
      },
      {
        title: 'Connection Weights - Learned Knowledge',
        content: 'Green lines = positive weights (excitatory connections), Red lines = negative weights (inhibitory connections). Thicker lines = stronger connections. These weights encode everything the network has learned!',
        element: '#networkCanvas',
        position: 'right',
        duration: 10000,
        explanation: 'Weights are the network\'s memory - they store all learned patterns and relationships.'
      },
      {
        title: 'Loss Chart - Learning Progress',
        content: 'This chart is like the network\'s report card! Y-axis shows error level, X-axis shows training progress. The line should generally go down (getting better). Plateaus mean learning has slowed or stopped.',
        element: '#lossChart',
        position: 'left',
        duration: 8000,
        explanation: 'Loss curves tell the story of learning - steep drops mean rapid improvement, flat lines mean stability or convergence.'
      },
      {
        title: 'Training Statistics - Performance Dashboard',
        content: 'Epochs = training cycles completed. Loss = current error level. Accuracy = percentage correct. Current = which example we\'re learning from. These metrics tell you exactly how well the AI is performing!',
        element: '.stats-grid',
        position: 'bottom',
        duration: 10000,
        explanation: 'Monitoring these metrics helps diagnose training problems and track progress.'
      },
      {
        title: 'Prediction Results - The Moment of Truth',
        content: 'Each card shows a training example. Raw = actual network output. Predicted = final decision. Status = correct/wrong. Green border = correct prediction, Red = wrong, Blue = currently training on this example.',
        element: '#taskOutput',
        position: 'top',
        duration: 10000,
        explanation: 'This is where you see the network\'s actual performance on each individual example.'
      }
    ]
  },

  logic: {
    title: 'Logic Gates - Building Blocks of Computing',
    description: 'See how neural networks learn the fundamental operations of digital computers',
    steps: [
      {
        title: 'Logic Gates - Foundation of Computing',
        content: 'Logic gates are the building blocks of all digital computers! They take binary inputs (0 or 1) and produce binary outputs following simple rules. Neural networks can learn these rules from examples, just like learning any other pattern.',
        element: null,
        position: 'center',
        duration: 8000,
        explanation: 'Every computer operation, from simple arithmetic to complex AI, ultimately relies on combinations of these basic logic gates.'
      },
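      // Example training set for the AND gate discussed next (an illustration,
      // not in the uploaded file):
      //   [{inputs: [0, 0], target: 0}, {inputs: [0, 1], target: 0},
      //    {inputs: [1, 0], target: 0}, {inputs: [1, 1], target: 1}]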
      {
        title: 'AND Gate - Both Must Be True',
        content: 'AND outputs 1 only when BOTH inputs are 1. Like saying "I\'ll go outside if it\'s sunny AND warm." This pattern is "linearly separable" - you can draw a straight line to separate the 0s from the 1s, making it easy for neural networks to learn.',
        element: '#taskOutput',
        position: 'top',
        duration: 10000,
        explanation: 'Linear separability means a simple