Aluode committed on
Commit 9172632 · verified · 1 parent: 1bf3a01

Upload 2 files

Files changed (2)
  1. app.py +1864 -0
  2. requirements.txt +15 -0
app.py ADDED
@@ -0,0 +1,1864 @@
import numpy as np
import pandas as pd
import yfinance as yf
import streamlit as st
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from datetime import datetime, timedelta
import time
from scipy.stats import linregress
import requests
from scipy import signal
import ta
from ta.trend import MACD, SMAIndicator, EMAIndicator
from ta.momentum import RSIIndicator, StochasticOscillator
from ta.volatility import BollingerBands, AverageTrueRange
from ta.volume import OnBalanceVolumeIndicator, MFIIndicator


class DendriticNode:
    """
    Represents a single node in the dendritic network.
    Each node can have parent and child dendrites, forming a hierarchical structure.
    """
    def __init__(self, level=0, feature_index=None, threshold=0.5, parent=None, name=None, growth_factor=1.0):
        self.level = level  # Depth in the hierarchy
        self.feature_index = feature_index  # Which feature this node tracks
        self.threshold = threshold  # Activation threshold
        self.parent = parent  # Parent node
        self.children = []  # Child nodes
        self.strength = 0.5  # Connection strength
        self.activation_history = []  # Recent activation levels
        self.prediction_vector = None  # Pattern that often follows this node's activation
        self.name = name  # Optional human-readable name for this dendrite
        self.growth_factor = growth_factor  # How readily this dendrite grows new connections
        self.learning_rate = 0.01  # Adjustable learning rate
        self.prediction_confidence = 0.5  # Confidence in predictions (0-1)
        self.last_activations = []  # Store last few activations for pattern recognition
        self.pattern_memory = {}  # Dictionary to store recognized patterns

    def activate(self, input_vector, learning_rate=0.01):
        """Activate the node based on input and propagate to children"""
        # Calculate activation based on feature if available
        if self.feature_index is not None and self.feature_index < len(input_vector):
            activation = input_vector[self.feature_index]
        else:
            # For higher-level nodes, activation is a weighted aggregate of children
            if not self.children:
                activation = 0.5  # Default activation
            else:
                # Prioritize stronger child dendrites for activation
                child_activations = []
                child_weights = []
                for child in self.children:
                    child_act = child.activate(input_vector)
                    child_activations.append(child_act)
                    child_weights.append(child.strength)

                # If all weights are zero, use uniform weighting
                total_weight = sum(child_weights)
                if total_weight == 0:
                    activation = np.mean(child_activations) if child_activations else 0.5
                else:
                    # Calculate weighted average
                    activation = sum(a * w for a, w in zip(child_activations, child_weights)) / total_weight

        # Update strength based on activation
        if activation > self.threshold:
            # Strong activation increases strength more when close to threshold
            strength_boost = learning_rate * (1 + 0.5 * (1 - abs(activation - self.threshold)))
            self.strength += strength_boost
        else:
            # Decay is slower for specialized dendrites to maintain stability
            decay_rate = learning_rate * 0.1 * (1.0 if self.name is None else 0.5)
            self.strength -= decay_rate

        # Ensure strength remains bounded
        self.strength = np.clip(self.strength, 0.1, 1.0)

        # Store activation in history
        self.activation_history.append(activation)
        if len(self.activation_history) > 100:  # Keep last 100 activations
            self.activation_history.pop(0)

        # Store recent activations for pattern recognition
        self.last_activations.append(activation)
        if len(self.last_activations) > 5:  # Track last 5 activations
            self.last_activations.pop(0)

        # Check if we have a recognizable pattern
        if len(self.last_activations) >= 3:
            # Simplify the pattern to a signature (e.g., up-down-up)
            pattern_sig = ''.join(['U' if self.last_activations[i] > self.last_activations[i-1]
                                   else 'D' for i in range(1, len(self.last_activations))])

            # Store this pattern's occurrence
            if pattern_sig in self.pattern_memory:
                self.pattern_memory[pattern_sig] += 1
            else:
                self.pattern_memory[pattern_sig] = 1

        return activation * self.strength

    def update_prediction(self, future_vector, learning_rate=0.01):
        """Update prediction vector based on what follows this node's activation"""
        if not self.activation_history:
            return  # No activations yet

        # Only update prediction if recent activation was significant
        recent_activation = self.activation_history[-1] if self.activation_history else 0
        if recent_activation * self.strength < 0.3:
            return  # Not active enough to learn from

        if self.prediction_vector is None:
            self.prediction_vector = future_vector.copy()
            self.prediction_confidence = 0.5  # Initial confidence
        else:
            # Adjust learning rate based on activation strength
            effective_rate = learning_rate * min(1.0, recent_activation * 2)

            # Calculate prediction error
            if hasattr(future_vector, '__len__') and hasattr(self.prediction_vector, '__len__'):
                error = np.sqrt(np.mean((np.array(future_vector) - np.array(self.prediction_vector))**2))

                # Adjust confidence based on error (lower error = higher confidence)
                confidence_change = 0.1 * (1.0 - min(error * 2, 1.0))
                self.prediction_confidence = np.clip(
                    self.prediction_confidence + confidence_change, 0.1, 0.9)

            # Update prediction with weighted blend
            self.prediction_vector = (1 - effective_rate) * self.prediction_vector + effective_rate * future_vector

    def predict(self):
        """Generate prediction based on current activation pattern"""
        if self.prediction_vector is None:
            return None

        # Scale by strength and confidence
        prediction = self.prediction_vector * self.strength * self.prediction_confidence

        # If we have recognized patterns, boost prediction based on pattern history
        if self.last_activations and len(self.last_activations) >= 3:
            pattern_sig = ''.join(['U' if self.last_activations[i] > self.last_activations[i-1]
                                   else 'D' for i in range(1, len(self.last_activations))])

            if pattern_sig in self.pattern_memory:
                # Boost based on how often we've seen this pattern (normalized)
                pattern_count = self.pattern_memory[pattern_sig]
                total_patterns = sum(self.pattern_memory.values())
                pattern_confidence = min(0.2, pattern_count / (total_patterns + 1))

                # If last part of pattern is "U", boost upward prediction
                if pattern_sig.endswith('U'):
                    for i in range(len(prediction)):
                        prediction[i] = min(1.0, prediction[i] + pattern_confidence)
                # If last part of pattern is "D", boost downward prediction
                elif pattern_sig.endswith('D'):
                    for i in range(len(prediction)):
                        prediction[i] = max(0.0, prediction[i] - pattern_confidence)

        return prediction

    def grow_dendrite(self, feature_index=None, threshold=None, name=None, growth_factor=None):
        """Grow a new child dendrite"""
        if threshold is None:
            threshold = self.threshold + np.random.uniform(-0.1, 0.1)  # Slightly different threshold

        if growth_factor is None:
            growth_factor = self.growth_factor

        # Create new child with reference to parent
        child = DendriticNode(
            level=self.level + 1,
            feature_index=feature_index,
            threshold=threshold,
            parent=self,
            name=name,
            growth_factor=growth_factor
        )
        self.children.append(child)
        return child

    def prune_weak_dendrites(self, min_strength=0.2):
        """Remove weak dendrites that haven't been useful"""
        # Don't prune named dendrites (preserve specialized ones)
        self.children = [child for child in self.children
                         if child.strength > min_strength or child.name is not None]

        # Recursively prune children
        for child in self.children:
            child.prune_weak_dendrites(min_strength)

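# Illustrative sketch: how a single DendriticNode is exercised in isolation. The
# feature vectors below are made-up placeholders; in the app the vectors come from
# HierarchicalDendriticNetwork.preprocess_data(). This helper is never called.
def _demo_dendritic_node():
    node = DendriticNode(level=1, feature_index=0, threshold=0.5, name="price_demo")
    today = np.array([0.8, 0.4, 0.2])      # hypothetical scaled features for "today"
    tomorrow = np.array([0.9, 0.5, 0.3])   # hypothetical scaled features for "tomorrow"
    activation = node.activate(today)      # today[0], scaled by the node's strength
    node.update_prediction(tomorrow)       # remember what followed this activation
    return activation, node.predict()      # prediction ~ tomorrow * strength * confidence
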
class HierarchicalDendriticNetwork:
    """
    Implements a hierarchical network of dendrites for stock prediction.
    The network self-organizes based on patterns in the input data.
    """
    def __init__(self, input_dim, max_levels=3, initial_dendrites_per_level=5):
        self.input_dim = input_dim  # Number of input features
        self.max_levels = max_levels  # Maximum depth of hierarchy

        # Root node (soma)
        self.root = DendriticNode(level=0, name="root")

        # Initialize basic structure
        self._initialize_dendrites(initial_dendrites_per_level)

        # Scaling for inputs
        self.scaler = MinMaxScaler(feature_range=(0, 1))

        # Memory for temporal patterns
        self.memory_window = 15  # Days to remember (increased from 10)
        self.memory_buffer = []  # Store recent data

        # Fractal dimension estimate
        self.fractal_dim = 1.0

        # Performance tracking
        self.prediction_accuracy = []
        self.predicted_directions = []
        self.actual_directions = []

        # Feature importance tracking
        self.feature_importance = np.ones(input_dim) / input_dim

        # Market regime detection
        self.current_regime = "unknown"  # "bullish", "bearish", "sideways", "volatile"
        self.regime_history = []

        # Adaptive threshold based on market volatility
        self.confidence_threshold = 0.55  # Starting threshold
        self.volatility_history = []

        # Cross-asset correlations (will be populated during training)
        self.asset_correlations = {}

    def _initialize_dendrites(self, dendrites_per_level):
        """Create initial dendrite structure with specialized dendrites for stock patterns"""
        # Price level dendrites
        self.root.grow_dendrite(feature_index=0, threshold=0.3, name="price_low", growth_factor=1.2)
        self.root.grow_dendrite(feature_index=0, threshold=0.5, name="price_mid", growth_factor=1.0)
        self.root.grow_dendrite(feature_index=0, threshold=0.7, name="price_high", growth_factor=1.2)

        # Price trend dendrites
        self.root.grow_dendrite(feature_index=1, threshold=0.3, name="downtrend", growth_factor=1.2)
        self.root.grow_dendrite(feature_index=1, threshold=0.5, name="neutral_trend", growth_factor=0.8)
        self.root.grow_dendrite(feature_index=1, threshold=0.7, name="uptrend", growth_factor=1.2)

        # Volatility dendrites
        self.root.grow_dendrite(feature_index=2, threshold=0.3, name="low_volatility", growth_factor=0.8)
        self.root.grow_dendrite(feature_index=2, threshold=0.7, name="high_volatility", growth_factor=1.2)

        # Volume dendrites
        self.root.grow_dendrite(feature_index=3, threshold=0.3, name="low_volume", growth_factor=0.7)
        self.root.grow_dendrite(feature_index=3, threshold=0.7, name="high_volume", growth_factor=1.3)

        # Momentum dendrites
        self.root.grow_dendrite(feature_index=4, threshold=0.3, name="negative_momentum", growth_factor=1.2)
        self.root.grow_dendrite(feature_index=4, threshold=0.7, name="positive_momentum", growth_factor=1.2)

        # RSI dendrites
        self.root.grow_dendrite(feature_index=7, threshold=0.3, name="oversold", growth_factor=1.3)
        self.root.grow_dendrite(feature_index=7, threshold=0.7, name="overbought", growth_factor=1.3)

        # MACD dendrites
        self.root.grow_dendrite(feature_index=5, threshold=0.3, name="bearish_macd", growth_factor=1.1)
        self.root.grow_dendrite(feature_index=5, threshold=0.7, name="bullish_macd", growth_factor=1.1)

        # Bollinger Band dendrites
        self.root.grow_dendrite(feature_index=6, threshold=0.2, name="below_lower_band", growth_factor=1.3)
        self.root.grow_dendrite(feature_index=6, threshold=0.8, name="above_upper_band", growth_factor=1.3)

        # Currency-related dendrites
        if self.input_dim > 15:  # If we have currency features
            self.root.grow_dendrite(feature_index=15, threshold=0.3, name="dollar_weak", growth_factor=1.1)
            self.root.grow_dendrite(feature_index=15, threshold=0.7, name="dollar_strong", growth_factor=1.1)

        # Level 2: Create pattern detector dendrites
        # Create dendrites that specifically look for common patterns

        # Find dendrites by name
        uptrend = None
        downtrend = None
        high_volume = None
        low_volatility = None
        oversold = None
        overbought = None

        for child in self.root.children:
            if child.name == "uptrend":
                uptrend = child
            elif child.name == "downtrend":
                downtrend = child
            elif child.name == "high_volume":
                high_volume = child
            elif child.name == "low_volatility":
                low_volatility = child
            elif child.name == "oversold":
                oversold = child
            elif child.name == "overbought":
                overbought = child

        # Pattern 1: Uptrend with increasing volume (bullish)
        if uptrend and high_volume:
            pattern1 = uptrend.grow_dendrite(threshold=0.6, name="uptrend_with_volume", growth_factor=1.5)
            for _ in range(2):
                pattern1.grow_dendrite(threshold=0.6)

        # Pattern 2: Downtrend with high volatility (bearish)
        if downtrend:
            pattern2 = downtrend.grow_dendrite(threshold=0.4, name="downtrend_continuation", growth_factor=1.5)
            for _ in range(2):
                pattern2.grow_dendrite(threshold=0.4)

        # Pattern 3: Low volatility with positive momentum (potential breakout)
        if low_volatility:
            pattern3 = low_volatility.grow_dendrite(threshold=0.6, name="volatility_compression", growth_factor=1.5)
            for _ in range(2):
                pattern3.grow_dendrite(threshold=0.6)

        # Pattern 4: Oversold with volume spike (potential reversal)
        if oversold and high_volume:
            pattern4 = oversold.grow_dendrite(threshold=0.7, name="oversold_reversal", growth_factor=1.5)
            for _ in range(2):
                pattern4.grow_dendrite(threshold=0.7)

        # Pattern 5: Overbought with volume decline (potential top)
        if overbought:
            pattern5 = overbought.grow_dendrite(threshold=0.3, name="overbought_reversal", growth_factor=1.5)
            for _ in range(2):
                pattern5.grow_dendrite(threshold=0.3)

        # Add some general dendrites for other patterns
        for dendrite in self.root.children:
            for _ in range(dendrites_per_level // 5):
                dendrite.grow_dendrite()

        # Level 3: Higher-level pattern integration
        if self.max_levels >= 3:
            # Create specialized market regime dendrites
            bullish_regime = self.root.grow_dendrite(name="bullish_regime", threshold=0.7, growth_factor=1.2)
            bearish_regime = self.root.grow_dendrite(name="bearish_regime", threshold=0.3, growth_factor=1.2)
            sideways_regime = self.root.grow_dendrite(name="sideways_regime", threshold=0.5, growth_factor=1.0)

            # Add children to these regime detectors
            for _ in range(dendrites_per_level // 3):
                bullish_regime.grow_dendrite(threshold=np.random.uniform(0.6, 0.8))
                bearish_regime.grow_dendrite(threshold=np.random.uniform(0.2, 0.4))
                sideways_regime.grow_dendrite(threshold=np.random.uniform(0.4, 0.6))

    def preprocess_data(self, data):
        """Preprocess stock data for the dendritic network"""
        # Extract relevant features
        features = self._extract_features(data)

        # Scale features to [0, 1]
        if features.shape[0] > 0:  # Check if we have any data
            scaled_features = self.scaler.fit_transform(features)
            return scaled_features
        return np.array([])

    def _extract_features(self, data):
        """Extract features from stock data with enhanced technical indicators"""
        if data.empty:
            return np.array([])

        # Create a copy of the dataframe to avoid modifying the original
        df = data.copy()

        # Basic features
        features = []

        # 1. Price features - normalized closing price
        close = df['Close'].values
        price = (close - np.mean(close)) / (np.std(close) + 1e-8)
        features.append(price)

        # 2. Returns (daily percent change)
        returns = df['Close'].pct_change().fillna(0).values
        features.append(returns)

        # 3. Volatility (rolling std of returns)
        volatility = df['Close'].pct_change().rolling(window=5).std().fillna(0).values
        features.append(volatility)

        # 4. Volume relative to average
        rel_volume = df['Volume'] / df['Volume'].rolling(window=20).mean().fillna(1)
        rel_volume = rel_volume.fillna(1).values
        features.append(rel_volume)

        # 5. Price momentum (rate of change over 5 days)
        momentum = df['Close'].pct_change(periods=5).fillna(0).values
        features.append(momentum)

        # 6. MACD Line
        macd = MACD(close=df['Close']).macd()
        macd = (macd - np.mean(macd)) / (np.std(macd) + 1e-8)
        features.append(macd.fillna(0).values)

        # 7. Bollinger Bands Position
        bb = BollingerBands(close=df['Close'], window=20, window_dev=2)
        bb_pos = (df['Close'] - bb.bollinger_lband()) / (bb.bollinger_hband() - bb.bollinger_lband() + 1e-8)
        features.append(bb_pos.fillna(0.5).values)

        # 8. RSI
        rsi = RSIIndicator(close=df['Close'], window=14).rsi() / 100.0
        features.append(rsi.fillna(0.5).values)

        # 9. Stochastic Oscillator
        stoch = StochasticOscillator(high=df['High'], low=df['Low'], close=df['Close']).stoch() / 100.0
        features.append(stoch.fillna(0.5).values)

        # 10. Average True Range (normalized)
        atr = AverageTrueRange(high=df['High'], low=df['Low'], close=df['Close']).average_true_range()
        atr = (atr - np.min(atr)) / (np.max(atr) - np.min(atr) + 1e-8)
        features.append(atr.fillna(0.2).values)

        # 11. On Balance Volume (normalized)
        obv = OnBalanceVolumeIndicator(close=df['Close'], volume=df['Volume']).on_balance_volume()
        obv = (obv - np.mean(obv)) / (np.std(obv) + 1e-8)
        features.append(obv.fillna(0).values)

        # 12. Money Flow Index
        mfi = MFIIndicator(high=df['High'], low=df['Low'], close=df['Close'],
                           volume=df['Volume'], window=14).money_flow_index() / 100.0
        features.append(mfi.fillna(0.5).values)

        # 13. Price Distance from 50-day SMA (normalized)
        sma50 = SMAIndicator(close=df['Close'], window=50).sma_indicator()
        sma_dist = (df['Close'] - sma50) / (df['Close'] + 1e-8)
        features.append(sma_dist.fillna(0).values)

        # 14. EMA Crossover Signal (fast vs slow EMAs)
        ema12 = EMAIndicator(close=df['Close'], window=12).ema_indicator()
        ema26 = EMAIndicator(close=df['Close'], window=26).ema_indicator()
        ema_cross = (ema12 - ema26) / (df['Close'] + 1e-8)
        features.append(ema_cross.fillna(0).values)

        # 15. Fibonacci Retracement Levels (dynamic)
        # Find recent high and low in a rolling window
        window = 20
        df['RollingHigh'] = df['High'].rolling(window=window).max()
        df['RollingLow'] = df['Low'].rolling(window=window).min()

        # Calculate where current price is in the retracement levels
        range_size = df['RollingHigh'] - df['RollingLow']
        fib_pos = (df['Close'] - df['RollingLow']) / (range_size + 1e-8)
        features.append(fib_pos.fillna(0.5).values)

        # Include any currency-related features if present
        for col in df.columns:
            if col.startswith('Currency_'):
                # Normalize currency data
                curr_data = df[col].values
                if len(curr_data) > 0:
                    curr_norm = (curr_data - np.mean(curr_data)) / (np.std(curr_data) + 1e-8)
                    features.append(curr_norm)

        # Transpose to get features as columns
        return np.transpose(np.array(features))

    def add_currency_data(self, data, currency_data):
        """Add currency exchange rate data to feature set"""
        if data.empty or currency_data.empty:
            return data

        # Resample currency data to match stock data frequency
        currency_data = currency_data.reindex(data.index, method='ffill')

        # Add currency columns to stock data
        for col in currency_data.columns:
            data[f'Currency_{col}'] = currency_data[col]

        return data

    def add_sector_data(self, data, sector_ticker, period="1y"):
        """Add sector ETF data for correlation analysis"""
        try:
            # Fetch sector data
            sector_data = yf.Ticker(sector_ticker).history(period=period)
            if sector_data.empty:
                return data

            # Align with stock data dates
            sector_data = sector_data.reindex(data.index, method='ffill')

            # Calculate daily returns
            sector_returns = sector_data['Close'].pct_change().fillna(0)

            # Add to stock data
            data[f'Sector_{sector_ticker}'] = sector_returns

            return data
        except Exception as e:
            st.error(f"Error fetching sector data: {e}")
            return data

    def detect_market_regime(self, data, lookback=20):
        """Detect current market regime based on price action and volatility"""
        if len(data) < lookback:
            return "unknown"

        # Get recent data
        recent = data.iloc[-lookback:]

        # Calculate trend strength
        returns = recent['Close'].pct_change().dropna()
        trend = np.sum(returns) / (np.std(returns) + 1e-8)

        # Calculate volatility
        volatility = np.std(returns) * np.sqrt(252)  # Annualized

        # Store volatility for adaptive thresholds
        self.volatility_history.append(volatility)
        if len(self.volatility_history) > 10:
            self.volatility_history.pop(0)

        # Update confidence threshold based on recent volatility
        if len(self.volatility_history) > 1:
            avg_vol = np.mean(self.volatility_history)
            # Higher volatility = higher threshold (require more confidence)
            self.confidence_threshold = 0.5 + min(0.2, avg_vol)

        # Determine regime
        if abs(trend) < 0.5:  # Low trend strength
            if volatility > 0.2:  # But high volatility
                regime = "volatile"
            else:
                regime = "sideways"
        elif trend > 0.5:  # Strong uptrend
            regime = "bullish"
        else:  # Strong downtrend
            regime = "bearish"

        self.current_regime = regime
        self.regime_history.append(regime)

        return regime

    def estimate_fractal_dimension(self):
        """
        Estimate the fractal dimension of the dendrite activation patterns
        using a box counting method simulation
        """
        # Create a simulated activation grid from dendrite strengths
        grid_size = 32
        activation_grid = np.zeros((grid_size, grid_size))

        def add_node_to_grid(node, x=0, y=0, spread=grid_size/2):
            # Add fuzzy activation for more complex boundaries
            strength = node.strength
            x_int, y_int = int(x), int(y)

            # Create a small activation cloud around the dendrite
            for dx in range(-1, 2):
                for dy in range(-1, 2):
                    nx, ny = (x_int + dx) % grid_size, (y_int + dy) % grid_size
                    # Stronger activation at center, weaker at edges
                    dist = np.sqrt(dx**2 + dy**2)
                    activation_grid[nx, ny] = max(
                        activation_grid[nx, ny],
                        strength * max(0, 1 - dist/2)
                    )

            # Add children in a circular pattern with some randomization
            if node.children:
                angle_step = 2 * np.pi / len(node.children)
                for i, child in enumerate(node.children):
                    angle = i * angle_step + np.random.uniform(-0.2, 0.2)
                    new_spread = max(1, spread * (0.6 + 0.1 * np.random.random()))
                    new_x = x + np.cos(angle) * new_spread
                    new_y = y + np.sin(angle) * new_spread
                    add_node_to_grid(child, new_x, new_y, new_spread)

        # Start from center of grid
        add_node_to_grid(self.root, grid_size//2, grid_size//2)

        # Apply Gaussian blur to create more natural boundaries
        from scipy.ndimage import gaussian_filter
        activation_grid = gaussian_filter(activation_grid, sigma=0.5)

        # Create more defined boundaries using edge detection
        edges = np.zeros_like(activation_grid)
        threshold = 0.2
        for i in range(1, grid_size-1):
            for j in range(1, grid_size-1):
                if activation_grid[i, j] > threshold:
                    # Check if there's a significant gradient in any direction
                    neighbors = [
                        activation_grid[i-1, j], activation_grid[i+1, j],
                        activation_grid[i, j-1], activation_grid[i, j+1]
                    ]
                    if max(neighbors) - min(neighbors) > 0.15:
                        edges[i, j] = 0.5  # Mark as boundary

        # Combine the activation with boundary emphasis
        combined_grid = activation_grid.copy()
        combined_grid[edges > 0] += 0.3  # Enhance boundaries
        combined_grid = np.clip(combined_grid, 0, 1)

        # Apply box counting method to estimate fractal dimension
        box_sizes = [1, 2, 4, 8, 16]
        counts = []

        for size in box_sizes:
            count = 0
            # Count boxes of size 'size' needed to cover the pattern
            for i in range(0, grid_size, size):
                for j in range(0, grid_size, size):
                    if np.any(combined_grid[i:i+size, j:j+size] > 0.25):
                        count += 1
            counts.append(count)

        # Calculate dimension from log-log plot slope
        if all(c > 0 for c in counts):
            coeffs = np.polyfit(np.log(box_sizes), np.log(counts), 1)
            self.fractal_dim = -coeffs[0]  # Negative slope gives dimension

        return self.fractal_dim, combined_grid

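    # Illustrative sketch: the same box-counting slope applied to a known shape. A
    # completely filled grid behaves like a 2-D object, so -slope should come out near
    # 2.0. This is only a sanity check of the log-log slope logic above; the method name
    # and defaults are assumptions and nothing in the app calls it.
    @staticmethod
    def _box_counting_sanity_check(grid_size=32):
        grid = np.ones((grid_size, grid_size))  # filled square, expected dimension ~2
        box_sizes = [1, 2, 4, 8, 16]
        counts = []
        for size in box_sizes:
            count = 0
            for i in range(0, grid_size, size):
                for j in range(0, grid_size, size):
                    if np.any(grid[i:i+size, j:j+size] > 0.25):
                        count += 1
            counts.append(count)
        slope = np.polyfit(np.log(box_sizes), np.log(counts), 1)[0]
        return -slope  # ~2.0 for the filled square
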
    def find_pattern_correlations(self, input_data_buffer):
        """Find patterns of feature correlations in the input data"""
        if not input_data_buffer or len(input_data_buffer) < 5:
            return {}

        # Stack data from buffer
        data_matrix = np.vstack(input_data_buffer)

        # Calculate correlation matrix
        corr_matrix = np.corrcoef(data_matrix.T)

        # Find strongest feature pairs
        pairs = []
        n_features = corr_matrix.shape[0]
        for i in range(n_features):
            for j in range(i+1, n_features):
                pairs.append((i, j, abs(corr_matrix[i, j])))

        # Sort by correlation strength
        pairs.sort(key=lambda x: x[2], reverse=True)

        # Return top correlations
        top_pairs = {}
        for i, j, strength in pairs[:5]:  # Top 5 correlations
            if strength > 0.4:  # Only meaningful correlations
                key = f"feature_{i}_feature_{j}"
                top_pairs[key] = strength

        return top_pairs

    def train(self, data, epochs=1, learning_rate=0.01, growth_frequency=10):
        """
        Train the dendritic network on stock data.
        The network adapts its structure based on patterns in the data.
        """
        if data.empty:
            return

        # First determine market regime
        self.detect_market_regime(data)

        # Preprocess data
        scaled_data = self.preprocess_data(data)

        if len(scaled_data) == 0:
            return

        # Initialize memory buffer
        self.memory_buffer = []

        # Train for specified number of epochs
        for epoch in range(epochs):
            # Track predictions for evaluation
            predicted_values = []
            actual_values = []

            # Process each time step
            for i in range(len(scaled_data) - 1):
                current_vector = scaled_data[i]
                future_vector = scaled_data[i + 1]

                # Add to memory buffer
                self.memory_buffer.append(current_vector)
                if len(self.memory_buffer) > self.memory_window:
                    self.memory_buffer.pop(0)

                # Find pattern correlations periodically
                if i % 20 == 0 and len(self.memory_buffer) > 5:
                    self.find_pattern_correlations(self.memory_buffer)

                # Activate dendrites
                root_activation = self.root.activate(current_vector, learning_rate)

                # Make a prediction before seeing the next value
                if i > self.memory_window:
                    prediction = self.predict_next()
                    if prediction is not None and len(prediction) > 0:
                        # For now, just use first feature (price) for evaluation
                        predicted_values.append(prediction[0])
                        actual_values.append(future_vector[0])

                # Update dendrite predictions
                self._update_predictions(future_vector, learning_rate)

                # Periodically grow new dendrites or prune weak ones
                if i % growth_frequency == 0:
                    self._adapt_structure(current_vector, learning_rate)

            # Calculate prediction accuracy for this epoch
            if predicted_values and actual_values:
                # Calculate directional accuracy (up/down)
                pred_dir = []
                actual_dir = []

                for i in range(1, len(predicted_values)):
                    # Predicted direction: is next predicted value higher than current actual?
                    pred_dir.append(1 if predicted_values[i] > actual_values[i-1] else 0)
                    # Actual direction: is next actual value higher than current actual?
                    actual_dir.append(1 if actual_values[i] > actual_values[i-1] else 0)

                if pred_dir and actual_dir:
                    accuracy = sum(p == a for p, a in zip(pred_dir, actual_dir)) / len(pred_dir)
                    self.prediction_accuracy.append(accuracy)

                    # Store for analysis
                    self.predicted_directions.extend(pred_dir)
                    self.actual_directions.extend(actual_dir)

                    if epoch == epochs - 1:  # Only on last epoch
                        st.write(f"Epoch {epoch+1}: Directional Accuracy = {accuracy:.4f}")

        # Calculate fractal dimension after training
        self.estimate_fractal_dimension()

    def _update_predictions(self, future_vector, learning_rate):
        """Update prediction vectors throughout the network"""
        # Only update if we have enough memory
        if len(self.memory_buffer) < 2:
            return

        # Get last and current vectors
        current_vector = self.memory_buffer[-1]

        def update_node_predictions(node, level_learning_rate):
            # Update this node's prediction
            node.update_prediction(future_vector, level_learning_rate)

            # Recursively update child nodes with diminishing learning rate
            child_lr = level_learning_rate * 0.9  # Reduce learning rate for children
            for child in node.children:
                update_node_predictions(child, child_lr)

        # Start from root with base learning rate
        update_node_predictions(self.root, learning_rate)

    def _adapt_structure(self, current_vector, learning_rate):
        """Adapt the dendritic structure by growing or pruning dendrites"""
        # Grow new dendrites where useful
        def adapt_node(node):
            # Probabilistic growth based on activation, strength, and level
            growth_prob = node.strength * node.growth_factor * (1.0 / (node.level + 1))
            if np.random.random() < growth_prob and node.level < self.max_levels - 1:
                # Determine feature for new dendrite
                if node.level == 0:
                    # First level dendrites track specific features
                    # Prioritize features based on their importance
                    feature_weights = self.feature_importance + 0.1  # Avoid zero probability
                    feature_idx = np.random.choice(
                        range(self.input_dim),
                        p=feature_weights/np.sum(feature_weights)
                    )

                    # Create dendrite with threshold biased toward discriminating values
                    if current_vector[feature_idx] > 0.7:
                        threshold = np.random.uniform(0.6, 0.9)  # High threshold
                    elif current_vector[feature_idx] < 0.3:
                        threshold = np.random.uniform(0.1, 0.4)  # Low threshold
                    else:
                        threshold = np.random.uniform(0.3, 0.7)  # Middle threshold

                    node.grow_dendrite(feature_index=feature_idx, threshold=threshold)
                else:
                    # Higher level dendrites can track patterns across features
                    threshold = np.random.uniform(0.3, 0.7)
                    node.grow_dendrite(threshold=threshold)

            # Recursively adapt children
            for child in node.children:
                adapt_node(child)

        # Update feature importance based on current activation
        if len(self.memory_buffer) > 1:
            last_vector = self.memory_buffer[-2]
            current_vector = self.memory_buffer[-1]

            # Changes in features that correlate with changes in price are important
            price_change = current_vector[0] - last_vector[0]
            for i in range(1, min(len(current_vector), len(self.feature_importance))):
                feature_change = current_vector[i] - last_vector[i]
                importance_update = abs(feature_change * price_change) * 0.1
                self.feature_importance[i] = self.feature_importance[i] * 0.99 + importance_update

            # Normalize
            self.feature_importance = self.feature_importance / np.sum(self.feature_importance)

        # Start adaptation from root
        adapt_node(self.root)

        # Periodically prune weak dendrites, but less often in early training
        if np.random.random() < 0.15:  # 15% chance to prune
            min_strength = 0.15  # Lower threshold to keep more dendrites
            self.root.prune_weak_dendrites(min_strength=min_strength)

    def predict_next(self):
        """
        Generate a prediction for the next time step based on recent memory
        and dendrite activation patterns
        """
        if not self.memory_buffer:
            return None

        # Get latest input
        current_vector = self.memory_buffer[-1]

        # Activate the network with current input
        self.root.activate(current_vector, learning_rate=0)  # Don't learn during prediction

        # Collect predictions from all dendrites
        predictions = []

        def collect_predictions(node, weight=1.0):
            pred = node.predict()
            if pred is not None:
                # Weight by strength, prediction confidence, and node level
                effective_weight = weight * node.strength * node.prediction_confidence

                # Named dendrites get extra weight
                if node.name is not None:
                    effective_weight *= 1.5

                # Adjust weight based on current market regime
                if self.current_regime == "bullish" and node.name and "bull" in node.name:
                    effective_weight *= 1.5
                elif self.current_regime == "bearish" and node.name and "bear" in node.name:
                    effective_weight *= 1.5

                predictions.append((pred, effective_weight))

            for child in node.children:
                # Deeper nodes have less influence
                child_weight = weight * 0.9
                collect_predictions(child, child_weight)

        # Start collection from root
        collect_predictions(self.root)

        # Combine weighted predictions
        if not predictions:
            return None

        # Weight by dendrite strength and confidence
        weighted_sum = np.zeros_like(predictions[0][0])
        total_weight = 0

        for pred, weight in predictions:
            weighted_sum += pred * weight
            total_weight += weight

        if total_weight > 0:
            return weighted_sum / total_weight
        return None

    def predict_days_ahead(self, days_ahead=5, current_data=None):
        """
        Make predictions for multiple days ahead by feeding predictions
        back into the network
        """
        if current_data is not None:
            # Reset memory with latest actual data
            scaled_data = self.preprocess_data(current_data)
            self.memory_buffer = list(scaled_data[-self.memory_window:])

        if not self.memory_buffer:
            return None

        # Start with current memory state
        predictions = []
        confidences = []

        # Get current market regime for context
        if current_data is not None:
            self.detect_market_regime(current_data)

        # Make sequential predictions
        for day in range(days_ahead):
            # Predict next day
            next_day = self.predict_next()
            if next_day is None:
                break

            # Calculate confidence based on dendrite activations
            confidence = 0.5  # Default confidence

            # Higher confidence if dendrites agree
            if len(self.memory_buffer) > 1:
                # Check if dendrites show consistent pattern recognition
                pattern_consistency = 0
                total_patterns = 0

                for child in self.root.children:
                    if child.name is not None and len(child.activation_history) > 2:
                        # Check for consistent activation pattern
                        recent_acts = child.activation_history[-3:]
                        if all(a > 0.6 for a in recent_acts) or all(a < 0.4 for a in recent_acts):
                            pattern_consistency += 1
                        total_patterns += 1

                if total_patterns > 0:
                    consistency_score = pattern_consistency / total_patterns
                    confidence = 0.5 + 0.4 * consistency_score

            # Adjust confidence based on volatility
            if len(self.volatility_history) > 0:
                recent_vol = self.volatility_history[-1]
                # Lower confidence when volatility is high
                confidence -= min(0.2, recent_vol)

            # Add predictions and confidence
            predictions.append(next_day)
            confidences.append(confidence)

            # Update memory with prediction
            self.memory_buffer.append(next_day)
            if len(self.memory_buffer) > self.memory_window:
                self.memory_buffer.pop(0)

        return np.array(predictions), np.array(confidences)

    def get_trading_signals(self, predictions, confidences, threshold=None):
        """
        Convert predictions to trading signals
        threshold: confidence level needed for a buy/sell signal
        """
        if predictions is None or len(predictions) == 0:
            return []

        # Use adaptive threshold based on market regime if not specified
        if threshold is None:
            threshold = self.confidence_threshold

        signals = []
        for i, (pred, conf) in enumerate(zip(predictions, confidences)):
            # Use the first feature (price) direction for signal
            price_direction = pred[0]  # Scaled between 0-1

            # Adjust confidence threshold based on market regime
            adjusted_threshold = threshold
            if self.current_regime == "volatile":
                adjusted_threshold += 0.05  # Higher threshold in volatile markets
            elif self.current_regime == "sideways":
                adjusted_threshold += 0.02  # Slightly higher in sideways markets

            # Generate signals based on confidence-adjusted threshold
            if price_direction > 0.5 + (adjusted_threshold - 0.5) and conf > adjusted_threshold:
                signals.append('BUY')
            elif price_direction < 0.5 - (adjusted_threshold - 0.5) and conf > adjusted_threshold:
                signals.append('SELL')
            else:
                signals.append('HOLD')

        return signals

    def visualize_dendrites(self, max_nodes=50):
        """Generate a visualization of the dendrite network structure"""
        # Count nodes at each level and compute average strengths
        level_counts = {}
        level_strengths = {}
        active_nodes = {}
        named_nodes = {}

        def traverse_node(node):
            if node.level not in level_counts:
                level_counts[node.level] = 0
                level_strengths[node.level] = []
                active_nodes[node.level] = 0
                named_nodes[node.level] = []

            level_counts[node.level] += 1
            level_strengths[node.level].append(node.strength)

            if node.strength > 0.6:
                active_nodes[node.level] += 1

            if node.name is not None:
                named_nodes[node.level].append((node.name, node.strength))

            for child in node.children:
                traverse_node(child)

        traverse_node(self.root)

        # Create visualization
        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 6))

        # Plot 1: Node counts by level
        levels = sorted(level_counts.keys())
        counts = [level_counts[level] for level in levels]

        ax1.bar(levels, counts, alpha=0.7)
        ax1.set_xlabel('Dendrite Level')
        ax1.set_ylabel('Number of Dendrites')
        ax1.set_title(f'Dendritic Network Structure (Fractal Dimension: {self.fractal_dim:.3f})')

        # Add active node counts as a line
        active_counts = [active_nodes.get(level, 0) for level in levels]
        ax1_2 = ax1.twinx()
        ax1_2.plot(levels, active_counts, 'r-', marker='o')
        ax1_2.set_ylabel('Number of Active Dendrites (>0.6 strength)', color='r')
        ax1_2.tick_params(axis='y', labelcolor='r')

        # Plot 2: Average strengths by level
        avg_strengths = [np.mean(level_strengths.get(level, [0])) for level in levels]

        ax2.bar(levels, avg_strengths, color='green', alpha=0.7)
        ax2.set_xlabel('Dendrite Level')
        ax2.set_ylabel('Average Dendrite Strength')
        ax2.set_title('Dendrite Strength by Level')
        ax2.set_ylim([0, 1])

        # Add specialized dendrite info
        important_nodes = []
        for level in named_nodes:
            for name, strength in named_nodes[level]:
                if strength > 0.5:  # Only show strong specialized dendrites
                    important_nodes.append((name, level, strength))

        # Sort by strength
        important_nodes.sort(key=lambda x: x[2], reverse=True)

        # Display top nodes in a text box
        if important_nodes:
            node_text = "\n".join([f"{name}: {strength:.2f}"
                                   for name, level, strength in important_nodes[:max_nodes]])
            ax2.text(1.05, 0.5, f"Strong Specialized Dendrites:\n{node_text}",
                     transform=ax2.transAxes, fontsize=9,
                     verticalalignment='center', bbox=dict(boxstyle="round", alpha=0.1))

        # Add fractal dimension
        ax1.text(0.05, 0.95, f'Fractal Dimension: {self.fractal_dim:.3f}',
                 transform=ax1.transAxes, fontsize=10,
                 verticalalignment='top', bbox=dict(boxstyle="round", alpha=0.1))

        plt.tight_layout()

        # Create grid visualization
        fd, grid = self.estimate_fractal_dimension()

        return fig, grid, important_nodes

    def evaluate_performance(self, test_data):
        """Evaluate prediction performance on test data"""
        if test_data.empty:
            return None

        # Get market regime for test data
        self.detect_market_regime(test_data)

        scaled_data = self.preprocess_data(test_data)

        if len(scaled_data) < self.memory_window + 1:
            return None

        # Initialize memory with beginning of test data
        self.memory_buffer = list(scaled_data[:self.memory_window])

        # Make predictions and compare with actual values
        predicted_values = []
        actual_values = []
        confidences = []

        for i in range(self.memory_window, len(scaled_data) - 1):
            # Current vector becomes last memory item
            current_vector = scaled_data[i]
            future_vector = scaled_data[i + 1]

            # Update memory
            self.memory_buffer.append(current_vector)
            if len(self.memory_buffer) > self.memory_window:
                self.memory_buffer.pop(0)

            # Predict next
            prediction = self.predict_next()
            if prediction is not None:
                # For simplicity, just use first feature (price) for evaluation
                predicted_values.append(prediction[0])
                actual_values.append(future_vector[0])

                # Calculate prediction confidence
                confidence = 0.5  # Default

                # Higher confidence if dendrites agree
                pattern_consistency = 0
                total_patterns = 0

                for child in self.root.children:
                    if child.name is not None and len(child.activation_history) > 0:
                        recent_act = child.activation_history[-1]
                        if recent_act > 0.7 or recent_act < 0.3:  # Strong signal
                            pattern_consistency += 1
                        total_patterns += 1

                if total_patterns > 0:
                    consistency_score = pattern_consistency / total_patterns
                    confidence = 0.5 + 0.3 * consistency_score

                confidences.append(confidence)

        if not predicted_values:
            return None

        # Calculate directional prediction metrics
        pred_directions = []
        actual_directions = []

        for i in range(1, len(predicted_values)):
            # Predicted direction: is next predicted value higher than current actual?
            pred_dir = 1 if predicted_values[i] > actual_values[i-1] else 0
            # Actual direction: is next actual value higher than current actual?
            actual_dir = 1 if actual_values[i] > actual_values[i-1] else 0

            pred_directions.append(pred_dir)
            actual_directions.append(actual_dir)

        # Calculate directional accuracy
        dir_accuracy = sum(p == a for p, a in zip(pred_directions, actual_directions)) / len(pred_directions) if pred_directions else 0

        # Calculate RMSE on scaled values
        rmse = np.sqrt(np.mean((np.array(predicted_values) - np.array(actual_values)) ** 2))

        # Calculate confidence-weighted accuracy
        weighted_correct = 0
        total_weight = 0

        for i in range(len(pred_directions)):
            if i < len(confidences):
                weight = confidences[i]
                if pred_directions[i] == actual_directions[i]:
                    weighted_correct += weight
                total_weight += weight

        confidence_accuracy = weighted_correct / total_weight if total_weight > 0 else 0

        # Calculate profitability metrics
        # Simple simulation of buying/selling based on predictions
        initial_capital = 10000
        capital = initial_capital
        position = 0  # Shares held

        # Get original price data from test data for more realistic simulation
        prices = test_data['Close'].values[-len(pred_directions)-1:]

        for i in range(len(pred_directions)):
            current_price = prices[i]
            next_price = prices[i+1]

            # If we predict up and don't have a position, buy
            if pred_directions[i] == 1 and position == 0:
                position = capital / current_price
                capital = 0
            # If we predict down and have a position, sell
            elif pred_directions[i] == 0 and position > 0:
                capital = position * current_price
                position = 0

        # Liquidate final position
        if position > 0:
            capital = position * prices[-1]

        # Calculate returns
        strategy_return = (capital / initial_capital - 1) * 100
        buy_hold_return = (prices[-1] / prices[0] - 1) * 100

        return {
            'directional_accuracy': dir_accuracy,
            'confidence_weighted_accuracy': confidence_accuracy,
            'rmse': rmse,
            'predictions': predicted_values,
            'actual': actual_values,
            'predicted_directions': pred_directions,
            'actual_directions': actual_directions,
            'confidences': confidences,
            'strategy_return': strategy_return,
            'buy_hold_return': buy_hold_return,
            'market_regime': self.current_regime,
            'test_data_length': len(test_data)
        }

# Fetch stock and currency data
def fetch_stock_data(ticker, period="2y", interval="1d"):
    """Fetch stock data from Yahoo Finance"""
    try:
        stock = yf.Ticker(ticker)
        data = stock.history(period=period, interval=interval)
        return data
    except Exception as e:
        st.error(f"Error fetching stock data: {e}")
        return pd.DataFrame()

def fetch_currency_data(currencies=["EURUSD=X", "JPYUSD=X", "CNYUSD=X"], period="2y", interval="1d"):
    """Fetch currency data for Euro, Yen, and Yuan against USD"""
    try:
        currency_data = {}
        for curr in currencies:
            ticker = yf.Ticker(curr)
            data = ticker.history(period=period, interval=interval)
            if not data.empty:
                currency_data[curr.replace('=X', '')] = data['Close']

        return pd.DataFrame(currency_data)
    except Exception as e:
        st.error(f"Error fetching currency data: {e}")
        return pd.DataFrame()

def fetch_sector_data(sectors=None, period="2y"):
    """Fetch sector ETF data for additional context"""
    if sectors is None:
        # Default technology sector ETF
        sectors = ["XLK"]  # Technology sector ETF

    try:
        sector_data = {}
        for sector in sectors:
            ticker = yf.Ticker(sector)
            data = ticker.history(period=period)
            if not data.empty:
                sector_data[sector] = data['Close']

        return pd.DataFrame(sector_data)
    except Exception as e:
        st.error(f"Error fetching sector data: {e}")
        return pd.DataFrame()

def train_test_split(data, test_size=0.2):
    """Split data into training and testing sets"""
    if data.empty:
        return pd.DataFrame(), pd.DataFrame()

    split_idx = int(len(data) * (1 - test_size))
    train_data = data.iloc[:split_idx].copy()
    test_data = data.iloc[split_idx:].copy()
    return train_data, test_data

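# Illustrative sketch: one way the pieces above could be wired together outside
# Streamlit. The ticker, period, epoch count and input_dim=15 (the number of base
# technical features produced by _extract_features when no currency columns are
# present) are assumptions; the function is defined for reference only and is not
# called anywhere in the app.
def _demo_dsa_workflow(ticker="AAPL", period="1y", days_ahead=5):
    data = fetch_stock_data(ticker, period=period)
    if data.empty:
        return None
    train_data, test_data = train_test_split(data, test_size=0.2)
    network = HierarchicalDendriticNetwork(input_dim=15)
    network.train(train_data, epochs=2)
    results = network.evaluate_performance(test_data)
    preds, confs = network.predict_days_ahead(days_ahead=days_ahead, current_data=data)
    signals = network.get_trading_signals(preds, confs)
    return results, signals
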
1254
+ def compare_with_baseline(test_data, dsa_results):
1255
+ """Compare DSA performance with simple baseline models and ML benchmarks"""
1256
+ if test_data.empty or dsa_results is None:
1257
+ return {}
1258
+
1259
+ # Extract closing prices for simplicity
1260
+ closes = test_data['Close'].values
1261
+
1262
+ # Baseline 1: Previous day prediction (assumption: tomorrow = today)
1263
+ prev_day_accuracy = 0.5 # Default to random guessing
1264
+ if len(closes) > 2:
1265
+ # Simply predict the same direction as previous day
1266
+ baseline1_dir_pred = []
1267
+ baseline1_dir_actual = []
1268
+
1269
+ for i in range(1, len(closes)-1):
1270
+ # Previous day direction
1271
+ prev_direction = 1 if closes[i] > closes[i-1] else 0
1272
+ # Actual next day direction
1273
+ actual_direction = 1 if closes[i+1] > closes[i] else 0
1274
+
1275
+ baseline1_dir_pred.append(prev_direction)
1276
+ baseline1_dir_actual.append(actual_direction)
1277
+
1278
+ prev_day_accuracy = sum(p == a for p, a in zip(baseline1_dir_pred, baseline1_dir_actual)) / len(baseline1_dir_pred)
1279
+
1280
+ # Baseline 2: Simple moving average (10-day)
1281
+ ma_period = 10
1282
+ ma_accuracy = 0.5 # Default to random guessing
1283
+
1284
+ if len(closes) > ma_period + 1:
1285
+ ma_dir_pred = []
1286
+ ma_dir_actual = []
1287
+
1288
+ for i in range(ma_period, len(closes)-1):
1289
+ ma_value = np.mean(closes[i-ma_period:i])
1290
+ ma_dir = 1 if closes[i] > ma_value else 0 # If current price > MA, predict up
1291
+ actual_dir = 1 if closes[i+1] > closes[i] else 0
1292
+
1293
+ ma_dir_pred.append(ma_dir)
1294
+ ma_dir_actual.append(actual_dir)
1295
+
1296
+ ma_accuracy = sum(p == a for p, a in zip(ma_dir_pred, ma_dir_actual)) / len(ma_dir_pred)
1297
+
1298
+ # Baseline 3: Linear regression on recent prices
1299
+ lr_period = 14
1300
+ lr_accuracy = 0.5 # Default to random guessing
1301
+
1302
+ if len(closes) > lr_period + 1:
1303
+ lr_dir_pred = []
1304
+ lr_dir_actual = []
1305
+
1306
+ for i in range(lr_period, len(closes)-1):
1307
+ X = np.arange(lr_period).reshape(-1, 1)
1308
+ y = closes[i-lr_period:i]
1309
+ slope, intercept, _, _, _ = linregress(X.flatten(), y)
1310
+
1311
+ # Predict trend direction based on slope
1312
+ lr_dir = 1 if slope > 0 else 0
1313
+ actual_dir = 1 if closes[i+1] > closes[i] else 0
1314
+
1315
+ lr_dir_pred.append(lr_dir)
1316
+ lr_dir_actual.append(actual_dir)
1317
+
1318
+ lr_accuracy = sum(p == a for p, a in zip(lr_dir_pred, lr_dir_actual)) / len(lr_dir_pred)
1319
+
1320
+ # Baseline 4: MACD crossover strategy
1321
+ macd_accuracy = 0.5 # Default
1322
+
1323
+ if len(test_data) > 26: # Need at least 26 days for MACD
1324
+ # Calculate MACD
1325
+ ema12 = test_data['Close'].ewm(span=12, adjust=False).mean()
1326
+ ema26 = test_data['Close'].ewm(span=26, adjust=False).mean()
1327
+ macd_line = ema12 - ema26
1328
+ signal_line = macd_line.ewm(span=9, adjust=False).mean()
1329
+
1330
+ # Generate signals
1331
+ macd_dir_pred = []
1332
+ macd_dir_actual = []
1333
+
1334
+ for i in range(26, len(test_data)-1):
1335
+ # MACD crossover: Buy when MACD crosses above signal line
1336
+ macd_val = macd_line.iloc[i]
1337
+ signal_val = signal_line.iloc[i]
1338
+ macd_prev = macd_line.iloc[i-1]
1339
+ signal_prev = signal_line.iloc[i-1]
1340
+
1341
+ # Bullish crossover: MACD crosses above signal line
1342
+ bullish = macd_prev < signal_prev and macd_val > signal_val
1343
+ # Bearish crossover: MACD crosses below signal line
1344
+ bearish = macd_prev > signal_prev and macd_val < signal_val
1345
+
1346
+ if bullish:
1347
+ pred = 1 # Predict up
1348
+ elif bearish:
1349
+ pred = 0 # Predict down
1350
+ else:
1351
+ # No crossover, maintain previous direction
1352
+ pred = 1 if macd_val > signal_val else 0
1353
+
1354
+ actual = 1 if test_data['Close'].iloc[i+1] > test_data['Close'].iloc[i] else 0
1355
+
1356
+ macd_dir_pred.append(pred)
1357
+ macd_dir_actual.append(actual)
1358
+
1359
+ if macd_dir_pred:
1360
+ macd_accuracy = sum(p == a for p, a in zip(macd_dir_pred, macd_dir_actual)) / len(macd_dir_pred)
1361
+
1362
+ # Add a random baseline
1363
+ random_accuracy = 0.5 # Theoretical random guessing accuracy
1364
+
1365
+ # Find the best-performing baseline and DSA's relative improvement over it
1366
+ max_accuracy = max(prev_day_accuracy, ma_accuracy, lr_accuracy, macd_accuracy, random_accuracy)
1367
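+ # Improvement is DSA's relative gain over the strongest baseline, in percent (e.g., 0.55 vs. 0.50 -> +10%)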
+ improvement = ((dsa_results['directional_accuracy'] / max_accuracy) - 1) * 100 if max_accuracy > 0 else 0
1368
+
1369
+ # Calculate the profitability comparison
1370
+ strategy_return = dsa_results.get('strategy_return', 0)
1371
+ buy_hold_return = dsa_results.get('buy_hold_return', 0)
1372
+
1373
+ return {
1374
+ 'dsa_accuracy': dsa_results['directional_accuracy'],
1375
+ 'dsa_confidence_accuracy': dsa_results.get('confidence_weighted_accuracy', 0),
1376
+ 'previous_day_accuracy': prev_day_accuracy,
1377
+ 'moving_average_accuracy': ma_accuracy,
1378
+ 'linear_regression_accuracy': lr_accuracy,
1379
+ 'macd_accuracy': macd_accuracy,
1380
+ 'random_guessing': random_accuracy,
1381
+ 'max_baseline_accuracy': max_accuracy,
1382
+ 'improvement_percentage': improvement,
1383
+ 'dsa_return': strategy_return,
1384
+ 'buy_hold_return': buy_hold_return
1385
+ }
1386
+
1387
+ # Interactive Streamlit app for visualization
1388
+ def main():
1389
+ st.title("Enhanced Dendritic Stock Algorithm (DSA)")
1390
+ st.markdown("""
1391
+ ### Hierarchical Dendritic Network for Stock Prediction
1392
+
1393
+ This system implements a biologically inspired dendritic network that forms fractal patterns
1394
+ at the boundaries between different processing regimes. These patterns emerge naturally
1395
+ from the self-organizing dynamics, demonstrating our theory about boundary-emergent complexity.
1396
+ """)
1397
+
1398
+ st.sidebar.header("Settings")
1399
+
1400
+ # Stock selection
1401
+ ticker_options = {
1402
+ "Apple": "AAPL",
1403
+ "Microsoft": "MSFT",
1404
+ "Google": "GOOGL",
1405
+ "Amazon": "AMZN",
1406
+ "Tesla": "TSLA",
1407
+ "Meta": "META",
1408
+ "Nvidia": "NVDA",
1409
+ "Berkshire Hathaway": "BRK-B",
1410
+ "Visa": "V",
1411
+ "JPMorgan Chase": "JPM",
1412
+ "S&P 500 ETF": "SPY",
1413
+ "Nasdaq ETF": "QQQ"
1414
+ }
1415
+
1416
+ ticker_name = st.sidebar.selectbox(
1417
+ "Select Stock",
1418
+ list(ticker_options.keys()),
1419
+ index=0
1420
+ )
1421
+ ticker = ticker_options[ticker_name]
1422
+
1423
+ # Add option for custom ticker
1424
+ custom_ticker = st.sidebar.text_input("Or enter custom ticker:", "")
1425
+ if custom_ticker:
1426
+ ticker = custom_ticker.upper()
1427
+
1428
+ # Optional sector ETF to include
1429
+ include_sector = st.sidebar.checkbox("Include Sector ETF data", value=True)
1430
+ sector_etf = None
1431
+ if include_sector:
1432
+ sector_etf = st.sidebar.selectbox(
1433
+ "Select Sector ETF",
1434
+ ["XLK", "XLF", "XLE", "XLV", "XLI", "XLY", "XLP", "XLU", "XLB", "XLRE"],
1435
+ index=0,
1436
+ help="XLK=Technology, XLF=Financials, XLE=Energy, XLV=Healthcare, XLI=Industrials"
1437
+ )
1438
+
1439
+ # Training parameters
1440
+ st.sidebar.subheader("Training Parameters")
1441
+ train_period = st.sidebar.selectbox(
1442
+ "Training Period",
1443
+ ["6mo", "1y", "2y", "5y", "max"],
1444
+ index=1
1445
+ )
1446
+ test_size = st.sidebar.slider("Test Data Size (%)", 10, 50, 20)
1447
+ epochs = st.sidebar.slider("Training Epochs", 1, 10, 3)
1448
+
1449
+ # Network parameters
1450
+ st.sidebar.subheader("Network Parameters")
1451
+ dendrites_per_level = st.sidebar.slider("Initial Dendrites per Level", 3, 20, 10)
1452
+ max_levels = st.sidebar.slider("Maximum Hierarchy Levels", 1, 5, 3)
1453
+ memory_window = st.sidebar.slider("Memory Window (Days)", 5, 30, 15)
1454
+
1455
+ # Prediction parameters
1456
+ st.sidebar.subheader("Prediction Parameters")
1457
+ days_ahead = st.sidebar.slider("Days to Predict Ahead", 1, 30, 5)
1458
+ signal_threshold = st.sidebar.slider("Base Signal Threshold", 0.51, 0.99, 0.55,
1459
+ help="Higher values require more confidence for buy/sell signals")
1460
+
1461
+ # Advanced options
1462
+ st.sidebar.subheader("Advanced Options")
1463
+ show_advanced = st.sidebar.checkbox("Show Advanced Metrics", value=False)
1464
+
1465
+ # Load data on button click
1466
+ if st.sidebar.button("Load Data and Train"):
1467
+ # Show loading message
1468
+ with st.spinner("Fetching stock and market data..."):
1469
+ stock_data = fetch_stock_data(ticker, period=train_period)
1470
+
1471
+ if stock_data.empty:
1472
+ st.error(f"No data found for ticker {ticker}")
1473
+ else:
1474
+ # Progress bar for all steps
1475
+ progress_bar = st.progress(0)
1476
+ total_steps = 7
1477
+ current_step = 0
1478
+
1479
+ # Show basic info
1480
+ st.subheader(f"{ticker} Stock Information")
1481
+ st.write(f"Data from {stock_data.index[0].date()} to {stock_data.index[-1].date()}")
1482
+ st.write(f"Total days: {len(stock_data)}")
1483
+
1484
+ # Fetch currency data
1485
+ currency_data = fetch_currency_data(period=train_period)
1486
+ if not currency_data.empty:
1487
+ st.write("Currency data loaded:", list(currency_data.columns))
1488
+
1489
+ # Add sector data if requested
1490
+ sector_data = None
1491
+ if include_sector and sector_etf:
1492
+ sector_data = fetch_sector_data([sector_etf], period=train_period)
1493
+ if not sector_data.empty:
1494
+ st.write(f"Sector ETF data loaded: {sector_etf}")
1495
+
1496
+ # Progress update
1497
+ current_step += 1
1498
+ progress_bar.progress(current_step / total_steps)
1499
+
1500
+ # Add currency data to stock data
1501
+ combined_data = stock_data.copy()
1502
+ if not currency_data.empty:
1503
+ for curr in currency_data.columns:
1504
+ # Align currency data to stock data dates
1505
+ currency_aligned = currency_data[curr].reindex(combined_data.index, method='ffill')
1506
+ combined_data[f'Currency_{curr}'] = currency_aligned
1507
+
1508
+ # Add sector data if available
1509
+ if sector_data is not None and not sector_data.empty:
1510
+ for sect in sector_data.columns:
1511
+ # Align sector data to stock data dates
1512
+ sector_aligned = sector_data[sect].reindex(combined_data.index, method='ffill')
1513
+ # Calculate daily returns
1514
+ combined_data[f'Sector_{sect}'] = sector_aligned.pct_change().fillna(0)
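+ # Note: currency series are merged above as forward-filled price levels, while sector ETFs enter as daily percentage returns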
1515
+
1516
+ # Progress update
1517
+ current_step += 1
1518
+ progress_bar.progress(current_step / total_steps)
1519
+
1520
+ # Split into train/test
1521
+ train_data, test_data = train_test_split(combined_data, test_size=test_size/100)
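+ # Assumes the train_test_split helper defined earlier in this file splits chronologically
+ # (no shuffling), as a time-series evaluation requires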
1522
+
1523
+ # Create and configure network
1524
+ feature_count = 16 # Fixed based on extract_features method
1525
+ network = HierarchicalDendriticNetwork(
1526
+ input_dim=feature_count,
1527
+ max_levels=max_levels,
1528
+ initial_dendrites_per_level=dendrites_per_level
1529
+ )
1530
+ network.memory_window = memory_window
1531
+
1532
+ # Progress update
1533
+ current_step += 1
1534
+ progress_bar.progress(current_step / total_steps)
1535
+
1536
+ # Train the network
1537
+ with st.spinner("Training dendritic network..."):
1538
+ network.train(train_data, epochs=epochs)
1539
+
1540
+ # Progress update
1541
+ current_step += 1
1542
+ progress_bar.progress(current_step / total_steps)
1543
+
1544
+ # Evaluate on test data
1545
+ with st.spinner("Evaluating performance..."):
1546
+ eval_results = network.evaluate_performance(test_data)
1547
+
1548
+ if eval_results:
1549
+ st.subheader("Performance Evaluation")
1550
+ st.write(f"Directional Accuracy: {eval_results['directional_accuracy']:.4f}")
1551
+ st.write(f"Confidence-Weighted Accuracy: {eval_results['confidence_weighted_accuracy']:.4f}")
1552
+ st.write(f"RMSE (scaled): {eval_results['rmse']:.4f}")
1553
+ st.write(f"Detected Market Regime: {eval_results['market_regime'].upper()}")
1554
+
1555
+ # Show returns
1556
+ st.write(f"DSA Trading Return: {eval_results['strategy_return']:.2f}%")
1557
+ st.write(f"Buy & Hold Return: {eval_results['buy_hold_return']:.2f}%")
1558
+
1559
+ # Compare with baselines
1560
+ baseline_results = compare_with_baseline(test_data, eval_results)
1561
+
1562
+ # Progress update
1563
+ current_step += 1
1564
+ progress_bar.progress(current_step / total_steps)
1565
+
1566
+ if baseline_results:
1567
+ st.subheader("Comparison with Baseline Models")
1568
+
1569
+ # Format improvement percentage
1570
+ improvement = baseline_results.get('improvement_percentage', 0)
1571
+ improvement_text = f"+{improvement:.2f}%" if improvement > 0 else f"{improvement:.2f}%"
1572
+
1573
+ results_df = pd.DataFrame({
1574
+ 'Model': [
1575
+ f"Dendritic Stock Algorithm ({improvement_text})",
1576
+ 'Previous Day Strategy',
1577
+ 'Moving Average',
1578
+ 'Linear Regression',
1579
+ 'MACD Crossover',
1580
+ 'Random Guessing'
1581
+ ],
1582
+ 'Directional Accuracy': [
1583
+ baseline_results['dsa_accuracy'],
1584
+ baseline_results['previous_day_accuracy'],
1585
+ baseline_results['moving_average_accuracy'],
1586
+ baseline_results['linear_regression_accuracy'],
1587
+ baseline_results['macd_accuracy'],
1588
+ baseline_results['random_guessing']
1589
+ ]
1590
+ })
1591
+
1592
+ # Plot comparison
1593
+ fig = px.bar(results_df, x='Model', y='Directional Accuracy',
1594
+ title="Model Comparison - Directional Accuracy",
1595
+ color='Directional Accuracy',
1596
+ color_continuous_scale=px.colors.sequential.Blues)
1597
+
1598
+ fig.add_hline(y=0.5, line_dash="dash", line_color="red",
1599
+ annotation_text="Random Guess (50%)")
1600
+
1601
+ fig.update_layout(
1602
+ yaxis_range=[0.4, max(0.75, baseline_results['dsa_accuracy'] * 1.1)],
1603
+ xaxis_title="",
1604
+ yaxis_title="Directional Accuracy"
1605
+ )
1606
+
1607
+ st.plotly_chart(fig, use_container_width=True)
1608
+
1609
+ # Show return comparison
1610
+ returns_df = pd.DataFrame({
1611
+ 'Strategy': ['Dendritic Stock Algorithm', 'Buy & Hold'],
1612
+ 'Return (%)': [
1613
+ baseline_results['dsa_return'],
1614
+ baseline_results['buy_hold_return']
1615
+ ]
1616
+ })
1617
+
1618
+ fig_returns = px.bar(returns_df, x='Strategy', y='Return (%)',
1619
+ title="Return Comparison",
1620
+ color='Return (%)',
1621
+ color_continuous_scale=px.colors.sequential.Greens)
1622
+
1623
+ st.plotly_chart(fig_returns, use_container_width=True)
1624
+
1625
+ # Progress update
1626
+ current_step += 1
1627
+ progress_bar.progress(current_step / total_steps)
1628
+
1629
+ # Make future predictions
1630
+ with st.spinner("Generating predictions..."):
1631
+ latest_data = combined_data.tail(memory_window)
1632
+ predictions, confidences = network.predict_days_ahead(days_ahead, latest_data)
1633
+
1634
+ if predictions is not None:
1635
+ signals = network.get_trading_signals(predictions, confidences, signal_threshold)
1636
+
1637
+ # Convert predictions back to price scale
1638
+ latest_close = latest_data['Close'].iloc[-1]
1639
+ prediction_values = []
1640
+
1641
+ # Scale based on the first feature (price) direction
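+ # A network output of 0.5 means no change; each 0.1 above or below 0.5 maps to a 0.4% move,
+ # capped at +/-2% per day and compounded from the previous predicted price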
1642
+ for i, pred in enumerate(predictions):
1643
+ if i == 0:
1644
+ direction = 1 if pred[0] > 0.5 else -1
1645
+ # Adjust strength by distance from 0.5
1646
+ strength = abs(pred[0] - 0.5) * 4 # Max 2% change
1647
+ predicted_price = latest_close * (1 + direction * strength/100)
1648
+ else:
1649
+ prev_predicted = prediction_values[-1]
1650
+ direction = 1 if pred[0] > 0.5 else -1
1651
+ strength = abs(pred[0] - 0.5) * 4
1652
+ predicted_price = prev_predicted * (1 + direction * strength/100)
1653
+
1654
+ prediction_values.append(predicted_price)
1655
+
1656
+ # Create date range for predictions
1657
+ last_date = latest_data.index[-1]
1658
+ prediction_dates = pd.date_range(start=last_date + pd.Timedelta(days=1), periods=days_ahead, freq='B')
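+ # freq='B' places the predicted dates on weekdays only; exchange holidays are not excluded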
1659
+
1660
+ # Display predictions
1661
+ st.subheader(f"Predictions for Next {days_ahead} Trading Days")
1662
+
1663
+ pred_df = pd.DataFrame({
1664
+ 'Date': prediction_dates,
1665
+ 'Predicted Price': [f"${price:.2f}" for price in prediction_values],
1666
+ 'Signal': signals,
1667
+ 'Confidence': [f"{conf:.2f}" for conf in confidences]
1668
+ })
1669
+
1670
+ st.dataframe(pred_df, use_container_width=True)
1671
+
1672
+ # Plot historical + predictions
1673
+ fig = go.Figure()
1674
+
1675
+ # Add historical prices
1676
+ fig.add_trace(go.Scatter(
1677
+ x=combined_data.index,
1678
+ y=combined_data['Close'],
1679
+ mode='lines',
1680
+ name='Historical',
1681
+ line=dict(color='blue', width=2)
1682
+ ))
1683
+
1684
+ # Add predictions
1685
+ fig.add_trace(go.Scatter(
1686
+ x=prediction_dates,
1687
+ y=prediction_values,
1688
+ mode='lines+markers',
1689
+ name='Predicted',
1690
+ line=dict(dash='dash', color='darkblue'),
1691
+ marker=dict(size=10)
1692
+ ))
1693
+
1694
+ # Shade prediction confidence intervals
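+ # Band half-width is (1 - confidence) * 5% of the predicted price: full confidence collapses
+ # the band to zero width, zero confidence gives +/-5%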
1695
+ high_bound = [price * (1 + (1 - conf) * 0.05) for price, conf in zip(prediction_values, confidences)]
1696
+ low_bound = [price * (1 - (1 - conf) * 0.05) for price, conf in zip(prediction_values, confidences)]
1697
+
1698
+ fig.add_trace(go.Scatter(
1699
+ x=prediction_dates,
1700
+ y=high_bound,
1701
+ mode='lines',
1702
+ line=dict(width=0),
1703
+ showlegend=False
1704
+ ))
1705
+
1706
+ fig.add_trace(go.Scatter(
1707
+ x=prediction_dates,
1708
+ y=low_bound,
1709
+ mode='lines',
1710
+ line=dict(width=0),
1711
+ fill='tonexty',
1712
+ fillcolor='rgba(0, 0, 255, 0.1)',
1713
+ name='Confidence Interval'
1714
+ ))
1715
+
1716
+ # Add signals
1717
+ for i, signal in enumerate(signals):
1718
+ color = 'green' if signal == 'BUY' else 'red' if signal == 'SELL' else 'gray'
1719
+
1720
+ fig.add_annotation(
1721
+ x=prediction_dates[i],
1722
+ y=prediction_values[i],
1723
+ text=signal,
1724
+ showarrow=True,
1725
+ arrowhead=1,
1726
+ arrowsize=1,
1727
+ arrowwidth=2,
1728
+ arrowcolor=color
1729
+ )
1730
+
1731
+ fig.update_layout(
1732
+ title=f"{ticker} Stock Price with DSA Predictions",
1733
+ xaxis_title="Date",
1734
+ yaxis_title="Price",
1735
+ legend_title="Data Source",
1736
+ hovermode="x unified"
1737
+ )
1738
+
1739
+ st.plotly_chart(fig, use_container_width=True)
1740
+
1741
+ # Progress update - complete
1742
+ current_step += 1
1743
+ progress_bar.progress(current_step / total_steps)
1744
+ progress_bar.empty()
1745
+
1746
+ # Visualize dendritic network
1747
+ with st.spinner("Visualizing dendritic network..."):
1748
+ st.subheader("Dendritic Network Visualization")
1749
+
1750
+ # Network structure
1751
+ fig, grid, important_nodes = network.visualize_dendrites()
1752
+ st.pyplot(fig)
1753
+
1754
+ # Activation grid (fractal visualization)
1755
+ st.subheader("Dendritic Activation Pattern (The Fractal Boundary)")
1756
+ st.markdown("""
1757
+ This visualization represents the dendritic network's activation pattern, showing how information
1758
+ is processed at the boundaries between different dendrite clusters. The fractal patterns emerge
1759
+ at these boundaries - just as in our discussion of event horizons and neural boundaries.
1760
+
1761
+ Key observations:
1762
+ - Brighter regions show stronger dendrite activations
1763
+ - The complex patterns along boundaries represent areas where the network is processing the most information
1764
+ - Higher fractal dimension values indicate more complex boundary structures, which typically correlate with better prediction capability
1765
+ """)
1766
+
1767
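+ # network.fractal_dim is presumably a box-counting estimate over the activation grid shown below;
+ # higher values indicate more intricate boundary structure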
+ st.write(f"**Estimated Fractal Dimension: {network.fractal_dim:.3f}**")
1768
+
1769
+ if network.fractal_dim > 1.5:
1770
+ st.success("High fractal dimension suggests complex boundary processing - good for prediction!")
1771
+ elif network.fractal_dim > 1.2:
1772
+ st.info("Moderate fractal dimension indicates developing complexity at boundaries")
1773
+ else:
1774
+ st.warning("Low fractal dimension suggests simple boundaries - prediction may be limited")
1775
+
1776
+ # Plot the grid as a heatmap
1777
+ fig, ax = plt.subplots(figsize=(8, 8))
1778
+ im = ax.imshow(grid, cmap='viridis')
1779
+ plt.colorbar(im, ax=ax, label='Activation Strength')
1780
+ ax.set_title("Dendritic Activation Grid - Fractal Boundary Patterns")
1781
+ st.pyplot(fig)
1782
+
1783
+ # Show important dendrites
1784
+ if important_nodes:
1785
+ st.subheader("Active Specialized Dendrites")
1786
+ st.markdown("These specialized dendrites have developed strong activations, indicating the network has learned to recognize specific patterns:")
1787
+
1788
+ # Format into two columns
1789
+ col1, col2 = st.columns(2)
1790
+ half_nodes = len(important_nodes) // 2 + len(important_nodes) % 2
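+ # Ceiling division: the first column receives the extra dendrite when the count is odd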
1791
+
1792
+ with col1:
1793
+ for name, level, strength in important_nodes[:half_nodes]:
1794
+ if strength > 0.7:
1795
+ st.success(f"**{name}:** {strength:.2f}")
1796
+ elif strength > 0.5:
1797
+ st.info(f"**{name}:** {strength:.2f}")
1798
+ else:
1799
+ st.write(f"**{name}:** {strength:.2f}")
1800
+
1801
+ with col2:
1802
+ for name, level, strength in important_nodes[half_nodes:]:
1803
+ if strength > 0.7:
1804
+ st.success(f"**{name}:** {strength:.2f}")
1805
+ elif strength > 0.5:
1806
+ st.info(f"**{name}:** {strength:.2f}")
1807
+ else:
1808
+ st.write(f"**{name}:** {strength:.2f}")
1809
+
1810
+ # Explain the connection to our theory
1811
+ st.markdown("""
1812
+ ### Connection to Boundary Theory
1813
+
1814
+ The patterns you see above demonstrate our theory about boundary-emergent complexity:
1815
+
1816
+ 1. **Temporal Integration**: These patterns encode the network's memory (past), processing (present), and prediction (future)
1817
+
1818
+ 2. **Critical Behavior**: The dendrites naturally organize at the "edge of chaos" - not too ordered, not too random
1819
+
1820
+ 3. **Fractal Structure**: The self-similar patterns at multiple scales allow the system to recognize patterns across different timeframes
1821
+
1822
+ This visual representation shows how our dendritic network creates complex structures at the boundaries between different processing regimes - exactly as our theory predicted.
1823
+ """)
1824
+
1825
+ # If advanced metrics were requested, show them
1826
+ if show_advanced:
1827
+ st.subheader("Advanced Analysis")
1828
+
1829
+ # Show feature importance
1830
+ feature_names = [
1831
+ "Price", "Returns", "Volatility", "Volume", "Momentum",
1832
+ "MACD", "Bollinger", "RSI", "Stochastic", "ATR",
1833
+ "OBV", "MFI", "SMA Dist", "EMA Cross", "Fibonacci"
1834
+ ]
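+ # Only 15 names are listed; any additional feature index falls back to the generic "Feature i" label below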
1835
+
1836
+ # Only show top features to keep it clean
1837
+ imp_idx = np.argsort(network.feature_importance)[-10:]
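+ # argsort is ascending, so the last 10 indices correspond to the 10 most important features (plotted in ascending order of importance)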
1838
+
1839
+ feature_imp_df = pd.DataFrame({
1840
+ 'Feature': [feature_names[i] if i < len(feature_names) else f"Feature {i}" for i in imp_idx],
1841
+ 'Importance': network.feature_importance[imp_idx]
1842
+ })
1843
+
1844
+ fig_imp = px.bar(feature_imp_df, x='Feature', y='Importance',
1845
+ title="Feature Importance",
1846
+ color='Importance',
1847
+ color_continuous_scale=px.colors.sequential.Viridis)
1848
+
1849
+ st.plotly_chart(fig_imp, use_container_width=True)
1850
+
1851
+ # Show prediction confidence over time
1852
+ if 'confidences' in eval_results:
1853
+ conf_df = pd.DataFrame({
1854
+ 'Time Step': list(range(len(eval_results['confidences']))),
1855
+ 'Confidence': eval_results['confidences']
1856
+ })
1857
+
1858
+ fig_conf = px.line(conf_df, x='Time Step', y='Confidence',
1859
+ title="Prediction Confidence Over Time")
1860
+
1861
+ st.plotly_chart(fig_conf, use_container_width=True)
1862
+
1863
+ if __name__ == "__main__":
1864
+ main()
requirements.txt ADDED
@@ -0,0 +1,15 @@
1
+ numpy>=1.20.0
2
+ pandas>=1.3.0
3
+ yfinance>=0.1.70
4
+ matplotlib>=3.4.0
5
+ plotly>=5.5.0
6
+ scikit-learn>=1.0.0
7
+ streamlit>=1.8.0
8
+ statsmodels>=0.13.0
9
+ scipy>=1.7.0
10
+ tqdm>=4.62.0
11
+ pytz>=2021.3
12
+ requests>=2.26.0
13
+ joblib>=1.1.0
14
+ ta>=0.7.0 # Technical analysis indicators
15
+ numba>=0.54.0 # For performance optimization (optional)