**Note**: Computing the optimal number of topics for LDA can take a long time. The time complexity of this process is **O(m * x * p)**, where:
* m = number of tokens in the corpus
* x = sum of the topic counts tried over the given range
* p = number of passes for the LDA model
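The `compute_optimal_topics` helper called in the next cell is specific to this project's `lda_model` module, but a topic-number sweep of this kind typically looks like the sketch below (using gensim's `LdaModel` and `CoherenceModel`; the `corpus`, `id_word`, and `texts` arguments here are illustrative assumptions, not the notebook's actual variables). Each candidate topic count requires a full multi-pass fit over every token, which is where the **O(m * x * p)** cost comes from.

```python
# Hypothetical sketch of a topic-count sweep; argument names are placeholders.
from gensim.models import LdaModel
from gensim.models.coherencemodel import CoherenceModel

def sweep_num_topics(corpus, id_word, texts, k_min=2, k_max=4, passes=1):
    results = []
    for k in range(k_min, k_max + 1):
        # one full LDA fit per candidate number of topics
        lda = LdaModel(corpus=corpus, id2word=id_word, num_topics=k, passes=passes)
        # c_v coherence is a common criterion for picking the best k
        coherence = CoherenceModel(model=lda, texts=texts, dictionary=id_word,
                                   coherence='c_v').get_coherence()
        results.append((k, coherence))
    return results
```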
lda_model.compute_optimal_topics(processed_data, 2, 4, 1, path)
lda = lda_model.Optimal_lda_model('./results/lda_tuning_results1.csv')
lda.print_topics()
_____no_output_____
MIT
LDA_New.ipynb
parikshitsaikia1619/LDA_modeing_IQVIA
Step 6: Visualization
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis

pyLDAvis.enable_notebook()
vis = gensimvis.prepare(lda, new_corpus, id_word)
vis
_____no_output_____
MIT
LDA_New.ipynb
parikshitsaikia1619/LDA_modeing_IQVIA
Monte Carlo Methods

In this notebook, you will write your own implementations of many Monte Carlo (MC) algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.

Part 0: Explore BlackjackEnv

We begin by importing the necessary packages.
import sys
import gym
import numpy as np
from collections import defaultdict

from plot_utils import plot_blackjack_values, plot_policy
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Use the code cell below to create an instance of the [Blackjack](https://github.com/openai/gym/blob/master/gym/envs/toy_text/blackjack.py) environment.
env = gym.make('Blackjack-v0')
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Each state is a 3-tuple of:
- the player's current sum $\in \{0, 1, \ldots, 31\}$,
- the dealer's face up card $\in \{1, \ldots, 10\}$, and
- whether or not the player has a usable ace (`no` $=0$, `yes` $=1$).

The agent has two potential actions:

```
    STICK = 0
    HIT = 1
```

Verify this by running the code cell below.
print(env.observation_space)
print(env.action_space)
Tuple(Discrete(32), Discrete(11), Discrete(2)) Discrete(2)
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Execute the code cell below to play Blackjack with a random policy. (_The code currently plays Blackjack ten times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to get some experience with the output that is returned as the agent interacts with the environment._)
for i_episode in range(10):
    state = env.reset()
    while True:
        print(state)
        action = env.action_space.sample()
        state, reward, done, info = env.step(action)
        if done:
            print('End game! Reward: ', reward)
            print('You won :)\n') if reward > 0 else print('You lost :(\n')
            break
(12, 10, False) End game! Reward: -1.0 You lost :( (15, 9, False) (20, 9, False) End game! Reward: 1.0 You won :) (20, 5, False) End game! Reward: -1 You lost :( (8, 4, False) (19, 4, True) (14, 4, False) End game! Reward: 1.0 You won :) (19, 9, False) (20, 9, False) End game! Reward: 1.0 You won :) (14, 6, False) End game! Reward: -1.0 You lost :( (18, 5, False) End game! Reward: -1 You lost :( (11, 7, False) (21, 7, False) End game! Reward: 1.0 You won :) (5, 5, False) End game! Reward: 1.0 You won :) (15, 9, False) End game! Reward: -1.0 You lost :(
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Part 1: MC Prediction

In this section, you will write your own implementation of MC prediction (for estimating the action-value function). We will begin by investigating a policy where the player _almost_ always sticks if the sum of her cards exceeds 18. In particular, she selects action `STICK` with 80% probability if the sum is greater than 18; and, if the sum is 18 or below, she selects action `HIT` with 80% probability. The function `generate_episode_from_limit_stochastic` samples an episode using this policy.

The function accepts as **input**:
- `bj_env`: This is an instance of OpenAI Gym's Blackjack environment.

It returns as **output**:
- `episode`: This is a list of (state, action, reward) tuples (of tuples) and corresponds to $(S_0, A_0, R_1, \ldots, S_{T-1}, A_{T-1}, R_{T})$, where $T$ is the final time step. In particular, `episode[i]` returns $(S_i, A_i, R_{i+1})$, and `episode[i][0]`, `episode[i][1]`, and `episode[i][2]` return $S_i$, $A_i$, and $R_{i+1}$, respectively.
def generate_episode_from_limit_stochastic(bj_env):
    episode = []
    state = bj_env.reset()
    while True:
        probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
        action = np.random.choice(np.arange(2), p=probs)
        next_state, reward, done, info = bj_env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Execute the code cell below to play Blackjack with the policy. (*The code currently plays Blackjack ten times - feel free to change this number, or to run the cell multiple times. The cell is designed for you to gain some familiarity with the output of the `generate_episode_from_limit_stochastic` function.*)
for i in range(10):
    print(generate_episode_from_limit_stochastic(env))
[((15, 3, False), 1, -1)] [((13, 9, False), 1, -1)] [((15, 9, False), 1, 0), ((18, 9, False), 1, -1)] [((18, 5, True), 1, 0), ((12, 5, False), 1, 0), ((21, 5, False), 0, 1.0)] [((21, 3, True), 0, 1.0)] [((11, 4, False), 1, 0), ((18, 4, False), 1, 0), ((21, 4, False), 0, 1.0)] [((9, 3, False), 1, 0), ((12, 3, False), 0, 1.0)] [((14, 8, False), 0, 1.0)] [((8, 2, False), 1, 0), ((19, 2, True), 0, 1.0)] [((19, 6, False), 0, -1.0)]
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Now, you are ready to write your own implementation of MC prediction. Feel free to implement either first-visit or every-visit MC prediction; in the case of the Blackjack environment, the techniques are equivalent.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `generate_episode`: This is a function that returns an episode of interaction.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
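As a reminder (this is the standard MC prediction setup restated, not anything new to this notebook), the discounted return from time step $t$ and the first-visit estimate it feeds are

$$ G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1}, \qquad Q(s, a) \approx \frac{\text{sum of returns following first visits to } (s, a)}{N(s, a)}, $$

which is exactly what the `returns_sum` and `N` dictionaries in the next cell are meant to accumulate.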
def mc_prediction_q(env, num_episodes, generate_episode, gamma=1.0):
    # initialize empty dictionaries of arrays
    returns_sum = defaultdict(lambda: np.zeros(env.action_space.n))
    N = defaultdict(lambda: np.zeros(env.action_space.n))
    Q = defaultdict(lambda: np.zeros(env.action_space.n))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        visited = set()
        episode = generate_episode(env)
        for i, (state, action, reward) in enumerate(episode):
            # discounted return from time step i: G_i = R_{i+1} + gamma*R_{i+2} + ...
            total_reward = reward
            for j in range(len(episode) - i - 1):
                total_reward += episode[j + i + 1][2] * gamma ** (j + 1)
            # first-visit MC: only use the first occurrence of each (state, action) pair
            if (state, action) not in visited:
                visited.add((state, action))
                returns_sum[state][action] += total_reward
                N[state][action] += 1
                Q[state][action] = returns_sum[state][action] / N[state][action]
    return Q
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Use the cell below to obtain the action-value function estimate $Q$. We have also plotted the corresponding state-value function.

To check the accuracy of your implementation, compare the plot below to the corresponding plot in the solutions notebook **Monte_Carlo_Solution.ipynb**.
# obtain the action-value function
Q = mc_prediction_q(env, 500000, generate_episode_from_limit_stochastic)

# obtain the corresponding state-value function
V_to_plot = dict((k, (k[0] > 18) * (np.dot([0.8, 0.2], v)) + (k[0] <= 18) * (np.dot([0.2, 0.8], v))) \
                 for k, v in Q.items())

# plot the state-value function
plot_blackjack_values(V_to_plot)
Episode 500000/500000.
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Part 2: MC Control

In this section, you will write your own implementation of constant-$\alpha$ MC control.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.
- `policy`: This is a dictionary where `policy[s]` returns the action that the agent chooses after observing state `s`.

(_Feel free to define additional functions to help you to organize your code._)
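For reference, the two standard ingredients of constant-$\alpha$ MC control (restated here, not new to this notebook) are the update applied to each visited state-action pair,

$$ Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left( G_t - Q(S_t, A_t) \right), $$

and an $\epsilon$-greedy behaviour policy that follows $\arg\max_a Q(s, a)$ with probability $1 - \epsilon$ and explores otherwise, with $\epsilon$ decayed towards a small floor as training progresses. The implementation below uses the 80/20 heuristic from Part 1 as its exploratory distribution, a minor variation on uniform exploration.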
def generate_policy(q_table):
    policy = {}
    for state in q_table.keys():
        policy[state] = np.argmax(q_table[state])
    return policy

def take_action(state, policy, epsilon):
    # exploratory action drawn from the Part 1 heuristic distribution
    probs = [0.8, 0.2] if state[0] > 18 else [0.2, 0.8]
    action = np.random.choice(np.arange(2), p=probs)
    # act greedily with probability 1 - epsilon once the state has an estimate
    if (state in policy) and np.random.random() > epsilon:
        action = policy[state]
    return action

def generate_episode(env, policy, epsilon):
    episode = []
    state = env.reset()
    while True:
        action = take_action(state, policy, epsilon)
        next_state, reward, done, info = env.step(action)
        episode.append((state, action, reward))
        state = next_state
        if done:
            break
    return episode

def mc_control(env, num_episodes, alpha, gamma=1.0, epsilon_min=0.05):
    nA = env.action_space.n
    # initialize empty dictionary of arrays
    Q = defaultdict(lambda: np.zeros(nA))
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 1000 == 0:
            print("\rEpisode {}/{}.".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        epsilon = max(1/i_episode, epsilon_min)
        policy = generate_policy(Q)
        episode = generate_episode(env, policy, epsilon)
        visited_states = set()
        for i, (state, action, reward) in enumerate(episode):
            # discounted return from time step i: G_i = R_{i+1} + gamma*R_{i+2} + ...
            total_reward = reward
            for j in range(len(episode) - i - 1):
                total_reward += episode[j + i + 1][2] * gamma ** (j + 1)
            if state not in visited_states:
                visited_states.add(state)
                # constant-alpha update towards the sampled return
                Q[state][action] += alpha * (total_reward - Q[state][action])
    policy = generate_policy(Q)
    return policy, Q
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Use the cell below to obtain the estimated optimal policy and action-value function. Note that you should fill in your own values for the `num_episodes` and `alpha` parameters.
# obtain the estimated optimal policy and action-value function
policy, Q = mc_control(env, 1000000, 0.01)
Episode 1000000/1000000.
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Next, we plot the corresponding state-value function.
# obtain the corresponding state-value function
V = dict((k, np.max(v)) for k, v in Q.items())

# plot the state-value function
plot_blackjack_values(V)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Finally, we visualize the policy that is estimated to be optimal.
# plot the policy
plot_policy(policy)
_____no_output_____
MIT
monte-carlo/Monte_Carlo.ipynb
romOlivo/NanodegreeExercises
Introduction

In this notebook, we implement [YOLOv4](https://arxiv.org/pdf/2004.10934.pdf) for training on your own dataset. We also recommend reading our blog post on [Training YOLOv4 on custom data](https://blog.roboflow.ai/training-yolov4-on-a-custom-dataset/) side by side.

We will take the following steps to implement YOLOv4 on our custom data:
* Configure our GPU environment on Google Colab
* Install the Darknet YOLOv4 training environment
* Download our custom dataset for YOLOv4 and set up directories
* Configure a custom YOLOv4 training config file for Darknet
* Train our custom YOLOv4 object detector
* Reload YOLOv4 trained weights and make inference on test images

When you are done you will have a custom detector that you can use.

**Reach out for support**

If you run into any hurdles on your own data set or just want to share some cool results in your own domain, [reach out!](https://roboflow.ai)

Configuring cuDNN on Colab for YOLOv4
# CUDA: Let's check that Nvidia CUDA drivers are already pre-installed and which version is it.
!/usr/local/cuda/bin/nvcc --version
# We need to install the correct cuDNN according to this output
!nvidia-smi

# Change the number depending on what GPU is listed above, under NVIDIA-SMI > Name.
# Tesla K80: 30
# Tesla P100: 60
# Tesla T4: 75
%env compute_capability=75
env: compute_capability=75
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
STEP 1. Install cuDNN according to the current CUDA version

Colab added cuDNN as an inherent install, so you don't have to do a thing - major win.

Step 2: Installing Darknet for YOLOv4 on Colab
%cd /content/ %rm -rf darknet #we clone the fork of darknet maintained by roboflow #small changes have been made to configure darknet for training !git clone https://github.com/roboflow-ai/darknet.git %cd /content/darknet/ %rm Makefile #colab occasionally shifts dependencies around, at the time of authorship, this Makefile works for building Darknet on Colab %%writefile Makefile GPU=1 CUDNN=1 CUDNN_HALF=0 OPENCV=1 AVX=0 OPENMP=0 LIBSO=1 ZED_CAMERA=0 ZED_CAMERA_v2_8=0 # set GPU=1 and CUDNN=1 to speedup on GPU # set CUDNN_HALF=1 to further speedup 3 x times (Mixed-precision on Tensor Cores) GPU: Volta, Xavier, Turing and higher # set AVX=1 and OPENMP=1 to speedup on CPU (if error occurs then set AVX=0) # set ZED_CAMERA=1 to enable ZED SDK 3.0 and above # set ZED_CAMERA_v2_8=1 to enable ZED SDK 2.X USE_CPP=0 DEBUG=0 ARCH= -gencode arch=compute_30,code=sm_30 \ -gencode arch=compute_35,code=sm_35 \ -gencode arch=compute_50,code=[sm_50,compute_50] \ -gencode arch=compute_52,code=[sm_52,compute_52] \ -gencode arch=compute_61,code=[sm_61,compute_61] OS := $(shell uname) # Tesla V100 # ARCH= -gencode arch=compute_70,code=[sm_70,compute_70] # GeForce RTX 2080 Ti, RTX 2080, RTX 2070, Quadro RTX 8000, Quadro RTX 6000, Quadro RTX 5000, Tesla T4, XNOR Tensor Cores # ARCH= -gencode arch=compute_75,code=[sm_75,compute_75] # Jetson XAVIER # ARCH= -gencode arch=compute_72,code=[sm_72,compute_72] # GTX 1080, GTX 1070, GTX 1060, GTX 1050, GTX 1030, Titan Xp, Tesla P40, Tesla P4 # ARCH= -gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61 # GP100/Tesla P100 - DGX-1 # ARCH= -gencode arch=compute_60,code=sm_60 # For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment: # ARCH= -gencode arch=compute_53,code=[sm_53,compute_53] # For Jetson Tx2 or Drive-PX2 uncomment: # ARCH= -gencode arch=compute_62,code=[sm_62,compute_62] VPATH=./src/ EXEC=darknet OBJDIR=./obj/ ifeq ($(LIBSO), 1) LIBNAMESO=libdarknet.so APPNAMESO=uselib endif ifeq ($(USE_CPP), 1) CC=g++ else CC=gcc endif CPP=g++ -std=c++11 NVCC=nvcc OPTS=-Ofast LDFLAGS= -lm -pthread COMMON= -Iinclude/ -I3rdparty/stb/include CFLAGS=-Wall -Wfatal-errors -Wno-unused-result -Wno-unknown-pragmas -fPIC ifeq ($(DEBUG), 1) #OPTS= -O0 -g #OPTS= -Og -g COMMON+= -DDEBUG CFLAGS+= -DDEBUG else ifeq ($(AVX), 1) CFLAGS+= -ffp-contract=fast -mavx -mavx2 -msse3 -msse4.1 -msse4.2 -msse4a endif endif CFLAGS+=$(OPTS) ifneq (,$(findstring MSYS_NT,$(OS))) LDFLAGS+=-lws2_32 endif ifeq ($(OPENCV), 1) COMMON+= -DOPENCV CFLAGS+= -DOPENCV LDFLAGS+= `pkg-config --libs opencv4 2> /dev/null || pkg-config --libs opencv` COMMON+= `pkg-config --cflags opencv4 2> /dev/null || pkg-config --cflags opencv` endif ifeq ($(OPENMP), 1) CFLAGS+= -fopenmp LDFLAGS+= -lgomp endif ifeq ($(GPU), 1) COMMON+= -DGPU -I/usr/local/cuda/include/ CFLAGS+= -DGPU ifeq ($(OS),Darwin) #MAC LDFLAGS+= -L/usr/local/cuda/lib -lcuda -lcudart -lcublas -lcurand else LDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand endif endif ifeq ($(CUDNN), 1) COMMON+= -DCUDNN ifeq ($(OS),Darwin) #MAC CFLAGS+= -DCUDNN -I/usr/local/cuda/include LDFLAGS+= -L/usr/local/cuda/lib -lcudnn else CFLAGS+= -DCUDNN -I/usr/local/cudnn/include LDFLAGS+= -L/usr/local/cudnn/lib64 -lcudnn endif endif ifeq ($(CUDNN_HALF), 1) COMMON+= -DCUDNN_HALF CFLAGS+= -DCUDNN_HALF ARCH+= -gencode arch=compute_70,code=[sm_70,compute_70] endif ifeq ($(ZED_CAMERA), 1) CFLAGS+= -DZED_STEREO -I/usr/local/zed/include ifeq ($(ZED_CAMERA_v2_8), 1) LDFLAGS+= -L/usr/local/zed/lib -lsl_core -lsl_input -lsl_zed #-lstdc++ 
-D_GLIBCXX_USE_CXX11_ABI=0 else LDFLAGS+= -L/usr/local/zed/lib -lsl_zed #-lstdc++ -D_GLIBCXX_USE_CXX11_ABI=0 endif endif OBJ=image_opencv.o http_stream.o gemm.o utils.o dark_cuda.o convolutional_layer.o list.o image.o activations.o im2col.o col2im.o blas.o crop_layer.o dropout_layer.o maxpool_layer.o softmax_layer.o data.o matrix.o network.o connected_layer.o cost_layer.o parser.o option_list.o darknet.o detection_layer.o captcha.o route_layer.o writing.o box.o nightmare.o normalization_layer.o avgpool_layer.o coco.o dice.o yolo.o detector.o layer.o compare.o classifier.o local_layer.o swag.o shortcut_layer.o activation_layer.o rnn_layer.o gru_layer.o rnn.o rnn_vid.o crnn_layer.o demo.o tag.o cifar.o go.o batchnorm_layer.o art.o region_layer.o reorg_layer.o reorg_old_layer.o super.o voxel.o tree.o yolo_layer.o gaussian_yolo_layer.o upsample_layer.o lstm_layer.o conv_lstm_layer.o scale_channels_layer.o sam_layer.o ifeq ($(GPU), 1) LDFLAGS+= -lstdc++ OBJ+=convolutional_kernels.o activation_kernels.o im2col_kernels.o col2im_kernels.o blas_kernels.o crop_layer_kernels.o dropout_layer_kernels.o maxpool_layer_kernels.o network_kernels.o avgpool_layer_kernels.o endif OBJS = $(addprefix $(OBJDIR), $(OBJ)) DEPS = $(wildcard src/*.h) Makefile include/darknet.h all: $(OBJDIR) backup results setchmod $(EXEC) $(LIBNAMESO) $(APPNAMESO) ifeq ($(LIBSO), 1) CFLAGS+= -fPIC $(LIBNAMESO): $(OBJDIR) $(OBJS) include/yolo_v2_class.hpp src/yolo_v2_class.cpp $(CPP) -shared -std=c++11 -fvisibility=hidden -DLIB_EXPORTS $(COMMON) $(CFLAGS) $(OBJS) src/yolo_v2_class.cpp -o $@ $(LDFLAGS) $(APPNAMESO): $(LIBNAMESO) include/yolo_v2_class.hpp src/yolo_console_dll.cpp $(CPP) -std=c++11 $(COMMON) $(CFLAGS) -o $@ src/yolo_console_dll.cpp $(LDFLAGS) -L ./ -l:$(LIBNAMESO) endif $(EXEC): $(OBJS) $(CPP) -std=c++11 $(COMMON) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(OBJDIR)%.o: %.c $(DEPS) $(CC) $(COMMON) $(CFLAGS) -c $< -o $@ $(OBJDIR)%.o: %.cpp $(DEPS) $(CPP) -std=c++11 $(COMMON) $(CFLAGS) -c $< -o $@ $(OBJDIR)%.o: %.cu $(DEPS) $(NVCC) $(ARCH) $(COMMON) --compiler-options "$(CFLAGS)" -c $< -o $@ $(OBJDIR): mkdir -p $(OBJDIR) backup: mkdir -p backup results: mkdir -p results setchmod: chmod +x *.sh .PHONY: clean clean: rm -rf $(OBJS) $(EXEC) $(LIBNAMESO) $(APPNAMESO) #install environment from the Makefile #note if you are on Colab Pro this works on a P100 GPU #if you are on Colab free, you may need to change the Makefile for the K80 GPU #this goes for any GPU, you need to change the Makefile to inform darknet which GPU you are running on. #note the Makefile above should work for you, if you need to tweak, try the below %cd darknet/ #!sed -i 's/OPENCV=0/OPENCV=1/g' Makefile #!sed -i 's/GPU=0/GPU=1/g' Makefile #!sed -i 's/CUDNN=0/CUDNN=1/g' Makefile !#sed -i "s/ARCH= -gencode arch=compute_60,code=sm_60/ARCH= -gencode arch=compute_${compute_capability},code=sm_${compute_capability}/g" Makefile !make #download the newly released yolov4 ConvNet weights %cd /content/darknet !wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137 !env
CUDNN_VERSION=7.6.5.32 __EGL_VENDOR_LIBRARY_DIRS=/usr/lib64-nvidia:/usr/share/glvnd/egl_vendor.d/ LD_LIBRARY_PATH=/usr/lib64-nvidia CLOUDSDK_PYTHON=python3 LANG=en_US.UTF-8 HOSTNAME=d41f1c72eaaf OLDPWD=/ CLOUDSDK_CONFIG=/content/.config NVIDIA_VISIBLE_DEVICES=all DATALAB_SETTINGS_OVERRIDES={"kernelManagerProxyPort":6000,"kernelManagerProxyHost":"172.28.0.3","jupyterArgs":["--ip=\"172.28.0.2\""],"debugAdapterMultiplexerPath":"/usr/local/bin/dap_multiplexer"} ENV=/root/.bashrc PAGER=cat NCCL_VERSION=2.8.3 TF_FORCE_GPU_ALLOW_GROWTH=true JPY_PARENT_PID=50 NO_GCE_CHECK=True PWD=/content/darknet HOME=/root LAST_FORCED_REBUILD=20210119 CLICOLOR=1 DEBIAN_FRONTEND=noninteractive LIBRARY_PATH=/usr/local/cuda/lib64/stubs GCE_METADATA_TIMEOUT=0 GLIBCPP_FORCE_NEW=1 TBE_CREDS_ADDR=172.28.0.1:8008 TERM=xterm-color SHELL=/bin/bash GCS_READ_CACHE_BLOCK_SIZE_MB=16 PYTHONWARNINGS=ignore:::pip._internal.cli.base_command MPLBACKEND=module://ipykernel.pylab.backend_inline CUDA_PKG_VERSION=10-1=10.1.243-1 CUDA_VERSION=10.1.243 NVIDIA_DRIVER_CAPABILITIES=compute,utility SHLVL=1 PYTHONPATH=/env/python NVIDIA_REQUIRE_CUDA=cuda>=10.1 brand=tesla,driver>=396,driver<397 brand=tesla,driver>=410,driver<411 brand=tesla,driver>=418,driver<419 COLAB_GPU=1 GLIBCXX_FORCE_NEW=1 PATH=/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/google-cloud-sdk/bin:/opt/bin LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 compute_capability=75 GIT_PAGER=cat _=/usr/bin/env
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
Set up Custom Dataset for YOLOv4

We'll use Roboflow to convert our dataset from any format to the YOLO Darknet format.

1. To do so, create a free [Roboflow account](https://app.roboflow.ai).
2. Upload your images and their annotations (in any format: VOC XML, COCO JSON, TensorFlow CSV, etc).
3. Apply preprocessing and augmentation steps you may like. We recommend at least `auto-orient` and a `resize` to 416x416. Generate your dataset.
4. Export your dataset in the **YOLO Darknet format**.
5. Copy your download link, and paste it below.

See our [blog post](https://blog.roboflow.ai/training-yolov4-on-a-custom-dataset/) for greater detail.

In this example, I used the open source [BCCD Dataset](https://public.roboflow.ai/object-detection/bccd). (You can `fork` it to your Roboflow account to follow along.)
from google.colab import drive
drive.mount('/content/drive')

#if you already have YOLO darknet format, you can skip this step
%cd /content/darknet
#!curl -L [YOUR LINK HERE] > roboflow.zip; unzip roboflow.zip; rm roboflow.zip
!cp /content/drive/MyDrive/Aquarium-darknet.zip .
!pwd
!unzip ./Aquarium-darknet.zip

#Set up training file directories for custom dataset
%cd /content/darknet/
%cp train/_darknet.labels data/obj.names
%mkdir data/obj

#copy image and labels
%cp train/*.jpg data/obj/
%cp valid/*.jpg data/obj/
%cp train/*.txt data/obj/
%cp valid/*.txt data/obj/

with open('data/obj.data', 'w') as out:
    out.write('classes = 3\n')
    out.write('train = data/train.txt\n')
    out.write('valid = data/valid.txt\n')
    out.write('names = data/obj.names\n')
    out.write('backup = backup/')

#write train file (just the image list)
import os
with open('data/train.txt', 'w') as out:
    for img in [f for f in os.listdir('train') if f.endswith('jpg')]:
        out.write('data/obj/' + img + '\n')

#write the valid file (just the image list)
import os
with open('data/valid.txt', 'w') as out:
    for img in [f for f in os.listdir('valid') if f.endswith('jpg')]:
        out.write('data/obj/' + img + '\n')
/content/darknet
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
Write Custom Training Config for YOLOv4
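The config-writing cell below follows the sizing rules quoted from the darknet repo in its own comments. As a quick sketch of that arithmetic (illustrative only; `num_classes` stands in for the value the cell reads from `train/_darknet.labels`):

```python
# Sketch of the sizing rules applied when writing custom-yolov4-detector.cfg
num_classes = 7                                   # e.g. the value detected for this dataset

max_batches = num_classes * 2000                  # guideline: classes*2000 (and at least 6000)
steps = (0.8 * max_batches, 0.9 * max_batches)    # LR decay points at 80% and 90% of max_batches
num_filters = (num_classes + 5) * 3               # filters in each [convolutional] before a [yolo] layer

print(max_batches, steps, num_filters)            # 14000 (11200.0, 12600.0) 36
```

These values match the `max_batches=14000`, `steps=11200.0,12600.0`, and `filters=36` visible in the generated config printed further below.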
#we build config dynamically based on number of classes #we build iteratively from base config files. This is the same file shape as cfg/yolo-obj.cfg def file_len(fname): with open(fname) as f: for i, l in enumerate(f): pass return i + 1 num_classes = file_len('train/_darknet.labels') print("writing config for a custom YOLOv4 detector detecting number of classes: " + str(num_classes)) #Instructions from the darknet repo #change line max_batches to (classes*2000 but not less than number of training images, and not less than 6000), f.e. max_batches=6000 if you train for 3 classes #change line steps to 80% and 90% of max_batches, f.e. steps=4800,5400 if os.path.exists('./cfg/custom-yolov4-detector.cfg'): os.remove('./cfg/custom-yolov4-detector.cfg') with open('./cfg/custom-yolov4-detector.cfg', 'a') as f: f.write('[net]' + '\n') f.write('batch=64' + '\n') #####smaller subdivisions help the GPU run faster. 12 is optimal, but you might need to change to 24,36,64#### f.write('subdivisions=24' + '\n') f.write('width=416' + '\n') f.write('height=416' + '\n') f.write('channels=3' + '\n') f.write('momentum=0.949' + '\n') f.write('decay=0.0005' + '\n') f.write('angle=0' + '\n') f.write('saturation = 1.5' + '\n') f.write('exposure = 1.5' + '\n') f.write('hue = .1' + '\n') f.write('\n') f.write('learning_rate=0.001' + '\n') f.write('burn_in=1000' + '\n') ######you can adjust up and down to change training time##### ##Darknet does iterations with batches, not epochs#### max_batches = num_classes*2000 #max_batches = 2000 f.write('max_batches=' + str(max_batches) + '\n') f.write('policy=steps' + '\n') steps1 = .8 * max_batches steps2 = .9 * max_batches f.write('steps='+str(steps1)+','+str(steps2) + '\n') #Instructions from the darknet repo #change line classes=80 to your number of objects in each of 3 [yolo]-layers: #change [filters=255] to filters=(classes + 5)x3 in the 3 [convolutional] before each [yolo] layer, keep in mind that it only has to be the last [convolutional] before each of the [yolo] layers. 
with open('cfg/yolov4-custom2.cfg', 'r') as f2: content = f2.readlines() for line in content: f.write(line) num_filters = (num_classes + 5) * 3 f.write('filters='+str(num_filters) + '\n') f.write('activation=linear') f.write('\n') f.write('\n') f.write('[yolo]' + '\n') f.write('mask = 0,1,2' + '\n') f.write('anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401' + '\n') f.write('classes=' + str(num_classes) + '\n') with open('cfg/yolov4-custom3.cfg', 'r') as f3: content = f3.readlines() for line in content: f.write(line) num_filters = (num_classes + 5) * 3 f.write('filters='+str(num_filters) + '\n') f.write('activation=linear') f.write('\n') f.write('\n') f.write('[yolo]' + '\n') f.write('mask = 3,4,5' + '\n') f.write('anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401' + '\n') f.write('classes=' + str(num_classes) + '\n') with open('cfg/yolov4-custom4.cfg', 'r') as f4: content = f4.readlines() for line in content: f.write(line) num_filters = (num_classes + 5) * 3 f.write('filters='+str(num_filters) + '\n') f.write('activation=linear') f.write('\n') f.write('\n') f.write('[yolo]' + '\n') f.write('mask = 6,7,8' + '\n') f.write('anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401' + '\n') f.write('classes=' + str(num_classes) + '\n') with open('cfg/yolov4-custom5.cfg', 'r') as f5: content = f5.readlines() for line in content: f.write(line) print("file is written!") #here is the file that was just written. #you may consider adjusting certain things #like the number of subdivisions 64 runs faster but Colab GPU may not be big enough #if Colab GPU memory is too small, you will need to adjust subdivisions to 16 %cat cfg/custom-yolov4-detector.cfg
[net] batch=64 subdivisions=24 width=416 height=416 channels=3 momentum=0.949 decay=0.0005 angle=0 saturation = 1.5 exposure = 1.5 hue = .1 learning_rate=0.001 burn_in=1000 max_batches=14000 policy=steps steps=11200.0,12600.0 scales=.1,.1 #cutmix=1 mosaic=1 #:104x104 54:52x52 85:26x26 104:13x13 for 416 [convolutional] batch_normalize=1 filters=32 size=3 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=64 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=32 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-7 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=128 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=64 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=64 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-10 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=256 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish 
[convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=128 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=512 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=256 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-28 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish # Downsample [convolutional] batch_normalize=1 filters=1024 size=3 stride=2 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -2 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] 
batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [convolutional] batch_normalize=1 filters=512 size=3 stride=1 pad=1 activation=mish [shortcut] from=-3 activation=linear [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=mish [route] layers = -1,-16 [convolutional] batch_normalize=1 filters=1024 size=1 stride=1 pad=1 activation=mish ########################## [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky ### SPP ### [maxpool] stride=1 size=5 [route] layers=-2 [maxpool] stride=1 size=9 [route] layers=-4 [maxpool] stride=1 size=13 [route] layers=-1,-3,-5,-6 ### End SPP ### [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 85 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [upsample] stride=2 [route] layers = 54 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [route] layers = -1, -3 [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] batch_normalize=1 filters=128 size=1 stride=1 pad=1 activation=leaky ########################## [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=256 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 0,1,2 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.2 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=256 activation=leaky [route] layers = -1, -16 [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] batch_normalize=1 filters=256 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=512 activation=leaky [convolutional] size=1 
stride=1 pad=1 filters=36 activation=linear [yolo] mask = 3,4,5 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 scale_x_y = 1.1 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5 [route] layers = -4 [convolutional] batch_normalize=1 size=3 stride=2 pad=1 filters=512 activation=leaky [route] layers = -1, -37 [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] batch_normalize=1 filters=512 size=1 stride=1 pad=1 activation=leaky [convolutional] batch_normalize=1 size=3 stride=1 pad=1 filters=1024 activation=leaky [convolutional] size=1 stride=1 pad=1 filters=36 activation=linear [yolo] mask = 6,7,8 anchors = 12, 16, 19, 36, 40, 28, 36, 75, 76, 55, 72, 146, 142, 110, 192, 243, 459, 401 classes=7 num=9 jitter=.3 ignore_thresh = .7 truth_thresh = 1 random=1 scale_x_y = 1.05 iou_thresh=0.213 cls_normalizer=1.0 iou_normalizer=0.07 iou_loss=ciou nms_kind=greedynms beta_nms=0.6 max_delta=5
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
Train Custom YOLOv4 Detector
!./darknet detector train data/obj.data cfg/custom-yolov4-detector.cfg yolov4.conv.137 -dont_show -map
#If you get CUDA out of memory adjust subdivisions above!
#adjust max batches down for shorter training above
(next mAP calculation at 2568 iterations) Last accuracy mAP@0.5 = 59.40 %, best = 59.40 % 2461: 10.640112, 8.513406 avg loss, 0.001000 rate, 8.034781 seconds, 118128 images, 16.766305 hours left Loaded: 0.000033 seconds ^C
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
Infer Custom Objects with Saved YOLOv4 Weights
#define utility function
def imShow(path):
    import cv2
    import matplotlib.pyplot as plt
    %matplotlib inline

    image = cv2.imread(path)
    height, width = image.shape[:2]
    resized_image = cv2.resize(image, (3*width, 3*height), interpolation=cv2.INTER_CUBIC)

    fig = plt.gcf()
    fig.set_size_inches(18, 10)
    plt.axis("off")
    #plt.rcParams['figure.figsize'] = [10, 5]
    plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB))
    plt.show()

#check if weigths have saved yet
#backup houses the last weights for our detector
#(file yolo-obj_last.weights will be saved to the build\darknet\x64\backup\ for each 100 iterations)
#(file yolo-obj_xxxx.weights will be saved to the build\darknet\x64\backup\ for each 1000 iterations)
#After training is complete - get result yolo-obj_final.weights from path build\darknet\x64\bac
!ls backup
#if it is empty you haven't trained for long enough yet, you need to train for at least 100 iterations

#coco.names is hardcoded somewhere in the detector
%cp data/obj.names data/coco.names

!pwd
!chmod 755 ./darknet

import os
test_images = [f for f in os.listdir('test') if f.endswith('.jpg')]
#/test has images that we can test our detector on
import random
img_path = "test/" + random.choice(test_images)

#test out our detector!
!./darknet detect cfg/custom-yolov4-detector.cfg backup/custom-yolov4-detector_last.weights {img_path} -dont-show
imShow('predictions.jpg')
_____no_output_____
MIT
YOLOv4_Darknet_Roboflow1.ipynb
qwerlarlgus/Darknet
Programming Exercise 2: Logistic Regression

Introduction

In this exercise, you will implement logistic regression and apply it to two different datasets. Before starting on the programming exercise, we strongly recommend watching the video lectures and completing the review questions for the associated topics.

All the information you need for solving this assignment is in this notebook, and all the code you will be implementing will take place within this notebook. The assignment can be promptly submitted to the coursera grader directly from this notebook (code and instructions are included below).

Before we begin with the exercises, we need to import all libraries required for this programming exercise. Throughout the course, we will be using [`numpy`](http://www.numpy.org/) for all arrays and matrix operations, and [`matplotlib`](https://matplotlib.org/) for plotting. In this assignment, we will also use [`scipy`](https://docs.scipy.org/doc/scipy/reference/), which contains scientific and numerical computation functions and tools. You can find instructions on how to install required libraries in the README file in the [github repository](https://github.com/dibgerge/ml-coursera-python-assignments).
# used for manipulating directory paths
import os

# Scientific and vector computation for python
import numpy as np

# Plotting library
from matplotlib import pyplot

# Optimization module in scipy
from scipy import optimize

# library written for this exercise providing additional functions for assignment submission, and others
import utils

# define the submission/grader object for this exercise
grader = utils.Grader()

# tells matplotlib to embed plots within the notebook
%matplotlib inline
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Submission and Grading

After completing each part of the assignment, be sure to submit your solutions to the grader. The following is a breakdown of how each part of this exercise is scored.

| Section | Part | Submission function | Points |
| :- | :- | :- | :-: |
| 1 | [Sigmoid Function](section1) | [`sigmoid`](sigmoid) | 5 |
| 2 | [Compute cost for logistic regression](section2) | [`costFunction`](costFunction) | 30 |
| 3 | [Gradient for logistic regression](section2) | [`costFunction`](costFunction) | 30 |
| 4 | [Predict Function](section4) | [`predict`](predict) | 5 |
| 5 | [Compute cost for regularized LR](section5) | [`costFunctionReg`](costFunctionReg) | 15 |
| 6 | [Gradient for regularized LR](section5) | [`costFunctionReg`](costFunctionReg) | 15 |
| | Total Points | | 100 |

You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.

At the end of each section in this notebook, we have a cell which contains code for submitting the solutions thus far to the grader. Execute the cell to see your score up to the current section. For all your work to be submitted properly, you must execute those cells at least once. They must also be re-executed every time the submitted function is updated.

1 Logistic Regression

In this part of the exercise, you will build a logistic regression model to predict whether a student gets admitted into a university. Suppose that you are the administrator of a university department and you want to determine each applicant's chance of admission based on their results on two exams. You have historical data from previous applicants that you can use as a training set for logistic regression. For each training example, you have the applicant's scores on two exams and the admissions decision. Your task is to build a classification model that estimates an applicant's probability of admission based on the scores from those two exams. The following cell will load the data and corresponding labels:
# Load data
# The first two columns contains the exam scores and the third column
# contains the label.
data = np.loadtxt(os.path.join('Data', 'ex2data1.txt'), delimiter=',')
X, y = data[:, 0:2], data[:, 2]
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
1.1 Visualizing the data

Before starting to implement any learning algorithm, it is always good to visualize the data if possible. We display the data on a 2-dimensional plot by calling the function `plotData`. You will now complete the code in `plotData` so that it displays a figure where the axes are the two exam scores, and the positive and negative examples are shown with different markers.

To help you get more familiar with plotting, we have left `plotData` empty so you can try to implement it yourself. However, this is an optional (ungraded) exercise. We also provide our implementation below so you can copy it or refer to it. If you choose to copy our example, make sure you learn what each of its commands is doing by consulting the `matplotlib` and `numpy` documentation.

```python
# Find Indices of Positive and Negative Examples
pos = y == 1
neg = y == 0

# Plot Examples
pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)
pyplot.plot(X[neg, 0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)
```
def plotData(X, y):
    """
    Plots the data points X and y into a new figure. Plots the data
    points with * for the positive examples and o for the negative examples.

    Parameters
    ----------
    X : array_like
        An Mx2 matrix representing the dataset.

    y : array_like
        Label values for the dataset. A vector of size (M, ).

    Instructions
    ------------
    Plot the positive and negative examples on a 2D plot, using the
    option 'k*' for the positive examples and 'ko' for the negative examples.
    """
    # Create New Figure
    fig = pyplot.figure()

    pos = y == 1
    neg = y == 0

    # ====================== YOUR CODE HERE ======================
    pyplot.plot(X[pos, 0], X[pos, 1], 'k*', lw=2, ms=10)
    pyplot.plot(X[neg, 0], X[neg, 1], 'ko', mfc='y', ms=8, mec='k', mew=1)
    # ============================================================
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Now, we call the implemented function to display the loaded data:
plotData(X, y)
# add axes labels
pyplot.xlabel('Exam 1 score')
pyplot.ylabel('Exam 2 score')
pyplot.legend(['Admitted', 'Not admitted'])
pass
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
1.2 Implementation

1.2.1 Warmup exercise: sigmoid function

Before you start with the actual cost function, recall that the logistic regression hypothesis is defined as:

$$ h_\theta(x) = g(\theta^T x)$$

where function $g$ is the sigmoid function. The sigmoid function is defined as:

$$g(z) = \frac{1}{1+e^{-z}}.$$

Your first step is to implement this function `sigmoid` so it can be called by the rest of your program. When you are finished, try testing a few values by calling `sigmoid(x)` in a new cell. For large positive values of `x`, the sigmoid should be close to 1, while for large negative values, the sigmoid should be close to 0. Evaluating `sigmoid(0)` should give you exactly 0.5. Your code should also work with vectors and matrices. **For a matrix, your function should perform the sigmoid function on every element.**
def sigmoid(z):
    """
    Compute sigmoid function given the input z.

    Parameters
    ----------
    z : array_like
        The input to the sigmoid function. This can be a 1-D vector
        or a 2-D matrix.

    Returns
    -------
    g : array_like
        The computed sigmoid function. g has the same shape as z, since
        the sigmoid is computed element-wise on z.

    Instructions
    ------------
    Compute the sigmoid of each value of z (z can be a matrix, vector or scalar).
    """
    # convert input to a numpy array
    z = np.array(z)

    # You need to return the following variables correctly
    g = np.zeros(z.shape)

    # ====================== YOUR CODE HERE ======================
    g = 1 / (1 + np.exp(-z))
    # =============================================================
    return g
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
The following cell evaluates the sigmoid function at `z=0`. You should get a value of 0.5. You can also try different values for `z` to experiment with the sigmoid function.
# Test the implementation of sigmoid function here
z = 0
g = sigmoid(z)

print('g(', z, ') = ', g)
g( 0 ) = 0.5
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
After completing a part of the exercise, you can submit your solutions for grading by first adding the function you modified to the submission object, and then sending your function to Coursera for grading. The submission script will prompt you for your login e-mail and submission token. You can obtain a submission token from the web page for the assignment. You are allowed to submit your solutions multiple times, and we will take only the highest score into consideration.

Execute the following cell to grade your solution to the first part of this exercise.

*You should now submit your solutions.*
# appends the implemented function in part 1 to the grader object
grader[1] = sigmoid

# send the added functions to coursera grader for getting a grade on this part
grader.grade()
Submitting Solutions | Programming Exercise logistic-regression Login (email address): [email protected] Token: c71I4EgVY0bY6fIL Part Name | Score | Feedback --------- | ----- | -------- Sigmoid Function | 5 / 5 | Nice work! Logistic Regression Cost | 0 / 30 | Logistic Regression Gradient | 0 / 30 | Predict | 0 / 5 | Regularized Logistic Regression Cost | 0 / 15 | Regularized Logistic Regression Gradient | 0 / 15 | -------------------------------- | 5 / 100 |
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
1.2.2 Cost function and gradient

Now you will implement the cost function and gradient for logistic regression. Before proceeding we add the intercept term to X.
# Setup the data matrix appropriately, and add ones for the intercept term
m, n = X.shape

# Add intercept term to X
X = np.concatenate([np.ones((m, 1)), X], axis=1)
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Now, complete the code for the function `costFunction` to return the cost and gradient. Recall that the cost function in logistic regression is

$$ J(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left[ -y^{(i)} \log\left(h_\theta\left( x^{(i)} \right) \right) - \left( 1 - y^{(i)}\right) \log \left( 1 - h_\theta\left( x^{(i)} \right) \right) \right]$$

and the gradient of the cost is a vector of the same length as $\theta$ where the $j^{th}$ element (for $j = 0, 1, \cdots , n$) is defined as follows:

$$ \frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left( x^{(i)} \right) - y^{(i)} \right) x_j^{(i)} $$

Note that while this gradient looks identical to the linear regression gradient, the formula is actually different because linear and logistic regression have different definitions of $h_\theta(x)$.
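Equivalently, in vectorized form (a standard restatement that is convenient for checking the NumPy implementation in the next cell), with $h = g(X\theta)$:

$$ J(\theta) = \frac{1}{m} \left[ -y^T \log(h) - (1 - y)^T \log(1 - h) \right], \qquad \nabla_\theta J(\theta) = \frac{1}{m} X^T (h - y). $$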
def costFunction(theta, X, y):
    """
    Compute cost and gradient for logistic regression.

    Parameters
    ----------
    theta : array_like
        The parameters for logistic regression. This a vector
        of shape (n+1, ).

    X : array_like
        The input dataset of shape (m x n+1) where m is the total number
        of data points and n is the number of features. We assume the
        intercept has already been added to the input.

    y : array_like
        Labels for the input. This is a vector of shape (m, ).

    Returns
    -------
    J : float
        The computed value for the cost function.

    grad : array_like
        A vector of shape (n+1, ) which is the gradient of the cost
        function with respect to theta, at the current values of theta.

    Instructions
    ------------
    Compute the cost of a particular choice of theta. You should set J to
    the cost. Compute the partial derivatives and set grad to the partial
    derivatives of the cost w.r.t. each parameter in theta.
    """
    # Initialize some useful values
    m = y.size  # number of training examples

    # You need to return the following variables correctly
    J = 0
    grad = np.zeros(theta.shape)

    # ====================== YOUR CODE HERE ======================
    h = sigmoid(np.dot(X, theta))
    temp = -np.dot(np.log(h).T, y) - np.dot(np.log(1 - h).T, 1 - y)
    J = np.sum(temp) / m
    grad = np.dot(X.T, (h - y)) / m
    if np.isnan(J):
        J = np.inf
    # =============================================================
    return J, grad
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Once you are done call your `costFunction` using two test cases for $\theta$ by executing the next cell.
# Initialize fitting parameters
initial_theta = np.zeros(n+1)

cost, grad = costFunction(initial_theta, X, y)

print('Cost at initial theta (zeros): {:.3f}'.format(cost))
print('Expected cost (approx): 0.693\n')

print('Gradient at initial theta (zeros):')
print('\t[{:.4f}, {:.4f}, {:.4f}]'.format(*grad))
print('Expected gradients (approx):\n\t[-0.1000, -12.0092, -11.2628]\n')

# Compute and display cost and gradient with non-zero theta
test_theta = np.array([-24, 0.2, 0.2])
cost, grad = costFunction(test_theta, X, y)

print('Cost at test theta: {:.3f}'.format(cost))
print('Expected cost (approx): 0.218\n')

print('Gradient at test theta:')
print('\t[{:.3f}, {:.3f}, {:.3f}]'.format(*grad))
print('Expected gradients (approx):\n\t[0.043, 2.566, 2.647]')
Cost at initial theta (zeros): 0.693 Expected cost (approx): 0.693 Gradient at initial theta (zeros): [-0.1000, -12.0092, -11.2628] Expected gradients (approx): [-0.1000, -12.0092, -11.2628] Cost at test theta: 0.218 Expected cost (approx): 0.218 Gradient at test theta: [0.043, 2.566, 2.647] Expected gradients (approx): [0.043, 2.566, 2.647]
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
*You should now submit your solutions.*
grader[2] = costFunction
grader[3] = costFunction
grader.grade()
Submitting Solutions | Programming Exercise logistic-regression Use token from last successful submission ([email protected])? (Y/n): y Part Name | Score | Feedback --------- | ----- | -------- Sigmoid Function | 5 / 5 | Nice work! Logistic Regression Cost | 30 / 30 | Nice work! Logistic Regression Gradient | 30 / 30 | Nice work! Predict | 0 / 5 | Regularized Logistic Regression Cost | 0 / 15 | Regularized Logistic Regression Gradient | 0 / 15 | -------------------------------- | 65 / 100 |
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
1.2.3 Learning parameters using `scipy.optimize`

In the previous assignment, you found the optimal parameters of a linear regression model by implementing gradient descent. You wrote a cost function and calculated its gradient, then took a gradient descent step accordingly. This time, instead of taking gradient descent steps, you will use the [`scipy.optimize` module](https://docs.scipy.org/doc/scipy/reference/optimize.html). SciPy is a numerical computing library for `python`. It provides an optimization module for root finding and minimization. As of `scipy 1.0`, the function `scipy.optimize.minimize` is the method to use for optimization problems (both constrained and unconstrained).

For logistic regression, you want to optimize the cost function $J(\theta)$ with parameters $\theta$. Concretely, you are going to use `optimize.minimize` to find the best parameters $\theta$ for the logistic regression cost function, given a fixed dataset (of X and y values). You will pass to `optimize.minimize` the following inputs:

- `costFunction`: A cost function that, when given the training set and a particular $\theta$, computes the logistic regression cost and gradient with respect to $\theta$ for the dataset (X, y). It is important to note that we only pass the name of the function without the parenthesis. This indicates that we are only providing a reference to this function, and not evaluating the result from this function.
- `initial_theta`: The initial values of the parameters we are trying to optimize.
- `(X, y)`: These are additional arguments to the cost function.
- `jac`: Indication if the cost function returns the Jacobian (gradient) along with cost value. (True)
- `method`: Optimization method/algorithm to use
- `options`: Additional options which might be specific to the specific optimization method. In the following, we only tell the algorithm the maximum number of iterations before it terminates.

If you have completed the `costFunction` correctly, `optimize.minimize` will converge on the right optimization parameters and return the final values of the cost and $\theta$ in a class object. Notice that by using `optimize.minimize`, you did not have to write any loops yourself, or set a learning rate like you did for gradient descent. This is all done by `optimize.minimize`: you only needed to provide a function calculating the cost and the gradient.

In the following, we already have code written to call `optimize.minimize` with the correct arguments.
# set options for optimize.minimize options= {'maxiter': 400} # see documention for scipy's optimize.minimize for description about # the different parameters # The function returns an object `OptimizeResult` # We use truncated Newton algorithm for optimization which is # equivalent to MATLAB's fminunc # See https://stackoverflow.com/questions/18801002/fminunc-alternate-in-numpy res = optimize.minimize(costFunction, initial_theta, (X, y), jac=True, method='TNC', options=options) # the fun property of `OptimizeResult` object returns # the value of costFunction at optimized theta cost = res.fun # the optimized theta is in the x property theta = res.x # Print theta to screen print('Cost at theta found by optimize.minimize: {:.3f}'.format(cost)) print('Expected cost (approx): 0.203\n'); print('theta:') print('\t[{:.3f}, {:.3f}, {:.3f}]'.format(*theta)) print('Expected theta (approx):\n\t[-25.161, 0.206, 0.201]')
Cost at theta found by optimize.minimize: 0.203 Expected cost (approx): 0.203 theta: [-25.161, 0.206, 0.201] Expected theta (approx): [-25.161, 0.206, 0.201]
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Once `optimize.minimize` completes, we want to use the final value for $\theta$ to visualize the decision boundary on the training data as shown in the figure below. ![](Figures/decision_boundary1.png)To do so, we have written a function `plotDecisionBoundary` for plotting the decision boundary on top of training data. You do not need to write any code for plotting the decision boundary, but we also encourage you to look at the code in `plotDecisionBoundary` to see how to plot such a boundary using the $\theta$ values. You can find this function in the `utils.py` file which comes with this assignment.
# Plot Boundary utils.plotDecisionBoundary(plotData, theta, X, y)
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
1.2.4 Evaluating logistic regressionAfter learning the parameters, you can use the model to predict whether a particular student will be admitted. For a student with an Exam 1 score of 45 and an Exam 2 score of 85, you should expect to see an admissionprobability of 0.776. Another way to evaluate the quality of the parameters we have found is to see how well the learned model predicts on our training set. In this part, your task is to complete the code in function `predict`. The predict function will produce “1” or “0” predictions given a dataset and a learned parameter vector $\theta$.
def predict(theta, X): """ Predict whether the label is 0 or 1 using learned logistic regression. Computes the predictions for X using a threshold at 0.5 (i.e., if sigmoid(theta.T*x) >= 0.5, predict 1) Parameters ---------- theta : array_like Parameters for logistic regression. A vecotor of shape (n+1, ). X : array_like The data to use for computing predictions. The rows is the number of points to compute predictions, and columns is the number of features. Returns ------- p : array_like Predictions and 0 or 1 for each row in X. Instructions ------------ Complete the following code to make predictions using your learned logistic regression parameters.You should set p to a vector of 0's and 1's """ m = X.shape[0] # Number of training examples # You need to return the following variables correctly p = np.zeros(m) # ====================== YOUR CODE HERE ====================== threshold = 0.5 h = sigmoid(np.dot(X, theta.T)) p = h >= threshold #for each row in h(x), check if >= 0.5, if yes then set to one # ============================================================ return p
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
After you have completed the code in `predict`, we proceed to report the training accuracy of your classifier by computing the percentage of examples it got correct.
# Predict probability for a student with score 45 on exam 1 # and score 85 on exam 2 prob = sigmoid(np.dot([1, 45, 85], theta)) print('For a student with scores 45 and 85,' 'we predict an admission probability of {:.3f}'.format(prob)) print('Expected value: 0.775 +/- 0.002\n') # Compute accuracy on our training set p = predict(theta, X) print('Train Accuracy: {:.2f} %'.format(np.mean(p == y) * 100)) print('Expected accuracy (approx): 89.00 %')
For a student with scores 45 and 85,we predict an admission probability of 0.776 Expected value: 0.775 +/- 0.002 Train Accuracy: 89.00 % Expected accuracy (approx): 89.00 %
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
*You should now submit your solutions.*
grader[4] = predict grader.grade()
Submitting Solutions | Programming Exercise logistic-regression Use token from last successful submission ([email protected])? (Y/n): y Part Name | Score | Feedback --------- | ----- | -------- Sigmoid Function | 5 / 5 | Nice work! Logistic Regression Cost | 30 / 30 | Nice work! Logistic Regression Gradient | 30 / 30 | Nice work! Predict | 5 / 5 | Nice work! Regularized Logistic Regression Cost | 0 / 15 | Regularized Logistic Regression Gradient | 0 / 15 | -------------------------------- | 70 / 100 |
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
2 Regularized logistic regression
In this part of the exercise, you will implement regularized logistic regression to predict whether microchips from a fabrication plant pass quality assurance (QA). During QA, each microchip goes through various tests to ensure it is functioning correctly. Suppose you are the product manager of the factory and you have the test results for some microchips on two different tests. From these two tests, you would like to determine whether the microchips should be accepted or rejected. To help you make the decision, you have a dataset of test results on past microchips, from which you can build a logistic regression model. First, we load the data from a CSV file:
# Load Data # The first two columns contains the X values and the third column # contains the label (y). data = np.loadtxt(os.path.join('Data', 'ex2data2.txt'), delimiter=',') X = data[:, :2] y = data[:, 2]
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
2.1 Visualize the data
Similar to the previous parts of this exercise, `plotData` is used to generate a figure, where the axes are the two test scores, and the positive (y = 1, accepted) and negative (y = 0, rejected) examples are shown with different markers.
plotData(X, y) # Labels and Legend pyplot.xlabel('Microchip Test 1') pyplot.ylabel('Microchip Test 2') # Specified in plot order pyplot.legend(['y = 1', 'y = 0'], loc='upper right') pass
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
The above figure shows that our dataset cannot be separated into positive and negative examples by a straight line through the plot. Therefore, a straightforward application of logistic regression will not perform well on this dataset, since logistic regression is only able to find a linear decision boundary.

2.2 Feature mapping
One way to fit the data better is to create more features from each data point. In the function `mapFeature` defined in the file `utils.py`, we will map the features into all polynomial terms of $x_1$ and $x_2$ up to the sixth power.

$$ \text{mapFeature}(x) = \begin{bmatrix} 1 & x_1 & x_2 & x_1^2 & x_1 x_2 & x_2^2 & x_1^3 & \dots & x_1 x_2^5 & x_2^6 \end{bmatrix}^T $$

As a result of this mapping, our vector of two features (the scores on two QA tests) has been transformed into a 28-dimensional vector. A logistic regression classifier trained on this higher-dimensional feature vector will have a more complex decision boundary and will appear nonlinear when drawn in our 2-dimensional plot. While the feature mapping allows us to build a more expressive classifier, it is also more susceptible to overfitting. In the next parts of the exercise, you will implement regularized logistic regression to fit the data and see for yourself how regularization can help combat the overfitting problem. A small illustrative sketch of such a mapping is shown below, before the call to the provided utility.
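For intuition, here is a minimal sketch of what a degree-6 polynomial feature mapping typically looks like. This is an illustrative stand-in with a hypothetical name; the course-provided `utils.mapFeature` remains the authoritative implementation used in the cell that follows.

```python
# Illustrative sketch only -- not the course-provided utils.mapFeature.
import numpy as np

def map_feature_sketch(x1, x2, degree=6):
    """Build all polynomial terms of x1 and x2 up to `degree`, plus an intercept column."""
    x1, x2 = np.atleast_1d(x1).astype(float), np.atleast_1d(x2).astype(float)
    cols = [np.ones(x1.shape[0])]                  # intercept term
    for i in range(1, degree + 1):
        for j in range(i + 1):
            cols.append((x1 ** (i - j)) * (x2 ** j))
    return np.stack(cols, axis=1)                  # shape (m, 28) when degree=6
```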
# Note that mapFeature also adds a column of ones for us, so the intercept # term is handled X = utils.mapFeature(X[:, 0], X[:, 1])
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
2.3 Cost function and gradientNow you will implement code to compute the cost function and gradient for regularized logistic regression. Complete the code for the function `costFunctionReg` below to return the cost and gradient.Recall that the regularized cost function in logistic regression is$$ J(\theta) = \frac{1}{m} \sum_{i=1}^m \left[ -y^{(i)}\log \left( h_\theta \left(x^{(i)} \right) \right) - \left( 1 - y^{(i)} \right) \log \left( 1 - h_\theta \left( x^{(i)} \right) \right) \right] + \frac{\lambda}{2m} \sum_{j=1}^n \theta_j^2 $$Note that you should not regularize the parameters $\theta_0$. The gradient of the cost function is a vector where the $j^{th}$ element is defined as follows:$$ \frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)}\right) - y^{(i)} \right) x_j^{(i)} \qquad \text{for } j =0 $$$$ \frac{\partial J(\theta)}{\partial \theta_j} = \left( \frac{1}{m} \sum_{i=1}^m \left( h_\theta \left(x^{(i)}\right) - y^{(i)} \right) x_j^{(i)} \right) + \frac{\lambda}{m}\theta_j \qquad \text{for } j \ge 1 $$
def costFunctionReg(theta, X, y, lambda_): """ Compute cost and gradient for logistic regression with regularization. Parameters ---------- theta : array_like Logistic regression parameters. A vector with shape (n, ). n is the number of features including any intercept. If we have mapped our initial features into polynomial features, then n is the total number of polynomial features. X : array_like The data set with shape (m x n). m is the number of examples, and n is the number of features (after feature mapping). y : array_like The data labels. A vector with shape (m, ). lambda_ : float The regularization parameter. Returns ------- J : float The computed value for the regularized cost function. grad : array_like A vector of shape (n, ) which is the gradient of the cost function with respect to theta, at the current values of theta. Instructions ------------ Compute the cost `J` of a particular choice of theta. Compute the partial derivatives and set `grad` to the partial derivatives of the cost w.r.t. each parameter in theta. """ # Initialize some useful values m = y.size # number of training examples # You need to return the following variables correctly J = 0 grad = np.zeros(theta.shape) # ===================== YOUR CODE HERE ====================== J, grad = costFunction(theta, X, y) #from old costFunction without reg. param J = J + (lambda_/(2*m))*np.sum(np.square(theta[1:])) h = sigmoid(X.dot(theta)) if np.isnan(J): J = np.inf grad = grad + (lambda_/m)*(theta) grad[0] = grad[0] - (lambda_/m)*theta[0] #remove regularization for first theta # ============================================================= return J, grad.flatten()
_____no_output_____
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Once you are done with the `costFunctionReg`, we call it below using the initial value of $\theta$ (initialized to all zeros), and also another test case where $\theta$ is all ones.
# Initialize fitting parameters initial_theta = np.zeros(X.shape[1]) # Set regularization parameter lambda to 1 # DO NOT use `lambda` as a variable name in python # because it is a python keyword lambda_ = 1 # Compute and display initial cost and gradient for regularized logistic # regression cost, grad = costFunctionReg(initial_theta, X, y, lambda_) print('Cost at initial theta (zeros): {:.3f}'.format(cost)) print('Expected cost (approx) : 0.693\n') print('Gradient at initial theta (zeros) - first five values only:') print('\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5])) print('Expected gradients (approx) - first five values only:') print('\t[0.0085, 0.0188, 0.0001, 0.0503, 0.0115]\n') # Compute and display cost and gradient # with all-ones theta and lambda = 10 test_theta = np.ones(X.shape[1]) cost, grad = costFunctionReg(test_theta, X, y, 10) print('------------------------------\n') print('Cost at test theta : {:.2f}'.format(cost)) print('Expected cost (approx): 3.16\n') print('Gradient at initial theta (zeros) - first five values only:') print('\t[{:.4f}, {:.4f}, {:.4f}, {:.4f}, {:.4f}]'.format(*grad[:5])) print('Expected gradients (approx) - first five values only:') print('\t[0.3460, 0.1614, 0.1948, 0.2269, 0.0922]')
Cost at initial theta (zeros): 0.693 Expected cost (approx) : 0.693 Gradient at initial theta (zeros) - first five values only: [0.0085, 0.0188, 0.0001, 0.0503, 0.0115] Expected gradients (approx) - first five values only: [0.0085, 0.0188, 0.0001, 0.0503, 0.0115] ------------------------------ Cost at test theta : 3.16 Expected cost (approx): 3.16 Gradient at initial theta (zeros) - first five values only: [0.3460, 0.1614, 0.1948, 0.2269, 0.0922] Expected gradients (approx) - first five values only: [0.3460, 0.1614, 0.1948, 0.2269, 0.0922]
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
*You should now submit your solutions.*
grader[5] = costFunctionReg grader[6] = costFunctionReg grader.grade()
Submitting Solutions | Programming Exercise logistic-regression Use token from last successful submission ([email protected])? (Y/n): y Part Name | Score | Feedback --------- | ----- | -------- Sigmoid Function | 5 / 5 | Nice work! Logistic Regression Cost | 30 / 30 | Nice work! Logistic Regression Gradient | 30 / 30 | Nice work! Predict | 5 / 5 | Nice work! Regularized Logistic Regression Cost | 15 / 15 | Nice work! Regularized Logistic Regression Gradient | 15 / 15 | Nice work! -------------------------------- | 100 / 100 |
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
2.3.1 Learning parameters using `scipy.optimize.minimize`
Similar to the previous parts, you will use `optimize.minimize` to learn the optimal parameters $\theta$. If you have completed the cost and gradient for regularized logistic regression (`costFunctionReg`) correctly, you should be able to step through the next part to learn the parameters $\theta$ using `optimize.minimize`.

2.4 Plotting the decision boundary
To help you visualize the model learned by this classifier, we have provided the function `plotDecisionBoundary` which plots the (non-linear) decision boundary that separates the positive and negative examples. In `plotDecisionBoundary`, we plot the non-linear decision boundary by computing the classifier's predictions on an evenly spaced grid and then drawing a contour plot where the predictions change from y = 0 to y = 1.

2.5 Optional (ungraded) exercises
In this part of the exercise, you will get to try out different regularization parameters for the dataset to understand how regularization prevents overfitting. Notice the changes in the decision boundary as you vary $\lambda$. With a small $\lambda$, you should find that the classifier gets almost every training example correct, but draws a very complicated boundary, thus overfitting the data. See the following figures for the decision boundaries you should get for different values of $\lambda$.

(Figures: no regularization (overfitting); decision boundary with regularization; decision boundary with too much regularization.)

The overfitted boundary is not a good decision boundary: for example, it predicts that a point at $x = (−0.25, 1.5)$ is accepted $(y = 1)$, which seems to be an incorrect decision given the training set. With a larger $\lambda$, you should see a plot that shows a simpler decision boundary which still separates the positives and negatives fairly well. However, if $\lambda$ is set to too high a value, you will not get a good fit and the decision boundary will not follow the data so well, thus underfitting the data.
# Initialize fitting parameters initial_theta = np.zeros(X.shape[1]) # Set regularization parameter lambda to 1 (you should vary this) lambdas = [0,0.5,1,10,100] # set options for optimize.minimize options= {'maxiter': 100} for lambda_ in lambdas: res = optimize.minimize(costFunctionReg, initial_theta, (X, y, lambda_), jac=True, method='TNC', options=options) # the fun property of OptimizeResult object returns # the value of costFunction at optimized theta cost = res.fun # the optimized theta is in the x property of the result theta = res.x utils.plotDecisionBoundary(plotData, theta, X, y) pyplot.xlabel('Microchip Test 1') pyplot.ylabel('Microchip Test 2') pyplot.legend(['y = 1', 'y = 0']) pyplot.grid(False) pyplot.title('lambda = %0.2f' % lambda_) # Compute accuracy on our training set p = predict(theta, X) print('Train Accuracy: %.1f %%' % (np.mean(p == y) * 100)) print('Expected accuracy (with lambda = 1): 83.1 % (approx)\n')
Train Accuracy: 86.4 % Expected accuracy (with lambda = 1): 83.1 % (approx) Train Accuracy: 82.2 % Expected accuracy (with lambda = 1): 83.1 % (approx) Train Accuracy: 83.1 % Expected accuracy (with lambda = 1): 83.1 % (approx) Train Accuracy: 74.6 % Expected accuracy (with lambda = 1): 83.1 % (approx) Train Accuracy: 61.0 % Expected accuracy (with lambda = 1): 83.1 % (approx)
MIT
learning-phase/machine-learning/week-3/tanmai/.ipynb_checkpoints/exercise2-checkpoint.ipynb
Saharsh007/open-qas
Working directory
The notebooks that you save here persist in the host filesystem.
import os print(os.getcwd()) for d in os.listdir(): print(f'\t {d}')
/home/jovyan/work .ipynb_checkpoints demo.ipynb
Apache-2.0
work/demo.ipynb
vladiuz1/py3-jupyter-docker-compose
requirements.txt
The `requirements.txt` file contains the list of modules you want to use in your notebook. The demo `requirements.txt` includes the flask module; check that the cell below runs without errors:
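For reference, a minimal `requirements.txt` for this demo might contain nothing more than the line below (one package per line; version pins are optional and up to you):

```
flask
```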
import flask print('No problem, flask module installed.')
No problem, flask module installed.
Apache-2.0
work/demo.ipynb
vladiuz1/py3-jupyter-docker-compose
Imports
%load_ext autoreload %autoreload 2 import ib_insync print(ib_insync.__all__) import helpers.hdbg as dbg import helpers.hprint as pri import core.explore as exp import im.ib.data.extract.gateway.utils as ibutils
_____no_output_____
BSD-3-Clause
im/ib/data/extract/gateway/notebooks/Task114_Download_futures_price_data_from_IB.ipynb
alphamatic/amp
Connect
ib = ibutils.ib_connect(client_id=100, is_notebook=True)
_____no_output_____
BSD-3-Clause
im/ib/data/extract/gateway/notebooks/Task114_Download_futures_price_data_from_IB.ipynb
alphamatic/amp
Historical data
import logging #dbg.init_logger(verbosity=logging.DEBUG) dbg.init_logger(verbosity=logging.INFO) import datetime # start_ts = pd.to_datetime(pd.Timestamp("2018-02-01 06:00:00").tz_localize(tz="America/New_York")) # import datetime # #datetime.datetime.combine(start_ts, datetime.time()) # dt = start_ts.to_pydatetime() # print(dt) # #datetime.datetime.combine(dt, datetime.time()).tz_localize(tz="America/New_York") # dt.replace(hour=0, minute=0, second=0) start_ts = pd.Timestamp("2019-05-28 15:00").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2019-05-29 15:00").tz_localize(tz="America/New_York") barSizeSetting = "1 hour" bars = ibutils.get_data(ib, contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) start_ts = pd.Timestamp("2019-05-28 15:00").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2019-05-29 15:00").tz_localize(tz="America/New_York") barSizeSetting = "1 hour" bars = ibutils.get_data(ib, contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) start_ts = pd.Timestamp("2019-05-27").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2019-05-28").tz_localize(tz="America/New_York") barSizeSetting = "1 hour" bars = ibutils.get_data(ib, contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) start_ts2 = start_ts - pd.DateOffset(days=1) end_ts2 = end_ts + pd.DateOffset(days=1) barSizeSetting = "1 hour" bars2 = ibutils.get_data(ib, contract, start_ts2, end_ts2, barSizeSetting, whatToShow, useRTH) set(bars.index).issubset(bars2.index) start_ts = pd.Timestamp("2019-04-01 15:00").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2019-05-01 15:00").tz_localize(tz="America/New_York") df = ibutils.get_historical_data2(ib, contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) import pandas as pd contract = ib_insync.ContFuture("ES", "GLOBEX", "USD") whatToShow = 'TRADES' durationStr = '2 D' barSizeSetting = '1 min' useRTH = False #useRTH = True #start_ts = pd.to_datetime(pd.Timestamp("2018-02-01")) # Saturday June 1, 2019 end_ts = pd.to_datetime(pd.Timestamp("2019-05-30 00:00:00") + pd.DateOffset(days=1)) #end_ts = pd.to_datetime(pd.Timestamp("2019-05-30 18:00:00")) #print(start_ts, end_ts) print("end_ts=", end_ts) bars = ibutils.get_historical_data(ib, contract, end_ts, durationStr, barSizeSetting, whatToShow, useRTH) print("durationStr=%s barSizeSetting=%s useRTH=%s" % (durationStr, barSizeSetting, useRTH)) print("bars=[%s, %s]" % (bars.index[0], bars.index[-1])) print("diff=", bars.index[-1] - bars.index[0]) bars["close"].plot() pd.date_range(start='2019-04-01 00:00:00', end='2019-05-01 00:00:00', freq='2D') (bars.index[-1] - bars.index[0]) import pandas as pd # 1 = Live # 2 = Frozen # 3 = Delayed # 4 = Delayed frozen ib.reqMarketDataType(4) if False: contract = ib_insync.Stock('TSLA', 'SMART', 'USD') whatToShow = 'TRADES' elif False: contract = ib_insync.Future('ES', '202109', 'GLOBEX') whatToShow = 'TRADES' elif True: contract = ib_insync.ContFuture("ES", "GLOBEX", "USD") whatToShow = 'TRADES' else: contract = ib_insync.Forex('EURUSD') whatToShow = 'MIDPOINT' if False: durationStr = '1 Y' barSizeSetting = '1 day' #barSizeSetting='1 hour' else: durationStr = '1 D' barSizeSetting = '1 hour' print("contract=", contract) print("whatToShow=", whatToShow) print("durationStr=", durationStr) print("barSizeSetting=", barSizeSetting) #endDateTime = pd.Timestamp("2020-12-11 18:00:00") endDateTime = pd.Timestamp("2020-12-13 18:00:00") #endDateTime = "" # Get the datetime of earliest available historical data for the contract. 
start_ts = ib.reqHeadTimeStamp(contract, whatToShow=whatToShow, useRTH=True) print("start_ts=", start_ts) bars = ib.reqHistoricalData( contract, endDateTime=endDateTime, durationStr=durationStr, barSizeSetting=barSizeSetting, whatToShow=whatToShow, useRTH=True, formatDate=1) print("len(bars)=", len(bars)) print(ib_insync.util.df(bars)) ib_insync.IB.RaiseRequestErrors = True bars dbg.shutup_chatty_modules(verbose=True) import pandas as pd contract = ib_insync.ContFuture("ES", "GLOBEX", "USD") whatToShow = 'TRADES' durationStr = '2 D' barSizeSetting = '1 min' useRTH = False start_ts = pd.Timestamp("2018-01-28 15:00").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2018-02-28 15:00").tz_localize(tz="America/New_York") tasks = ibutils.get_historical_data_workload(contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) print(len(tasks)) ibutils.get_historical_data2(ib, tasks)
_____no_output_____
BSD-3-Clause
im/ib/data/extract/gateway/notebooks/Task114_Download_futures_price_data_from_IB.ipynb
alphamatic/amp
%load_ext autoreload %autoreload 2 import ib_insync print(ib_insync.__all__) import helpers.hdbg as dbg import helpers.hprint as pri import core.explore as exp import im.ib.data.extract.gateway.utils as ibutils import logging dbg.init_logger(verbosity=logging.DEBUG) #dbg.init_logger(verbosity=logging.INFO) import pandas as pd dbg.shutup_chatty_modules(verbose=False) ib = ibutils.ib_connect(8, is_notebook=True) # #start_ts = pd.Timestamp("2018-01-28 15:00").tz_localize(tz="America/New_York") # start_ts = pd.Timestamp("2018-01-28 18:00").tz_localize(tz="America/New_York") # end_ts = pd.Timestamp("2018-02-28 15:00").tz_localize(tz="America/New_York") # dates = [] # if (start_ts.hour, start_ts.minute) > (18, 0): # dates = [start_ts] # # Align start_ts to 18:00. # start_ts = start_ts.replace(hour=18, minute=18) # elif (start_ts.hour, start_ts.minute) < (18, 0): # dates = [start_ts] # # Align start_ts to 18:00 of the day before. # start_ts = start_ts.replace(hour=18, minute=18) # start_ts -= pd.DateOffset(days=1) # dbg.dassert_eq((start_ts.hour, start_ts.minute), (18, 0)) # dates += pd.date_range(start=start_ts, end=end_ts, freq='2D').tolist() # print(dates) # import datetime # start_ts = pd.Timestamp(datetime.datetime(2017,6,25,0,31,53,993000)) # print(start_ts) # start_ts.round('1s') contract = ib_insync.ContFuture("ES", "GLOBEX", "USD") whatToShow = 'TRADES' durationStr = '2 D' barSizeSetting = '1 hour' useRTH = False start_ts = pd.Timestamp("2018-01-28 15:00").tz_localize(tz="America/New_York") end_ts = pd.Timestamp("2018-02-01 15:00").tz_localize(tz="America/New_York") tasks = ibutils.get_historical_data_workload(ib, contract, start_ts, end_ts, barSizeSetting, whatToShow, useRTH) df = ibutils.get_historical_data2(tasks) df df2 = ibutils.get_historical_data_with_IB_loop(ib, contract, start_ts, end_ts, durationStr, barSizeSetting, whatToShow, useRTH) pri.print(df.index df2 contract = ib_insync.ContFuture("ES", "GLOBEX", "USD") whatToShow = "TRADES" durationStr = '1 D' barSizeSetting = '1 hour' # 2021-02-18 is a Thursday and it's full day. start_ts = pd.Timestamp("2021-02-17 00:00:00") end_ts = pd.Timestamp("2021-02-18 23:59:59") useRTH = False df, return_ts_seq = ibutils.get_historical_data_with_IB_loop(ib, contract, start_ts, end_ts, durationStr, barSizeSetting, whatToShow, useRTH, return_ts_seq=True) print(return_ts_seq) print("\n".join(map(str, return_ts_seq)))
(datetime.datetime(2021, 2, 18, 23, 59, 59, tzinfo=<DstTzInfo 'America/New_York' EST-1 day, 19:00:00 STD>), Timestamp('2021-02-18 18:00:00-0500', tz='America/New_York')) (Timestamp('2021-02-18 18:00:00-0500', tz='America/New_York'), Timestamp('2021-02-17 18:00:00-0500', tz='America/New_York')) (Timestamp('2021-02-17 18:00:00-0500', tz='America/New_York'), Timestamp('2021-02-16 18:00:00-0500', tz='America/New_York'))
BSD-3-Clause
im/ib/data/extract/gateway/notebooks/Task114_Download_futures_price_data_from_IB.ipynb
alphamatic/amp
As explained in the [Composing Data](Composing_Data.ipynb) and [Containers](Containers.ipynb) tutorials, HoloViews allows you to build up hierarchical containers that express the natural relationships between your data items, in whatever multidimensional space best characterizes your application domain. Once your data is in such containers, individual visualizations are then made by choosing subregions of this multidimensional space, either smaller numeric ranges (as in cropping of photographic images), or lower-dimensional subsets (as in selecting frames from a movie, or a specific movie from a large library), or both (as in selecting a cropped version of a frame from a specific movie from a large library). In this tutorial, we show how to specify such selections, using four different (but related) operations that can act on an element ``e``:

| Operation | Example syntax | Description |
|:---------------|:----------------:|:-------------|
| **indexing** | e[5.5], e[3,5.5] | Selecting a single data value, returning one actual numerical value from the existing data |
| **slice** | e[3:5.5], e[3:5.5,0:1] | Selecting a contiguous portion from an Element, returning the same type of Element |
| **sample** | e.sample(y=5.5), e.sample((3,3)) | Selecting one or more regularly spaced data values, returning a new type of Element |
| **select** | e.select(y=5.5), e.select(y=(3,5.5)) | More verbose notation covering all supported slice and index operations by dimension name |

These operations are all concerned with selecting some subset of your data values, without combining across data values (e.g. averaging) or otherwise transforming your actual data. In the [Columnar Data](Columnar_Data.ipynb) tutorial we will look at other operations on the data that reduce, summarize, or transform the data in other ways, rather than selections as covered here.

We'll be going through each operation in detail and provide a visual illustration to help make the semantics of each operation clear. This Tutorial assumes that you are familiar with continuous and discrete coordinate systems, so please review our [Continuous Coordinates Tutorial](Continuous_Coordinates.ipynb) if you have not done so already.

Indexing and slicing Elements
In the [Exploring Data Tutorial](Exploring_Data.ipynb) we saw examples of how to select individual elements embedded in a multi-dimensional space. We also briefly introduced "deep slicing" of the ``RGB`` elements to select a subregion of the images. The [Continuous Coordinates Tutorial](Continuous_Coordinates.ipynb) covered slicing and indexing in Elements representing continuous coordinate systems such as ``Image`` types. Here we'll be going through each operation in full detail, providing a visual illustration to help make the semantics of each operation clear. How the Element may be indexed depends on the key dimensions (or ``kdims``) of the Element. It is thus important to consider the nature and dimensionality of your data when choosing the Element type for it.
import numpy as np import holoviews as hv hv.notebook_extension() %opts Layout [fig_size=125] Points [size_index=None] (s=50) Scatter3D [size_index=None] %opts Bounds (linewidth=2 color='k') {+axiswise} Text (fontsize=16 color='k') Image (cmap='Reds')
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
1D Elements: Slicing and indexing Certain Chart elements support both single-dimensional indexing and slicing: ``Scatter``, ``Curve``, ``Histogram``, and ``ErrorBars``. Here we'll look at how we can easily slice a ``Histogram`` to select a subregion of it:
np.random.seed(42) edges, data = np.histogram(np.random.randn(100)) hist = hv.Histogram(edges, data) subregion = hist[0:1] hist * subregion
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
The two bins in a different color show the selected region, overlaid on top of the full histogram. We can also access the value for a specific bin in the ``Histogram``. A continuous-valued index that falls inside a particular bin will return the corresponding value or frequency.
hist[0.25], hist[0.5], hist[0.55]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
We can slice a ``Curve`` the same way:
xs = np.linspace(0, np.pi*2, 21) curve = hv.Curve((xs, np.sin(xs))) subregion = curve[np.pi/2:np.pi*1.5] curve * subregion * hv.Scatter(curve)
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Here again the region in a different color is the specified subregion, and we've also marked each discrete point with a dot using the ``Scatter`` ``Element``. As before we can also get the value for a specific sample point; whatever x-index is provided will snap to the closest sample point and return the dependent value:
curve[4.05], curve[4.1], curve[4.17], curve[4.3]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
It is important to note that an index (or a list of indices, as for the 2D and 3D cases below) will always return the raw indexed (dependent) value, i.e. a number. A slice (indicated with `:`), on the other hand, will retain the Element type even in cases where the plot might not be useful, such as having only a single value, two values, or no value at all in that range:
curve[4:4.5]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
2D and 3D Elements: slicing For data defined in a 2D space, there are 2D equivalents of the 1D Curve and Scatter types. A ``Points``, for example, can be thought of as a number of points in a 2D space.
r = np.arange(0, 1, 0.005) xs, ys = (r * fn(85*np.pi*r) for fn in (np.cos, np.sin)) paths = hv.Points((xs, ys)) paths + paths[0:1, 0:1]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
However, indexing is not supported in this space, because there could be many possible points near a given set of coordinates, and finding the nearest one would require a search across potentially incommensurable dimensions, which is poorly defined and difficult to support.Slicing in 3D works much like slicing in 2D, but indexing is not supported for the same reason as in 2D:
xs = np.linspace(0, np.pi*8, 201) scatter = hv.Scatter3D((xs, np.sin(xs), np.cos(xs))) scatter + scatter[5:10, :, 0:]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
2D Raster and Image: slicing and indexing
Raster and the various other image-like objects (Images, RGB, HSV, etc.) can all be sliced and indexed, as can Surface, because they all have an underlying regular grid of key dimension values:
%opts Image (cmap='Blues') Bounds (color='red') np.random.seed(0) extents = (0, 0, 10, 10) img = hv.Image(np.random.rand(10, 10), bounds=extents) img_slice = img[1:9,4:5] box = hv.Bounds((1,4,9,5)) img*box + img_slice img[4.2,4.2], img[4.3,4.2], img[5.0,4.2]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Sampling
Sampling is essentially a process of indexing an Element at multiple index locations and collecting the results. Thus any Element that can be indexed can also be sampled. Compared to regular indexing, sampling is different in that multiple indices may be supplied at the same time. Also, indexing will only return the value at that location, whereas the return type from a sampling operation is another ``Element`` type, usually either a ``Table`` or a ``Curve``, to allow both key and value dimensions to be returned.

Sampling Elements
Sampling can use either an explicit list of samples, or the samples can be passed as keyword arguments, one per dimension. We'll start by taking a single sample of an Image object, to make it clear how sampling and indexing are similar operations yet different in their results:
img_coords = hv.Points(img.table(), extents=extents) labeled_img = img * img_coords * hv.Points([img.closest([(5.1,4.9)])]).opts(style=dict(color='r')) img + labeled_img + img.sample([(5.1,4.9)]) img[5.1,4.9]
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Here, the output of the indexing operation is the value (0.1965823616800535) from the location closest to the specified coordinates (5.1, 4.9), whereas ``.sample()`` returns a Table that lists both the coordinates *and* the value, and slicing (in the previous section) returns an Element of the same type, not a Table. Next we can try sampling along only one Dimension on our 2D Image, leaving us with a 1D Element (in this case a ``Curve``):
sampled = img.sample(y=5) labeled_img = img * img_coords * hv.Points(zip(sampled['x'], [img.closest(y=5)]*10)) img + labeled_img + sampled
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Sampling works on any regularly sampled Element type. For example, we can select multiple samples along the x-axis of a Curve.
xs = np.arange(10) samples = [2, 4, 6, 8] curve = hv.Curve(zip(xs, np.sin(xs))) curve_samples = hv.Scatter(zip(xs, [0] * 10)) * hv.Scatter(zip(samples, [0]*len(samples))) curve + curve_samples + curve.sample(samples)
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Sampling HoloMaps
Sampling is often useful when you have more data than you wish to visualize or analyze at one time. First, let's create a HoloMap containing a number of observations of some noisy data.
obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=extents) for i in range(3)}, kdims=['Observation'])
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
HoloMaps also provide additional functionality to perform regular sampling on your data. In this case we'll take 3x3 subsamples of each of the Images.
sample_style = dict(edgecolors='k', alpha=1) all_samples = obs_hmap.table().to.scatter3d().opts(style=dict(alpha=0.15)) sampled = obs_hmap.sample((3,3)) subsamples = sampled.to.scatter3d().opts(style=sample_style) all_samples * subsamples + sampled
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
By supplying bounds as a (left, bottom, right, top) tuple, we can also sample a subregion of our images:
sampled = obs_hmap.sample((3,3), bounds=(2,5,5,10)) subsamples = sampled.to.scatter3d().opts(style=sample_style) all_samples * subsamples + sampled
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Since this kind of sampling is only well supported for continuous coordinate systems, it can only be applied to Image types for now.

Sampling Charts
Sampling Chart-type Elements like Curve, Scatter, and Histogram is only supported by providing an explicit list of samples, since those Elements have no underlying regular grid.
xs = np.arange(10) extents = (0, 0, 2, 10) curve = hv.HoloMap({(i) : hv.Curve(zip(xs, np.sin(xs)*i)) for i in np.linspace(0.5, 1.5, 3)}, kdims=['Observation']) all_samples = curve.table().to.points() sampled = curve.sample([0, 2, 4, 6, 8]) sampling = all_samples * sampled.to.points(extents=extents).opts(style=dict(color='r')) sampling + sampled
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
Alternatively, you can always deconstruct your data into a Table (see the [Columnar Data](Columnar_Data.ipynb) tutorial) and perform ``select`` operations instead. This is also the easiest way to sample ``NdElement`` types like Bars. Individual samples should be supplied as a set, while ranges can be specified as a two-tuple.
sampled = curve.table().select(Observation=(0, 1.1), x={0, 2, 4, 6, 8}) sampling = all_samples * sampled.to.points(extents=extents).opts(style=dict(color='r')) sampling + sampled
_____no_output_____
BSD-3-Clause
doc/Tutorials/Sampling_Data.ipynb
stuarteberg/holoviews
C6.01: Introduction
from sklearn.datasets.samples_generator import make_blobs n_clusters = 2 centers = [[-1,-1],[1,1]] stdevs = [0.4, 0.6] X, y = make_blobs(n_samples = 20, centers = centers, cluster_std = stdevs, random_state = 1200000) from sklearn.cluster import KMeans model = KMeans(n_clusters = n_clusters) preds = model.fit_predict(X) plt.figure(figsize = [3,3]) for k, col in zip(range(n_clusters), plot_colors[:n_clusters]): my_members = (preds == k) plt.scatter(X[my_members, 0], X[my_members, 1], s = 100, c = col) plt.xticks([]) plt.yticks([]) ax = plt.gca() [side.set_visible(False) for side in ax.spines.values()] plt.axis('square') plt.savefig('C01_TypesOfUL_01.png', transparent = True) n_points = 10 X = np.random.RandomState(1800000).multivariate_normal([0,0], [[1,0.8],[0.8,1]], n_points) model = PCA() model.fit_transform(X) pc1 = model.components_[0,:] X_center = X.mean(axis=0) X_new = np.matmul( np.dot(X - X_center, pc1).reshape(-1,1), pc1.reshape(1,-1) ) + X_center plt.figure(figsize = [3,3]) plt.scatter(X[:,0], X[:,1], s = 100, c = '#ffffff') plt.plot([X_new[:,0].min(), X_new[:,0].max()], [X_new[:,1].min(), X_new[:,1].max()], '-', c = plot_lcolors[0], lw = 4, zorder = 0) plt.scatter(X_new[:,0], X_new[:,1], s = 100, c = plot_colors[0]) for x, x_new in zip(X, X_new): plt.plot([x[0],x_new[0]], [x[1],x_new[1]], '--', c = plot_lcolors[0], lw = 2, zorder = 0) plt.xticks([]) plt.yticks([]) ax = plt.gca() [side.set_visible(False) for side in ax.spines.values()] plt.axis('square') plt.savefig('C01_TypesOfUL_02.png', transparent = True)
_____no_output_____
MIT
course/3_unsupervised_learning/01_clustering/UL6_PCA_Storyboard_Assets.ipynb
claudiocmp/udacity-dsnd
C6.09: Dimensionality Reduction
n_points = [4, 7, 9, 9, 7, 4] n_rooms = [3, 4, 5, 6, 7, 8] size_lower = [ 600, 800, 1000, 1300, 1600, 2000] size_upper = [1000, 1300, 1600, 2000, 2400, 3000] np.random.seed(147258369) rooms = [] sizes = [] for i in range(6): # use beta dist to generate sizes, find bounds with # lower @ 0.2, upper @ 0.8. size_range = size_upper[i] - size_lower[i] size_min = size_lower[i] - size_range / 3 rooms.append(np.repeat([n_rooms[i]], n_points[i])) sizes.append(size_min + size_range * np.random.beta(5, 5, n_points[i])) X = np.hstack([np.hstack(rooms).reshape(-1,1), np.hstack(sizes).round().reshape(-1,1)]) X = StandardScaler().fit_transform(X) plt.figure(figsize = [15,15]) plt.scatter(X[:,0], X[:,1], s = 64, c = plot_colors[-1]) plt.xticks([]) plt.yticks([]) plt.xlabel('# of rooms (standardized)', fontsize = 30) plt.ylabel('Square footage (standardized)', fontsize = 30) ax = plt.gca() [side.set_linewidth(2) for side in ax.spines.values()] # plt.axis('square') plt.savefig('C09_DimensReduct_01.png', transparent = True) pca_model = PCA() X_prime = pca_model.fit_transform(X) pc1 = pca_model.components_[0,:] X_center = X.mean(axis=0) X_new = np.matmul( np.dot(X - X_center, pc1).reshape(-1,1), pc1.reshape(1,-1) ) + X_center plt.figure(figsize = [15,15]) plt.scatter(X[:,0], X[:,1], s = 64, c = plot_colors[-1]) plt.plot([X_new[:,0].min(), X_new[:,0].max()], [X_new[:,1].min(), X_new[:,1].max()], '-', c = plot_lcolors[0], lw = 4, zorder = 0) plt.scatter(X_new[:,0], X_new[:,1], s = 64, c = plot_colors[0]) for x, x_new in zip(X, X_new): plt.plot([x[0],x_new[0]], [x[1],x_new[1]], '--', c = plot_lcolors[0], lw = 2, zorder = 0) plt.xticks([]) plt.yticks([]) plt.xlabel('# of rooms (standardized)', fontsize = 30) plt.ylabel('Square footage (standardized)', fontsize = 30) ax = plt.gca() [side.set_linewidth(2) for side in ax.spines.values()] # plt.axis('square') plt.savefig('C09_DimensReduct_02.png', transparent = True) from sklearn.linear_model import LinearRegression lr_model = LinearRegression() lr_model.fit(X[:,0].reshape(-1,1),X[:,1]) y_hat = lr_model.predict(X[:,0].reshape(-1,1)) plt.figure(figsize = [15,15]) plt.scatter(X[:,0], X[:,1], s = 64, c = plot_colors[-1]) plt.plot([X[:,0].min(), X[:,0].max()], [y_hat.min(), y_hat.max()], '-', c = plot_lcolors[1], lw = 4, zorder = 0) plt.scatter(X[:,0], y_hat, s = 64, c = plot_colors[1]) plt.xticks([]) plt.yticks([]) plt.xlabel('# of rooms (standardized)', fontsize = 30) plt.ylabel('Square footage (standardized)', fontsize = 30) ax = plt.gca() [side.set_linewidth(2) for side in ax.spines.values()] # plt.axis('square') plt.savefig('C09_DimensReduct_03.png', transparent = True)
_____no_output_____
MIT
course/3_unsupervised_learning/01_clustering/UL6_PCA_Storyboard_Assets.ipynb
claudiocmp/udacity-dsnd
C6.10: PCA Properties
alt_slope = np.array([1.3,1.1]) alt_slope = alt_slope / np.sqrt((alt_slope ** 2).sum()) alt_center = np.array([-.35,-.35]) X_alt = np.matmul( np.dot(X - alt_center, alt_slope).reshape(-1,1), alt_slope.reshape(1,-1) ) + alt_center plt.figure(figsize = [15,15]) plt.scatter(X[:,0], X[:,1], s = 64, c = plot_colors[-1]) plt.plot([X_new[:,0].min(), X_new[:,0].max()], [X_new[:,1].min(), X_new[:,1].max()], '-', c = plot_lcolors[0], lw = 4, zorder = 0) plt.scatter(X_new[:,0], X_new[:,1], s = 64, c = plot_colors[0]) for x, x_new in zip(X, X_new): plt.plot([x[0],x_new[0]], [x[1],x_new[1]], '--', c = plot_lcolors[0], lw = 2, zorder = 0) # plt.plot([X_alt[:,0].min(), X_alt[:,0].max()], [X_alt[:,1].min(), X_alt[:,1].max()], # '-', c = plot_lcolors[1], lw = 4, zorder = 0) # plt.scatter(X_alt[:,0], X_alt[:,1], s = 64, c = plot_colors[1]) # for x, x_alt in zip(X, X_alt): # plt.plot([x[0],x_alt[0]], [x[1],x_alt[1]], '--', c = plot_lcolors[1], lw = 2, zorder = 0) plt.xticks([]) plt.yticks([]) plt.xlabel('# of rooms (standardized)', fontsize = 30) plt.ylabel('Square footage (standardized)', fontsize = 30) ax = plt.gca() [side.set_linewidth(2) for side in ax.spines.values()] plt.axis('square') plt.savefig('C10_PCAProps_04.png', transparent = True) np.sqrt(((X - X_new) ** 2).sum(axis=1)).sum() np.sqrt(((X - X_alt) ** 2).sum(axis=1)).sum()
_____no_output_____
MIT
course/3_unsupervised_learning/01_clustering/UL6_PCA_Storyboard_Assets.ipynb
claudiocmp/udacity-dsnd
Real case
First, the real case is relatively simple. The integral that we want to do is:

$$k_\Delta(\tau) = \frac{1}{\Delta^2}\int_{t_i-\Delta/2}^{t_i+\Delta/2} \mathrm{d}t \,\int_{t_j-\Delta/2}^{t_j+\Delta/2}\mathrm{d}t^\prime\,k(t - t^\prime)$$

For celerite kernels it helps to make the assumption that $t_j + \Delta/2 < t_i - \Delta/2$ (in other words, the exposure times do not overlap).
import sympy as sm cr = sm.symbols("cr", positive=True) ti, tj, dt, t, tp = sm.symbols("ti, tj, dt, t, tp", real=True) k = sm.exp(-cr*(t - tp)) k0 = k.subs([(t, ti), (tp, tj)]) kint = sm.simplify(sm.integrate( sm.integrate(k, (t, ti-dt/2, ti+dt/2)), (tp, tj-dt/2, tj+dt/2)) / dt**2) res = sm.simplify(kint / k0) print(res)
(exp(2*cr*dt) - 2*exp(cr*dt) + 1)*exp(-cr*dt)/(cr**2*dt**2)
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
This is the factor that we want. Let's make sure that it is identical to what we have in the note.
kD = 2 * (sm.cosh(cr*dt) - 1) / (cr*dt)**2 sm.simplify(res.expand() - kD.expand())
_____no_output_____
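As an extra numerical sanity check (my addition, not part of the original notebook), the closed-form factor can be compared against a brute-force double integral for one assumed set of parameter values (c = 1.3, Δ = 0.08, t_i = 5, t_j = 2 are arbitrary choices satisfying the non-overlap assumption):

```python
# Numerical spot check of k_Delta(tau) = k(tau) * 2*(cosh(c*dt) - 1) / (c*dt)**2
import numpy as np
from scipy.integrate import dblquad

c_val, dt_val, ti_val, tj_val = 1.3, 0.08, 5.0, 2.0
integrand = lambda tp_, t_: np.exp(-c_val * (t_ - tp_))   # dblquad passes (inner, outer)
num, _ = dblquad(integrand,
                 ti_val - dt_val / 2, ti_val + dt_val / 2,
                 lambda t_: tj_val - dt_val / 2, lambda t_: tj_val + dt_val / 2)
num /= dt_val**2
analytic = np.exp(-c_val * (ti_val - tj_val)) * 2 * (np.cosh(c_val * dt_val) - 1) / (c_val * dt_val)**2
print(num, analytic)   # the two numbers should agree to numerical precision
```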
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Excellent. Let's double check that this reduces to the original kernel in the limit $\Delta \to 0$:
sm.limit(kD, dt, 0)
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Complex case
The complex case proceeds similarly, but it's a bit more involved. In this case,

$$k(\tau) = (a + i\,b)\,\exp(-(c+i\,d)\,(t_i-t_j))$$
a, b, c, d = sm.symbols("a, b, c, d", real=True, positive=True) k = sm.exp(-(c + sm.I*d) * (t - tp)) k0 = k.subs([(t, ti), (tp, tj)]) kint = sm.simplify(sm.integrate(k, (t, ti-dt/2, ti+dt/2)) / dt) kint = sm.simplify(sm.integrate(kint.expand(), (tp, tj-dt/2, tj+dt/2)) / dt) print(sm.simplify(kint / k0))
(2*cos(I*dt*(c + I*d)) - 2)/(dt**2*(c**2 + 2*I*c*d - d**2))
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
That doesn't look so bad! But I'm going to re-write it by hand and make sure that it's correct:
coeff = (c-sm.I*d)**2 / (dt*(c**2+d**2))**2 coeff *= (sm.exp((c+sm.I*d)*dt) + sm.exp(-(c+sm.I*d)*dt)-2) sm.simplify(coeff * k0 - kint)
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Good. Now we need to work out nice expressions for the real and imaginary parts of this. First, the real part. I found that it was easiest to look at the prefactors for the trig functions directly and simplify those. Here we go:
res = (a+sm.I*b) * coeff A = sm.simplify((res.expand(complex=True) + sm.conjugate(res).expand(complex=True)) / 2) sm.simplify(sm.poly(A, sm.cos(dt*d)).coeff_monomial(sm.cos(dt*d))) sm.simplify(sm.poly(sm.poly(A, sm.cos(dt*d)).coeff_monomial(1), sm.sin(dt*d)).coeff_monomial(sm.sin(dt*d))) sm.simplify(sm.poly(sm.poly(A, sm.cos(dt*d)).coeff_monomial(1), sm.sin(dt*d)).coeff_monomial(1))
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Then, same thing for the imaginary part:
B = sm.simplify(-sm.I * (res.expand(complex=True) - sm.conjugate(res).expand(complex=True)) / 2) sm.simplify(sm.poly(B, sm.cos(dt*d)).coeff_monomial(sm.cos(dt*d))) sm.simplify(sm.poly(sm.poly(B, sm.cos(dt*d)).coeff_monomial(1), sm.sin(dt*d)).coeff_monomial(sm.sin(dt*d))) sm.simplify(sm.poly(sm.poly(B, sm.cos(dt*d)).coeff_monomial(1), sm.sin(dt*d)).coeff_monomial(1))
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Ok. Now let's make sure that the simplified expressions are right.
C1 = (a*c**2 - a*d**2 + 2*b*c*d) C2 = (b*c**2 - b*d**2 - 2*a*c*d) cos_term = (sm.exp(c*dt) + sm.exp(-c*dt)) * sm.cos(d*dt) - 2 sin_term = (sm.exp(c*dt) - sm.exp(-c*dt)) * sm.sin(d*dt) denom = dt**2 * (c**2 + d**2)**2 A0 = (C1 * cos_term - C2 * sin_term) / denom B0 = (C2 * cos_term + C1 * sin_term) / denom sm.simplify(A.expand() - A0.expand()) sm.simplify(B.expand() - B0.expand())
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Finally let's rewrite things in terms of hyperbolic trig functions.
sm.simplify(2*(sm.cosh(c*dt) * sm.cos(d*dt) - 1).expand() - cos_term.expand()) sm.simplify(2*(sm.sinh(c*dt) * sm.sin(d*dt)).expand() - sin_term.expand())
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Looks good! Let's make sure that this actually reproduces the target integral:
sm.simplify(((a+sm.I*b)*kint/k0 - (A+sm.I*B)).expand(complex=True))
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Finally, let's make sure that this reduces to the original kernel when $\Delta \to 0$:
sm.limit(A, dt, 0), sm.limit(B, dt, 0)
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Overlapping exposures & the power spectrum
If we directly evaluate the power spectrum of this kernel, we'll have some issues because there will be power from lags where our assumption of non-overlapping exposures breaks down. Instead, we can evaluate the correct power spectrum by realizing that the integrals that we're doing are convolutions. Therefore, the power spectrum of the integrated kernel will be the product of the original power spectrum with the square of the Fourier transform of the top-hat exposure function.
omega = sm.symbols("omega", real=True) sm.simplify(sm.integrate(sm.exp(sm.I * t * omega) / dt, (t, -dt / 2, dt / 2)))
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Therefore, the integrated power spectrum is

$$S_\Delta(\omega) = \frac{\sin^2(\Delta\,\omega/2)}{(\Delta\,\omega/2)^2}\,S(\omega) = \mathrm{sinc}^2(\Delta\,\omega/2)\,S(\omega)$$

For overlapping exposures, some care must be taken when computing the autocorrelation because of the absolute value. This also means that celerite cannot be used (as far as I can tell) to evaluate exposure time integrated models with overlapping exposures. In this case, the integral we want to do is:

$$k_\Delta(\tau) = \frac{1}{\Delta^2}\int_{t_i-\Delta/2}^{t_i+\Delta/2} \mathrm{d}t \,\int_{t_j-\Delta/2}^{t_j+\Delta/2}\mathrm{d}t^\prime\,k(|t - t^\prime|)$$

which can be broken into three integrals when $\tau = |t_i - t_j| \le \Delta$ (assuming still that $t_i \ge t_j$):

$$\Delta^2\,k_\Delta(\tau)= \int_{t_j+\Delta/2}^{t_i+\Delta/2} \mathrm{d}t \,\int_{t_j-\Delta/2}^{t_j+\Delta/2}\mathrm{d}t^\prime\,k(t - t^\prime)+ \int_{t_i-\Delta/2}^{t_j+\Delta/2} \mathrm{d}t \,\int_{t_j-\Delta/2}^{t}\mathrm{d}t^\prime\,k(t - t^\prime)+ \int_{t_i-\Delta/2}^{t_j+\Delta/2} \mathrm{d}t \,\int_{t}^{t_j+\Delta/2}\mathrm{d}t^\prime\,k(t^\prime - t)$$
tau = sm.symbols("tau", real=True, positive=True) kp = sm.exp(-cr*(t - tp)) km = sm.exp(-cr*(tp - t)) k1 = sm.simplify(sm.integrate( sm.integrate(kp, (tp, tj-dt/2, tj+dt/2)), (t, tj+dt/2, ti+dt/2)) / dt**2) k2 = sm.simplify(sm.integrate( sm.integrate(kp, (tp, tj-dt/2, t)), (t, ti-dt/2, tj+dt/2)) / dt**2) k3 = sm.simplify(sm.integrate( sm.integrate(km, (tp, t, tj+dt/2)), (t, ti-dt/2, tj+dt/2)) / dt**2) kD = sm.simplify((k1 + k2 + k3).expand()) res = sm.simplify(kD.subs([(ti, tau + tj)])) res kint = (2*cr*(dt-tau) + sm.exp(-cr*(dt-tau)) - 2*sm.exp(-cr*tau) + sm.exp(-cr*(dt+tau))) / (cr*dt)**2 sm.simplify(kint - res)
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Ok. That's the result for the real case. Now let's work through the result for the complex case.
arg1 = ((a+sm.I*b) * kint.subs([(cr, c+sm.I*d)])).expand(complex=True) arg2 = ((a-sm.I*b) * kint.subs([(cr, c-sm.I*d)])).expand(complex=True) res = sm.simplify((arg1 + arg2) / 2) res C1 = (a*c**2 - a*d**2 + 2*b*c*d) C2 = (b*c**2 - b*d**2 - 2*a*c*d) denom = dt**2 * (c**2 + d**2)**2 dpt = dt + tau dmt = dt - tau cos_term = sm.exp(-c*dmt)*sm.cos(d*dmt) + sm.exp(-c*dpt)*sm.cos(d*dpt) - 2*sm.exp(-c*tau)*sm.cos(d*tau) sin_term = sm.exp(-c*dmt)*sm.sin(d*dmt) + sm.exp(-c*dpt)*sm.sin(d*dpt) - 2*sm.exp(-c*tau)*sm.sin(d*tau) ktest = 2*(a*c + b*d)*(c**2+d**2)*dmt ktest += C1 * cos_term + C2 * sin_term ktest /= denom sm.simplify(ktest - res)
_____no_output_____
MIT
paper/proofs/celerite-integral.ipynb
exowanderer/exoplanet
Module1> module 1
#export def nothing(): pass
_____no_output_____
Apache-2.0
module1.ipynb
hamelsmu/nbdev_export_demo
Goal programming
**There are basically two approaches: the weighted-coefficient method (convert the goals into a single objective) and the priority-level method (convert to a single-objective model according to the different levels of importance).**
Positive and negative deviation variables: $d_i^+=\max\{f_i-d_i^0,0\},\ d_i^-=-\min\{f_i-d_i^0,0\}$; clearly $d_i^+\times d_i^-=0$ always holds.
# Example 16.3
# The goal-programming model for this problem is as follows (a solution sketch is given below):
# min z = P[0]*dminus[0] + P[1]*(dplus[1]+dminus[1]) + P[2]*(3 * (dplus[2]+dminus[2]) + dplus[3])
# 2*x[0] + 2*x[1] <= 12
# 200*x[0] + 300*x[1] + dminus[0] - dplus[0] = 1500
# 2*x[0] - x[1] + dminus[1] - dplus[1] = 0
# 4*x[0] + dminus[2] - dplus[2] = 16
# 5*x[1] + dminus[3] - dplus[3] = 15
# x[0], x[1], dminus[i], dplus[i] >= 0, i = 0, 1, 2, 3
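Since the cell above only states the model in comments, here is a minimal solution sketch (my addition, not from the original notebook) using `scipy.optimize.linprog`. The preemptive priority levels P1 >> P2 >> P3 are approximated by assumed large weights, i.e. the weighted-coefficient approach described above; the weights and variable ordering are my own choices.

```python
import numpy as np
from scipy.optimize import linprog  # the "highs" method needs a reasonably recent SciPy

P = [1e5, 1e3, 1.0]   # assumed surrogate weights for the three priority levels
# variable order: x0, x1, dm0, dp0, dm1, dp1, dm2, dp2, dm3, dp3
c = np.zeros(10)
c[2] = P[0]                 # P1 * dminus[0]
c[4] = c[5] = P[1]          # P2 * (dminus[1] + dplus[1])
c[6] = c[7] = 3 * P[2]      # P3 * 3*(dminus[2] + dplus[2])
c[9] = P[2]                 # P3 * dplus[3]

A_ub = [[2, 2, 0, 0, 0, 0, 0, 0, 0, 0]]                  # 2*x0 + 2*x1 <= 12
b_ub = [12]
A_eq = [
    [200, 300, 1, -1, 0, 0, 0, 0, 0, 0],                 # 200*x0 + 300*x1 + dm0 - dp0 = 1500
    [2, -1, 0, 0, 1, -1, 0, 0, 0, 0],                    # 2*x0 - x1 + dm1 - dp1 = 0
    [4, 0, 0, 0, 0, 0, 1, -1, 0, 0],                     # 4*x0 + dm2 - dp2 = 16
    [0, 5, 0, 0, 0, 0, 0, 0, 1, -1],                     # 5*x1 + dm3 - dp3 = 15
]
b_eq = [1500, 0, 16, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print("x =", res.x[:2])
print("(d-, d+) per goal:\n", res.x[2:].reshape(4, 2))
```

With weights this far apart, the first two priority levels are driven to zero before the third one is traded off, which mimics the preemptive P1, P2, P3 ordering.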
_____no_output_____
MIT
.ipynb_checkpoints/16 目标规划-checkpoint.ipynb
ZDDWLIG/math-model
Fully-Connected Neural Nets
In this exercise we will implement fully-connected networks using a modular approach. For each layer we will implement a `forward` and a `backward` function. The `forward` function will receive inputs, weights, and other parameters and will return both an output and a `cache` object storing data needed for the backward pass, like this:

```python
def layer_forward(x, w):
    """ Receive inputs x and weights w """
    # Do some computations ...
    z = # ... some intermediate value
    # Do some more computations ...
    out = # the output

    cache = (x, w, z, out)  # Values we need to compute gradients

    return out, cache
```

The backward pass will receive upstream derivatives and the `cache` object, and will return gradients with respect to the inputs and weights, like this:

```python
def layer_backward(dout, cache):
    """
    Receive dout (derivative of loss with respect to outputs) and cache,
    and compute derivative with respect to inputs.
    """
    # Unpack cache values
    x, w, z, out = cache

    # Use values in cache to compute derivatives
    dx = # Derivative of loss with respect to x
    dw = # Derivative of loss with respect to w

    return dx, dw
```

After implementing a bunch of layers this way, we will be able to easily combine them to build classifiers with different architectures.
# As usual, a bit of setup from __future__ import print_function import time import numpy as np import matplotlib.pyplot as plt from cs231n.classifiers.fc_net import * from cs231n.data_utils import get_CIFAR10_data from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array from cs231n.solver import Solver %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # for auto-reloading external modules # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 def rel_error(x, y): """ returns relative error """ return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y)))) # Load the (preprocessed) CIFAR10 data. data = get_CIFAR10_data() for k, v in list(data.items()): print(('%s: ' % k, v.shape))
('X_train: ', (49000, 3, 32, 32)) ('y_train: ', (49000,)) ('X_val: ', (1000, 3, 32, 32)) ('y_val: ', (1000,)) ('X_test: ', (1000, 3, 32, 32)) ('y_test: ', (1000,))
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Affine layer: forward
Open the file `cs231n/layers.py` and implement the `affine_forward` function. Once you are done you can test your implementation by running the following:
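For reference, here is a minimal sketch of one common way to write this forward pass (an illustration with a hypothetical name, not the official solution; your graded code still belongs in `cs231n/layers.py`):

```python
# Sketch: flatten each example to a row vector, then apply x.dot(w) + b.
def affine_forward_sketch(x, w, b):
    N = x.shape[0]
    out = x.reshape(N, -1).dot(w) + b   # (N, D) @ (D, M) broadcast with (M,)
    cache = (x, w, b)                   # keep inputs for the backward pass
    return out, cache
```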
# Test the affine_forward function num_inputs = 2 input_shape = (4, 5, 6) output_dim = 3 input_size = num_inputs * np.prod(input_shape) weight_size = output_dim * np.prod(input_shape) x = np.linspace(-0.1, 0.5, num=input_size).reshape(num_inputs, *input_shape) w = np.linspace(-0.2, 0.3, num=weight_size).reshape(np.prod(input_shape), output_dim) b = np.linspace(-0.3, 0.1, num=output_dim) out, _ = affine_forward(x, w, b) correct_out = np.array([[ 1.49834967, 1.70660132, 1.91485297], [ 3.25553199, 3.5141327, 3.77273342]]) # Compare your output with ours. The error should be around e-9 or less. print('Testing affine_forward function:') print('difference: ', rel_error(out, correct_out))
Testing affine_forward function: difference: 9.769849468192957e-10
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Affine layer: backward
Now implement the `affine_backward` function and test your implementation using numeric gradient checking.
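Again as an illustrative sketch (not the official solution), the gradients follow directly from out = x_flat · w + b:

```python
def affine_backward_sketch(dout, cache):
    x, w, b = cache
    N = x.shape[0]
    dx = dout.dot(w.T).reshape(x.shape)   # back-project and restore the input shape
    dw = x.reshape(N, -1).T.dot(dout)     # (D, N) @ (N, M)
    db = dout.sum(axis=0)                 # sum over the batch dimension
    return dx, dw, db
```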
# Test the affine_backward function np.random.seed(231) x = np.random.randn(10, 2, 3) w = np.random.randn(6, 5) b = np.random.randn(5) dout = np.random.randn(10, 5) dx_num = eval_numerical_gradient_array(lambda x: affine_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_forward(x, w, b)[0], b, dout) _, cache = affine_forward(x, w, b) dx, dw, db = affine_backward(dout, cache) # The error should be around e-10 or less print('Testing affine_backward function:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db))
Testing affine_backward function: dx error: 5.399100368651805e-11 dw error: 9.904211865398145e-11 db error: 2.4122867568119087e-11
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
ReLU activation: forward

Implement the forward pass for the ReLU activation function in the `relu_forward` function and test your implementation using the following:
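ReLU is elementwise `max(0, x)`, so the forward pass is essentially a one-liner. A sketch (the function name and cache choice are illustrative assumptions):

```python
def relu_forward_sketch(x):
    # Illustrative sketch, not the course's reference code.
    out = np.maximum(0, x)   # elementwise max with zero
    cache = x                # the backward pass only needs the input
    return out, cache
```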
# Test the relu_forward function x = np.linspace(-0.5, 0.5, num=12).reshape(3, 4) out, _ = relu_forward(x) correct_out = np.array([[ 0., 0., 0., 0., ], [ 0., 0., 0.04545455, 0.13636364,], [ 0.22727273, 0.31818182, 0.40909091, 0.5, ]]) # Compare your output with ours. The error should be on the order of e-8 print('Testing relu_forward function:') print('difference: ', rel_error(out, correct_out))
Testing relu_forward function: difference: 4.999999798022158e-08
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
ReLU activation: backward

Now implement the backward pass for the ReLU activation function in the `relu_backward` function and test your implementation using numeric gradient checking:
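The local gradient of ReLU is 1 where the input was positive and 0 elsewhere, so the backward pass just masks the upstream gradient. A sketch under the caching assumption above:

```python
def relu_backward_sketch(dout, cache):
    # Illustrative sketch; assumes cache is the original input x.
    x = cache
    dx = dout * (x > 0)   # pass the gradient through only where the input was positive
    return dx
```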
np.random.seed(231) x = np.random.randn(10, 10) dout = np.random.randn(*x.shape) dx_num = eval_numerical_gradient_array(lambda x: relu_forward(x)[0], x, dout) _, cache = relu_forward(x) dx = relu_backward(dout, cache) # The error should be on the order of e-12 print('Testing relu_backward function:') print('dx error: ', rel_error(dx_num, dx))
Testing relu_backward function: dx error: 3.2756349136310288e-12
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Inline Question 1:

We've only asked you to implement ReLU, but there are a number of different activation functions that one could use in neural networks, each with its pros and cons. In particular, an issue commonly seen with activation functions is getting zero (or close to zero) gradient flow during backpropagation. Which of the following activation functions have this problem? If you consider these functions in the one dimensional case, what types of input would lead to this behaviour?

1. Sigmoid
2. ReLU
3. Leaky ReLU

Answer:

[FILL THIS IN]

"Sandwich" layers

There are some common patterns of layers that are frequently used in neural nets. For example, affine layers are frequently followed by a ReLU nonlinearity. To make these common patterns easy, we define several convenience layers in the file `cs231n/layer_utils.py`.

For now take a look at the `affine_relu_forward` and `affine_relu_backward` functions, and run the following to numerically gradient check the backward pass:
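Conceptually, these convenience layers just chain the two primitives and bundle their caches. A sketch of that composition, assuming the layer functions above are available in scope (the exact structure in `layer_utils.py` may differ):

```python
def affine_relu_forward_sketch(x, w, b):
    # Illustrative sketch of the "sandwich" pattern; names are assumptions.
    a, fc_cache = affine_forward(x, w, b)     # affine step
    out, relu_cache = relu_forward(a)         # nonlinearity
    cache = (fc_cache, relu_cache)
    return out, cache

def affine_relu_backward_sketch(dout, cache):
    fc_cache, relu_cache = cache
    da = relu_backward(dout, relu_cache)      # undo the ReLU first
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db
```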
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward np.random.seed(231) x = np.random.randn(2, 3, 4) w = np.random.randn(12, 10) b = np.random.randn(10) dout = np.random.randn(2, 10) out, cache = affine_relu_forward(x, w, b) dx, dw, db = affine_relu_backward(dout, cache) dx_num = eval_numerical_gradient_array(lambda x: affine_relu_forward(x, w, b)[0], x, dout) dw_num = eval_numerical_gradient_array(lambda w: affine_relu_forward(x, w, b)[0], w, dout) db_num = eval_numerical_gradient_array(lambda b: affine_relu_forward(x, w, b)[0], b, dout) # Relative error should be around e-10 or less print('Testing affine_relu_forward and affine_relu_backward:') print('dx error: ', rel_error(dx_num, dx)) print('dw error: ', rel_error(dw_num, dw)) print('db error: ', rel_error(db_num, db))
Testing affine_relu_forward and affine_relu_backward: dx error: 2.299579177309368e-11 dw error: 8.162011105764925e-11 db error: 7.826724021458994e-12
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Loss layers: Softmax and SVM

Now implement the loss and gradient for softmax and SVM in the `softmax_loss` and `svm_loss` functions in `cs231n/layers.py`. These should be similar to what you implemented in `cs231n/classifiers/softmax.py` and `cs231n/classifiers/linear_svm.py`.

You can make sure that the implementations are correct by running the following:
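Vectorized sketches of both losses are below. The numerically stable shift in the softmax and the margin of 1.0 in the SVM follow the usual conventions from the earlier assignments; the function names and exact details are illustrative assumptions rather than the reference code:

```python
import numpy as np

def softmax_loss_sketch(x, y):
    # Illustrative sketch. x: (N, C) class scores, y: (N,) integer labels in [0, C).
    N = x.shape[0]
    shifted = x - x.max(axis=1, keepdims=True)                        # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    loss = -log_probs[np.arange(N), y].mean()
    dx = np.exp(log_probs)                                            # softmax probabilities
    dx[np.arange(N), y] -= 1
    dx /= N
    return loss, dx

def svm_loss_sketch(x, y):
    # Illustrative sketch of the multiclass hinge loss with a margin of 1.
    N = x.shape[0]
    correct = x[np.arange(N), y][:, None]                             # (N, 1)
    margins = np.maximum(0, x - correct + 1.0)
    margins[np.arange(N), y] = 0
    loss = margins.sum() / N
    dx = (margins > 0).astype(x.dtype)
    dx[np.arange(N), y] -= dx.sum(axis=1)                             # correct class gets minus the violation count
    dx /= N
    return loss, dx
```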
np.random.seed(231) num_classes, num_inputs = 10, 50 x = 0.001 * np.random.randn(num_inputs, num_classes) y = np.random.randint(num_classes, size=num_inputs) dx_num = eval_numerical_gradient(lambda x: svm_loss(x, y)[0], x, verbose=False) loss, dx = svm_loss(x, y) # Test svm_loss function. Loss should be around 9 and dx error should be around the order of e-9 print('Testing svm_loss:') print('loss: ', loss) print('dx error: ', rel_error(dx_num, dx)) dx_num = eval_numerical_gradient(lambda x: softmax_loss(x, y)[0], x, verbose=False) loss, dx = softmax_loss(x, y) # Test softmax_loss function. Loss should be close to 2.3 and dx error should be around e-8 print('\nTesting softmax_loss:') print('loss: ', loss) print('dx error: ', rel_error(dx_num, dx))
Testing svm_loss: loss: 8.999602749096233 dx error: 1.4021566006651672e-09 Testing softmax_loss: loss: 2.3025458445007376 dx error: 8.234144091578429e-09
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Two-layer network

Open the file `cs231n/classifiers/fc_net.py` and complete the implementation of the `TwoLayerNet` class. Read through it to make sure you understand the API. You can run the cell below to test your implementation.
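The `loss` method of `TwoLayerNet` is essentially a composition of the layers above: affine, then ReLU, then affine, then softmax, plus L2 regularization on the weights. Here is a rough sketch of that flow. Whether the regularization term carries a factor of 0.5, and the exact parameter names, are assumptions based on the test cell below rather than a statement about the reference code:

```python
def two_layer_loss_sketch(params, X, y=None, reg=0.0):
    # Illustrative sketch of the TwoLayerNet.loss flow; assumes the layer
    # functions above are in scope and params holds W1, b1, W2, b2.
    W1, b1 = params['W1'], params['b1']
    W2, b2 = params['W2'], params['b2']

    # Forward pass: affine -> ReLU -> affine.
    h, cache1 = affine_relu_forward(X, W1, b1)
    scores, cache2 = affine_forward(h, W2, b2)
    if y is None:
        return scores                        # test-time: just return the class scores

    # Softmax loss plus L2 regularization on the weight matrices (0.5 factor is an assumption).
    loss, dscores = softmax_loss(scores, y)
    loss += 0.5 * reg * (np.sum(W1 * W1) + np.sum(W2 * W2))

    # Backward pass, chaining the layer backward functions.
    grads = {}
    dh, grads['W2'], grads['b2'] = affine_backward(dscores, cache2)
    _, grads['W1'], grads['b1'] = affine_relu_backward(dh, cache1)
    grads['W1'] += reg * W1
    grads['W2'] += reg * W2
    return loss, grads
```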
np.random.seed(231) N, D, H, C = 3, 5, 50, 7 X = np.random.randn(N, D) y = np.random.randint(C, size=N) std = 1e-3 model = TwoLayerNet(input_dim=D, hidden_dim=H, num_classes=C, weight_scale=std) print('Testing initialization ... ') W1_std = abs(model.params['W1'].std() - std) b1 = model.params['b1'] W2_std = abs(model.params['W2'].std() - std) b2 = model.params['b2'] assert W1_std < std / 10, 'First layer weights do not seem right' assert np.all(b1 == 0), 'First layer biases do not seem right' assert W2_std < std / 10, 'Second layer weights do not seem right' assert np.all(b2 == 0), 'Second layer biases do not seem right' print('Testing test-time forward pass ... ') model.params['W1'] = np.linspace(-0.7, 0.3, num=D*H).reshape(D, H) model.params['b1'] = np.linspace(-0.1, 0.9, num=H) model.params['W2'] = np.linspace(-0.3, 0.4, num=H*C).reshape(H, C) model.params['b2'] = np.linspace(-0.9, 0.1, num=C) X = np.linspace(-5.5, 4.5, num=N*D).reshape(D, N).T scores = model.loss(X) correct_scores = np.asarray( [[11.53165108, 12.2917344, 13.05181771, 13.81190102, 14.57198434, 15.33206765, 16.09215096], [12.05769098, 12.74614105, 13.43459113, 14.1230412, 14.81149128, 15.49994135, 16.18839143], [12.58373087, 13.20054771, 13.81736455, 14.43418138, 15.05099822, 15.66781506, 16.2846319 ]]) scores_diff = np.abs(scores - correct_scores).sum() assert scores_diff < 1e-6, 'Problem with test-time forward pass' print('Testing training loss (no regularization)') y = np.asarray([0, 5, 1]) loss, grads = model.loss(X, y) correct_loss = 3.4702243556 assert abs(loss - correct_loss) < 1e-10, 'Problem with training-time loss' model.reg = 1.0 loss, grads = model.loss(X, y) correct_loss = 26.5948426952 assert abs(loss - correct_loss) < 1e-10, 'Problem with regularization loss' # Errors should be around e-7 or less for reg in [0.0, 0.7]: print('Running numeric gradient check with reg = ', reg) model.reg = reg loss, grads = model.loss(X, y) for name in sorted(grads): f = lambda _: model.loss(X, y)[0] grad_num = eval_numerical_gradient(f, model.params[name], verbose=False) print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
Testing initialization ... Testing test-time forward pass ... Testing training loss (no regularization) Running numeric gradient check with reg = 0.0 W1 relative error: 1.83e-08 W2 relative error: 3.20e-10 b1 relative error: 9.83e-09 b2 relative error: 4.33e-10 Running numeric gradient check with reg = 0.7 W1 relative error: 2.53e-07 W2 relative error: 7.98e-08 b1 relative error: 1.56e-08 b2 relative error: 9.09e-10
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Solver

Open the file `cs231n/solver.py` and read through it to familiarize yourself with the API. You also need to implement the `sgd` function in `cs231n/optim.py`. After doing so, use a `Solver` instance to train a `TwoLayerNet` that achieves about `36%` accuracy on the validation set.
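Vanilla SGD is a single gradient step scaled by the learning rate. A sketch that follows a `(w, dw, config) -> (next_w, config)` convention for the update rules in `optim.py`; treat the signature and the default learning rate as assumptions and check the file:

```python
def sgd_sketch(w, dw, config=None):
    # Illustrative sketch; the signature and default value are assumptions.
    if config is None:
        config = {}
    config.setdefault('learning_rate', 1e-2)  # default step size, illustrative value
    w -= config['learning_rate'] * dw         # vanilla gradient descent step
    return w, config
```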
input_size = 32 * 32 * 3 hidden_size = 50 num_classes = 10 model = TwoLayerNet(input_size, hidden_size, num_classes) solver = None ############################################################################## # TODO: Use a Solver instance to train a TwoLayerNet that achieves about 36% # # accuracy on the validation set. # ############################################################################## # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** data = { 'X_train': data['X_train'], 'y_train': data['y_train'], 'X_val': data['X_val'], 'y_val': data['y_val'], } solver = Solver(model, data, update_rule='sgd', optim_config={ 'learning_rate': 1e-3, }, lr_decay=0.95, num_epochs=10, batch_size=100, print_every=100) solver.train() # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ############################################################################## # END OF YOUR CODE # ##############################################################################
(Iteration 1 / 4900) loss: 2.300089 (Epoch 0 / 10) train acc: 0.171000; val_acc: 0.170000 (Iteration 101 / 4900) loss: 1.782419 (Iteration 201 / 4900) loss: 1.803466 (Iteration 301 / 4900) loss: 1.712676 (Iteration 401 / 4900) loss: 1.693946 (Epoch 1 / 10) train acc: 0.399000; val_acc: 0.428000 (Iteration 501 / 4900) loss: 1.711237 (Iteration 601 / 4900) loss: 1.443683 (Iteration 701 / 4900) loss: 1.574904 (Iteration 801 / 4900) loss: 1.526751 (Iteration 901 / 4900) loss: 1.352554 (Epoch 2 / 10) train acc: 0.463000; val_acc: 0.443000 (Iteration 1001 / 4900) loss: 1.340071 (Iteration 1101 / 4900) loss: 1.386843 (Iteration 1201 / 4900) loss: 1.489919 (Iteration 1301 / 4900) loss: 1.353077 (Iteration 1401 / 4900) loss: 1.467951 (Epoch 3 / 10) train acc: 0.477000; val_acc: 0.430000 (Iteration 1501 / 4900) loss: 1.337133 (Iteration 1601 / 4900) loss: 1.426819 (Iteration 1701 / 4900) loss: 1.348675 (Iteration 1801 / 4900) loss: 1.412626 (Iteration 1901 / 4900) loss: 1.354764 (Epoch 4 / 10) train acc: 0.497000; val_acc: 0.467000 (Iteration 2001 / 4900) loss: 1.422221 (Iteration 2101 / 4900) loss: 1.360665 (Iteration 2201 / 4900) loss: 1.546539 (Iteration 2301 / 4900) loss: 1.196704 (Iteration 2401 / 4900) loss: 1.401958 (Epoch 5 / 10) train acc: 0.508000; val_acc: 0.483000 (Iteration 2501 / 4900) loss: 1.243567 (Iteration 2601 / 4900) loss: 1.513808 (Iteration 2701 / 4900) loss: 1.266416 (Iteration 2801 / 4900) loss: 1.485956 (Iteration 2901 / 4900) loss: 1.049706 (Epoch 6 / 10) train acc: 0.533000; val_acc: 0.492000 (Iteration 3001 / 4900) loss: 1.264002 (Iteration 3101 / 4900) loss: 1.184786 (Iteration 3201 / 4900) loss: 1.231162 (Iteration 3301 / 4900) loss: 1.353725 (Iteration 3401 / 4900) loss: 1.217830 (Epoch 7 / 10) train acc: 0.545000; val_acc: 0.489000 (Iteration 3501 / 4900) loss: 1.446003 (Iteration 3601 / 4900) loss: 1.152495 (Iteration 3701 / 4900) loss: 1.421970 (Iteration 3801 / 4900) loss: 1.222752 (Iteration 3901 / 4900) loss: 1.213468 (Epoch 8 / 10) train acc: 0.546000; val_acc: 0.474000 (Iteration 4001 / 4900) loss: 1.187435 (Iteration 4101 / 4900) loss: 1.284799 (Iteration 4201 / 4900) loss: 1.135251 (Iteration 4301 / 4900) loss: 1.212217 (Iteration 4401 / 4900) loss: 1.213544 (Epoch 9 / 10) train acc: 0.586000; val_acc: 0.486000 (Iteration 4501 / 4900) loss: 1.306174 (Iteration 4601 / 4900) loss: 1.213528 (Iteration 4701 / 4900) loss: 1.220260 (Iteration 4801 / 4900) loss: 1.233231 (Epoch 10 / 10) train acc: 0.545000; val_acc: 0.483000
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Debug the training

With the default parameters we provided above, you should get a validation accuracy of about 0.36. This isn't very good.

One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.

Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
# Run this cell to visualize training loss and train / val accuracy plt.subplot(2, 1, 1) plt.title('Training loss') plt.plot(solver.loss_history, 'o') plt.xlabel('Iteration') plt.subplot(2, 1, 2) plt.title('Accuracy') plt.plot(solver.train_acc_history, '-o', label='train') plt.plot(solver.val_acc_history, '-o', label='val') plt.plot([0.5] * len(solver.val_acc_history), 'k--') plt.xlabel('Epoch') plt.legend(loc='lower right') plt.gcf().set_size_inches(15, 12) plt.show() from cs231n.vis_utils import visualize_grid # Visualize the weights of the network def show_net_weights(net): W1 = net.params['W1'] W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2) plt.imshow(visualize_grid(W1, padding=3).astype('uint8')) plt.gca().axis('off') plt.show() show_net_weights(model)
_____no_output_____
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Tune your hyperparameters

**What's wrong?** Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.

**Tuning.** Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using neural networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.

**Approximate results.** You should aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.

**Experiment:** Your goal in this exercise is to get as good a result on CIFAR-10 as you can (52% could serve as a reference) with a fully-connected neural network. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, or adding features to the solver, etc.).
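One way to automate the sweep is to sample hyperparameters at random instead of enumerating a fixed grid. The ranges, trial count, and shorter training schedule below are illustrative assumptions, not tuned recommendations:

```python
import numpy as np

# Hypothetical random-search sweep; reuses the data dict, TwoLayerNet, and Solver
# already in scope in this notebook. All ranges are illustrative.
results = {}
for trial in range(10):
    lr = 10 ** np.random.uniform(-4, -2)            # sample the learning rate log-uniformly
    reg = 10 ** np.random.uniform(-2, 0)            # sample the regularization strength
    hidden = int(np.random.choice([50, 100, 200]))  # sample the hidden layer size
    model = TwoLayerNet(32 * 32 * 3, hidden, 10, reg=reg)
    solver = Solver(model, data,
                    update_rule='sgd',
                    optim_config={'learning_rate': lr},
                    lr_decay=0.95, num_epochs=5, batch_size=100,
                    print_every=1000)
    solver.train()
    results[(lr, reg, hidden)] = solver.check_accuracy(data['X_val'], data['y_val'])

# Report the configurations sorted by validation accuracy.
for (lr, reg, hidden), acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print('lr %.2e reg %.2e hidden %d -> val acc %.3f' % (lr, reg, hidden, acc))
```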
best_model = None best_acc = 0 ################################################################################# # TODO: Tune hyperparameters using the validation set. Store your best trained # # model in best_model. # # # # To help debug your network, it may help to use visualizations similar to the # # ones we used above; these visualizations will have significant qualitative # # differences from the ones we saw above for the poorly tuned network. # # # # Tweaking hyperparameters by hand can be fun, but you might find it useful to # # write code to sweep through possible combinations of hyperparameters # # automatically like we did on thexs previous exercises. # ################################################################################# # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** learning_rates = [1e-3] regularization_strengths = [0.05, 0.1, 0.15] # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** for (lr, reg) in [(lr, reg) for lr in learning_rates for reg in regularization_strengths]: model = TwoLayerNet(input_size, hidden_size, num_classes, reg=reg) solver = None data = { 'X_train': data['X_train'], 'y_train': data['y_train'], 'X_val': data['X_val'], 'y_val': data['y_val'] } solver = Solver(model, data, update_rule='sgd', optim_config={ 'learning_rate': lr, }, lr_decay=0.95, num_epochs=10, batch_size=100, print_every=100) solver.train() acc = solver.check_accuracy(data['X_val'], data['y_val']) print('lr %e reg %e val accuracy: %f' % (lr, reg, acc)) if(acc > best_acc): best_acc = acc best_model = model # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)***** ################################################################################ # END OF YOUR CODE # ################################################################################
(Iteration 1 / 4900) loss: 2.306069 (Epoch 0 / 10) train acc: 0.104000; val_acc: 0.114000 (Iteration 101 / 4900) loss: 1.724592 (Iteration 201 / 4900) loss: 1.782996 (Iteration 301 / 4900) loss: 1.663951 (Iteration 401 / 4900) loss: 1.639033 (Epoch 1 / 10) train acc: 0.443000; val_acc: 0.435000 (Iteration 501 / 4900) loss: 1.732685 (Iteration 601 / 4900) loss: 1.450637 (Iteration 701 / 4900) loss: 1.544135 (Iteration 801 / 4900) loss: 1.342042 (Iteration 901 / 4900) loss: 1.524098 (Epoch 2 / 10) train acc: 0.460000; val_acc: 0.468000 (Iteration 1001 / 4900) loss: 1.504554 (Iteration 1101 / 4900) loss: 1.562994 (Iteration 1201 / 4900) loss: 1.531669 (Iteration 1301 / 4900) loss: 1.580678 (Iteration 1401 / 4900) loss: 1.610020 (Epoch 3 / 10) train acc: 0.491000; val_acc: 0.468000 (Iteration 1501 / 4900) loss: 1.472904 (Iteration 1601 / 4900) loss: 1.420686 (Iteration 1701 / 4900) loss: 1.221016 (Iteration 1801 / 4900) loss: 1.438085 (Iteration 1901 / 4900) loss: 1.399311 (Epoch 4 / 10) train acc: 0.487000; val_acc: 0.470000 (Iteration 2001 / 4900) loss: 1.328356 (Iteration 2101 / 4900) loss: 1.327213 (Iteration 2201 / 4900) loss: 1.468259 (Iteration 2301 / 4900) loss: 1.359908 (Iteration 2401 / 4900) loss: 1.654595 (Epoch 5 / 10) train acc: 0.544000; val_acc: 0.479000 (Iteration 2501 / 4900) loss: 1.324480 (Iteration 2601 / 4900) loss: 1.415154 (Iteration 2701 / 4900) loss: 1.388397 (Iteration 2801 / 4900) loss: 1.282842 (Iteration 2901 / 4900) loss: 1.567757 (Epoch 6 / 10) train acc: 0.522000; val_acc: 0.476000 (Iteration 3001 / 4900) loss: 1.198947 (Iteration 3101 / 4900) loss: 1.411820 (Iteration 3201 / 4900) loss: 1.409265 (Iteration 3301 / 4900) loss: 1.546007 (Iteration 3401 / 4900) loss: 1.298864 (Epoch 7 / 10) train acc: 0.572000; val_acc: 0.477000 (Iteration 3501 / 4900) loss: 1.265209 (Iteration 3601 / 4900) loss: 1.261452 (Iteration 3701 / 4900) loss: 1.294712 (Iteration 3801 / 4900) loss: 1.273777 (Iteration 3901 / 4900) loss: 1.221081 (Epoch 8 / 10) train acc: 0.544000; val_acc: 0.484000 (Iteration 4001 / 4900) loss: 1.300183 (Iteration 4101 / 4900) loss: 1.377406 (Iteration 4201 / 4900) loss: 1.382470 (Iteration 4301 / 4900) loss: 1.290416 (Iteration 4401 / 4900) loss: 1.379400 (Epoch 9 / 10) train acc: 0.529000; val_acc: 0.496000 (Iteration 4501 / 4900) loss: 1.376942 (Iteration 4601 / 4900) loss: 1.132249 (Iteration 4701 / 4900) loss: 1.152085 (Iteration 4801 / 4900) loss: 1.318604 (Epoch 10 / 10) train acc: 0.570000; val_acc: 0.486000 lr 1.000000e-03 reg 5.000000e-02 val accuracy: 0.496000 (Iteration 1 / 4900) loss: 2.310147 (Epoch 0 / 10) train acc: 0.117000; val_acc: 0.103000 (Iteration 101 / 4900) loss: 1.851684 (Iteration 201 / 4900) loss: 1.804342 (Iteration 301 / 4900) loss: 1.513165 (Iteration 401 / 4900) loss: 1.557443 (Epoch 1 / 10) train acc: 0.473000; val_acc: 0.433000 (Iteration 501 / 4900) loss: 1.675617 (Iteration 601 / 4900) loss: 1.428126 (Iteration 701 / 4900) loss: 1.495615 (Iteration 801 / 4900) loss: 1.783007 (Iteration 901 / 4900) loss: 1.644330 (Epoch 2 / 10) train acc: 0.464000; val_acc: 0.471000 (Iteration 1001 / 4900) loss: 1.421902 (Iteration 1101 / 4900) loss: 1.482220 (Iteration 1201 / 4900) loss: 1.673121 (Iteration 1301 / 4900) loss: 1.560792 (Iteration 1401 / 4900) loss: 1.626430 (Epoch 3 / 10) train acc: 0.480000; val_acc: 0.443000 (Iteration 1501 / 4900) loss: 1.547922 (Iteration 1601 / 4900) loss: 1.453693 (Iteration 1701 / 4900) loss: 1.314836 (Iteration 1801 / 4900) loss: 1.390783 (Iteration 1901 / 4900) loss: 1.297606 (Epoch 4 / 10) train 
acc: 0.506000; val_acc: 0.487000 (Iteration 2001 / 4900) loss: 1.464706 (Iteration 2101 / 4900) loss: 1.364239 (Iteration 2201 / 4900) loss: 1.352896 (Iteration 2301 / 4900) loss: 1.457134 (Iteration 2401 / 4900) loss: 1.254323 (Epoch 5 / 10) train acc: 0.543000; val_acc: 0.494000 (Iteration 2501 / 4900) loss: 1.575036 (Iteration 2601 / 4900) loss: 1.311684 (Iteration 2701 / 4900) loss: 1.388892 (Iteration 2801 / 4900) loss: 1.423586 (Iteration 2901 / 4900) loss: 1.400047 (Epoch 6 / 10) train acc: 0.529000; val_acc: 0.490000 (Iteration 3001 / 4900) loss: 1.275920 (Iteration 3101 / 4900) loss: 1.479168 (Iteration 3201 / 4900) loss: 1.525349 (Iteration 3301 / 4900) loss: 1.329852 (Iteration 3401 / 4900) loss: 1.335383 (Epoch 7 / 10) train acc: 0.533000; val_acc: 0.490000 (Iteration 3501 / 4900) loss: 1.492119 (Iteration 3601 / 4900) loss: 1.382005 (Iteration 3701 / 4900) loss: 1.428007 (Iteration 3801 / 4900) loss: 1.385181 (Iteration 3901 / 4900) loss: 1.351685 (Epoch 8 / 10) train acc: 0.561000; val_acc: 0.509000 (Iteration 4001 / 4900) loss: 1.135909 (Iteration 4101 / 4900) loss: 1.373260 (Iteration 4201 / 4900) loss: 1.218843 (Iteration 4301 / 4900) loss: 1.297008 (Iteration 4401 / 4900) loss: 1.426274 (Epoch 9 / 10) train acc: 0.534000; val_acc: 0.473000 (Iteration 4501 / 4900) loss: 1.451631 (Iteration 4601 / 4900) loss: 1.424421 (Iteration 4701 / 4900) loss: 1.314771 (Iteration 4801 / 4900) loss: 1.383842 (Epoch 10 / 10) train acc: 0.538000; val_acc: 0.488000 lr 1.000000e-03 reg 1.000000e-01 val accuracy: 0.509000 (Iteration 1 / 4900) loss: 2.312442 (Epoch 0 / 10) train acc: 0.109000; val_acc: 0.096000 (Iteration 101 / 4900) loss: 1.861943 (Iteration 201 / 4900) loss: 1.835878 (Iteration 301 / 4900) loss: 1.698791 (Iteration 401 / 4900) loss: 1.511703 (Epoch 1 / 10) train acc: 0.423000; val_acc: 0.444000 (Iteration 501 / 4900) loss: 1.499857 (Iteration 601 / 4900) loss: 1.402853 (Iteration 701 / 4900) loss: 1.656265 (Iteration 801 / 4900) loss: 1.654180 (Iteration 901 / 4900) loss: 1.421730 (Epoch 2 / 10) train acc: 0.446000; val_acc: 0.424000 (Iteration 1001 / 4900) loss: 1.827052 (Iteration 1101 / 4900) loss: 1.424511 (Iteration 1201 / 4900) loss: 1.331444 (Iteration 1301 / 4900) loss: 1.392123 (Iteration 1401 / 4900) loss: 1.455621 (Epoch 3 / 10) train acc: 0.504000; val_acc: 0.489000 (Iteration 1501 / 4900) loss: 1.439697 (Iteration 1601 / 4900) loss: 1.585881 (Iteration 1701 / 4900) loss: 1.428888 (Iteration 1801 / 4900) loss: 1.319450 (Iteration 1901 / 4900) loss: 1.400379 (Epoch 4 / 10) train acc: 0.509000; val_acc: 0.482000 (Iteration 2001 / 4900) loss: 1.499542 (Iteration 2101 / 4900) loss: 1.441353 (Iteration 2201 / 4900) loss: 1.365738 (Iteration 2301 / 4900) loss: 1.475881 (Iteration 2401 / 4900) loss: 1.581152 (Epoch 5 / 10) train acc: 0.470000; val_acc: 0.470000 (Iteration 2501 / 4900) loss: 1.492029 (Iteration 2601 / 4900) loss: 1.490493 (Iteration 2701 / 4900) loss: 1.174688 (Iteration 2801 / 4900) loss: 1.419202 (Iteration 2901 / 4900) loss: 1.535868 (Epoch 6 / 10) train acc: 0.483000; val_acc: 0.490000 (Iteration 3001 / 4900) loss: 1.393103 (Iteration 3101 / 4900) loss: 1.294975 (Iteration 3201 / 4900) loss: 1.275845 (Iteration 3301 / 4900) loss: 1.411851 (Iteration 3401 / 4900) loss: 1.269081 (Epoch 7 / 10) train acc: 0.546000; val_acc: 0.502000 (Iteration 3501 / 4900) loss: 1.264167 (Iteration 3601 / 4900) loss: 1.432768 (Iteration 3701 / 4900) loss: 1.218371 (Iteration 3801 / 4900) loss: 1.208629 (Iteration 3901 / 4900) loss: 1.325607 (Epoch 8 / 10) train acc: 
0.540000; val_acc: 0.518000 (Iteration 4001 / 4900) loss: 1.316241 (Iteration 4101 / 4900) loss: 1.117369 (Iteration 4201 / 4900) loss: 1.358598 (Iteration 4301 / 4900) loss: 1.291494 (Iteration 4401 / 4900) loss: 1.339091 (Epoch 9 / 10) train acc: 0.534000; val_acc: 0.511000 (Iteration 4501 / 4900) loss: 1.280720 (Iteration 4601 / 4900) loss: 1.341328 (Iteration 4701 / 4900) loss: 1.112138 (Iteration 4801 / 4900) loss: 1.218325 (Epoch 10 / 10) train acc: 0.568000; val_acc: 0.484000 lr 1.000000e-03 reg 1.500000e-01 val accuracy: 0.518000
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Test your model!

Run your best model on the validation and test sets. You should achieve above 48% accuracy on the validation set and the test set.
y_val_pred = np.argmax(best_model.loss(data['X_val']), axis=1) print('Validation set accuracy: ', (y_val_pred == data['y_val']).mean()) data = get_CIFAR10_data() y_test_pred = np.argmax(best_model.loss(data["X_test"]), axis=1) print('Test set accuracy: ', (y_test_pred == data['y_test']).mean())
Test set accuracy: 0.495
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Inline Question 2:

Now that you have trained a Neural Network classifier, you may find that your testing accuracy is much lower than the training accuracy. In what ways can we decrease this gap? Select all that apply.

1. Train on a larger dataset.
2. Add more hidden units.
3. Increase the regularization strength.
4. None of the above.

$\color{blue}{\textit Your Answer:}$

$\color{blue}{\textit Your Explanation:}$
_____no_output_____
Apache-2.0
assignment1/two_layer_net.ipynb
qiaw99/CS231n-Convolutional-Neural-Networks-for-Visual-Recognition
Welcome to 101 Exercises for Python Fundamentals

Solving these exercises will help make you a better programmer. Solve them in order, because each solution builds scaffolding, working code, and knowledge you can use on future problems. Read the directions carefully, and have fun!

> "Learning to program takes a little bit of study and a *lot* of practice" - Luis Montealegre

Getting Started

1. Go to https://colab.research.google.com/github/ryanorsinger/101-exercises/blob/main/101-exercises.ipynb
2. To save your work to your Google Drive, go to File then "Save Copy in Drive".
3. Your own work will now appear in your Google Drive account!

If you need a fresh, blank copy of this document, go to https://colab.research.google.com/github/ryanorsinger/101-exercises/blob/main/101-exercises.ipynb and save a fresh copy in your Google Drive.

Orientation

- This code notebook is composed of cells. Each cell is either text or Python code.
- To run a cell of code, click the "play button" icon to the left of the cell or click on the cell and press "Shift+Enter" on your keyboard. This will execute the Python code contained in the cell. Executing a cell that defines a variable is important before executing or authoring a cell that depends on that previously created variable assignment.
- **Expect to see lots of errors** the first time you load this page.
- **Expect to see lots of errors** for all cells run without code that matches the assertion tests.
- Until you click the blue "Copy and Edit" button to make your own copy, you will see an entire page of errors. This is part of the automated tests.
- Each *assert* line is both an example and a test that tests for the presence and functionality of the instructed exercise.

The only 3 conditions that produce no errors:

1. When you make a fresh **copy** of the project to your own account (by clicking "Copy and Edit")
2. When you go to "Run" and then click "Restart Session"
3. When every single assertion passes.

Outline

- Each cell starts with a problem statement that describes the exercise to complete.
- Underneath each problem statement, learners will need to write code to produce an answer.
- The **assert** lines test to see that your code solves the problem appropriately.
- Many exercises will rely on previous solutions to be correctly completed.
- The `print("Exercise is complete")` line will only run if your solution passes the assertion test(s).
- Be sure to create programmatic solutions that will work for all inputs. For example, calling `is_even(2)` returns `True`, but your function should work for all even numbers, both positive and negative.

Guidance

- Get Python to do the work for you. For example, if the exercise instructs you to reverse a list of numbers, your job is to find the Python code that does the reversing for you.
- Save often by clicking the blue "Save" button.
- If you need to clear the output or reset the notebook, go to "Run" then "Restart Session" to clear up any error messages.
- Do not move or alter the lines of code that contain the `assert` statements. Those are what run your solution and test its actual output vs. expected outputs.
- Seek to understand the problem before trying to solve it. Can you explain the problem to someone else in English? Can you explain the solution in English?
- Slow down and read any error messages you encounter. Error messages provide insight into how to resolve the error. When in doubt, put your exact error into a search engine and look for results that reference an identical or similar problem.
Get Python To Do The Work For You

One of the main jobs of a programming language is to help people solve problems programmatically, so we don't have to do so much by hand. For example, it's easy for a person to manually reverse the list `[1, 2, 3]`, but imagine reversing a list of a million things or sorting a list of even a hundred things. When we write programmatic solutions in code, we are providing instructions to the computer to do a task. Computers follow the letter of the code, not the intent, and do exactly what they are told to do. In this way, Python can reverse a list of 3 numbers or 100 numbers or ten million numbers with the same instructions. Repetition is a key idea behind programming languages.

This means that your task with these exercises is to determine a sequence of steps that solve the problem and then find the Python code that will run those instructions. If you're sorting or reversing things by hand, you're not doing it right!

How To Discover How To Do Something in Python

1. The first step is to make sure you know what the problem is asking.
2. The second step is to determine, in English (or your first spoken language), what steps you need to take.
3. Use a search engine to look for code examples to identical or similar problems.

One of the best ways to discover how to do things in Python is to use a search engine. Go to your favorite search engine and search for "how to reverse a list in Python" or "how to sort a list in Python". That's how both learners and professionals find answers and examples all the time. Search for what you want and add "in Python" and you'll get lots of code examples. Searching for "How to sum a list of numbers in Python" is a very effective way to discover exactly how to do that task.

Learning to Program and Code

- You can make a new blank cell for Python code at any time in this document.
- If you want more freedom to explore learning Python in a blank notebook, go here https://colab.research.google.com/create=true and make yourself a blank, new notebook.
- Programming is an intellectual activity of designing a solution. "Coding" means turning your programmatic solution into code with all the right syntax and parts of the programming language.
- Expect to make mistakes and adopt the attitude that **the error message provides the information you need to proceed**. You will put lots of error messages into search engines to learn this craft!
- Because computers have zero ability to read in between the lines or "catch the drift" or know what you mean, code only does what it is told to do.
- Code doesn't do what you *want* it to do, code does what you've told it to do.
- Before writing any code, figure out how you would solve the problem in spoken language to describe the sequence of steps in the solution.
- Think about your solution in English (or your natural language). It's **critical** to solve the problem in your natural language before trying to get a programming language to do the work.
Troubleshooting

- If this entire document shows "Name Error" for many cells, it means you should read the "Getting Started" instructions above to make your own copy.
- Be sure to commit your work to make save points, as you go.
- If you load this page and you see your code but not the results of the code, be sure to run each cell (Shift + Enter makes this quick).
- "Name Error" means that you need to assign a variable or define the function as instructed.
- "Assertion Error" means that your provided solution does not match the correct answer.
- "Type Error" means that the data type you provided is not correct.
- If your kernel freezes, click on "Run" then select "Restart Session".
- If you require additional troubleshooting assistance, click on "Help" and then "Docs" to access documentation for this platform.
- If you have discovered a bug or typo, please triple check your spelling then create a new issue at [https://github.com/ryanorsinger/101-exercises/issues](https://github.com/ryanorsinger/101-exercises/issues) to notify the author.
# Example problem: # Uncomment the line below and run this cell. # The hashtag "#" character in a line of Python code is the comment character. doing_python_right_now = True # The lines below will test your answer. If you see an error, then it means that your answer is incorrect or incomplete. assert doing_python_right_now == True, "If you see a NameError, it means that the variable is not created and assigned a value. An 'Assertion Error' means that the value of the variable is incorrect." print("Exercise 0 is correct") # This line will print if your solution passes the assertion above. # Exercise 1 # On the line below, create a variable named on_mars_right_now and assign it the boolean value of False on_mars_right_now = False assert on_mars_right_now == False, "If you see a Name Error, be sure to create the variable and assign it a value." print("Exercise 1 is correct.") # Exercise 2 # Create a variable named fruits and assign it a list of fruits containing the following fruit names as strings: # mango, banana, guava, kiwi, and strawberry. fruits = ['mango', 'banana', 'guava', 'kiwi', 'strawberry'] assert fruits == ["mango", "banana", "guava", "kiwi", "strawberry"], "If you see an Assert Error, ensure the variable contains all the strings in the provided order" print("Exercise 2 is correct.") # Exercise 3 # Create a variable named vegetables and assign it a list of fruits containing the following vegetable names as strings: # eggplant, broccoli, carrot, cauliflower, and zucchini vegetables = ['eggplant', 'broccoli', 'carrot', 'cauliflower', 'zucchini'] assert vegetables == ["eggplant", "broccoli", "carrot", "cauliflower", "zucchini"], "Ensure the variable contains all the strings in the provided order" print("Exercise 3 is correct.") # Exercise 4 # Create a variable named numbers and assign it a list of numbers, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] assert numbers == [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], "Ensure the variable contains the numbers 1-10 in order." print("Exercise 4 is correct.")
Exercise 4 is correct.
MIT
101_exercises.ipynb
barbmarques/python-exercises
List Operations

**Hint**: We recommend finding and using built-in Python functionality whenever possible. A few of the relevant built-ins are sketched below.
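For orientation, here is an illustrative cell (not one of the exercises) showing the list built-ins the following problems lean on; the variable names are made up:

```python
# Illustrative example only; not part of the graded exercises.
colors = ["red", "green", "blue"]
colors.append("yellow")         # add an item to the end, in place
colors.sort()                   # alphabetical order, in place
colors.sort(reverse=True)       # reverse alphabetical order, in place
colors.reverse()                # reverse the current order, in place
combined = colors + ["cyan"]    # + concatenates two lists into a new list
print(combined)
```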
# Exercise 5 # Given the following assigment of the list of fruits, add "tomato" to the end of the list. fruits = ["mango", "banana", "guava", "kiwi", "strawberry"] fruits.append('tomato') assert fruits == ["mango", "banana", "guava", "kiwi", "strawberry", "tomato"], "Ensure the variable contains all the strings in the right order" print("Exercise 5 is correct") # Exercise 6 # Given the following assignment of the vegetables list, add "tomato" to the end of the list. vegetables = ["eggplant", "broccoli", "carrot", "cauliflower", "zucchini"] vegetables.append('tomato') assert vegetables == ["eggplant", "broccoli", "carrot", "cauliflower", "zucchini", "tomato"], "Ensure the variable contains all the strings in the provided order" print("Exercise 6 is correct") # Exercise 7 # Given the list of numbers defined below, reverse the list of numbers that you created above. numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] numbers.reverse() assert numbers == [10, 9, 8, 7, 6, 5, 4, 3, 2, 1], "Assert Error means that the answer is incorrect." print("Exercise 7 is correct.") # Exercise 8 # Sort the vegetables in alphabetical order vegetables.sort() assert vegetables == ['broccoli', 'carrot', 'cauliflower', 'eggplant', 'tomato', 'zucchini'] print("Exercise 8 is correct.") # Exercise 9 # Write the code necessary to sort the fruits in reverse alphabetical order fruits.sort(reverse = True) assert fruits == ['tomato', 'strawberry', 'mango', 'kiwi', 'guava', 'banana'] print("Exercise 9 is correct.") # Exercise 10 # Write the code necessary to produce a single list that holds all fruits then all vegetables in the order as they were sorted above. fruits_and_veggies = fruits + vegetables assert fruits_and_veggies == ['tomato', 'strawberry', 'mango', 'kiwi', 'guava', 'banana', 'broccoli', 'carrot', 'cauliflower', 'eggplant', 'tomato', 'zucchini'] print("Exercise 10 is correct")
Exercise 10 is correct
MIT
101_exercises.ipynb
barbmarques/python-exercises