markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
In the Cartpole environment:
- `observation` is an array of 4 floats:
  - the position and velocity of the cart
  - the angular position and velocity of the pole
- `reward` is a scalar float value
- `action` is a scalar integer with only two possible values:
  - `0` — "move left"
  - `1` — "move right" | time_step = env.reset()
print('Time step:')
print(time_step)
action = np.array(1, dtype=np.int32)
next_time_step = env.step(action)
print('Next time step:')
print(next_time_step) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Usually two environments are instantiated: one for training and one for evaluation. | train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
The Cartpole environment, like most environments, is written in pure Python. This is converted to TensorFlow using the `TFPyEnvironment` wrapper. The original environment's API uses NumPy arrays; the `TFPyEnvironment` converts these to `Tensors` so it is compatible with TensorFlow agents and policies. | train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
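A quick sanity check, added here as a sketch (not part of the original tutorial): the wrapped environment returns batched `Tensors`, so the CartPole observation comes back with a leading batch dimension.
ts = train_env.reset()
print(type(ts.observation))    # an eager tf.Tensor rather than a NumPy array
print(ts.observation.shape)    # (1, 4): TFPyEnvironment adds a batch dimension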
Agent
The algorithm used to solve an RL problem is represented by an `Agent`. TF-Agents provides standard implementations of a variety of `Agents`, including:
- [DQN](https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf) (used in this tutorial)
- [REINFORCE](http://www-anw.cs.umass.edu/~barto/courses/cs687/williams92simple.pdf)
- [DDPG](https://arxiv.org/pdf/1509.02971.pdf)
- [TD3](https://arxiv.org/pdf/1802.09477.pdf)
- [PPO](https://arxiv.org/abs/1707.06347)
- [SAC](https://arxiv.org/abs/1801.01290)

The DQN agent can be used in any environment which has a discrete action space.

At the heart of a DQN Agent is a `QNetwork`, a neural network model that can learn to predict `QValues` (expected returns) for all actions, given an observation from the environment.

Use `tf_agents.networks.q_network` to create a `QNetwork`, passing in the `observation_spec`, `action_spec`, and a tuple describing the number and size of the model's hidden layers. | fc_layer_params = (100,)
q_net = q_network.QNetwork(
train_env.observation_spec(),
train_env.action_spec(),
fc_layer_params=fc_layer_params) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Now use `tf_agents.agents.dqn.dqn_agent` to instantiate a `DqnAgent`. In addition to the `time_step_spec`, `action_spec` and the QNetwork, the agent constructor also requires an optimizer (in this case, `AdamOptimizer`), a loss function, and an integer step counter. | optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate=learning_rate)
train_step_counter = tf.Variable(0)
agent = dqn_agent.DqnAgent(
train_env.time_step_spec(),
train_env.action_spec(),
q_network=q_net,
optimizer=optimizer,
td_errors_loss_fn=common.element_wise_squared_loss,
train_step_counter=train_step_counter)
agent.initialize() | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Policies
A policy defines the way an agent acts in an environment. Typically, the goal of reinforcement learning is to train the underlying model until the policy produces the desired outcome.

In this tutorial:
- The desired outcome is keeping the pole balanced upright over the cart.
- The policy returns an action (left or right) for each `time_step` observation.

Agents contain two policies:
- `agent.policy` — The main policy that is used for evaluation and deployment.
- `agent.collect_policy` — A second policy that is used for data collection. | eval_policy = agent.policy
collect_policy = agent.collect_policy | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Policies can be created independently of agents. For example, use `tf_agents.policies.random_tf_policy` to create a policy which will randomly select an action for each `time_step`. | random_policy = random_tf_policy.RandomTFPolicy(train_env.time_step_spec(),
train_env.action_spec()) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
To get an action from a policy, call the `policy.action(time_step)` method. The `time_step` contains the observation from the environment. This method returns a `PolicyStep`, which is a named tuple with three components:
- `action` — the action to be taken (in this case, `0` or `1`)
- `state` — used for stateful (that is, RNN-based) policies
- `info` — auxiliary data, such as log probabilities of actions | example_environment = tf_py_environment.TFPyEnvironment(
suite_gym.load('CartPole-v0'))
time_step = example_environment.reset()
random_policy.action(time_step) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
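To make the `PolicyStep` components concrete, a small sketch (not in the original tutorial) unpacks the named tuple returned above.
policy_step = random_policy.action(time_step)
print(policy_step.action)   # a batched tensor holding 0 or 1
print(policy_step.state)    # () for this stateless policy
print(policy_step.info)     # () here; some policies return log probabilities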
Metrics and Evaluation
The most common metric used to evaluate a policy is the average return. The return is the sum of rewards obtained while running a policy in an environment for an episode. Several episodes are run, creating an average return.

The following function computes the average return of a policy, given the policy, environment, and a number of episodes. | #@test {"skip": true}
def compute_avg_return(environment, policy, num_episodes=10):
total_return = 0.0
for _ in range(num_episodes):
time_step = environment.reset()
episode_return = 0.0
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = environment.step(action_step.action)
episode_return += time_step.reward
total_return += episode_return
avg_return = total_return / num_episodes
return avg_return.numpy()[0]
# See also the metrics module for standard implementations of different metrics.
# https://github.com/tensorflow/agents/tree/master/tf_agents/metrics | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Running this computation on the `random_policy` shows a baseline performance in the environment. | compute_avg_return(eval_env, random_policy, num_eval_episodes) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Replay Buffer
The replay buffer keeps track of data collected from the environment. This tutorial uses `tf_agents.replay_buffers.tf_uniform_replay_buffer.TFUniformReplayBuffer`, as it is the most common. The constructor requires the specs for the data it will be collecting. This is available from the agent via its `collect_data_spec` attribute. The batch size and maximum buffer length are also required. | replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec=agent.collect_data_spec,
batch_size=train_env.batch_size,
max_length=replay_buffer_max_length) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
For most agents, `collect_data_spec` is a named tuple called `Trajectory`, containing the specs for observations, actions, rewards, and other items. | agent.collect_data_spec
agent.collect_data_spec._fields | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Data Collection
Now execute the random policy in the environment for a few steps, recording the data in the replay buffer. | #@test {"skip": true}
def collect_step(environment, policy, buffer):
time_step = environment.current_time_step()
action_step = policy.action(time_step)
next_time_step = environment.step(action_step.action)
traj = trajectory.from_transition(time_step, action_step, next_time_step)
# Add trajectory to the replay buffer
buffer.add_batch(traj)
def collect_data(env, policy, buffer, steps):
for _ in range(steps):
collect_step(env, policy, buffer)
collect_data(train_env, random_policy, replay_buffer, initial_collect_steps)
# This loop is so common in RL, that we provide standard implementations.
# For more details see the drivers module.
# https://www.tensorflow.org/agents/api_docs/python/tf_agents/drivers | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
The replay buffer is now a collection of Trajectories. | # For the curious:
# Uncomment to peel one of these off and inspect it.
# iter(replay_buffer.as_dataset()).next() | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
The agent needs access to the replay buffer. This is provided by creating an iterable `tf.data.Dataset` pipeline which will feed data to the agent.

Each row of the replay buffer only stores a single observation step. But since the DQN Agent needs both the current and next observation to compute the loss, the dataset pipeline will sample two adjacent rows for each item in the batch (`num_steps=2`).

This dataset is also optimized by running parallel calls and prefetching data. | # Dataset generates trajectories with shape [Bx2x...]
dataset = replay_buffer.as_dataset(
num_parallel_calls=3,
sample_batch_size=batch_size,
num_steps=2).prefetch(3)
dataset
iterator = iter(dataset)
print(iterator)
# For the curious:
# Uncomment to see what the dataset iterator is feeding to the agent.
# Compare this representation of replay data
# to the collection of individual trajectories shown earlier.
# iterator.next() | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Training the agent
Two things must happen during the training loop:
- collect data from the environment
- use that data to train the agent's neural network(s)

This example also periodically evaluates the policy and prints the current score.

The following will take ~5 minutes to run. | #@test {"skip": true}
try:
%%time
except:
pass
# (Optional) Optimize by wrapping some of the code in a graph using TF function.
agent.train = common.function(agent.train)
# Reset the train step
agent.train_step_counter.assign(0)
# Evaluate the agent's policy once before training.
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
returns = [avg_return]
for _ in range(num_iterations):
# Collect a few steps using collect_policy and save to the replay buffer.
collect_data(train_env, agent.collect_policy, replay_buffer, collect_steps_per_iteration)
# Sample a batch of data from the buffer and update the agent's network.
experience, unused_info = next(iterator)
train_loss = agent.train(experience).loss
step = agent.train_step_counter.numpy()
if step % log_interval == 0:
print('step = {0}: loss = {1}'.format(step, train_loss))
if step % eval_interval == 0:
avg_return = compute_avg_return(eval_env, agent.policy, num_eval_episodes)
print('step = {0}: Average Return = {1}'.format(step, avg_return))
returns.append(avg_return) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Visualization
Plots
Use `matplotlib.pyplot` to chart how the policy improved during training.

One episode of `CartPole-v0` lasts at most 200 time steps. The environment gives a reward of `+1` for each step the pole stays up, so the maximum return for one episode is 200. The chart shows the return increasing towards that maximum each time it is evaluated during training. (It may be a little unstable and not increase monotonically each time.) | #@test {"skip": true}
iterations = range(0, num_iterations + 1, eval_interval)
plt.plot(iterations, returns)
plt.ylabel('Average Return')
plt.xlabel('Iterations')
plt.ylim(top=250) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Videos
Charts are nice. But more exciting is seeing an agent actually performing a task in an environment. First, create a function to embed videos in the notebook. | def embed_mp4(filename):
"""Embeds an mp4 file in the notebook."""
video = open(filename,'rb').read()
b64 = base64.b64encode(video)
tag = '''
<video width="640" height="480" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4">
Your browser does not support the video tag.
</video>'''.format(b64.decode())
return IPython.display.HTML(tag) | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
Now iterate through a few episodes of the Cartpole game with the agent. The underlying Python environment (the one "inside" the TensorFlow environment wrapper) provides a `render()` method, which outputs an image of the environment state. These can be collected into a video. | def create_policy_eval_video(policy, filename, num_episodes=5, fps=30):
filename = filename + ".mp4"
with imageio.get_writer(filename, fps=fps) as video:
for _ in range(num_episodes):
time_step = eval_env.reset()
video.append_data(eval_py_env.render())
while not time_step.is_last():
action_step = policy.action(time_step)
time_step = eval_env.step(action_step.action)
video.append_data(eval_py_env.render())
return embed_mp4(filename)
create_policy_eval_video(agent.policy, "trained-agent") | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
For fun, compare the trained agent (above) to an agent moving randomly. (It does not do as well.) | create_policy_eval_video(random_policy, "random-agent") | _____no_output_____ | Apache-2.0 | docs/tutorials/1_dqn_tutorial.ipynb | FlorisHoogenboom/agents |
CB LAD match
We geocode the CrunchBase data against Local Authority District (LAD) boundaries.

0. Preamble | %run ../notebook_preamble.ipy
import geopandas as gp
from shapely.geometry import Point | _____no_output_____ | MIT | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing |
Load CB data | cb = pd.read_csv('../../data/processed/18_9_2019_cb_sector_labelled.csv')
shapes = gp.read_file('../../data/external/lad_shape/Local_Authority_Districts_December_2018_Boundaries_GB_BFC.shp')
#Create geodataframe
cb_uk = cb.loc[cb['country_alpha_2']=='GB']
cb_uk_geo = gp.GeoDataFrame(cb_uk, geometry=[Point(x, y) for x, y in zip(cb_uk['longitude'], cb_uk['latitude'])])
#Reproject the LADs and create spatial join
shapes = shapes.to_crs({'init':'epsg:4326'})
cb_joined = gp.sjoin(cb_uk_geo,shapes,how='left',op='within') | _____no_output_____ | MIT | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing |
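A quick check, added as a sketch (not in the original notebook), of how many UK records failed to fall inside any LAD polygon after the left spatial join:
unmatched = cb_joined['lad18nm'].isnull().sum()
print(unmatched, 'of', len(cb_joined), 'UK records have no LAD match')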
Names to keep | keep_cols = list(cb.columns) + ['lad18nm','lad18cd']
cb_joined_keep = cb_joined[keep_cols]
#Concatenate cb with the names above
cb_all = pd.concat([cb.loc[cb['country_alpha_2']!='GB'],cb_joined_keep],axis=0)[keep_cols]
cb_all.to_csv('../../data/processed/18_9_2019_cb_sector_labelled_geo.csv')
#from data_getters.labs.core import upload_file
#upload_file('../../data/processed/18_9_2019_cb_sector_labelled_geo.csv') | _____no_output_____ | MIT | notebooks/dev/04_jmg_cb_lad_merge.ipynb | Juan-Mateos/cb_processing |
**Riego de Dios, Celyssa Chryse**

**Question 1.** Write Python code that displays a square matrix whose length is 5. (10 points) | import numpy as np #Import library
A = np.array([[1,2,3,4,5],[2,3,4,5,1],[3,4,5,1,2],[4,5,1,2,3],[5,1,2,3,4]]) #SET OF 5X5 MATRIX
print("Square Matrix whose length is 5")
print(A) | Square Matrix whose length is 5
[[1 2 3 4 5]
[2 3 4 5 1]
[3 4 5 1 2]
[4 5 1 2 3]
[5 1 2 3 4]]
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
**Question 2.** Write Python code that displays a square matrix whose elements below the principal diagonal are zero. (10 points) | import numpy as np
B = np.triu([[1,2,3,4,5],[2,3,4,5,1],[3,4,5,1,2],[4,5,1,2,3],[5,1,2,3,4]])
print("Square Matrix whose elements below the principal diagonal are zero")
print(B) | Square Matrix whose elements below the principal diagonal are zero
[[1 2 3 4 5]
[0 3 4 5 1]
[0 0 5 1 2]
[0 0 0 2 3]
[0 0 0 0 4]]
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
**Question 3.** Write Python code that displays a square matrix that is symmetric. (10 points) | import numpy as np
F = np.array([[1,2,3],[2,3,4],[3,4,-2]]) #A symmetric matrix equals its transpose
print("Symmetric form of Matrix")
print(F)
G = np.transpose(F)
print("Transpose of the Matrix")
print(G) | Symmetric form of Matrix
[[ 1 2 3]
[ 2 3 4]
[ 3 4 -2]]
Transpose of the Matrix
[[ 1 2 3]
[ 2 3 4]
[ 3 4 -2]]
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
**Question 4.** What is the inverse of matrix C? Show your solution in Python code. (20 points) | #Python program to invert the 3x3 matrix C = [[1,2,3],[2,3,3],[3,4,-2]]
C = np.array([[1,2,3],[2,3,3],[3,4,-2]])
print(C,"\n")
D = np.linalg.inv(C)
print(D) | [[ 1 2 3]
[ 2 3 3]
[ 3 4 -2]]
[[-3.6 3.2 -0.6]
[ 2.6 -2.2 0.6]
[-0.2 0.4 -0.2]]
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
**Question 5.** What is the determinant of the matrix given in Question 4? Show your solution in Python code. (20 points) | import numpy as np
C = np.array([[1,2,3],[2,3,3],[3,4,-2]])
print(C,"\n")
H = np.linalg.det(C)
print(round(H)) | [[ 1 2 3]
[ 2 3 3]
[ 3 4 -2]]
5
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
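As a manual cross-check (added for illustration, not part of the submitted answer), expanding the determinant along the first row gives the same value:
# det(C) = 1*(3*(-2) - 3*4) - 2*(2*(-2) - 3*3) + 3*(2*4 - 3*3)
#        = 1*(-18) - 2*(-13) + 3*(-1) = -18 + 26 - 3 = 5
print(1*(3*(-2) - 3*4) - 2*(2*(-2) - 3*3) + 3*(2*4 - 3*3))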
**Question 6.** Find the roots of the linear equations, showing your solution in Python code. (30 points) | import numpy as np
A = np.array([[5,4,1],[10,9,4],[10,13,15]])
print(A,"\n")
A_ = np.linalg.inv(A)
print(A_,"\n")
B = np.array([[3.4],[8.8],[19.2]])
print(B,"\n")
AA_ = np.dot(A,A_)
print(AA_,"\n")
BA_ = np.dot(A_,B)
print(BA_) | [[ 5 4 1]
[10 9 4]
[10 13 15]]
[[ 5.53333333 -3.13333333 0.46666667]
[-7.33333333 4.33333333 -0.66666667]
[ 2.66666667 -1.66666667 0.33333333]]
[[ 3.4]
[ 8.8]
[19.2]]
[[ 1.00000000e+00 -4.44089210e-16 -1.66533454e-16]
[-1.77635684e-15 1.00000000e+00 -2.22044605e-16]
[-2.22044605e-15 -1.33226763e-15 1.00000000e+00]]
[[0.2]
[0.4]
[0.8]]
| Apache-2.0 | Midterm_Exam.ipynb | itsmecelyssa/Linear-Algebra-58020 |
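The system encoded by A and B is 5x + 4y + z = 3.4, 10x + 9y + 4z = 8.8, 10x + 13y + 15z = 19.2. The same roots can be obtained more directly with NumPy's solver, shown here as an added cross-check (not part of the submitted answer); it avoids forming the inverse explicitly:
import numpy as np
A = np.array([[5,4,1],[10,9,4],[10,13,15]])
B = np.array([[3.4],[8.8],[19.2]])
print(np.linalg.solve(A, B))  # [[0.2], [0.4], [0.8]]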
Race classification
Sarah Santiago and Carlos Ortiz initially wrote this notebook. Jae Yeon Kim reviewed the notebook, edited the markdown, and commented on the code.

Racial demographic dialect predictions were made by the model developed by [Blodgett, S. L., Green, L., & O'Connor, B. (2016)](https://arxiv.org/pdf/1608.08868.pdf). We modified their predict function in [the public Git repository](https://github.com/slanglab/twitteraae) to work in the notebook environment. | # Import libraries
import pandas as pd
import numpy as np
import re
import seaborn as sns
import matplotlib.pyplot as plt
## Language-demography model
import predict | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Import Tweets |
# Import file
tweets = pd.read_csv("tweet.csv").drop(['Unnamed: 0'], axis=1)
# Index variable
tweets.index.name = 'ID'
# First five rows
tweets.head() | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Clean Tweets | url_re = r'http\S+'
at_re = r'@[\w]*'
rt_re = r'^rt' # leading "rt" (retweet) marker
punct_re = r'[^\w\s]'
tweets_clean = tweets.copy()
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.lower() # Lower Case
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(url_re, '') # Remove Links/URL
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(at_re, '') # Remove @
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(rt_re, '') # Remove rt
tweets_clean['Tweet'] = tweets_clean['Tweet'].str.replace(punct_re, '') # Remove Punctuation
tweets_clean['Tweet'] = tweets_clean['Tweet'].apply(unicode) # Applied unicode for compatibility with model
tweets_clean.head() | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Apply Predictions | predict.load_model()
def prediction(string):
return predict.predict(string.split())
predictions = tweets_clean['Tweet'].apply(prediction)
tweets_clean['Predictions'] = predictions
# Fill tweets that have no predictions with the placeholder string "NA"
tweets_clean = tweets_clean.fillna("NA")
tweets_clean.head()
def first_last(item):
    if isinstance(item, str) and item == 'NA':
        return 'NA'
    return np.array([item[0], item[3]])  # keep only the AAE and WAE dialect predictions
# Add "Predictions_AAE_WAE" column which is predictions for AAE dialect and WAE dialect
tweets_clean['Predictions_AAE_W'] = tweets_clean['Predictions'].apply(first_last)
tweets_clean.head()
# Model 1
def detect_two(item):
    if isinstance(item, str) and item == 'NA':
        return None
    if item[0] >= item[1]:
        return 0
    else:
        return 1
# Model 2
def detect_all(item):
    if isinstance(item, str) and item == "NA":
        return None
    if item[0] >= item[1] and item[0] >= item[2] and item[0] >= item[3]:
        return 0
    elif item[3] >= item[0] and item[3] >= item[1] and item[3] >= item[2]:
        return 1
    else:
        return 2
# Add "Racial Demographic" column such that AAE is represented by 0 and WAE is represented by 1
tweets_clean['Racial Demographic (Two)'] = tweets_clean['Predictions_AAE_W'].apply(detect_two)
tweets_clean['Racial Demographic (All)'] = tweets_clean['Predictions'].apply(detect_all) | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Tweets with Predictions Based on Racial Demographics (AAE, WAE) | final_tweets = tweets_clean.drop(columns=["Predictions", "Predictions_AAE_W"])
final_tweets['Tweet'] = tweets['Tweet']
final_tweets.head() | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Export Tweets to CSV | final_tweets.to_csv('r_d_tweets_3.csv') | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Analysis | sns.countplot(x=final_tweets['Racial Demographic (Two)'])
plt.title("Racial Demographic (Two)")
sns.countplot(x=final_tweets['Racial Demographic (All)'])
plt.title("Racial Demographic (All)")
aae = final_tweets[final_tweets['Racial Demographic (All)'] == 0]
aae.head()
counts = aae.groupby("Type").count()
counts = counts.reset_index().rename(columns = {'Number of Votes': 'Count'})
counts
sns.barplot(x="Type", y="Count", data = counts)
plt.title("Type Counts AAE")
wae = final_tweets[final_tweets['Racial Demographic (All)'] == 1]
wae.head()
counts_wae = wae.groupby("Type").count()
counts_wae = counts_wae.reset_index().rename(columns = {'Number of Votes': 'Count'})
counts_wae
sns.barplot(x="Type", y="Count", data = counts_wae)
plt.title("Type Counts WAE")
other = final_tweets[(final_tweets['Racial Demographic (All)'] == 2)] #| (final_tweets['Racial Demographic (All)'] == 0)]
counts_other = other.groupby("Type").count()
counts_other = counts_other.reset_index().rename(columns = {'Number of Votes': 'Count'})
sns.barplot(x="Type", y="Count", data = counts_other)
plt.title("Type Counts Other") | _____no_output_____ | MIT | code/.ipynb_checkpoints/Racial Demographic Predictions on Tweets, Santiago and Ortiz-checkpoint.ipynb | rjvkothari/race-classification |
Semen wants to rent a flat. You're given 3 equally weighted params: distance to the subway (minutes), number of subway stations to get to work, and rent price (thousands of rubles). The way from the flat to the subway should not exceed 20 minutes.

Importing data | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
data = pd.read_excel("../data/flat_rent_info.xlsx", 5, index_col="ID")
data | _____no_output_____ | MIT | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis |
Analyzing data Normalizing data. | normalized_data = (data - data.min())/(data.max() - data.min())
normalized_data | _____no_output_____ | MIT | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis |
Since the params are equally weighted, we should find the top 3 flats with the minimum sum of normalized params (see the sketch after the chart below). | normalized_data.plot(stacked=True, kind='bar', colormap = 'Set2', figsize=(10, 8), fontsize=12)
plt.xticks(rotation = 0)
plt.show() | _____no_output_____ | MIT | AnimatedVisualizationAndFlatRent/notebooks/FlatOptionsAnalyzing.ipynb | SmirnovAlexander/InformationAnalysis |
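A short sketch of the ranking step described above (not in the original notebook; the subway-distance column name used for the 20-minute filter is an assumption, so adjust it to the actual header in flat_rent_info.xlsx):
scores = normalized_data.sum(axis=1)   # equal weights, so a plain sum of normalized params
print(scores.nsmallest(3))             # top 3 candidate flats (smallest combined score)
# The 20-minute rule can be applied on the raw data first, e.g.
# scores = normalized_data[data['Distance'] <= 20].sum(axis=1)   # 'Distance' is an assumed column name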
Data analysis and visualization of a knowledge graph for the Star Wars movies
👉👉 [**You can have a look at this Project first**](http://starwar-visualization.s3-website-us-west-1.amazonaws.com) 👈👈

This project collected data from the online database [**SWAPI**](https://swapi.co), which is the world's first quantified and programmatically-accessible data source for all the data from the Star Wars canon universe!

The dataset includes 6 APIs: Planets, Spaceships, Vehicles, People, Films and Species, from all SEVEN Star Wars films.

1. Data collection
We can get the JSON file of all the data from this website, then use urllib in Python 3 to download and save the data. | import warnings
warnings.simplefilter('ignore')
import urllib
import json
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
%matplotlib inline
matplotlib.style.use('ggplot')
films = []
for x in range(1,8):
    films.append('http://swapi.co/api/films/' + str(x) + '/')
headers = {}
headers["User-Agent"] = "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.104 Safari/537.36 Core/1.53.3226.400 QQBrowser/9.6.11681.400"
fw = open('../csv/films.txt', 'w')
for item in films:
print(item)
request = urllib.request.Request(item, headers=headers)
response = urllib.request.urlopen(request, timeout=20)
result = response.read().decode('utf-8')
print(result)
fw.write(result + '\n')
fw.close()
fr = open('../csv/films.txt', 'r')
films = []
for line in fr:
line = json.loads(line.strip('\n'))
films.append(line)
fr.close()
# Fetch characters, planets, starships, vehicles, species
targets = ['characters', 'planets', 'starships', 'vehicles', 'species']
for target in targets:
fw = open('../csv/' + target + '.txt', 'w')
data = []
for item in films:
tmp = item[target]
for t in tmp:
if t in data:
continue
else:
data.append(t)
while 1:
print(t)
try:
request = urllib.request.Request(t, headers=headers)
response = urllib.request.urlopen(request, timeout=20)
result = response.read().decode('utf-8')
except Exception as e:
continue
else:
fw.write(result + '\n')
break
finally:
pass
print (str(len(data)), target)
fw.close() | https://swapi.co/api/people/1/
https://swapi.co/api/people/2/
https://swapi.co/api/people/3/
https://swapi.co/api/people/4/
https://swapi.co/api/people/5/
https://swapi.co/api/people/6/
https://swapi.co/api/people/7/
https://swapi.co/api/people/8/
https://swapi.co/api/people/9/
https://swapi.co/api/people/10/
https://swapi.co/api/people/12/
https://swapi.co/api/people/13/
https://swapi.co/api/people/14/
https://swapi.co/api/people/15/
https://swapi.co/api/people/16/
https://swapi.co/api/people/18/
https://swapi.co/api/people/19/
https://swapi.co/api/people/81/
https://swapi.co/api/people/20/
https://swapi.co/api/people/21/
https://swapi.co/api/people/22/
https://swapi.co/api/people/23/
https://swapi.co/api/people/24/
https://swapi.co/api/people/25/
https://swapi.co/api/people/26/
https://swapi.co/api/people/27/
https://swapi.co/api/people/28/
https://swapi.co/api/people/29/
https://swapi.co/api/people/30/
https://swapi.co/api/people/31/
https://swapi.co/api/people/45/
https://swapi.co/api/people/11/
https://swapi.co/api/people/32/
https://swapi.co/api/people/33/
https://swapi.co/api/people/34/
https://swapi.co/api/people/36/
https://swapi.co/api/people/37/
https://swapi.co/api/people/38/
https://swapi.co/api/people/39/
https://swapi.co/api/people/40/
https://swapi.co/api/people/41/
https://swapi.co/api/people/42/
https://swapi.co/api/people/43/
https://swapi.co/api/people/44/
https://swapi.co/api/people/46/
https://swapi.co/api/people/48/
https://swapi.co/api/people/49/
https://swapi.co/api/people/50/
https://swapi.co/api/people/51/
https://swapi.co/api/people/52/
https://swapi.co/api/people/53/
https://swapi.co/api/people/54/
https://swapi.co/api/people/55/
https://swapi.co/api/people/56/
https://swapi.co/api/people/57/
https://swapi.co/api/people/58/
https://swapi.co/api/people/59/
https://swapi.co/api/people/47/
https://swapi.co/api/people/35/
https://swapi.co/api/people/60/
https://swapi.co/api/people/61/
https://swapi.co/api/people/62/
https://swapi.co/api/people/63/
https://swapi.co/api/people/64/
https://swapi.co/api/people/65/
https://swapi.co/api/people/66/
https://swapi.co/api/people/67/
https://swapi.co/api/people/68/
https://swapi.co/api/people/69/
https://swapi.co/api/people/70/
https://swapi.co/api/people/71/
https://swapi.co/api/people/72/
https://swapi.co/api/people/73/
https://swapi.co/api/people/74/
https://swapi.co/api/people/75/
https://swapi.co/api/people/76/
https://swapi.co/api/people/77/
https://swapi.co/api/people/78/
https://swapi.co/api/people/82/
https://swapi.co/api/people/79/
https://swapi.co/api/people/80/
https://swapi.co/api/people/83/
https://swapi.co/api/people/84/
https://swapi.co/api/people/85/
https://swapi.co/api/people/86/
https://swapi.co/api/people/87/
https://swapi.co/api/people/88/
87 characters
https://swapi.co/api/planets/2/
https://swapi.co/api/planets/3/
https://swapi.co/api/planets/1/
https://swapi.co/api/planets/4/
https://swapi.co/api/planets/5/
https://swapi.co/api/planets/6/
https://swapi.co/api/planets/27/
https://swapi.co/api/planets/7/
https://swapi.co/api/planets/8/
https://swapi.co/api/planets/9/
https://swapi.co/api/planets/10/
https://swapi.co/api/planets/11/
https://swapi.co/api/planets/12/
https://swapi.co/api/planets/13/
https://swapi.co/api/planets/14/
https://swapi.co/api/planets/15/
https://swapi.co/api/planets/16/
https://swapi.co/api/planets/17/
https://swapi.co/api/planets/18/
https://swapi.co/api/planets/19/
https://swapi.co/api/planets/61/
21 planets
https://swapi.co/api/starships/2/
https://swapi.co/api/starships/3/
https://swapi.co/api/starships/5/
https://swapi.co/api/starships/9/
https://swapi.co/api/starships/10/
https://swapi.co/api/starships/11/
https://swapi.co/api/starships/12/
https://swapi.co/api/starships/13/
https://swapi.co/api/starships/15/
https://swapi.co/api/starships/21/
https://swapi.co/api/starships/22/
https://swapi.co/api/starships/23/
https://swapi.co/api/starships/17/
https://swapi.co/api/starships/27/
https://swapi.co/api/starships/28/
https://swapi.co/api/starships/29/
https://swapi.co/api/starships/40/
https://swapi.co/api/starships/41/
https://swapi.co/api/starships/31/
https://swapi.co/api/starships/32/
https://swapi.co/api/starships/39/
https://swapi.co/api/starships/43/
https://swapi.co/api/starships/47/
https://swapi.co/api/starships/48/
https://swapi.co/api/starships/49/
https://swapi.co/api/starships/52/
https://swapi.co/api/starships/58/
https://swapi.co/api/starships/59/
https://swapi.co/api/starships/61/
https://swapi.co/api/starships/63/
https://swapi.co/api/starships/64/
https://swapi.co/api/starships/65/
https://swapi.co/api/starships/66/
https://swapi.co/api/starships/74/
https://swapi.co/api/starships/75/
https://swapi.co/api/starships/68/
https://swapi.co/api/starships/77/
37 starships
https://swapi.co/api/vehicles/4/
https://swapi.co/api/vehicles/6/
https://swapi.co/api/vehicles/7/
https://swapi.co/api/vehicles/8/
https://swapi.co/api/vehicles/14/
https://swapi.co/api/vehicles/16/
https://swapi.co/api/vehicles/18/
https://swapi.co/api/vehicles/19/
https://swapi.co/api/vehicles/20/
https://swapi.co/api/vehicles/24/
https://swapi.co/api/vehicles/25/
https://swapi.co/api/vehicles/26/
https://swapi.co/api/vehicles/30/
https://swapi.co/api/vehicles/33/
https://swapi.co/api/vehicles/34/
https://swapi.co/api/vehicles/35/
https://swapi.co/api/vehicles/36/
https://swapi.co/api/vehicles/37/
https://swapi.co/api/vehicles/38/
https://swapi.co/api/vehicles/42/
https://swapi.co/api/vehicles/44/
https://swapi.co/api/vehicles/45/
https://swapi.co/api/vehicles/46/
https://swapi.co/api/vehicles/50/
https://swapi.co/api/vehicles/51/
https://swapi.co/api/vehicles/53/
https://swapi.co/api/vehicles/54/
https://swapi.co/api/vehicles/55/
https://swapi.co/api/vehicles/56/
https://swapi.co/api/vehicles/57/
https://swapi.co/api/vehicles/60/
https://swapi.co/api/vehicles/62/
https://swapi.co/api/vehicles/67/
https://swapi.co/api/vehicles/69/
https://swapi.co/api/vehicles/70/
https://swapi.co/api/vehicles/71/
https://swapi.co/api/vehicles/72/
https://swapi.co/api/vehicles/73/
https://swapi.co/api/vehicles/76/
39 vehicles
https://swapi.co/api/species/5/
https://swapi.co/api/species/3/
https://swapi.co/api/species/2/
https://swapi.co/api/species/1/
https://swapi.co/api/species/4/
https://swapi.co/api/species/6/
https://swapi.co/api/species/7/
https://swapi.co/api/species/8/
https://swapi.co/api/species/9/
https://swapi.co/api/species/10/
https://swapi.co/api/species/15/
https://swapi.co/api/species/11/
https://swapi.co/api/species/12/
https://swapi.co/api/species/13/
https://swapi.co/api/species/14/
https://swapi.co/api/species/16/
https://swapi.co/api/species/17/
https://swapi.co/api/species/18/
https://swapi.co/api/species/19/
https://swapi.co/api/species/20/
https://swapi.co/api/species/21/
https://swapi.co/api/species/22/
https://swapi.co/api/species/23/
https://swapi.co/api/species/24/
https://swapi.co/api/species/25/
https://swapi.co/api/species/26/
https://swapi.co/api/species/27/
https://swapi.co/api/species/32/
https://swapi.co/api/species/33/
https://swapi.co/api/species/35/
https://swapi.co/api/species/34/
https://swapi.co/api/species/28/
https://swapi.co/api/species/29/
https://swapi.co/api/species/30/
https://swapi.co/api/species/31/
https://swapi.co/api/species/36/
https://swapi.co/api/species/37/
37 species
| MIT-0 | Notebooks/star_war.ipynb | vertigo-yl/Projects |
2. Basic analysis | fr = open('../csv/films.txt','r')
fw = open('../csv/stat_basic.csv','w')
fw.write('title,key,value\n')
for line in fr:
tmp = json.loads(line.strip('\n'))
fw.write(tmp['title'] + ',' + 'characters,' + str(len(tmp['characters'])) + '\n')
fw.write(tmp['title'] + ',' + 'planets,' + str(len(tmp['planets'])) + '\n')
fw.write(tmp['title'] + ',' + 'starships,' + str(len(tmp['starships'])) + '\n')
fw.write(tmp['title'] + ',' + 'vehicles,' + str(len(tmp['vehicles'])) + '\n')
fw.write(tmp['title'] + ',' + 'species,' + str(len(tmp['species'])) + '\n')
fr.close()
fw.close()
stats = pd.read_csv('../csv/stat_basic.csv')
stats.head()
# Visualization of overall stats
fig, ax = plt.subplots(figsize=(12, 6))
sns.barplot(x='key', y ='value', hue='title', data=stats)
ax.set_title('Overview of all movies', fontsize=16)
plt.xlabel('')
plt.show() | _____no_output_____ | MIT-0 | Notebooks/star_war.ipynb | vertigo-yl/Projects |
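As a quick programmatic cross-check of the chart above (a sketch, not in the original notebook), the film with the largest character count can be read straight from `stats`:
chars = stats[stats['key'] == 'characters']
print(chars.loc[chars['value'].idxmax(), 'title'])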
"Attack of the Clones" has most characters | fr = open('../csv/characters.txt','r')
fw = open('../csv/stat_characters.csv','w')
fw.write('name,height,mass,gender,homeworld\n')
for line in fr:
tmp = json.loads(line.strip('\n'))
if tmp['height'] == 'unknown':
tmp['height'] = '-1'
if tmp['mass'] == 'unknown':
tmp['mass'] = '-1'
if tmp['gender'] == 'none':
tmp['gender'] = 'n/a'
fw.write(tmp['name'] + ',' + tmp['height'] + ',' + tmp['mass'] + ',' + tmp['gender'].strip() + ',' + tmp['homeworld'] + '\n')
fr.close()
fw.close()
stat_characters = pd.read_csv('../csv/stat_characters.csv')
stat_characters.head()
# Visualization of characters
fig, ax = plt.subplots(figsize=(12, 6))
sns.scatterplot(x='mass', y ='height', hue='gender', data=stat_characters)
ax.set_title('Visualization of characters', fontsize=16)
plt.show() | _____no_output_____ | MIT-0 | Notebooks/star_war.ipynb | vertigo-yl/Projects |
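Unknown heights and masses were encoded as -1 above, and a few extreme values stretch the axes, so a cleaner variant of the scatter plot (a sketch, not in the original notebook) drops the unknown rows first:
known = stat_characters[(stat_characters['height'] > 0) & (stat_characters['mass'] > 0)]
fig, ax = plt.subplots(figsize=(12, 6))
sns.scatterplot(x='mass', y='height', hue='gender', data=known)
ax.set_title('Characters with known height and mass', fontsize=16)
plt.show()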
Inspired by: http://blog.varunajayasiri.com/numpy_lstm.html

Imports | import numpy as np
from numpy import ndarray
from typing import Dict, List, Tuple
import matplotlib.pyplot as plt
from IPython import display
plt.style.use('seaborn-white')
%matplotlib inline
from copy import deepcopy
from collections import deque
from lincoln.utils.np_utils import assert_same_shape
from scipy.special import logsumexp | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
Activations | def sigmoid(x: ndarray):
return 1 / (1 + np.exp(-x))
def dsigmoid(x: ndarray):
return sigmoid(x) * (1 - sigmoid(x))
def tanh(x: ndarray):
return np.tanh(x)
def dtanh(x: ndarray):
return 1 - np.tanh(x) * np.tanh(x)
def softmax(x, axis=None):
return np.exp(x - logsumexp(x, axis=axis, keepdims=True))
def batch_softmax(input_array: ndarray):
out = []
for row in input_array:
out.append(softmax(row, axis=1))
return np.stack(out)
| _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
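A quick numerical check, included as a sketch (not part of the original notebook), that the hand-written derivatives agree with central finite differences:
x = np.linspace(-3, 3, 7)
eps = 1e-5
print(np.allclose(dsigmoid(x), (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)))  # True
print(np.allclose(dtanh(x), (tanh(x + eps) - tanh(x - eps)) / (2 * eps)))           # True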
`RNNOptimizer` | class RNNOptimizer(object):
def __init__(self,
lr: float = 0.01,
gradient_clipping: bool = True) -> None:
self.lr = lr
self.gradient_clipping = gradient_clipping
self.first = True
def step(self) -> None:
for layer in self.model.layers:
for key in layer.params.keys():
if self.gradient_clipping:
np.clip(layer.params[key]['deriv'], -2, 2, layer.params[key]['deriv'])
self._update_rule(param=layer.params[key]['value'],
grad=layer.params[key]['deriv'])
def _update_rule(self, **kwargs) -> None:
raise NotImplementedError()
| _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`SGD` and `AdaGrad` | class SGD(RNNOptimizer):
def __init__(self,
lr: float = 0.01,
gradient_clipping: bool = True) -> None:
super().__init__(lr, gradient_clipping)
def _update_rule(self, **kwargs) -> None:
update = self.lr*kwargs['grad']
kwargs['param'] -= update
class AdaGrad(RNNOptimizer):
def __init__(self,
lr: float = 0.01,
gradient_clipping: bool = True) -> None:
super().__init__(lr, gradient_clipping)
self.eps = 1e-7
def step(self) -> None:
if self.first:
self.sum_squares = {}
for i, layer in enumerate(self.model.layers):
self.sum_squares[i] = {}
for key in layer.params.keys():
self.sum_squares[i][key] = np.zeros_like(layer.params[key]['value'])
self.first = False
for i, layer in enumerate(self.model.layers):
for key in layer.params.keys():
if self.gradient_clipping:
np.clip(layer.params[key]['deriv'], -2, 2, layer.params[key]['deriv'])
self._update_rule(param=layer.params[key]['value'],
grad=layer.params[key]['deriv'],
sum_square=self.sum_squares[i][key])
def _update_rule(self, **kwargs) -> None:
# Update running sum of squares
kwargs['sum_square'] += (self.eps +
np.power(kwargs['grad'], 2))
        # Scale the learning rate by the running sum of squares
lr = np.divide(self.lr, np.sqrt(kwargs['sum_square']))
# Use this to update parameters
kwargs['param'] -= lr * kwargs['grad'] | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`Loss`es | class Loss(object):
def __init__(self):
pass
def forward(self,
prediction: ndarray,
target: ndarray) -> float:
assert_same_shape(prediction, target)
self.prediction = prediction
self.target = target
self.output = self._output()
return self.output
def backward(self) -> ndarray:
self.input_grad = self._input_grad()
assert_same_shape(self.prediction, self.input_grad)
return self.input_grad
def _output(self) -> float:
raise NotImplementedError()
def _input_grad(self) -> ndarray:
raise NotImplementedError()
class SoftmaxCrossEntropy(Loss):
def __init__(self, eps: float=1e-9) -> None:
super().__init__()
self.eps = eps
self.single_class = False
def _output(self) -> float:
out = []
for row in self.prediction:
out.append(softmax(row, axis=1))
softmax_preds = np.stack(out)
# clipping the softmax output to prevent numeric instability
self.softmax_preds = np.clip(softmax_preds, self.eps, 1 - self.eps)
# actual loss computation
softmax_cross_entropy_loss = -1.0 * self.target * np.log(self.softmax_preds) - \
(1.0 - self.target) * np.log(1 - self.softmax_preds)
return np.sum(softmax_cross_entropy_loss)
def _input_grad(self) -> np.ndarray:
return self.softmax_preds - self.target | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
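A small shape check, added as a sketch (not part of the original notebook), of the loss on a toy batch shaped (batch_size, sequence_length, vocab_size):
pred = np.random.randn(2, 3, 4)       # 2 sequences, 3 time steps, vocab of 4
target = np.zeros_like(pred)
target[:, :, 0] = 1.0                 # one-hot targets
loss = SoftmaxCrossEntropy()
print(loss.forward(pred, target))     # scalar loss value
print(loss.backward().shape)          # (2, 3, 4), same shape as the prediction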
RNNs
`RNNNode` | class RNNNode(object):
def __init__(self):
pass
def forward(self,
x_in: ndarray,
H_in: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]
) -> Tuple[ndarray]:
'''
param x: numpy array of shape (batch_size, vocab_size)
param H_prev: numpy array of shape (batch_size, hidden_size)
return self.x_out: numpy array of shape (batch_size, vocab_size)
return self.H: numpy array of shape (batch_size, hidden_size)
'''
self.X_in = x_in
self.H_in = H_in
self.Z = np.column_stack((x_in, H_in))
self.H_int = np.dot(self.Z, params_dict['W_f']['value']) \
+ params_dict['B_f']['value']
self.H_out = tanh(self.H_int)
self.X_out = np.dot(self.H_out, params_dict['W_v']['value']) \
+ params_dict['B_v']['value']
return self.X_out, self.H_out
def backward(self,
X_out_grad: ndarray,
H_out_grad: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]) -> Tuple[ndarray]:
'''
param x_out_grad: numpy array of shape (batch_size, vocab_size)
param h_out_grad: numpy array of shape (batch_size, hidden_size)
param RNN_Params: RNN_Params object
return x_in_grad: numpy array of shape (batch_size, vocab_size)
return h_in_grad: numpy array of shape (batch_size, hidden_size)
'''
assert_same_shape(X_out_grad, self.X_out)
assert_same_shape(H_out_grad, self.H_out)
params_dict['B_v']['deriv'] += X_out_grad.sum(axis=0)
params_dict['W_v']['deriv'] += np.dot(self.H_out.T, X_out_grad)
dh = np.dot(X_out_grad, params_dict['W_v']['value'].T)
dh += H_out_grad
dH_int = dh * dtanh(self.H_int)
params_dict['B_f']['deriv'] += dH_int.sum(axis=0)
params_dict['W_f']['deriv'] += np.dot(self.Z.T, dH_int)
dz = np.dot(dH_int, params_dict['W_f']['value'].T)
X_in_grad = dz[:, :self.X_in.shape[1]]
H_in_grad = dz[:, self.X_in.shape[1]:]
assert_same_shape(X_out_grad, self.X_out)
assert_same_shape(H_out_grad, self.H_out)
return X_in_grad, H_in_grad | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`RNNLayer` | class RNNLayer(object):
def __init__(self,
hidden_size: int,
output_size: int,
weight_scale: float = None):
'''
        param hidden_size: int - the number of "hidden neurons" in this RNNLayer
        param output_size: int - the dimension of the layer's output at each time step
        param weight_scale: float - scale of the normal distribution used to initialize the weights
'''
self.hidden_size = hidden_size
self.output_size = output_size
self.weight_scale = weight_scale
self.start_H = np.zeros((1, hidden_size))
self.first = True
def _init_params(self,
input_: ndarray):
self.vocab_size = input_.shape[2]
if not self.weight_scale:
self.weight_scale = 2 / (self.vocab_size + self.output_size)
self.params = {}
self.params['W_f'] = {}
self.params['B_f'] = {}
self.params['W_v'] = {}
self.params['B_v'] = {}
self.params['W_f']['value'] = np.random.normal(loc = 0.0,
scale=self.weight_scale,
size=(self.hidden_size + self.vocab_size, self.hidden_size))
self.params['B_f']['value'] = np.random.normal(loc = 0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_v']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size, self.output_size))
self.params['B_v']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.output_size))
self.params['W_f']['deriv'] = np.zeros_like(self.params['W_f']['value'])
self.params['B_f']['deriv'] = np.zeros_like(self.params['B_f']['value'])
self.params['W_v']['deriv'] = np.zeros_like(self.params['W_v']['value'])
self.params['B_v']['deriv'] = np.zeros_like(self.params['B_v']['value'])
self.cells = [RNNNode() for x in range(input_.shape[1])]
def _clear_gradients(self):
for key in self.params.keys():
self.params[key]['deriv'] = np.zeros_like(self.params[key]['deriv'])
def forward(self, x_seq_in: ndarray):
'''
param x_seq_in: numpy array of shape (batch_size, sequence_length, vocab_size)
return x_seq_out: numpy array of shape (batch_size, sequence_length, output_size)
'''
if self.first:
self._init_params(x_seq_in)
self.first=False
batch_size = x_seq_in.shape[0]
H_in = np.copy(self.start_H)
H_in = np.repeat(H_in, batch_size, axis=0)
sequence_length = x_seq_in.shape[1]
x_seq_out = np.zeros((batch_size, sequence_length, self.output_size))
for t in range(sequence_length):
x_in = x_seq_in[:, t, :]
y_out, H_in = self.cells[t].forward(x_in, H_in, self.params)
x_seq_out[:, t, :] = y_out
self.start_H = H_in.mean(axis=0, keepdims=True)
return x_seq_out
def backward(self, x_seq_out_grad: ndarray):
'''
param loss_grad: numpy array of shape (batch_size, sequence_length, vocab_size)
return loss_grad_out: numpy array of shape (batch_size, sequence_length, vocab_size)
'''
batch_size = x_seq_out_grad.shape[0]
h_in_grad = np.zeros((batch_size, self.hidden_size))
sequence_length = x_seq_out_grad.shape[1]
x_seq_in_grad = np.zeros((batch_size, sequence_length, self.vocab_size))
for t in reversed(range(sequence_length)):
x_out_grad = x_seq_out_grad[:, t, :]
grad_out, h_in_grad = \
self.cells[t].backward(x_out_grad, h_in_grad, self.params)
x_seq_in_grad[:, t, :] = grad_out
return x_seq_in_grad | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`RNNModel` | class RNNModel(object):
'''
The Model class that takes in inputs and targets and actually trains the network and calculates the loss.
'''
def __init__(self,
layers: List[RNNLayer],
sequence_length: int,
vocab_size: int,
loss: Loss):
'''
        param layers: list of RNNLayer objects making up the network
        param sequence_length: int - length of sequence being passed through the network
        param vocab_size: int - the number of characters in the vocabulary of which we are predicting the next
        character.
        param loss: Loss object used to compare the network's predictions with the targets
'''
self.layers = layers
self.vocab_size = vocab_size
self.sequence_length = sequence_length
self.loss = loss
for layer in self.layers:
setattr(layer, 'sequence_length', sequence_length)
def forward(self,
x_batch: ndarray):
'''
param inputs: list of integers - a list of indices of characters being passed in as the
input sequence of the network.
returns x_batch_in: numpy array of shape (batch_size, sequence_length, vocab_size)
'''
for layer in self.layers:
x_batch = layer.forward(x_batch)
return x_batch
def backward(self,
loss_grad: ndarray):
'''
param loss_grad: numpy array with shape (batch_size, sequence_length, vocab_size)
        returns loss_grad: numpy array with shape (batch_size, sequence_length, vocab_size), the gradient with respect to the model's input
'''
for layer in reversed(self.layers):
loss_grad = layer.backward(loss_grad)
return loss_grad
def single_step(self,
x_batch: ndarray,
y_batch: ndarray):
'''
The step that does it all:
1. Forward pass & softmax
2. Compute loss and loss gradient
3. Backward pass
        (Parameter updates are applied afterwards by the optimizer's step method.)
        param x_batch: numpy array of shape (batch_size, sequence_length, vocab_size) - one-hot encoded inputs
        param y_batch: numpy array of shape (batch_size, sequence_length, vocab_size) - one-hot encoded targets
return loss
'''
x_batch_out = self.forward(x_batch)
loss = self.loss.forward(x_batch_out, y_batch)
loss_grad = self.loss.backward()
for layer in self.layers:
layer._clear_gradients()
self.backward(loss_grad)
return loss | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`RNNTrainer` | class RNNTrainer:
'''
Takes in a text file and a model, and starts generating characters.
'''
def __init__(self,
text_file: str,
model: RNNModel,
optim: RNNOptimizer,
batch_size: int = 32):
self.data = open(text_file, 'r').read()
self.model = model
self.chars = list(set(self.data))
self.vocab_size = len(self.chars)
self.char_to_idx = {ch:i for i,ch in enumerate(self.chars)}
self.idx_to_char = {i:ch for i,ch in enumerate(self.chars)}
self.sequence_length = self.model.sequence_length
self.batch_size = batch_size
self.optim = optim
setattr(self.optim, 'model', self.model)
def _generate_inputs_targets(self,
start_pos: int):
inputs_indices = np.zeros((self.batch_size, self.sequence_length), dtype=int)
targets_indices = np.zeros((self.batch_size, self.sequence_length), dtype=int)
for i in range(self.batch_size):
inputs_indices[i, :] = np.array([self.char_to_idx[ch]
for ch in self.data[start_pos + i: start_pos + self.sequence_length + i]])
targets_indices[i, :] = np.array([self.char_to_idx[ch]
for ch in self.data[start_pos + 1 + i: start_pos + self.sequence_length + 1 + i]])
return inputs_indices, targets_indices
def _generate_one_hot_array(self,
indices: ndarray):
'''
param indices: numpy array of shape (batch_size, sequence_length)
return batch - numpy array of shape (batch_size, sequence_length, vocab_size)
'''
batch = []
for seq in indices:
one_hot_sequence = np.zeros((self.sequence_length, self.vocab_size))
for i in range(self.sequence_length):
one_hot_sequence[i, seq[i]] = 1.0
batch.append(one_hot_sequence)
return np.stack(batch)
def sample_output(self,
input_char: int,
sample_length: int):
'''
Generates a sample output using the current trained model, one character at a time.
param input_char: int - index of the character to use to start generating a sequence
param sample_length: int - the length of the sample output to generate
return txt: string - a string of length sample_length representing the sample output
'''
indices = []
sample_model = deepcopy(self.model)
for i in range(sample_length):
input_char_batch = np.zeros((1, 1, self.vocab_size))
input_char_batch[0, 0, input_char] = 1.0
x_batch_out = sample_model.forward(input_char_batch)
x_softmax = batch_softmax(x_batch_out)
input_char = np.random.choice(range(self.vocab_size), p=x_softmax.ravel())
indices.append(input_char)
txt = ''.join(self.idx_to_char[idx] for idx in indices)
return txt
def train(self,
num_iterations: int,
sample_every: int=100):
'''
Trains the "character generator" for a number of iterations.
        Each "iteration" feeds one batch of size self.batch_size through the neural network.
Continues until num_iterations is reached. Displays sample text generated using the latest version.
'''
plot_iter = np.zeros((0))
plot_loss = np.zeros((0))
num_iter = 0
start_pos = 0
moving_average = deque(maxlen=100)
while num_iter < num_iterations:
if start_pos + self.sequence_length + self.batch_size + 1 > len(self.data):
start_pos = 0
## Update the model
inputs_indices, targets_indices = self._generate_inputs_targets(start_pos)
inputs_batch, targets_batch = \
self._generate_one_hot_array(inputs_indices), self._generate_one_hot_array(targets_indices)
loss = self.model.single_step(inputs_batch, targets_batch)
self.optim.step()
moving_average.append(loss)
ma_loss = np.mean(moving_average)
start_pos += self.batch_size
plot_iter = np.append(plot_iter, [num_iter])
plot_loss = np.append(plot_loss, [ma_loss])
if num_iter % 100 == 0:
plt.plot(plot_iter, plot_loss)
display.clear_output(wait=True)
plt.show()
sample_text = self.sample_output(self.char_to_idx[self.data[start_pos]],
200)
print(sample_text)
num_iter += 1
layers = [RNNLayer(hidden_size=256, output_size=62)]
mod = RNNModel(layers=layers,
vocab_size=62, sequence_length=10,
loss=SoftmaxCrossEntropy())
optim = SGD(lr=0.001, gradient_clipping=True)
trainer = RNNTrainer('input.txt', mod, optim)
trainer.train(1000, sample_every=100) | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
With RNN cells, this gets stuck in a poor local optimum. Let's try `LSTM`s.

LSTMs
`LSTMNode` | class LSTMNode:
def __init__(self):
'''
        A single LSTM cell. It holds no parameters of its own; the weights live in the
        enclosing LSTMLayer and are passed in to forward and backward.
'''
pass
def forward(self,
X_in: ndarray,
H_in: ndarray,
C_in: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]):
'''
param X_in: numpy array of shape (batch_size, vocab_size)
param H_in: numpy array of shape (batch_size, hidden_size)
param C_in: numpy array of shape (batch_size, hidden_size)
return self.X_out: numpy array of shape (batch_size, output_size)
return self.H: numpy array of shape (batch_size, hidden_size)
return self.C: numpy array of shape (batch_size, hidden_size)
'''
self.X_in = X_in
self.C_in = C_in
self.Z = np.column_stack((X_in, H_in))
self.f_int = np.dot(self.Z, params_dict['W_f']['value']) + params_dict['B_f']['value']
self.f = sigmoid(self.f_int)
self.i_int = np.dot(self.Z, params_dict['W_i']['value']) + params_dict['B_i']['value']
self.i = sigmoid(self.i_int)
self.C_bar_int = np.dot(self.Z, params_dict['W_c']['value']) + params_dict['B_c']['value']
self.C_bar = tanh(self.C_bar_int)
self.C_out = self.f * C_in + self.i * self.C_bar
self.o_int = np.dot(self.Z, params_dict['W_o']['value']) + params_dict['B_o']['value']
self.o = sigmoid(self.o_int)
self.H_out = self.o * tanh(self.C_out)
self.X_out = np.dot(self.H_out, params_dict['W_v']['value']) + params_dict['B_v']['value']
return self.X_out, self.H_out, self.C_out
def backward(self,
X_out_grad: ndarray,
H_out_grad: ndarray,
C_out_grad: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]):
'''
param loss_grad: numpy array of shape (1, vocab_size)
param dh_next: numpy array of shape (1, hidden_size)
param dC_next: numpy array of shape (1, hidden_size)
param LSTM_Params: LSTM_Params object
return self.dx_prev: numpy array of shape (1, vocab_size)
return self.dH_prev: numpy array of shape (1, hidden_size)
return self.dC_prev: numpy array of shape (1, hidden_size)
'''
assert_same_shape(X_out_grad, self.X_out)
assert_same_shape(H_out_grad, self.H_out)
assert_same_shape(C_out_grad, self.C_out)
params_dict['W_v']['deriv'] += np.dot(self.H_out.T, X_out_grad)
params_dict['B_v']['deriv'] += X_out_grad.sum(axis=0)
dh_out = np.dot(X_out_grad, params_dict['W_v']['value'].T)
dh_out += H_out_grad
do = dh_out * tanh(self.C_out)
do_int = dsigmoid(self.o_int) * do
params_dict['W_o']['deriv'] += np.dot(self.Z.T, do_int)
params_dict['B_o']['deriv'] += do_int.sum(axis=0)
dC_out = dh_out * self.o * dtanh(self.C_out)
dC_out += C_out_grad
dC_bar = dC_out * self.i
dC_bar_int = dtanh(self.C_bar_int) * dC_bar
params_dict['W_c']['deriv'] += np.dot(self.Z.T, dC_bar_int)
params_dict['B_c']['deriv'] += dC_bar_int.sum(axis=0)
di = dC_out * self.C_bar
di_int = dsigmoid(self.i_int) * di
params_dict['W_i']['deriv'] += np.dot(self.Z.T, di_int)
params_dict['B_i']['deriv'] += di_int.sum(axis=0)
df = dC_out * self.C_in
df_int = dsigmoid(self.f_int) * df
params_dict['W_f']['deriv'] += np.dot(self.Z.T, df_int)
params_dict['B_f']['deriv'] += df_int.sum(axis=0)
dz = (np.dot(df_int, params_dict['W_f']['value'].T)
+ np.dot(di_int, params_dict['W_i']['value'].T)
+ np.dot(dC_bar_int, params_dict['W_c']['value'].T)
+ np.dot(do_int, params_dict['W_o']['value'].T))
dx_prev = dz[:, :self.X_in.shape[1]]
dH_prev = dz[:, self.X_in.shape[1]:]
dC_prev = self.f * dC_out
return dx_prev, dH_prev, dC_prev | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`LSTMLayer` | class LSTMLayer:
def __init__(self,
hidden_size: int,
output_size: int,
weight_scale: float = 0.01):
'''
        param hidden_size: int - the number of "hidden neurons" in this LSTMLayer
        param output_size: int - the dimension of the layer's output at each time step
        param weight_scale: float - scale of the normal distribution used to initialize the weights
'''
self.hidden_size = hidden_size
self.output_size = output_size
self.weight_scale = weight_scale
self.start_H = np.zeros((1, hidden_size))
self.start_C = np.zeros((1, hidden_size))
self.first = True
def _init_params(self,
input_: ndarray):
self.vocab_size = input_.shape[2]
self.params = {}
self.params['W_f'] = {}
self.params['B_f'] = {}
self.params['W_i'] = {}
self.params['B_i'] = {}
self.params['W_c'] = {}
self.params['B_c'] = {}
self.params['W_o'] = {}
self.params['B_o'] = {}
self.params['W_v'] = {}
self.params['B_v'] = {}
self.params['W_f']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size =(self.hidden_size + self.vocab_size, self.hidden_size))
self.params['B_f']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_i']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size + self.vocab_size, self.hidden_size))
self.params['B_i']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_c']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size + self.vocab_size, self.hidden_size))
self.params['B_c']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_o']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size + self.vocab_size, self.hidden_size))
self.params['B_o']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_v']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size, self.output_size))
self.params['B_v']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.output_size))
for key in self.params.keys():
self.params[key]['deriv'] = np.zeros_like(self.params[key]['value'])
self.cells = [LSTMNode() for x in range(input_.shape[1])]
def _clear_gradients(self):
for key in self.params.keys():
self.params[key]['deriv'] = np.zeros_like(self.params[key]['deriv'])
def forward(self, x_seq_in: ndarray):
'''
param x_seq_in: numpy array of shape (batch_size, sequence_length, vocab_size)
        return x_seq_out: numpy array of shape (batch_size, sequence_length, output_size)
'''
if self.first:
self._init_params(x_seq_in)
self.first=False
batch_size = x_seq_in.shape[0]
H_in = np.copy(self.start_H)
C_in = np.copy(self.start_C)
H_in = np.repeat(H_in, batch_size, axis=0)
C_in = np.repeat(C_in, batch_size, axis=0)
sequence_length = x_seq_in.shape[1]
x_seq_out = np.zeros((batch_size, sequence_length, self.output_size))
for t in range(sequence_length):
x_in = x_seq_in[:, t, :]
y_out, H_in, C_in = self.cells[t].forward(x_in, H_in, C_in, self.params)
x_seq_out[:, t, :] = y_out
self.start_H = H_in.mean(axis=0, keepdims=True)
self.start_C = C_in.mean(axis=0, keepdims=True)
return x_seq_out
def backward(self, x_seq_out_grad: ndarray):
'''
        param x_seq_out_grad: numpy array of shape (batch_size, sequence_length, output_size)
        return x_seq_in_grad: numpy array of shape (batch_size, sequence_length, vocab_size)
'''
batch_size = x_seq_out_grad.shape[0]
h_in_grad = np.zeros((batch_size, self.hidden_size))
c_in_grad = np.zeros((batch_size, self.hidden_size))
num_chars = x_seq_out_grad.shape[1]
x_seq_in_grad = np.zeros((batch_size, num_chars, self.vocab_size))
for t in reversed(range(num_chars)):
x_out_grad = x_seq_out_grad[:, t, :]
grad_out, h_in_grad, c_in_grad = \
self.cells[t].backward(x_out_grad, h_in_grad, c_in_grad, self.params)
x_seq_in_grad[:, t, :] = grad_out
return x_seq_in_grad | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`LSTMModel` | class LSTMModel(object):
'''
The Model class that takes in inputs and targets and actually trains the network and calculates the loss.
'''
def __init__(self,
layers: List[LSTMLayer],
sequence_length: int,
vocab_size: int,
hidden_size: int,
loss: Loss):
'''
        param layers: List[LSTMLayer] - the layers of the network, applied in order
        param sequence_length: int - length of the sequences passed through the network
        param vocab_size: int - the number of characters in the vocabulary of which we are predicting the next character
        param hidden_size: int - the number of "hidden neurons" in each layer of the network
        param loss: Loss - the loss used to compare the network output with the targets
'''
self.layers = layers
self.vocab_size = vocab_size
self.hidden_size = hidden_size
self.sequence_length = sequence_length
self.loss = loss
for layer in self.layers:
setattr(layer, 'sequence_length', sequence_length)
def forward(self,
x_batch: ndarray):
'''
        param x_batch: numpy array of shape (batch_size, sequence_length, vocab_size) - one-hot encoded input sequences
        returns x_batch: numpy array of shape (batch_size, sequence_length, output_size) - the output of the final layer at each time step
'''
for layer in self.layers:
x_batch = layer.forward(x_batch)
return x_batch
def backward(self,
loss_grad: ndarray):
'''
param loss_grad: numpy array with shape (batch_size, sequence_length, vocab_size)
        returns loss_grad: numpy array with the gradient propagated back through all layers to the network input
'''
for layer in reversed(self.layers):
loss_grad = layer.backward(loss_grad)
return loss_grad
def single_step(self,
x_batch: ndarray,
y_batch: ndarray):
'''
The step that does it all:
1. Forward pass & softmax
2. Compute loss and loss gradient
3. Backward pass
4. Update parameters
        param x_batch: numpy array of shape (batch_size, sequence_length, vocab_size) - one-hot encoded inputs to the network
        param y_batch: numpy array of shape (batch_size, sequence_length, vocab_size) - one-hot encoded targets for the network
return loss
'''
x_batch_out = self.forward(x_batch)
loss = self.loss.forward(x_batch_out, y_batch)
loss_grad = self.loss.backward()
for layer in self.layers:
layer._clear_gradients()
self.backward(loss_grad)
return loss | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
GRUs `GRUNode` | class GRUNode(object):
def __init__(self):
'''
        GRUNode holds no parameters of its own: the weights and biases live in the params_dict
        owned by the GRULayer and are passed into forward and backward.
'''
pass
def forward(self,
X_in: ndarray,
H_in: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]) -> Tuple[ndarray]:
'''
param X_in: numpy array of shape (batch_size, vocab_size)
param H_in: numpy array of shape (batch_size, hidden_size)
return self.X_out: numpy array of shape (batch_size, vocab_size)
return self.H_out: numpy array of shape (batch_size, hidden_size)
'''
self.X_in = X_in
self.H_in = H_in
# reset gate
self.X_r = np.dot(X_in, params_dict['W_xr']['value'])
self.H_r = np.dot(H_in, params_dict['W_hr']['value'])
# update gate
self.X_u = np.dot(X_in, params_dict['W_xu']['value'])
self.H_u = np.dot(H_in, params_dict['W_hu']['value'])
# gates
self.r_int = self.X_r + self.H_r + params_dict['B_r']['value']
self.r = sigmoid(self.r_int)
        self.u_int = self.X_u + self.H_u + params_dict['B_u']['value']
self.u = sigmoid(self.u_int)
# new state
self.h_reset = self.r * H_in
self.X_h = np.dot(X_in, params_dict['W_xh']['value'])
self.H_h = np.dot(self.h_reset, params_dict['W_hh']['value'])
self.h_bar_int = self.X_h + self.H_h + params_dict['B_h']['value']
self.h_bar = tanh(self.h_bar_int)
self.H_out = self.u * self.H_in + (1 - self.u) * self.h_bar
self.X_out = np.dot(self.H_out, params_dict['W_v']['value']) + params_dict['B_v']['value']
return self.X_out, self.H_out
def backward(self,
X_out_grad: ndarray,
H_out_grad: ndarray,
params_dict: Dict[str, Dict[str, ndarray]]):
params_dict['B_v']['deriv'] += X_out_grad.sum(axis=0)
params_dict['W_v']['deriv'] += np.dot(self.H_out.T, X_out_grad)
dh_out = np.dot(X_out_grad, params_dict['W_v']['value'].T)
dh_out += H_out_grad
        du = (self.H_in - self.h_bar) * dh_out
        dh_bar = (1 - self.u) * dh_out
dh_bar_int = dh_bar * dtanh(self.h_bar_int)
params_dict['B_h']['deriv'] += dh_bar_int.sum(axis=0)
params_dict['W_xh']['deriv'] += np.dot(self.X_in.T, dh_bar_int)
dX_in = np.dot(dh_bar_int, params_dict['W_xh']['value'].T)
params_dict['W_hh']['deriv'] += np.dot(self.h_reset.T, dh_bar_int)
dh_reset = np.dot(dh_bar_int, params_dict['W_hh']['value'].T)
dr = dh_reset * self.H_in
        dH_in = dh_reset * self.r + dh_out * self.u  # via h_reset = r * H_in and via the direct u * H_in term
# update branch
du_int = dsigmoid(self.u_int) * du
params_dict['B_u']['deriv'] += du_int.sum(axis=0)
dX_in += np.dot(du_int, params_dict['W_xu']['value'].T)
params_dict['W_xu']['deriv'] += np.dot(self.X_in.T, du_int)
dH_in += np.dot(du_int, params_dict['W_hu']['value'].T)
params_dict['W_hu']['deriv'] += np.dot(self.H_in.T, du_int)
# reset branch
dr_int = dsigmoid(self.r_int) * dr
params_dict['B_r']['deriv'] += dr_int.sum(axis=0)
dX_in += np.dot(dr_int, params_dict['W_xr']['value'].T)
params_dict['W_xr']['deriv'] += np.dot(self.X_in.T, dr_int)
dH_in += np.dot(dr_int, params_dict['W_hr']['value'].T)
params_dict['W_hr']['deriv'] += np.dot(self.H_in.T, dr_int)
return dX_in, dH_in | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
`GRULayer` | class GRULayer(object):
def __init__(self,
hidden_size: int,
output_size: int,
weight_scale: float = 0.01):
'''
        param hidden_size: int - the number of "hidden neurons" in each GRUNode of this layer.
        param output_size: int - the dimension of the output produced at each time step.
        param weight_scale: float - the scale of the normal distribution used to initialize the weights.
'''
self.hidden_size = hidden_size
self.output_size = output_size
self.weight_scale = weight_scale
self.start_H = np.zeros((1, hidden_size))
self.first = True
def _init_params(self,
input_: ndarray):
self.vocab_size = input_.shape[2]
self.params = {}
self.params['W_xr'] = {}
self.params['W_hr'] = {}
self.params['B_r'] = {}
self.params['W_xu'] = {}
self.params['W_hu'] = {}
self.params['B_u'] = {}
self.params['W_xh'] = {}
self.params['W_hh'] = {}
self.params['B_h'] = {}
self.params['W_v'] = {}
self.params['B_v'] = {}
self.params['W_xr']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.vocab_size, self.hidden_size))
self.params['W_hr']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size, self.hidden_size))
self.params['B_r']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_xu']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.vocab_size, self.hidden_size))
self.params['W_hu']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size, self.hidden_size))
self.params['B_u']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(1, self.hidden_size))
self.params['W_xh']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.vocab_size, self.hidden_size))
self.params['W_hh']['value'] = np.random.normal(loc=0.0,
scale=self.weight_scale,
size=(self.hidden_size, self.hidden_size))
self.params['B_h']['value'] = np.random.normal(loc=0.0,
scale=1.0,
size=(1, self.hidden_size))
self.params['W_v']['value'] = np.random.normal(loc=0.0,
scale=1.0,
size=(self.hidden_size, self.output_size))
self.params['B_v']['value'] = np.random.normal(loc=0.0,
scale=1.0,
size=(1, self.output_size))
for key in self.params.keys():
self.params[key]['deriv'] = np.zeros_like(self.params[key]['value'])
self.cells = [GRUNode() for x in range(input_.shape[1])]
def _clear_gradients(self):
for key in self.params.keys():
self.params[key]['deriv'] = np.zeros_like(self.params[key]['deriv'])
def forward(self, x_seq_in: ndarray):
'''
param x_seq_in: numpy array of shape (batch_size, sequence_length, vocab_size)
        return x_seq_out: numpy array of shape (batch_size, sequence_length, output_size)
'''
if self.first:
self._init_params(x_seq_in)
self.first=False
batch_size = x_seq_in.shape[0]
H_in = np.copy(self.start_H)
H_in = np.repeat(H_in, batch_size, axis=0)
sequence_length = x_seq_in.shape[1]
x_seq_out = np.zeros((batch_size, sequence_length, self.output_size))
for t in range(sequence_length):
x_in = x_seq_in[:, t, :]
y_out, H_in = self.cells[t].forward(x_in, H_in, self.params)
x_seq_out[:, t, :] = y_out
self.start_H = H_in.mean(axis=0, keepdims=True)
return x_seq_out
def backward(self, x_seq_out_grad: ndarray):
'''
        param x_seq_out_grad: numpy array of shape (batch_size, sequence_length, output_size)
        return x_seq_in_grad: numpy array of shape (batch_size, sequence_length, vocab_size)
'''
batch_size = x_seq_out_grad.shape[0]
h_in_grad = np.zeros((batch_size, self.hidden_size))
num_chars = x_seq_out_grad.shape[1]
x_seq_in_grad = np.zeros((batch_size, num_chars, self.vocab_size))
for t in reversed(range(num_chars)):
x_out_grad = x_seq_out_grad[:, t, :]
grad_out, h_in_grad = \
self.cells[t].backward(x_out_grad, h_in_grad, self.params)
x_seq_in_grad[:, t, :] = grad_out
return x_seq_in_grad | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
Experiments Single LSTM layer | layers1 = [LSTMLayer(hidden_size=256, output_size=62, weight_scale=0.01)]
mod = RNNModel(layers=layers1,
vocab_size=62, sequence_length=25,
loss=SoftmaxCrossEntropy())
optim = AdaGrad(lr=0.01, gradient_clipping=True)
trainer = RNNTrainer('input.txt', mod, optim, batch_size=3)
trainer.train(1000, sample_every=100) | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
Three variants of multiple layers: | layers2 = [RNNLayer(hidden_size=256, output_size=128, weight_scale=0.1),
LSTMLayer(hidden_size=256, output_size=62, weight_scale=0.01)]
mod = RNNModel(layers=layers2,
vocab_size=62, sequence_length=25,
loss=SoftmaxCrossEntropy())
optim = AdaGrad(lr=0.01, gradient_clipping=True)
trainer = RNNTrainer('input.txt', mod, optim, batch_size=32)
trainer.train(2000, sample_every=100)
layers2 = [LSTMLayer(hidden_size=256, output_size=128, weight_scale=0.1),
LSTMLayer(hidden_size=256, output_size=62, weight_scale=0.01)]
mod = RNNModel(layers=layers2,
vocab_size=62, sequence_length=25,
loss=SoftmaxCrossEntropy())
optim = SGD(lr=0.01, gradient_clipping=True)
trainer = RNNTrainer('input.txt', mod, optim, batch_size=32)
trainer.train(2000, sample_every=100)
layers3 = [GRULayer(hidden_size=256, output_size=128, weight_scale=0.1),
LSTMLayer(hidden_size=256, output_size=62, weight_scale=0.01)]
mod = RNNModel(layers=layers3,
vocab_size=62, sequence_length=25,
loss=SoftmaxCrossEntropy())
optim = AdaGrad(lr=0.01, gradient_clipping=True)
trainer = RNNTrainer('input.txt', mod, optim, batch_size=32)
trainer.train(2000, sample_every=100) | _____no_output_____ | MIT | 06_rnns/RNN_DLFS.ipynb | tianminzheng/DLFS_code |
Live demo: Processing gravity data with Fatiando a Terra Import packages | import pygmt
import pyproj
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
import pooch
import verde as vd
import boule as bl
import harmonica as hm | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Load Bushveld Igneous Complex gravity data (South Africa) and a DEM | url = "https://github.com/fatiando/2021-gsh/main/raw/notebook/data/bushveld_gravity.csv"
md5_hash = "md5:45539f7945794911c6b5a2eb43391051"
fname = pooch.retrieve(url, known_hash=md5_hash, fname="bushveld_gravity.csv")
data = pd.read_csv(fname)
data
# Obtain the region to plot using Verde ([W, E, S, N])
region_deg = vd.get_region((data.longitude, data.latitude))
fig = pygmt.Figure()
fig.basemap(projection="M15c", region=region_deg, frame=True)
pygmt.makecpt(cmap="viridis", series=[data.gravity.min(), data.gravity.max()])
fig.plot(
x=data.longitude,
y=data.latitude,
color=data.gravity,
cmap=True,
style="c4p",
)
fig.colorbar(frame='af+l"Observed Gravity [mGal]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Let's download a DEM for the same area: | url = "https://github.com/fatiando/transform21/raw/main/data/bushveld_topography.nc"
md5_hash = "md5:62daf6a114dda89530e88942aa3b8c41"
fname = pooch.retrieve(url, known_hash=md5_hash, fname="bushveld_topography.nc")
fname | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
And use Xarray to load the netCDF file: | topography = xr.load_dataarray(fname)
topography
# Plot topography using pygmt
topo_region = vd.get_region((topography.longitude.values, topography.latitude.values))
fig = pygmt.Figure()
fig.basemap(projection="M15c", region=topo_region, frame=True)
vmin, vmax = topography.values.min(), topography.values.max()
pygmt.makecpt(cmap="batlow", series=[vmin, vmax])
fig.grdimage(topography)
fig.colorbar(frame='af+l"Topography [m]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Compute gravity disturbance | data["disturbance"] = data.gravity - normal_gravity
data
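The `normal_gravity` value used above is a blank to be filled during the live demo. A minimal sketch of that step with Boule's WGS84 ellipsoid, assuming the table has a `height` column of ellipsoidal heights in meters (the column name is an assumption):

```python
# Normal gravity of the WGS84 reference ellipsoid at each station (in mGal),
# evaluated at the observation latitudes and heights; feeds the
# data.gravity - normal_gravity line in the cell above.
normal_gravity = bl.WGS84.normal_gravity(data.latitude, data.height)  # 'height' column name assumed
```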
fig = pygmt.Figure()
fig.basemap(projection="M15c", region=region_deg, frame=True)
maxabs = vd.maxabs(data.disturbance)
pygmt.makecpt(cmap="polar", series=[-maxabs, maxabs])
fig.plot(
x=data.longitude,
y=data.latitude,
color=data.disturbance,
cmap=True,
style="c4p",
)
fig.colorbar(frame='af+l"Gravity disturbance [mGal]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Remove terrain correction Project the data to plain coordinates | projection = pyproj.Proj(proj="merc", lat_ts=data.latitude.mean())
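# (sketch) The projection call itself is left blank in the live demo. A pyproj.Proj object is a
# callable that maps (longitude, latitude) to Cartesian (easting, northing) coordinates in meters:
easting, northing = projection(data.longitude.values, data.latitude.values)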
data["easting"] = easting
data["northing"] = northing
data | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Project the topography to plain coordinates Compute gravitational effect of the layer of prisms Create a model of the terrain with prisms Calculate the gravitational effect of the terrain Calculate the Bouguer disturbance | data["bouguer"] = data.disturbance - terrain_effect
data
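The steps listed above (project the topography, build a prism model of the terrain, and compute its gravitational effect) are blanks to be filled during the live demo. A rough sketch of how `terrain_effect` could be obtained with Verde and Harmonica; the density value, the `height` column, and the exact keyword names are assumptions and may differ from the demo's actual code:

```python
# Project the topography grid onto the same Cartesian coordinates as the data
topo_plain = vd.project_grid(topography, projection)

# Model the terrain as a layer of rectangular prisms with a typical crustal density
prisms = hm.prism_layer(
    coordinates=(topo_plain.easting, topo_plain.northing),
    surface=topo_plain,
    reference=0,
    properties={"density": 2670 * np.ones_like(topo_plain)},  # kg/m3, assumed value
)

# Vertical gravitational effect of the terrain at the observation points (mGal);
# this is what the data.disturbance - terrain_effect line above consumes
coordinates = (data.easting, data.northing, data.height)  # 'height' column name assumed
terrain_effect = prisms.prism_layer.gravity(coordinates, field="g_z")
```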
fig = pygmt.Figure()
fig.basemap(projection="M15c", region=region_deg, frame=True)
maxabs = vd.maxabs(data.bouguer)
pygmt.makecpt(cmap="polar", series=[-maxabs, maxabs])
fig.plot(
x=data.longitude,
y=data.latitude,
color=data.bouguer,
cmap=True,
style="c4p",
)
fig.colorbar(frame='af+l"Bouguer gravity disturbance [mGal]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Calculate residuals. We can use [Verde](https://www.fatiando.org/verde) to remove a second-degree trend from the Bouguer disturbance | data["residuals"] = residuals
data
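The trend removal itself is another live-demo blank; a minimal sketch of the step using Verde's polynomial trend estimator, whose output feeds the `residuals` assignment above:

```python
# Fit a 2nd-degree polynomial trend to the Bouguer disturbance and keep what it cannot explain
trend = vd.Trend(degree=2).fit((data.easting, data.northing), data.bouguer)
residuals = data.bouguer - trend.predict((data.easting, data.northing))
```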
fig = pygmt.Figure()
fig.basemap(projection="M15c", region=region_deg, frame=True)
maxabs = np.quantile(np.abs(data.residuals), 0.99)
pygmt.makecpt(cmap="polar", series=[-maxabs, maxabs])
fig.plot(
x=data.longitude,
y=data.latitude,
color=data.residuals,
cmap=True,
style="c5p",
)
fig.colorbar(frame='af+l"Residuals [mGal]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
Grid the residuals with Equivalent Sources. We can use [Harmonica](https://www.fatiando.org/harmonica) to grid the residuals through the equivalent sources technique | fig = pygmt.Figure()
fig.basemap(projection="M15c", region=region_deg, frame=True)
scale = np.quantile(np.abs(grid.residuals), 0.995)
pygmt.makecpt(cmap="polar", series=[-scale, scale], no_bg=True)
fig.grdimage(
grid.residuals,
shading="+a45+nt0.15",
cmap=True,
)
fig.colorbar(frame='af+l"Residuals [mGal]"')
fig.show() | _____no_output_____ | CC-BY-4.0 | live.ipynb | fatiando/2021-gsh |
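The gridding step that produces the `grid.residuals` plotted above is also left blank in the live demo. A rough sketch with Harmonica's equivalent sources; the depth, damping, spacing, upward height, and the `height` column are assumptions, and the exact `grid()` signature varies between Harmonica releases:

```python
# Fit deep equivalent sources to the residuals...
eqs = hm.EquivalentSources(depth=10e3, damping=10)
eqs.fit((data.easting, data.northing, data.height), data.residuals)  # 'height' column name assumed

# ...and predict them on a regular geographic grid that PyGMT can plot
grid = eqs.grid(
    upward=2500,                 # constant observation height for the output grid (m)
    region=region_deg,           # geographic region computed earlier
    spacing=0.01,                # grid spacing in degrees
    projection=projection,       # project lon/lat to easting/northing before predicting
    data_names=["residuals"],
    dims=("latitude", "longitude"),
)
```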
Title: Graph Element; Dependencies: Matplotlib; Backends: Matplotlib, Bokeh | import numpy as np
import pandas as pd
import holoviews as hv
from bokeh.sampledata.les_mis import data
hv.extension('matplotlib')
%output size=200 fig='svg' | _____no_output_____ | BSD-3-Clause | examples/reference/elements/matplotlib/Chord.ipynb | scaine1/holoviews |
The ``Chord`` element allows representing the inter-relationships between data points in a graph. The nodes are arranged radially around a circle with the relationships between the data points drawn as arcs (or chords) connecting the nodes. The number of chords is scaled by a weight declared as a value dimension on the ``Chord`` element.If the weight values are integers, they define the number of chords to be drawn between the source and target nodes directly. If the weights are floating point values, they are normalized to a default of 500 chords, which are divided up among the edges. Any non-zero weight will be assigned at least one chord.The ``Chord`` element is a type of ``Graph`` element and shares the same constructor. The most basic constructor accepts a columnar dataset of the source and target nodes and an optional value. Here we supply a dataframe containing the number of dialogues between characters of the *Les Misérables* musical. The data contains ``source`` and ``target`` node indices and an associated ``value`` column: | links = pd.DataFrame(data['links'])
print(links.head(3)) | _____no_output_____ | BSD-3-Clause | examples/reference/elements/matplotlib/Chord.ipynb | scaine1/holoviews |
In the simplest case we can construct the ``Chord`` by passing it just the edges: | hv.Chord(links) | _____no_output_____ | BSD-3-Clause | examples/reference/elements/matplotlib/Chord.ipynb | scaine1/holoviews |
To add node labels and other information we can construct a ``Dataset`` with a key dimension of node indices. | nodes = hv.Dataset(pd.DataFrame(data['nodes']), 'index')
nodes.data.head() | _____no_output_____ | BSD-3-Clause | examples/reference/elements/matplotlib/Chord.ipynb | scaine1/holoviews |
Additionally we can now color the nodes and edges by their index and add some labels. The ``label_index``, ``color_index`` and ``edge_color_index`` allow specifying columns to color by. | %%opts Chord [label_index='name' color_index='index' edge_color_index='source']
%%opts Chord (cmap='Category20' edge_cmap='Category20')
hv.Chord((links, nodes)).select(value=(5, None)) | _____no_output_____ | BSD-3-Clause | examples/reference/elements/matplotlib/Chord.ipynb | scaine1/holoviews |
Gram-Schmidt and Modified Gram-Schmidt | import numpy as np
import numpy.linalg as la
A = np.random.randn(3, 3)
def test_orthogonality(Q):
print("Q:")
print(Q)
print("Q^T Q:")
QtQ = np.dot(Q.T, Q)
QtQ[np.abs(QtQ) < 1e-15] = 0
print(QtQ)
Q = np.zeros(A.shape) | _____no_output_____ | Unlicense | cleared-demos/linear_least_squares/Gram-Schmidt and Modified Gram-Schmidt.ipynb | xywei/numerics-notes |
Now let us generalize the process we used for three vectors earlier: This procedure is called [Gram-Schmidt Orthonormalization](https://en.wikipedia.org/wiki/Gram–Schmidt_process). | test_orthogonality(Q) | _____no_output_____ | Unlicense | cleared-demos/linear_least_squares/Gram-Schmidt and Modified Gram-Schmidt.ipynb | xywei/numerics-notes |
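The orthonormalization loop itself was cleared from the demo cell that initializes `Q`; a minimal sketch of classical Gram-Schmidt using the `A`, `Q`, `np`, and `la` already defined:

```python
# Classical Gram-Schmidt: orthogonalize each column of A against the previously
# computed columns of Q, then normalize it.
for k in range(A.shape[1]):
    avec = A[:, k]
    q = avec
    for j in range(k):
        # subtract the component of the *original* vector along Q[:, j]
        q = q - np.dot(avec, Q[:, j]) * Q[:, j]
    Q[:, k] = q / la.norm(q)
```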
Now let us try a different example ([Source](http://fgiesen.wordpress.com/2013/06/02/modified-gram-schmidt-orthogonalization/)): |
np.set_printoptions(precision=13)
eps = 1e-8
A = np.array([
[1, 1, 1],
[eps,eps,0],
[eps,0, eps]
])
A
Q = np.zeros(A.shape)
for k in range(A.shape[1]):
avec = A[:, k]
q = avec
for j in range(k):
print(q)
q = q - np.dot(avec, Q[:,j])*Q[:,j]
print(q)
q = q/la.norm(q)
Q[:, k] = q
print("norm -->", q)
print("-------")
test_orthogonality(Q) | _____no_output_____ | Unlicense | cleared-demos/linear_least_squares/Gram-Schmidt and Modified Gram-Schmidt.ipynb | xywei/numerics-notes |
Questions:* What happened?* How do we fix it? | Q = np.zeros(A.shape)
test_orthogonality(Q) | _____no_output_____ | Unlicense | cleared-demos/linear_least_squares/Gram-Schmidt and Modified Gram-Schmidt.ipynb | xywei/numerics-notes |
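The fix, as the notebook's title suggests, is Modified Gram-Schmidt: subtract each projection from the running vector `q` rather than from the original column, which avoids the catastrophic cancellation seen in the nearly dependent example above. A minimal sketch, reusing `A`, `Q`, `la`, and `test_orthogonality`:

```python
# Modified Gram-Schmidt: project out each already-computed direction from the
# *current* working vector q, which is much more stable in floating point.
Q = np.zeros(A.shape)
for k in range(A.shape[1]):
    q = A[:, k]
    for j in range(k):
        q = q - np.dot(q, Q[:, j]) * Q[:, j]
    Q[:, k] = q / la.norm(q)

test_orthogonality(Q)
```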
Import the wine dataset (considering only the first two features) | from sklearn.datasets import load_wine
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
wine = load_wine()
#select the first two features
X = wine.data[:, :2]
y = wine.target
print('Class labels:', np.unique(y))
sc = StandardScaler()
sc.fit(X)
X_std = sc.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size=0.3, random_state=1, stratify=y) | _____no_output_____ | MIT | svm wine.data - using sklearn.ipynb | yunglinchang/machinelearning_coursework |
Train a model with a linear kernel | from sklearn.metrics import accuracy_score
from matplotlib.colors import ListedColormap
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.svm import SVC
svmlin = SVC(kernel='linear', C=1.0, random_state=1)
svmlin.fit(X_train, y_train)
y_train_pred = svmlin.predict(X_train)
y_test_pred = svmlin.predict(X_test)
svmlin_train = accuracy_score(y_train, y_train_pred)
svmlin_test = accuracy_score(y_test, y_test_pred)
print('SVM with linear kernal train/test accuracies %.3f/%.3f'
%(svmlin_train, svmlin_test))
#plot the scatter plot and decision boundary
#choose colors
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
#use the two features as the x and y axes
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .02),
np.arange(y_min, y_max, .02))
Z = svmlin.predict(np.c_[xx.ravel(), yy.ravel()])
#reshape the predictions for class coloring
Z = Z.reshape(xx.shape)
#draw the scatter plot
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(X_std[:,0], X_std[:,1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max()) | _____no_output_____ | MIT | svm wine.data - using sklearn.ipynb | yunglinchang/machinelearning_coursework |
Train a model with an RBF (Gaussian) kernel | svmrbf = SVC(kernel='rbf', gamma=0.7, C=1.0)
svmrbf.fit(X_train, y_train)
y_train_pred = svmrbf.predict(X_train)
y_test_pred = svmrbf.predict(X_test)
svmrbf_train = accuracy_score(y_train, y_train_pred)
svmrbf_test = accuracy_score(y_test, y_test_pred)
print('SVM with RBF kernal train/test accuracies %.3f/%.3f'
%(svmrbf_train, svmrbf_test))
#plot the scatter plot and decision boundary
#choose colors
from matplotlib.colors import ListedColormap
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
#use the two features as the x and y axes
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .02),
np.arange(y_min, y_max, .02))
Z = svmrbf.predict(np.c_[xx.ravel(), yy.ravel()])
#reshape the predictions for class coloring
Z = Z.reshape(xx.shape)
#draw the scatter plot
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.scatter(X_std[:,0], X_std[:,1], c=y, cmap=cmap_bold, edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max()) | _____no_output_____ | MIT | svm wine.data - using sklearn.ipynb | yunglinchang/machinelearning_coursework |
Effect of the RBF kernel's gamma parameter | C=1.0
models = (SVC(kernel='rbf', gamma=0.1, C=C),
SVC(kernel='rbf', gamma=1, C=C),
SVC(kernel='rbf', gamma=10, C=C))
models = (clf.fit(X_train, y_train) for clf in models)
svmrbf_train = accuracy_score(y_train, y_train_pred)
svmrbf_test = accuracy_score(y_test, y_test_pred)
titles = ('gamma=0.1', 'gamma=1', 'gamma=10')
fig, sub = plt.subplots(1, 3, figsize=(10,3))
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
x_min, x_max = X_train[:, 0].min() - 1, X_train[:, 0].max() + 1
y_min, y_max = X_train[:, 1].min() - 1, X_train[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .02),
np.arange(y_min, y_max, .02))
for clf, title, ax in zip(models, titles, sub.flatten()):
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
ax.pcolormesh(xx, yy, Z, cmap=cmap_light)
ax.scatter(X_std[:,0], X_std[:,1], c=y, cmap=cmap_bold)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_title(title)
svmrbf = SVC(kernel='rbf', gamma=0.1, C=1.0)
svmrbf.fit(X_train, y_train)
y_train_pred = svmrbf.predict(X_train)
y_test_pred = svmrbf.predict(X_test)
svmrbf_train = accuracy_score(y_train, y_train_pred)
svmrbf_test = accuracy_score(y_test, y_test_pred)
print('SVM with RBF kernal train/test accuracies %.3f/%.3f'
%(svmrbf_train, svmrbf_test))
svmrbf = SVC(kernel='rbf', gamma=1, C=1.0)
svmrbf.fit(X_train, y_train)
y_train_pred = svmrbf.predict(X_train)
y_test_pred = svmrbf.predict(X_test)
svmrbf_train = accuracy_score(y_train, y_train_pred)
svmrbf_test = accuracy_score(y_test, y_test_pred)
print('SVM with RBF kernal train/test accuracies %.3f/%.3f'
%(svmrbf_train, svmrbf_test))
svmrbf = SVC(kernel='rbf', gamma=10, C=1.0)
svmrbf.fit(X_train, y_train)
y_train_pred = svmrbf.predict(X_train)
y_test_pred = svmrbf.predict(X_test)
svmrbf_train = accuracy_score(y_train, y_train_pred)
svmrbf_test = accuracy_score(y_test, y_test_pred)
print('SVM with RBF kernal train/test accuracies %.3f/%.3f'
%(svmrbf_train, svmrbf_test)) | SVM with RBF kernal train/test accuracies 0.887/0.833
| MIT | svm wine.data - using sklearn.ipynb | yunglinchang/machinelearning_coursework |
Feature Selection* tf-idf* chi-square* likelihood (LLR)* PMI* EMI => build a dictionary of 500 words LLR | dict_df = pd.read_csv('data/dictionary.txt',header=None,index_col=None,sep=' ')
terms = dict_df[1].tolist() #all terms
with open('data/training.txt','r') as f:
train_id = f.read().splitlines()
train_dict = {}
for trainid in train_id:
trainid = trainid.split(' ')
trainid = list(filter(None, trainid))
train_dict[trainid[0]] = trainid[1:]
train_dict #class:doc_id
train_dict = pickle.load(open('data/train_dict.pkl','rb'))
in_dir = 'data/IRTM/'
train_dict_ = {}
class_token = []
class_token_dict = {}
for c,d in train_dict.items():
for doc in d:
f = open('data/IRTM/'+doc+'.txt')
texts = f.read()
f.close()
tokens_all = preprocess(texts)
tokens_all = tokens_all.split(' ')
tokens_all = list(set(filter(None,tokens_all)))
class_token.append(tokens_all)
class_token_dict[c]=class_token
class_token=[]
len(class_token_dict['1'])
# train_dict = {}
# for c,t in class_token_dict.items():
# train_dict[c] = list(set(t))
# # train_dict[c] = dict(Counter(t))
# train_dict
dict_df.drop(0,axis=1,inplace=True)
dict_df.columns = ['term','score']
dict_df.index = dict_df['term']
dict_df.drop('term',axis=1,inplace=True)
dict_df
dict_df['score'] = 0
dict_df['score_chi'] = 0
dict_df['score_emi'] = 0
# c=1
for term in tqdm(terms): #each term
scores = []
scores_chi = []
scores_emi = []
c=1
for _ in range(len(class_token_dict)): # each class
n11=e11=m11=0
n10=e10=m10=0
n01=e01=m01=0
n00=e00=m00=0
for k,v in class_token_dict.items():
# print(k,c)
if k == str(c): #ontopic
for r in v:
if term in r:
n11+=1
else:
n10+=1
# c+=1
else: #off topic
for r in v:
if term in r:
n01+=1
else:
n00+=1
# c+=1
c+=1
n11+=1e-8
n10+=1e-8
n01+=1e-8
n00+=1e-8
N = n11+n10+n01+n00
e11 = N * (n11+n01)/N * (n11+n10)/N #chi-squre
e10 = N * (n11+n10)/N * (n10+n00)/N
e01 = N * (n11+n01)/N * (n01+n00)/N
e00 = N * (n01+n00)/N * (n10+n00)/N
score_chi = ((n11-e11)**2)/e11 + ((n10-e10)**2)/e10 + ((n01-e01)**2)/e01 + ((n00-e00)**2)/e00
scores_chi.append(score_chi)
n11 = n11 - 1e-8 + 1e-6
n10 = n10 - 1e-8 + 1e-6
n01 = n01 - 1e-8 + 1e-6
n00 = n00 - 1e-8 + 1e-6
N = n11+n10+n01+n00
m11 = (n11/N) * math.log(((n11/N)/((n11+n01)/N * (n11+n10)/N)),2) #EMI
m10 = n10/N * math.log((n10/N)/((n11+n10)/N * (n10+n00)/N),2)
m01 = n01/N * math.log((n01/N)/((n11+n01)/N * (n01+n00)/N),2)
m00 = n00/N * math.log((n00/N)/((n01+n00)/N * (n10+n00)/N),2)
score_emi = m11 + m10 + m01 + m00
scores_emi.append(score_emi)
# print(n11,n10,n01,n00)
n11-=1e-6
        n10-=1e-6
n01-=1e-6
n00-=1e-6
N = n11+n10+n01+n00
score = (((n11+n01)/N) ** n11) * ((1 - ((n11+n01)/N)) ** n10) * (((n11+n01)/N) ** n01) * ((1 - ((n11+n01)/N)) ** n00)
score /= ((n11/(n11+n10)) ** n11) * ((1 - (n11/(n11+n10))) ** n10) * ((n01/(n01+n00)) ** n01) * ((1 - (n01/(n01+n00))) ** n00)
score = -2 * math.log(score, 10) #LLR
scores.append(score)
# c+=1
dict_df.loc[term,'score'] = np.mean(scores)
dict_df.loc[term,'score_chi'] = np.mean(scores_chi)
dict_df.loc[term,'score_emi'] = np.mean(scores_emi)
dict_df
dict_df2 = pd.read_csv('data/dictionary.txt',header=None,index_col=None,sep=' ')
dict_df2.columns = ['id','term','freq']
dict_df2['sum'] = 0.0
dict_df2
tf_list = next(os.walk('../HW2/output/tf-idf/'))[2]
# df_list = [dict_df2]
for tf in tf_list:
# print(tf)
df2 = pd.read_csv('../HW2/output/tf-idf/'+tf,header=None,index_col=None,sep=' ',skiprows=[0])
df2.columns = ['id','tfidf']
df3 = pd.merge(dict_df2,df2,on='id',how='outer')
df3.fillna(0,inplace=True)
dict_df2['sum']+=df3['tfidf']
dict_df2['avg_tfidf'] = dict_df2['sum']/dict_df2['freq']
dict_df2 = dict_df2.drop(['freq','sum'],axis=1)
dict_df2
# break
# df_list.append(df2)
# df3 = pd.concat(df_list).groupby(level=0).sum()
# df3 = pd.concat([df3,df2]).groupby(level=0).sum()
# df3 = pd.merge(dict_df2,df2,on='id',how='outer')
# df3 = pd.merge(df3,df2,on='id',how='outer')?
# df3[df3.id == 2]
dict_df['term'] = dict_df.index
dict_df3 = pd.merge(dict_df,dict_df2,on='term',how='outer')
dict_df3
cols = list(dict_df3)
cols[4], cols[3], cols[5], cols[1], cols[0], cols[2] = cols[0], cols[1] , cols[2] , cols[3], cols[4], cols[5]
dict_df3 = dict_df3.ix[:,cols]
dict_df3
dict_df3.columns = ['id','term','avg_tfidf','score_chi','score_llr','score_emi']
dict_df3.to_csv('output/feature_selection_df_rev.csv') | _____no_output_____ | MIT | NB_clf/Multinomial-NB_clf.ipynb | tychen5/IR_TextMining |
select top 500* use each column's mean + 1.45*std as the threshold* then vote across the metrics and keep the terms that receive more than two votes | dict_df3 = pd.read_csv('output/feature_selection_df_rev.csv',index_col=None)
threshold_tfidf = np.mean(dict_df3['avg_tfidf'])+2.5*np.std(dict_df3['avg_tfidf']) #1.45=>502 數字大嚴格
threshold_chi = np.mean(dict_df3['score_chi'])+2.5*np.std(dict_df3['score_chi']) #1=>350
threshold_llr = np.mean(dict_df3['score_llr'])+2.5*np.std(dict_df3['score_llr']) #1.75=>543
threshold_emi = np.mean(dict_df3['score_emi'])+2.5*np.std(dict_df3['score_emi']) #1.75=>543
print('avg_tfidf',threshold_tfidf)
# dict_df3[dict_df3.score_llr>0.1]
df1 = dict_df3[dict_df3['avg_tfidf']>threshold_tfidf]
df2 = dict_df3[dict_df3['score_chi']>threshold_chi]
df3 = dict_df3[dict_df3['score_llr']>threshold_llr]
df4 = dict_df3[dict_df3['score_emi']>threshold_emi]
df_vote = dict_df3
df_vote['vote']=0
df_vote
df_vote.loc[df1.id-1,'vote'] += 1
df_vote.loc[df2.id-1,'vote'] += 1
df_vote.loc[df3.id-1,'vote'] += 1
df_vote.loc[df4.id-1,'vote'] += 1
# df_vote
df_vote_ = df_vote[df_vote.vote>2] #(1,2)=>375 #(1,1)=>422 #(1.6,2)=>482 #(2,2)=>330 #(1,3)=>100
df_vote_ = df_vote_.filter(['id','term','vote'])
df_vote_
df_vote_.to_csv('output/500terms_df_rev5.csv') | _____no_output_____ | MIT | NB_clf/Multinomial-NB_clf.ipynb | tychen5/IR_TextMining |
Classifier* 7-fold* MNB* BNB* self-train / co-train* ensemble voting (BNB lower weight)* auto-sklearn / auto-keras REF: http://kenzotakahashi.github.io/naive-bayes-from-scratch-in-python.html | df_vote = pd.read_csv('output/500terms_df_rev5.csv',index_col=False)
terms_li = list(set(df_vote.term.tolist()))
train_X = []
train_Y = []
len(terms_li)
with open('data/training.txt','r') as f:
train_id = f.read().splitlines()
train_dict = {}
for trainid in train_id:
trainid = trainid.split(' ')
trainid = list(filter(None, trainid))
train_dict[trainid[0]] = trainid[1:]
# train_dict #class:doc_id
train_dict = pickle.load(open('data/train_dict.pkl','rb'))
in_dir = 'data/IRTM/'
train_dict_ = {}
class_token = []
class_token_dict = {}
train_X = []
train_Y= []
train_ids = []
for c,d in tqdm(train_dict.items()):
for doc in d:
train_ids.append(doc)
trainX = np.array([0]*len(terms_li))
f = open('data/IRTM/'+doc+'.txt')
texts = f.read()
f.close()
tokens_all = preprocess(texts)
tokens_all = tokens_all.split(' ')
# tokens_all = list(filter(None,tokens_all))
tokens_all = dict(Counter(tokens_all))
for key,value in tokens_all.items():
if key in terms_li:
trainX[terms_li.index(key)] = int(value)
# trainX = np.array(trainX)
# for token in tokens_all:
# if token in terms_li:
# ind = terms_li.index(token)
# trainX[ind]+=1
train_X.append(trainX)
train_Y.append(int(c))
train_X = np.array(train_X)
train_Y = np.array(train_Y)
# tokens_all = list(set(filter(None,tokens_all)))
# class_token.append(tokens_all)
# class_token_dict[c]=class_token
# class_token=[]
# len(class_token_dict['1'])
print(train_X.shape , train_Y.shape)
#build the per-class term frequency matrix
tokens_all_class=[]
term_tf_mat=[]
for c,d in tqdm(train_dict.items()):
for doc in d:
f = open('data/IRTM/'+doc+'.txt')
texts = f.read()
f.close()
tokens_all = preprocess(texts)
tokens_all = tokens_all.split(' ')
tokens_all = list(filter(None,tokens_all))
tokens_all_class.extend(tokens_all)
tokens_all = dict(Counter(tokens_all_class))
term_tf_mat.append(tokens_all)
def train_MNB(train_set=train_dict,term_list=terms_li,term_tf_mat=term_tf_mat):
prior = np.zeros(len(train_set))
cond_prob = np.zeros((len(train_set), len(term_list)))
for i,docs in train_set.items(): #13 classes 1~13
        prior[int(i)-1] = len(docs)/len(train_ids) # number of documents in this class / total number of documents, classes 0~12
token_count=0
class_tf = np.zeros(len(term_list))
for idx,term in enumerate(term_list):
try:
                class_tf[idx] = term_tf_mat[int(i)-1][term] # number of times the term occurs in this class
except:
token_count+=1
        class_tf = class_tf + np.ones(len(term_list)) # add-one smoothing (adjustable)
        class_tf = class_tf/(sum(class_tf) +token_count) # normalize by the total token count of the class (adjustable)
cond_prob[int(i)-1] = class_tf #0~12
return prior, cond_prob
prior,cond_prob = train_MNB()
prior
def predict_MNB(test_id,prob=False,prior=prior,cond_prob=cond_prob,term_list=terms_li):
f = open('data/IRTM/'+str(test_id)+'.txt')
texts = f.read()
f.close()
tokens_all = preprocess(texts)
tokens_all = tokens_all.split(' ')
tokens_all = list(filter(None,tokens_all))
class_scores = []
# score = 0
for i in range(13):
score=0
# print(prior[i])
score += math.log(prior[i],10)
for token in tokens_all:
if token in term_list:
score += math.log(cond_prob[i][term_list.index(token)])
class_scores.append(score)
if prob:
return np.array(class_scores)
else:
return(np.argmax(class_scores)+1)
ans = predict_MNB(20,prob=False) # 17 18 20 21
ans
ans=[]
for i in tqdm(range(1095)):
ans.append(predict_MNB(i+1))
# np.max(ans)
ans
def MNB(input_X,input_Y=None,prior_log_class=None,log_prob_feature=None,train=True,prob=False,smooth=1.0):
if train:
sample_num = input_X.shape[0]
match_data = [[x for x, t in zip(input_X, input_Y) if t == c] for c in np.unique(input_Y)]
prior_log_class = [np.log(len(i) / sample_num) for i in match_data]
counts = np.array([np.array(i).sum(axis=0) for i in match_data]) + smooth
log_prob_feature = np.log(counts / counts.sum(axis=1)[np.newaxis].T)
return prior_log_class,log_prob_feature
else:
probability = [(log_prob_feature * x).sum(axis=1) + prior_log_class for x in input_X]
if prob:
return probability
else:
ans = np.argmax(probability,axis=1)
return ans
class BernoulliNB(object):
def __init__(self, alpha=1.0, binarize=0.0):
self.alpha = alpha
self.binarize = binarize
def _binarize_X(self, X):
return np.where(X > self.binarize, 1, 0) if self.binarize != None else X
def fit(self, X, y):
X = self._binarize_X(X)
count_sample = X.shape[0]
separated = [[x for x, t in zip(X, y) if t == c] for c in np.unique(y)]
self.class_log_prior_ = [np.log(len(i) / count_sample) for i in separated]
count = np.array([np.array(i).sum(axis=0) for i in separated]) + self.alpha
smoothing = 2 * self.alpha
n_doc = np.array([len(i) + smoothing for i in separated])
self.feature_prob_ = count / n_doc[np.newaxis].T
return self
def predict_log_proba(self, X):
X = self._binarize_X(X)
return [(np.log(self.feature_prob_) * x + \
np.log(1 - self.feature_prob_) * np.abs(x - 1)
).sum(axis=1) + self.class_log_prior_ for x in X]
def predict(self, X):
X = self._binarize_X(X)
return np.argmax(self.predict_log_proba(X), axis=1)
X = np.array([
[2,1,0,0,0,0],
[2,0,1,0,0,0],
[1,0,0,1,0,0],
[1,0,0,0,1,1]
])
y = np.array([0,1,2,3])
nb = BernoulliNB().fit(X, y)
X_test = np.array([[3,0,0,0,1,1],[0,1,1,0,1,1],[1,0,0,0,1,1],[2,1,0,0,0,0],[2,0,1,0,0,0],[1,0,0,1,0,0]])
print(nb.predict_log_proba(X_test)) | [array([-9.07658038, -9.07658038, -9.70406053, -8.90668135]), array([-9.48204549, -9.48204549, -9.70406053, -8.78889831]), array([-6.8793558 , -6.8793558 , -6.93147181, -5.89852655]), array([-5.08759634, -5.78074352, -6.23832463, -6.59167373]), array([-5.78074352, -5.08759634, -6.23832463, -6.59167373]), array([-4.68213123, -4.68213123, -4.15888308, -5.08759634])]
| MIT | NB_clf/Multinomial-NB_clf.ipynb | tychen5/IR_TextMining |
Prediction | df_vote = pd.read_csv('output/500terms_df_rev5.csv',index_col=False)
terms_li = list(set(df_vote.term.tolist()))
len(terms_li)
with open('data/training.txt','r') as f:
train_id = f.read().splitlines()
train_dict = {}
test_id = []
train_ids=[]
for trainid in train_id:
trainid = trainid.split(' ')
trainid = list(filter(None, trainid))
train_ids.extend(trainid[1:])
for i in range(1095):
if str(i+1) not in train_ids:
test_id.append(i+1)
ans=[]
for doc in tqdm(test_id):
ans.append(predict_MNB(doc))
print(ans)
df_ans = pd.DataFrame(list(zip(test_id,ans)),columns=['id','Value'])
df_ans.to_csv('output/MNB.csv',index=False)
df_ans | 100%|██████████| 900/900 [00:20<00:00, 43.87it/s]
| MIT | NB_clf/Multinomial-NB_clf.ipynb | tychen5/IR_TextMining |
combine all prediction df | import os
in_dir = './output/'
prefixed = [filename for filename in os.listdir('./output/') if filename.endswith("_sk.csv")]
df_from_each_file = [pd.read_csv(in_dir+f) for f in prefixed]
prefixed
merged_df = functools.reduce(lambda left,right: pd.merge(left,right,on='id'), df_from_each_file)
merged_df.columns = ['id',0,1,2,3,4,5,6,7,8]
merged_df
df01 = pd.DataFrame(merged_df.mode(axis=1)[0])
df02 = pd.DataFrame(merged_df['id'])
df_ans = pd.concat([df02,df01],axis=1)
df_ans = df_ans.astype('int')
df_ans.columns = ['id','Value']
df_ans.to_csv('output/voting_rev.csv',index=False)
df_ans
df1 = merged_df[(merged_df[0] == merged_df[1])&(merged_df[2]==merged_df[3])&(merged_df[1]==merged_df[2])]
df1.reset_index(inplace=True,drop=True)
df1['class'] = df1[0]
df1 = df1.filter(['id','class'])
df1
df1[df1['class']=='1']
for i in range(13):
df1 = df1.astype(str)
li = df1[df1['class'] == str(i+1)]['id'].tolist()
train_dict[str(i+1)].extend(li)
train_dict
pickle.dump(obj=train_dict,file=open('data/train_dict.pkl','wb'))
with open('data/training.txt','r') as f:
train_id = f.read().splitlines()
train_dict = {}
for trainid in train_id:
trainid = trainid.split(' ')
trainid = list(filter(None, trainid))
train_dict[trainid[0]] = trainid[1:]
train_dict #class:doc_id
df_from_each_file = [pd.read_csv(in_dir+f) for f in prefixed]
# concatenated_df = pd.concat(df_from_each_file,axis=1,join='inner')
concatenated_df = pd.DataFrame.join(df_from_each_file,on='id')
concatenated_df
df1 = pd.read_csv(in_dir+prefixed[0])
df2 = pd.read_csv(in_dir+prefixed[1])
pd.merge(df1,df2,on='id',how='inner')
with open('data/training.txt','r') as f:
train_id = f.read().splitlines()
train_dict = {}
test_id = []
train_ids=[]
for trainid in train_id:
trainid = trainid.split(' ')
trainid = list(filter(None, trainid))
train_ids.extend(trainid[1:])
for i in range(1095):
if str(i+1) not in train_ids:
test_id.append(i+1)
# train_dict[trainid[0]] = trainid[1:]
# train_dict #class:doc_id
in_dir = 'data/IRTM/'
train_dict_ = {}
class_token = []
class_token_dict = {}
test_X = []
# train_Y= []
# for c,d in tqdm(train_dict.items()):
for doc in tqdm(test_id):
testX = np.array([0]*len(terms_li))
f = open('data/IRTM/'+str(doc)+'.txt')
texts = f.read()
f.close()
tokens_all = preprocess(texts)
tokens_all = tokens_all.split(' ')
# tokens_all = list(filter(None,tokens_all))
tokens_all = dict(Counter(tokens_all))
for key,value in tokens_all.items():
if key in terms_li:
testX[terms_li.index(key)] = int(value)
test_X.append(testX)
test_X = np.array(test_X)
print(test_X.shape)
df_ans = pd.DataFrame(list(zip(test_id,ans2)),columns=['id','Value'])
df_ans.to_csv('output/MNB02_sk.csv',index=False)
df_ans
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
from sklearn.naive_bayes import *
clf = MultinomialNB()
clf.fit(train_X, train_Y)
ans2 = clf.predict(test_X)
ans2
from sklearn.naive_bayes import *
clf = MultinomialNB()
clf.fit(train_X, train_Y)
ans2 = clf.predict(test_X)
ans2
clf = BernoulliNB()
clf.fit(train_X, train_Y)
ans2 = clf.predict(test_X)
ans2
print(train_X.shape, train_Y.shape , test_X.shape)
df_ans = pd.DataFrame(list(zip(test_id,ans2)),columns=['id','Value'])
df_ans.to_csv('output/MNB05_sk.csv',index=False)
df_ans | _____no_output_____ | MIT | NB_clf/Multinomial-NB_clf.ipynb | tychen5/IR_TextMining |
Stock Statistics Statistics is a branch of applied mathematics concerned with collecting, organizing, and interpreting data. Statistics is also the mathematical study of the likelihood and probability of events occurring based on known quantitative data or a collection of data. http://www.icoachmath.com/math_dictionary/Statistics | import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
# yfinance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AAPL'
start = '2014-01-01'
end = '2019-01-01'
# Read data
df = yf.download(symbol,start,end)
# View Columns
df.head()
returns = df['Adj Close'].pct_change()[1:].dropna() | _____no_output_____ | MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
The mean is the arithmetic average: the sum of the values divided by the number of values. The median is the middle value of the sorted list of numbers. The mode is the value that occurs most often. | import statistics as st
print('Mean of returns:', st.mean(returns))
print('Median of returns:', st.median(returns))
print('Median Low of returns:', st.median_low(returns))
print('Median High of returns:', st.median_high(returns))
print('Median Grouped of returns:', st.median_grouped(returns))
print('Mode of returns:', st.mode(returns))
from statistics import mode
print('Mode of returns:', mode(returns))
# Since all of the returns are distinct, we use a frequency distribution to get an alternative mode.
# np.histogram returns the frequency distribution over the bins as well as the endpoints of the bins
hist, bins = np.histogram(returns, 20) # Break data up into 20 bins
maxfreq = max(hist)
# Find all of the bins that are hit with frequency maxfreq, then print the intervals corresponding to them
print('Mode of bins:', [(bins[i], bins[i+1]) for i, j in enumerate(hist) if j == maxfreq]) | Mode of returns: 0.0
Mode of bins: [(-0.0070681808335254365, 0.0010272794824504605)]
| MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
The arithmetic average return is the simple average of the returns on the stock or investment | print('Arithmetic average of returns:\n')
print(returns.mean()) | Arithmetic average of returns:
0.0007357373017012073
| MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
Geometric mean is the average of a set of products, the calculation of which is commonly used to determine the performance results of an investment or portfolio. It is technically defined as "the nth root product of n numbers." The geometric mean must be used when working with percentages, which are derived from values, while the standard arithmetic mean works with the values themselves. https://www.investopedia.com/terms/h/harmonicaverage.asp | # Geometric mean
from scipy.stats.mstats import gmean
print('Geometric mean of stock:', gmean(returns))
ratios = returns + np.ones(len(returns))
R_G = gmean(ratios) - 1
print('Geometric mean of returns:', R_G) | Geometric mean of returns: 0.000622187293129
| MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
The standard deviation of returns is a common measure of the risk (volatility) of the returns | print('Standard deviation of returns')
print(returns.std())
T = len(returns)
init_price = df['Adj Close'][0]
final_price = df['Adj Close'][T]
print('Initial price:', init_price)
print('Final price:', final_price)
print('Final price as computed with R_G:', init_price*(1 + R_G)**T) | Initial price: 71.591667
Final price: 156.463837
Final price as computed with R_G: 156.463837
| MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
The harmonic mean is a numerical average. Formula: for a set of n numbers, add the reciprocals of the numbers in the set, divide the sum by n, then take the reciprocal of the result. | # Harmonic mean
print('Harmonic mean of returns:', len(returns)/np.sum(1.0/returns))
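In symbols, the harmonic mean computed above is $H = n \big/ \sum_{i=1}^{n} (1/x_i)$, which is exactly what `len(returns)/np.sum(1.0/returns)` evaluates.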
print('Skew:', stats.skew(returns))
print('Mean:', np.mean(returns))
print('Median:', np.median(returns))
plt.hist(returns, 30);
# Plot some example distributions stock's returns
xs = np.linspace(-6,6, 1257)
normal = stats.norm.pdf(xs)
plt.plot(returns,stats.laplace.pdf(returns), label='Leptokurtic')
print('Excess kurtosis of leptokurtic distribution:', (stats.laplace.stats(returns)))
plt.plot(returns, normal, label='Mesokurtic (normal)')
print('Excess kurtosis of mesokurtic distribution:', (stats.norm.stats(returns)))
plt.plot(returns,stats.cosine.pdf(returns), label='Platykurtic')
print('Excess kurtosis of platykurtic distribution:', (stats.cosine.stats(returns)))
plt.legend()
print("Excess kurtosis of returns: ", stats.kurtosis(returns))
from statsmodels.stats.stattools import jarque_bera
_, pvalue, _, _ = jarque_bera(returns)
if pvalue > 0.05:
print('The returns are likely normal.')
else:
print('The returns are likely not normal.') | The returns are likely not normal.
| MIT | Python_Stock/Stock_Statistics.ipynb | eu90h/Stock_Analysis_For_Quant |
Zero-Shot Image Classification This example shows how [SentenceTransformers](https://www.sbert.net) can be used to map images and texts to the same vector space. We can use this to perform **zero-shot image classification** by providing the names of the labels. As our model, we use the [OpenAI CLIP Model](https://github.com/openai/CLIP), which was trained on a large set of images and image alt texts. The images in this example are from [Unsplash](https://unsplash.com/). | from sentence_transformers import SentenceTransformer, util
from PIL import Image
import glob
import torch
import pickle
import zipfile
from IPython.display import display
from IPython.display import Image as IPImage
import os
from tqdm.autonotebook import tqdm
import torch
# We use the original CLIP model for computing image embeddings and English text embeddings
en_model = SentenceTransformer('clip-ViT-B-32')
# We download some images from our repository which we want to classify
img_names = ['eiffel-tower-day.jpg', 'eiffel-tower-night.jpg', 'two_dogs_in_snow.jpg', 'cat.jpg']
url = 'https://github.com/UKPLab/sentence-transformers/raw/master/examples/applications/image-search/'
for filename in img_names:
if not os.path.exists(filename):
util.http_get(url+filename, filename)
# And compute the embeddings for these images
img_emb = en_model.encode([Image.open(filepath) for filepath in img_names], convert_to_tensor=True)
# Then, we define our labels as text. Here, we use 4 labels
labels = ['dog', 'cat', 'Paris at night', 'Paris']
# And compute the text embeddings for these labels
en_emb = en_model.encode(labels, convert_to_tensor=True)
# Now, we compute the cosine similarity between the images and the labels
cos_scores = util.cos_sim(img_emb, en_emb)
# Then we look which label has the highest cosine similarity with the given images
pred_labels = torch.argmax(cos_scores, dim=1)
# Finally we output the images + labels
for img_name, pred_label in zip(img_names, pred_labels):
display(IPImage(img_name, width=200))
print("Predicted label:", labels[pred_label])
print("\n\n")
| _____no_output_____ | Apache-2.0 | examples/applications/image-search/Image_Classification.ipynb | danielperezr88/sentence-transformers |
Zero-Shot Image Classification The original CLIP model only works for English, so we used [Multilingual Knowledge Distillation](https://arxiv.org/abs/2004.09813) to make it work with 50+ languages. For this, we must load the *clip-ViT-B-32-multilingual-v1* model to encode our labels. We can define our labels in 50+ languages and can also mix languages. | multi_model = SentenceTransformer('clip-ViT-B-32-multilingual-v1')
# Then, we define our labels as text. Here, we use 4 labels
labels = ['Hund', # German: dog
'gato', # Spanish: cat
'巴黎晚上', # Chinese: Paris at night
'Париж' # Russian: Paris
]
# And compute the text embeddings for these labels
txt_emb = multi_model.encode(labels, convert_to_tensor=True)
# Now, we compute the cosine similarity between the images and the labels
cos_scores = util.cos_sim(img_emb, txt_emb)
# Then we look which label has the highest cosine similarity with the given images
pred_labels = torch.argmax(cos_scores, dim=1)
# Finally we output the images + labels
for img_name, pred_label in zip(img_names, pred_labels):
display(IPImage(img_name, width=200))
print("Predicted label:", labels[pred_label])
print("\n\n") | _____no_output_____ | Apache-2.0 | examples/applications/image-search/Image_Classification.ipynb | danielperezr88/sentence-transformers |
Coding Assignment Q: Write a Python class with functions to fit an LDA model, evaluate the optimal number of topics based on coherence scores, and predict new instances using the best LDA model (the one with the optimal number of topics by coherence score). The functions should take a 2D array of embeddings as input and return an LDA model, the optimal number of topics, and the topics. | """
author: Parikshit Saikia
email: [email protected]
github: https://github.com/parikshitsaikia1619
date: 20-08-2021
""" | _____no_output_____ | MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
Step 1: Import neccessary Libraries | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.models import CoherenceModel
import spacy
from tqdm.notebook import tqdm
#spacy download en_core_web_sm
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop_words = stopwords.words('english') | [nltk_data] Downloading package stopwords to C:\Users\Parikshit
[nltk_data] Saikia\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
| MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
Step 2: Load the dataset. This dataset contains research articles related to computer science, mathematics, physics, and statistics. Each article is tagged with major and minor topics as one-hot encodings. For our task (topic modeling) we do not need the tags, only the article text, so from the dataset we form a list of articles. | data = pd.read_csv('./data/research_articles/Train.csv/Processed_train.csv')
data.head()
articles_data = data.ABSTRACT.values.tolist() | _____no_output_____ | MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
Step 3: Creating a Data Preprocessing Pipeline. This is the most important step in the whole notebook: we cannot expect good results from a model trained on uncleaned data. As the famous quote goes, "garbage in, garbage out." We want our corpus to consist of a list of representative words capturing the essence of each article; to achieve that we follow a sequence of steps: | def convert_lowercase(string_list):
"""
Convert the list of strings to lowercase and returns the list
"""
pbar = tqdm(total = len(string_list),desc='lowercase conversion progress')
for i in range(len(string_list)):
string_list[i] = string_list[i].lower()
pbar.update(1)
pbar.close()
return string_list
def remove_punctuation(doc_list):
"""
Tokenization and remove punctuation and return the list of tokens
"""
doc_word_list = []
pbar = tqdm(total = len(doc_list),desc='punctuation removal progress')
for doc in doc_list:
doc_word_list.append(gensim.utils.simple_preprocess(str(doc), deacc=True,min_len=4)) # deacc=True removes punctuations
pbar.update(1)
pbar.close()
return doc_word_list
def remove_stopwords(texts):
"""
Remove common occuring words from the list of tokens
stop words : a,an ,the ,so ,from ...
"""
docs_data=[]
stop_words = stopwords.words('english')
pbar = tqdm(total = len(texts),desc='stopword removal progress')
for doc in texts:
doc_data=[]
for word in doc:
if word not in stop_words:
doc_data.append(word)
docs_data.append(doc_data)
pbar.update(1)
pbar.close()
return docs_data
def filter_word_len(texts,size):
"""
Remove tokens if their length is smaller than threshold
"""
docs_data=[]
pbar = tqdm(total = len(texts),desc='word filtering progress')
for doc in texts:
doc_data=[]
for word in doc:
if len(word)>=size:
doc_data.append(word)
docs_data.append(doc_data)
pbar.update(1)
pbar.close()
return docs_data
def make_bigrams(texts):
"""
    Combine words that frequently occur together
eg: Good Person , Great Content , Ever Growing
"""
# Build the bigram and models
pbar = tqdm(total = len(texts),desc='Bigram progress')
bigram = gensim.models.Phrases(texts, min_count=5, threshold=100) # higher threshold fewer phrases.
bigram_mod = gensim.models.phrases.Phraser(bigram)
pbar.update(len(texts))
pbar.close()
return [bigram_mod[doc] for doc in texts]
def lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):
"""
Only allows token which are noun,adjective,verb and adverb.
Also converting the tokens into base form
"""
nlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])
texts_out = []
pbar = tqdm(total = len(texts),desc='lemmatization progress')
for sent in texts:
doc = nlp(" ".join(sent))
texts_out.append([token.lemma_ for token in doc if token.pos_ in allowed_postags])
pbar.update(1)
pbar.close()
return texts_out
def preprocessing_pipeline(doc_list,size):
"""
Converting each document into list of tokens that represents the document
"""
doc_list = convert_lowercase(doc_list)
doc_data_words = remove_punctuation(doc_list)
# Remove Stop Words
data_words_nostops = remove_stopwords(doc_data_words)
# Form Bigrams
data_words_bigrams = make_bigrams(data_words_nostops)
data_lemmatized = lemmatization(data_words_bigrams, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])
final_data_words = filter_word_len(data_lemmatized,size)
return final_data_words
def corpus_embeddings(data):
"""
    Converting each token in each document into a tuple of (unique id, frequency of occurrence in that document)
"""
# Create Dictionary
id2word = corpora.Dictionary(data)
# Create Corpus
texts = data
# Term Document Frequency
corpus = [id2word.doc2bow(text) for text in texts]
return corpus,id2word
processed_data = preprocessing_pipeline(articles_data,4) # this will take some time
processed_data[0] | _____no_output_____ | MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
Step 4: Finalizing the input data. In this step we form the inputs of our model, which are:* **Corpus**: a 2D embedded array of tuples, where each tuple has the form (token id, frequency of the token in that document).* **Dictionary**: a dictionary storing the mapping from token to id. | new_corpus,id_word = corpus_embeddings(processed_data)
new_corpus
id_word[0]
word_freq = [[(id_word[id], freq) for id, freq in cp] for cp in new_corpus[:1]]
word_freq # A more human reable form of our corpus | _____no_output_____ | MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
Step 5: Modeling and Evaluation. In this part we fit the LDA model to our data, do some hyperparameter tuning, evaluate the results, and select the optimal setting for our model. | class LDA_model:
"""
A LDA Class consist functions to fit the model, calculating coherence values
and finding the optimal no. of topic
input:
corpus : a 2D array of embedded tokens
dictionary: A dictionary with id to token mapping
"""
def __init__(self, corpus,dictionary):
self.corpus = corpus
self.dictionary = dictionary
def LDA(self,no_topics, a ='auto', b = 'auto',passes=10):
lda_model = gensim.models.ldamodel.LdaModel(corpus=self.corpus, id2word=self.dictionary,
num_topics=no_topics, random_state=100,chunksize=100,passes=passes,alpha=a,eta=b)
return lda_model
def compute_coherence_values(self,processed_data,no_topics,a ='auto', b = 'auto',passes=10):
"""
Computes the coherence value of a fitted LDA model
"""
pbar = tqdm(total = len(self.corpus),desc='LDA with '+str(no_topics)+' topics')
lda_model = self.LDA(no_topics,a,b,passes)
pbar.update(len(self.corpus)/2)
coherence_model_lda = CoherenceModel(model=lda_model, texts=processed_data, dictionary=self.dictionary, coherence='c_v')
pbar.update(len(self.corpus) -(len(self.corpus)/2))
pbar.close()
return coherence_model_lda.get_coherence()
def compute_optimal_topics(self,processed_data,min_topics,max_topics,step_size,path):
"""
        Calculates the coherence value for a given range of topic sizes and saves the results to a CSV
"""
topics_range = list(np.arange(min_topics,max_topics,step_size))
model_results = {'Topics': [],'Coherence': []}
# Can take a long time to run
for i in range(len(topics_range)):
# get the coherence score for the given parameters
cv = self.compute_coherence_values(processed_data,topics_range[i],a ='auto', b = 'auto',passes=10)
# Save the model results
model_results['Topics'].append(topics_range[i])
model_results['Coherence'].append(cv)
pd.DataFrame(model_results).to_csv(path, index=False)
def Optimal_no_topics(self,path):
"""
finds the topic size with max coherence score
"""
coherence_data = pd.read_csv(path)
x = coherence_data.Topics.tolist()
y = coherence_data.Coherence.tolist()
plt.xlabel('No. of topics')
plt.ylabel('Coherence Score')
plt.plot(x,y)
plt.show()
index = np.argmax(y)
no_topics = x[index]
c_v = y[index]
print('Optimal number of topics: '+str(no_topics)+' coherence score: '+str(c_v))
return no_topics,c_v
def Optimal_lda_model(self,path):
"""
Fits the LDA with optimal topic size
"""
no_topic, c_v = self.Optimal_no_topics(path)
pbar = tqdm(total = len(self.corpus),desc='LDA with '+str(no_topic)+' topics')
lda = self.LDA(no_topic)
pbar.update(len(self.corpus))
pbar.close()
return lda
lda_model = LDA_model(new_corpus,id_word)
path = './dat.csv' | _____no_output_____ | MIT | LDA_New.ipynb | parikshitsaikia1619/LDA_modeing_IQVIA |
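A usage sketch of the class defined above, calling its own methods to search a range of topic counts and then refit at the best one; the topic range and step size are arbitrary choices for illustration:

```python
# Search topic counts 2, 4, ..., 12 and write the coherence scores to `path`
lda_model.compute_optimal_topics(processed_data, min_topics=2, max_topics=14, step_size=2, path=path)

# Plot the coherence curve, pick the topic count with the best score, and refit the LDA model with it
best_lda = lda_model.Optimal_lda_model(path)

# Inspect the discovered topics (gensim's print_topics returns (topic_id, top-words string) pairs)
for topic_id, topic in best_lda.print_topics(num_words=10):
    print(topic_id, topic)
```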