markdown | code | output | license | path | repo_name |
---|---|---|---|---|---|
3) Select the day of the month. Create a Pandas Series `day_of_month_earthquakes` containing the day of the month from the "date_parsed" column. | # try to get the day of the month from the date column
day_of_month_earthquakes = earthquakes['date_parsed'].dt.day
# Check your answer
q3.check()
# Lines below will give you a hint or solution code
#q3.hint()
#q3.solution() | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
4) Plot the day of the month to check the date parsing. Plot the days of the month from your earthquake dataset. | # TODO: Your code here!
sns.displot(day_of_month_earthquakes, kde=False, bins=31); | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Does the graph make sense to you? | # Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
#q4.hint() | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
(Optional) Bonus Challenge. For an extra challenge, you'll work with a [Smithsonian dataset](https://www.kaggle.com/smithsonian/volcanic-eruptions) that documents Earth's volcanoes and their eruptive history over the past 10,000 years. Run the next code cell to load the data. | volcanos = pd.read_csv("../input/volcanic-eruptions/database.csv") | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
Try parsing the column "Last Known Eruption" from the `volcanos` dataframe. This column contains a mixture of text ("Unknown") and years both before the common era (BCE, also known as BC) and in the common era (CE, also known as AD). | volcanos['Last Known Eruption'].sample(5) | _____no_output_____ | MIT | data_cleaning/03-parsing-dates.ipynb | drakearch/kaggle-courses |
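One possible approach for this challenge (not the course solution): assuming the column holds strings like "1350 BCE", "2008 CE", or "Unknown", as the sample above suggests, you can map eras to signed integer years and treat "Unknown" as missing.

```python
import numpy as np

def parse_last_eruption(value):
    # "Unknown" carries no usable year.
    if value == "Unknown":
        return np.nan
    year, era = value.split()  # e.g. "1350", "BCE"
    # Represent BCE years as negative numbers so they sort chronologically.
    return -int(year) if era == "BCE" else int(year)

# last_eruption_years = volcanos["Last Known Eruption"].apply(parse_last_eruption)
```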
Running on Anyscale. Let's connect to an existing cluster with 1 GPU and 16 CPUs via `ray.init(address=...)`. | ray.init(
# Connecting to an existing (and running) cluster ("cluster-12" in my account).
address="anyscale://cluster-12",
# This will upload this directory to Anyscale so that the code can be run on cluster.
project_dir=".",
#cloud="anyscale_default_cloud",
# Our Python dependencies, e.g. tensorflow
# (make sure everything is available on the cluster).
runtime_env={"pip": "./requirements.txt"}
) | (anyscale +0.1s) Loaded Anyscale authentication token from ~/.anyscale/credentials.json
(anyscale +0.9s) .anyscale.yaml found in project_dir. Directory is attached to a project.
(anyscale +1.7s) Using project (name: cuj_rllib, project_dir: /Users/sven/Dropbox/Projects/anyscale_projects/cuj-rl-in-production, id: prj_84JWkW5F1TqLJwhSqLDadyML).
(anyscale +3.1s) cluster cluster-12 is currently running, the cluster will not be restarted.
| Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Coding/defining our "problem" via an RL environment. We will use the following (adversarial) multi-agent environment throughout this demo. | # Let's code our multi-agent environment.
# Imports assumed from earlier cells of the original notebook:
import time
import numpy as np
from gym.spaces import Discrete, MultiDiscrete
from ipywidgets import Output
from IPython import display
from ray.rllib.env.multi_agent_env import MultiAgentEnv
class MultiAgentArena(MultiAgentEnv):
def __init__(self, config=None):
config = config or {}
# Dimensions of the grid.
self.width = config.get("width", 10)
self.height = config.get("height", 10)
# End an episode after this many timesteps.
self.timestep_limit = config.get("ts", 100)
self.observation_space = MultiDiscrete([self.width * self.height,
self.width * self.height])
# 0=up, 1=right, 2=down, 3=left.
self.action_space = Discrete(4)
# Reset env.
self.reset()
# For rendering.
self.out = None
if config.get("render"):
self.out = Output()
display.display(self.out)
def reset(self):
"""Returns initial observation of next(!) episode."""
# Row-major coords.
self.agent1_pos = [0, 0] # upper left corner
self.agent2_pos = [self.height - 1, self.width - 1] # lower bottom corner
# Accumulated rewards in this episode.
self.agent1_R = 0.0
self.agent2_R = 0.0
# Reset agent1's visited fields.
self.agent1_visited_fields = set([tuple(self.agent1_pos)])
# How many timesteps have we done in this episode.
self.timesteps = 0
# Did we have a collision in recent step?
self.collision = False
# How many collisions in total have we had in this episode?
self.num_collisions = 0
# Return the initial observation in the new episode.
return self._get_obs()
def step(self, action: dict):
"""
Returns (next observation, rewards, dones, infos) after having taken the given actions.
e.g.
`action={"agent1": action_for_agent1, "agent2": action_for_agent2}`
"""
# increase our time steps counter by 1.
self.timesteps += 1
# An episode is "done" when we reach the time step limit.
is_done = self.timesteps >= self.timestep_limit
# Agent2 always moves first.
# events = [collision|agent1_new_field]
events = self._move(self.agent2_pos, action["agent2"], is_agent1=False)
events |= self._move(self.agent1_pos, action["agent1"], is_agent1=True)
# Useful for rendering.
self.collision = "collision" in events
if self.collision is True:
self.num_collisions += 1
# Get observations (based on new agent positions).
obs = self._get_obs()
# Determine rewards based on the collected events:
r1 = -1.0 if "collision" in events else 1.0 if "agent1_new_field" in events else -0.5
r2 = 1.0 if "collision" in events else -0.1
self.agent1_R += r1
self.agent2_R += r2
rewards = {
"agent1": r1,
"agent2": r2,
}
# Generate a `done` dict (per-agent and total).
dones = {
"agent1": is_done,
"agent2": is_done,
# special `__all__` key indicates that the episode is done for all agents.
"__all__": is_done,
}
return obs, rewards, dones, {} # <- info dict (not needed here).
def _get_obs(self):
"""
Returns obs dict (agent name to discrete-pos tuple) using each
agent's current x/y-positions.
"""
ag1_discrete_pos = self.agent1_pos[0] * self.width + \
(self.agent1_pos[1] % self.width)
ag2_discrete_pos = self.agent2_pos[0] * self.width + \
(self.agent2_pos[1] % self.width)
return {
"agent1": np.array([ag1_discrete_pos, ag2_discrete_pos]),
"agent2": np.array([ag2_discrete_pos, ag1_discrete_pos]),
}
def _move(self, coords, action, is_agent1):
"""
        Moves an agent (agent1 iff is_agent1=True, else agent2) from `coords` (x/y) using the
        given action (0=up, 1=right, 2=down, 3=left) and returns a set of resulting events:
        "collision" when the moving agent tries to enter the other agent's field
        (agent1 then receives -1.0 and agent2 +1.0), and "agent1_new_field" when
        agent1 enters a field it has not visited before in this episode.
"""
orig_coords = coords[:]
# Change the row: 0=up (-1), 2=down (+1)
coords[0] += -1 if action == 0 else 1 if action == 2 else 0
# Change the column: 1=right (+1), 3=left (-1)
coords[1] += 1 if action == 1 else -1 if action == 3 else 0
# Solve collisions.
# Make sure, we don't end up on the other agent's position.
# If yes, don't move (we are blocked).
if (is_agent1 and coords == self.agent2_pos) or (not is_agent1 and coords == self.agent1_pos):
coords[0], coords[1] = orig_coords
# Agent2 blocked agent1 (agent1 tried to run into agent2)
# OR Agent2 bumped into agent1 (agent2 tried to run into agent1)
return {"collision"}
# No agent blocking -> check walls.
if coords[0] < 0:
coords[0] = 0
elif coords[0] >= self.height:
coords[0] = self.height - 1
if coords[1] < 0:
coords[1] = 0
elif coords[1] >= self.width:
coords[1] = self.width - 1
# If agent1 -> "new" if new tile covered.
if is_agent1 and not tuple(coords) in self.agent1_visited_fields:
self.agent1_visited_fields.add(tuple(coords))
return {"agent1_new_field"}
# No new tile for agent1.
return set()
def render(self, mode=None):
if self.out is not None:
self.out.clear_output(wait=True)
print("_" * (self.width + 2))
for r in range(self.height):
print("|", end="")
for c in range(self.width):
field = r * self.width + c % self.width
if self.agent1_pos == [r, c]:
print("1", end="")
elif self.agent2_pos == [r, c]:
print("2", end="")
elif (r, c) in self.agent1_visited_fields:
print(".", end="")
else:
print(" ", end="")
print("|")
print("‾" * (self.width + 2))
print(f"{'!!Collision!!' if self.collision else ''}")
print("R1={: .1f}".format(self.agent1_R))
print("R2={: .1f} ({} collisions)".format(self.agent2_R, self.num_collisions))
print()
time.sleep(0.25)
env = MultiAgentArena(config={"render": True})
obs = env.reset()
with env.out:
# Agent1 moves down, Agent2 moves up.
obs, rewards, dones, infos = env.step(action={"agent1": 2, "agent2": 0})
env.render()
# Agent1 moves right, Agent2 moves left.
obs, rewards, dones, infos = env.step(action={"agent1": 1, "agent2": 3})
env.render()
# Agent1 moves right, Agent2 moves left.
obs, rewards, dones, infos = env.step(action={"agent1": 1, "agent2": 3})
env.render()
# Agent1 moves down, Agent2 moves up.
obs, rewards, dones, infos = env.step(action={"agent1": 2, "agent2": 0})
env.render()
print("Agent1's x/y position={}".format(env.agent1_pos))
print("Agent2's x/y position={}".format(env.agent2_pos))
print("Env timesteps={}".format(env.timesteps))
| _____no_output_____ | Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Configuring our Trainer | TRAINER_CFG = {
# Using our environment class defined above.
"env": MultiAgentArena,
# Use `framework=torch` here for PyTorch.
"framework": "tf",
# Run on 1 GPU on the "learner".
"num_gpus": 1,
# Use 15 ray-parallelized environment workers,
# which collect samples to learn from. Each worker gets assigned
# 1 CPU.
"num_workers": 15,
# Each of the 15 workers has 10 environment copies ("vectorization")
# for faster (batched) forward passes.
"num_envs_per_worker": 10,
# Multi-agent setup: 2 policies.
"multiagent": {
"policies": {"policy1", "policy2"},
"policy_mapping_fn": lambda agent_id: "policy1" if agent_id == "agent1" else "policy2"
},
} | _____no_output_____ | Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Training our 2 Policies (agent1 and agent2) | results = tune.run(
# RLlib Trainer class (we use the "PPO" algorithm today).
PPOTrainer,
# Give our experiment a name (we will find results/checkpoints
# under this name on the server's `~ray_results/` dir).
name=f"CUJ-RL",
# The RLlib config (defined in a cell above).
config=TRAINER_CFG,
# Take a snapshot every 2 iterations.
checkpoint_freq=2,
# Plus one at the very end of training.
checkpoint_at_end=True,
    # Run for exactly 20 training iterations.
stop={"training_iteration": 20},
# Define what we are comparing for, when we search for the
# "best" checkpoint at the end.
metric="episode_reward_mean",
mode="max")
print("Best checkpoint: ", results.best_checkpoint)
| (run pid=None) == Status ==
(run pid=None) Current time: 2021-12-01 09:24:41 (running for 00:00:00.14)
(run pid=None) Memory usage on this node: 4.7/119.9 GiB
(run pid=None) Using FIFO scheduling algorithm.
(run pid=None) Resources requested: 0/16 CPUs, 0/1 GPUs, 0.0/74.36 GiB heap, 0.0/35.86 GiB objects (0.0/1.0 accelerator_type:M60)
(run pid=None) Result logdir: /home/ray/ray_results/CUJ-RL
(run pid=None) Number of trials: 1/1 (1 PENDING)
(run pid=None) +---------------------------------+----------+-------+
(run pid=None) | Trial name | status | loc |
(run pid=None) |---------------------------------+----------+-------|
(run pid=None) | PPO_MultiAgentArena_9155f_00000 | PENDING | |
(run pid=None) +---------------------------------+----------+-------+
| Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Restoring from a checkpoint | local_checkpoint = "/Users/sven/Downloads/checkpoint-20-2"
if os.path.isfile(local_checkpoint):
print("yes, checkpoint files are on local machine ('Downloads' folder)")
# We'll restore the trained PPOTrainer locally on this laptop here and have it run
# through a new environment to demonstrate it has learnt useful policies for our agents:
cpu_config = TRAINER_CFG.copy()
cpu_config["num_gpus"] = 0
cpu_config["num_workers"] = 0
new_trainer = PPOTrainer(config=cpu_config)
# Restore weights of the learnt policies via `restore()`.
new_trainer.restore(local_checkpoint) | 2021-12-01 18:34:51,615 WARNING deprecation.py:38 -- DeprecationWarning: `SampleBatch['is_training']` has been deprecated. Use `SampleBatch.is_training` instead. This will raise an error in the future!
2021-12-01 18:34:54,804 WARNING trainer_template.py:185 -- `execution_plan` functions should accept `trainer`, `workers`, and `config` as args!
Install gputil for GPU system monitoring.
| Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Running inference locally | env = MultiAgentArena(config={"render": True})
with env.out:
obs = env.reset()
env.render()
while True:
a1 = new_trainer.compute_single_action(obs["agent1"], policy_id="policy1", explore=True)
a2 = new_trainer.compute_single_action(obs["agent2"], policy_id="policy2", explore=False)
obs, rewards, dones, _ = env.step({"agent1": a1, "agent2": a2})
env.render()
if dones["agent1"] is True:
break
| _____no_output_____ | Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
Inference using Ray Serve | @serve.deployment(route_prefix="/multi-agent-arena")
class ServeRLlibTrainer:
def __init__(self, config, checkpoint_path):
        # Build a trainer from the config passed to the deployment; its weights
        # are restored from the checkpoint right below.
        self.trainer = PPOTrainer(config=config)
self.trainer.restore(checkpoint_path)
async def __call__(self, request: Request):
json_input = await request.json()
# Compute and return the action for the given observation.
obs1 = json_input["observation_agent1"]
obs2 = json_input["observation_agent2"]
a1 = self.trainer.compute_single_action(obs1, policy_id="policy1")
a2 = self.trainer.compute_single_action(obs2, policy_id="policy2")
return {"action": {"agent1": int(a1), "agent2": int(a2)}}
client = serve.start()
ServeRLlibTrainer.deploy(cpu_config, results.best_checkpoint) | _____no_output_____ | Apache-2.0 | rllib_industry_webinar_20211201/demo_dec1.ipynb | sven1977/rllib_tutorials |
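Once the deployment is live, it can be queried over HTTP. The snippet below is a hypothetical client call (not part of the original notebook); it assumes Serve's default local address `http://127.0.0.1:8000` plus the `/multi-agent-arena` route defined above, and sends one discrete-position observation per agent.

```python
import requests

# Each observation is [own_discrete_pos, other_agent_discrete_pos].
payload = {
    "observation_agent1": [0, 99],
    "observation_agent2": [99, 0],
}
resp = requests.post("http://127.0.0.1:8000/multi-agent-arena", json=payload)
print(resp.json())  # e.g. {"action": {"agent1": 2, "agent2": 0}}
```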
**_Note: This notebook contains ALL the code for Sections 13.7 through 13.11, including the Self Check snippets because all the snippets in these sections are consecutively numbered in the text._** 13.7 Authenticating with Twitter Via Tweepy | import tweepy
import keys | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Creating and Configuring an `OAuthHandler` to Authenticate with Twitter | auth = tweepy.OAuthHandler(keys.consumer_key,
keys.consumer_secret)
auth.set_access_token(keys.access_token,
keys.access_token_secret) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Creating an API Object | api = tweepy.API(auth, wait_on_rate_limit=True,
wait_on_rate_limit_notify=True)
| _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.7 Self Check**1. _(Fill-In)_** Authenticating with Twitter via Tweepy involves two steps. First, create an object of the Tweepy module’s `________` class, passing your API key and API secret key to its constructor. **Answer:** `OAuthHandler`.**2. _(True/False)_** The keyword argument `wait_on_rate_limit_notify=True` to the `tweepy.API` call tells Tweepy to terminate the user because of a rate-limit violation.**Answer:** False. The call tells Tweepy that if it needs to wait to avoid rate-limit violations it should display a message at the command line indicating that it’s waiting for the rate limit to replenish. 13.8 Getting Information About a Twitter Account | nasa = api.get_user('nasa') | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting Basic Account Information | nasa.id
nasa.name
nasa.screen_name
nasa.description | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting the Most Recent Status Update | nasa.status.text | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting the Number of Followers | nasa.followers_count | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting the Number of Friends | nasa.friends_count | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting Your Own Account’s Information  13.8 Self Check**1. _(Fill-In)_** After authenticating with Twitter, you can use the Tweepy `API` object’s `________` method to get a tweepy.models.User object containing information about a user’s Twitter account.**Answer:** get_user**2. _(True/False)_** Retweeting often results in truncation because a retweet adds characters that could exceed the character limit.**Answer:** True.**3. _(IPython Session)_** Use the `api` object to get a `User` object for the `NASAKepler` account, then display its number of followers and most recent tweet.**Answer:** | nasa_kepler = api.get_user('NASAKepler')
nasa_kepler.followers_count
nasa_kepler.status.text | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
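For the "Getting Your Own Account's Information" subsection above, Tweepy 3.x (the version these snippets target) exposes `API.me()`; a minimal sketch:

```python
me = api.me()  # User object for the authenticated account
me.screen_name
me.followers_count
```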
13.9 Introduction to Tweepy `Cursor`s: Getting an Account’s Followers and Friends 13.9.1 Determining an Account’s Followers | followers = [] | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Creating a Cursor | cursor = tweepy.Cursor(api.followers, screen_name='nasa') | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Getting Results | for account in cursor.items(10):
followers.append(account.screen_name)
print('Followers:',
' '.join(sorted(followers, key=lambda s: s.lower()))) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Automatic Paging Getting Follower IDs Rather Than Followers  13.9.1 Self Check**1. _(Fill-In)_** Each Twitter API method’s documentation discusses the maximum number of items the method can return in one call—this is known as a `________` of results. **Answer:** page.**2. _(True/False)_** Though you can get complete `User` objects for a maximum of 200 followers at a time, you can get many more Twitter ID numbers by calling the `API` object’s `followers_ids` method.**Answer:** True.**3. _(IPython Session)_** Use a Cursor to get and display 10 followers of the `NASAKepler` account.**Answer:** | kepler_followers = []
cursor = tweepy.Cursor(api.followers, screen_name='NASAKepler')
for account in cursor.items(10):
kepler_followers.append(account.screen_name)
print(' '.join(kepler_followers)) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
13.9.2 Determining Whom an Account Follows | friends = []
cursor = tweepy.Cursor(api.friends, screen_name='nasa')
for friend in cursor.items(10):
friends.append(friend.screen_name)
print('Friends:',
' '.join(sorted(friends, key=lambda s: s.lower()))) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.9.2 Self Check**1. _(Fill-In)_** The `API` object’s `friends` method calls the Twitter API’s `________` method to get a list of User objects representing an account’s friends. **Answer:** `friends/list`. 13.9.3 Getting a User’s Recent Tweets | nasa_tweets = api.user_timeline(screen_name='nasa', count=3)
for tweet in nasa_tweets:
print(f'{tweet.user.screen_name}: {tweet.text}\n')
| _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Grabbing Recent Tweets from Your Own Timeline  13.9.3 Self Check**1. _(Fill-In)_** You can call the `API` method `home_timeline` to get tweets from your home timeline, that is, your tweets and tweets from `________`. **Answer:** the people you follow.**2. _(IPython Session)_** Get and display two tweets from the `NASAKepler` account.**Answer:** | kepler_tweets = api.user_timeline(
screen_name='NASAKepler', count=2)
for tweet in kepler_tweets:
print(f'{tweet.user.screen_name}: {tweet.text}\n') | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
13.10 Searching Recent Tweets Tweet Printer | from tweetutilities import print_tweets | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Searching for Specific Words | tweets = api.search(q='Mars Opportunity Rover', count=3)
print_tweets(tweets) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Searching with Twitter Search Operators | tweets = api.search(q='from:nasa since:2018-09-01', count=3)
print_tweets(tweets) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
Searching for a Hashtag | tweets = api.search(q='#collegefootball', count=20)
print_tweets(tweets) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.10 Self Check**1. _(Fill-In)_** The Tweepy `API` method `________` returns tweets that match a query string.**Answer:** search.**2. _(True/False)_** If you plan to request more results than can be returned by one call to search, you should use an `API` object.**Answer:** False. If you plan to request more results than can be returned by one call to `search`, you should use a `Cursor` object.**3. _(IPython Session)_** Search for one tweet from the `nasa` account containing `'astronaut'`.**Answer:** | tweets = api.search(q='astronaut from:nasa', count=1)
print_tweets(tweets) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
13.11 Spotting Trends with the Twitter Trends API 13.11.1 Places with Trending Topics | trends_available = api.trends_available()
len(trends_available)
trends_available[0]
trends_available[1] | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.11.1 Self Check**1. _(Fill-In)_** If a topic “goes viral,” you could have thousands or even millions of people tweeting about that topic at once. Twitter refers to these as `________` topics.**Answer:** trending.**2. _(True/False)_** The Twitter Trends API’s `trends/place` method uses Yahoo! Where on Earth IDs (WOEIDs) to look up trending topics. The WOEID `1` represents worldwide. **Answer:** True. 13.11.2 Getting a List of Trending Topics Worldwide Trending Topics | world_trends = api.trends_place(id=1)
trends_list = world_trends[0]['trends']
trends_list[0]
trends_list = [t for t in trends_list if t['tweet_volume']]
from operator import itemgetter
trends_list.sort(key=itemgetter('tweet_volume'), reverse=True)
for trend in trends_list[:5]:
print(trend['name']) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
New York City Trending Topics | nyc_trends = api.trends_place(id=2459115) # New York City WOEID
nyc_list = nyc_trends[0]['trends']
nyc_list = [t for t in nyc_list if t['tweet_volume']]
nyc_list.sort(key=itemgetter('tweet_volume'), reverse=True)
for trend in nyc_list[:5]:
print(trend['name'])
| _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.11.2 Self Check**1. _(Fill-In)_** You also can look up WOEIDs programmatically using Yahoo!’s web services via Python libraries like `________`.**Answer:** `woeid`.**2. _(True/False)_** The statement `todays_trends = api.trends_place(id=1)` gets today’s U. S. trending topics.**Answer:** False. Actually, it gets today’s worldwide trending topics.**3. _(IPython Session)_** Display the top 3 trending topics today in the United States.**Answer:** | us_trends = api.trends_place(id='23424977')
us_list = us_trends[0]['trends']
us_list = [t for t in us_list if t['tweet_volume']]
us_list.sort(key=itemgetter('tweet_volume'), reverse=True)
for trend in us_list[:3]:
print(trend['name']) | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
13.11.3 Create a Word Cloud from Trending Topics | topics = {}
for trend in nyc_list:
topics[trend['name']] = trend['tweet_volume']
from wordcloud import WordCloud
wordcloud = WordCloud(width=1600, height=900,
prefer_horizontal=0.5, min_font_size=10, colormap='prism',
background_color='white')
wordcloud = wordcloud.fit_words(topics)
wordcloud = wordcloud.to_file('TrendingTwitter.png') | _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
 13.11.3 Self Check**1. _(IPython Session)_** Create a word cloud using the `us_list` list from the previous section’s Self Check.**Answer:** | topics = {}
for trend in us_list:
topics[trend['name']] = trend['tweet_volume']
wordcloud = wordcloud.fit_words(topics)
wordcloud = wordcloud.to_file('USTrendingTwitter.png')
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
| _____no_output_____ | MIT | examples/ch13/snippets_ipynb/13_07-11withSelfChecks.ipynb | edson-gomes/Intro-to-Python |
$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Exercises for Probabilistic Systems _prepared by Abuzer Yakaryilmaz_ Run the following cell to open the exercises.MathJax is used to express mathematical expressions and it requires internet connection. | import os, webbrowser
webbrowser.open(os.path.abspath("Exercises_Probabilistic_Systems.html")) | _____no_output_____ | Apache-2.0 | classical-systems/Exercises_Probabilistic_Systems.ipynb | jaorduz/QWorld2021_QMexico |
7. Share the Insight > “The goal is to turn data into insight” - Why do we need to communicate insight?- Types of communication - Exploration vs. Explanation- Explanation: Telling a story with data- Exploration: Building an interface for people to find storiesThere are two main insights we want to communicate. - Bangalore is the largest market for Onion Arrivals. - Onion Price variation has increased in the recent years.Let us explore how we can communicate these insight visually. Preprocessing to get the data | # Import the library we need, which is Pandas and Matplotlib
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# import seaborn as sns
# Set some parameters to get good visuals - style to ggplot and size to 15,10
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 10)
# Read the csv file of Monthwise Quantity and Price csv file we have.
df = pd.read_csv('MonthWiseMarketArrivals_clean.csv')
# Change the index to the date column
df.index = pd.PeriodIndex(df.date, freq='M')
# Sort the data frame by date
df = df.sort_values(by = "date")
# Get the data for year 2016
df2016 = df[df.year == 2016]
# Groupby on City to get the sum of quantity
df2016City = df2016.groupby(['city'], as_index=False)['quantity'].sum()
df2016City = df2016City.sort_values(by = "quantity", ascending = False)
df2016City.head() | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
Let us plot the Cities in a Geographic Map | # Load the geocode file
dfGeo = pd.read_csv('city_geocode.csv')
dfGeo.head() | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
PRINCIPLE: Joining two data framesThere will be many cases in which your data is in two different dataframe and you would like to merge them in to one dataframe. Let us look at one example of this - which is called left join | dfCityGeo = pd.merge(df2016City, dfGeo, how='left', on=['city', 'city'])
dfCityGeo.head()
dfCityGeo.isnull().describe()
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100) | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
We can do a crude aspect ratio adjustment to make the Cartesian coordinate system appear like a Mercator map | dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = 100, figsize = [10,11])
# Let us use quantity as the size of the bubble
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat',
s = dfCityGeo.quantity, figsize = [10,11])
# Let us scale down the quantity variable
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat',
s = dfCityGeo.quantity/1000, figsize = [10,11])
# Reduce the opacity of the color, so that we can see overlapping values
dfCityGeo.plot(kind = 'scatter', x = 'lon', y = 'lat', s = dfCityGeo.quantity/1000,
alpha = 0.5, figsize = [10,11]) | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
Exercise Can you plot all the States by quantity in (pseudo) geographic map? Plotting on a Map | import folium
# Getting an India Map
map_osm = folium.Map(location=[20.5937, 78.9629])
map_osm
# Using any map provider
map_stamen = folium.Map(location=[20.5937, 78.9629],
tiles='Stamen Toner', zoom_start=5)
map_stamen | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
Adding markers on the map | folium.CircleMarker(location=[20.5937, 78.9629],
radius=50000,
popup='Central India',
color='#3186cc',
fill_color='#3186cc',
).add_to(map_stamen)
map_stamen | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
Add markers from a dataframe | length = dfCityGeo.shape[0]
length
map_india = folium.Map(location=[20.5937, 78.9629], tiles='Stamen Toner', zoom_start=5)
for i in range(length):
lon = dfCityGeo.iloc[i, 2]
lat = dfCityGeo.iloc[i, 3]
location = [lat, lon]
radius = dfCityGeo.iloc[i, 1]/25
name = dfCityGeo.iloc[i,0]
folium.CircleMarker(location=location, radius=radius,
popup=name, color='#3186cc', fill_color='#3186cc',
).add_to(map_india)
map_india | _____no_output_____ | MIT | notebook/onion/7-Insight.ipynb | narnaik/art-data-science |
Thanks for:* https://www.kaggle.com/sishihara/moa-lgbm-benchmarkPreprocessing* https://www.kaggle.com/ttahara/osic-baseline-lgbm-with-custom-metric* https://zenn.dev/fkubota/articles/2b8d46b11c178ac2fa2d* https://qiita.com/ryouta0506/items/619d9ac0d80f8c0aed92* https://github.com/nejumi/tools_for_kaggle/blob/master/semi_supervised_learner.py* https://upura.hatenablog.com/entry/2019/03/03/233534* https://pompom168.hatenablog.com/entry/2019/07/22/113433* https://www.kaggle.com/c/lish-moa/discussion/193878* https://tsumit.hatenablog.com/entry/2020/06/20/044835* https://www.kaggle.com/kushal1506/moa-pytorch-feature-engineering-0-01846* https://www.kaggle.com/c/lish-moa/discussion/195195 | # Version = "v1" # starter model
# Version = "v2" # Compare treat Vs. ctrl and minor modifications, StratifiedKFold
# Version = "v3" # Add debug mode and minor modifications
# Version = "v4" # Clipping a control with an outlier(25-75)
# Version = "v5" # Clipping a control with an outlier(20-80)
# Version = "v6" # under sampling 500 → oversamplling 500, lipping a control with an outlier(10-90)
# Version = "v7" # Use anotated data, under sampling 500 → oversamplling 500, clipping a control with an outlier(10-90)
# Version = "v8" # pseudo labeling (thresholds:0.5), timeout
# Version = "v9" # pseudo labeling (thresholds:0.6), timeout
# Version = "v10" # pseudo labeling (thresholds:0.6), ReduceCol: Kolmogorov-Smirnov, PCA(whiten)&UMAP
# Version = "v11" # pseudo labeling (thresholds:0.6), ReduceCol: Kolmogorov-Smirnov, PCA(whiten)&UMAP, lgbm parames adjust
# Version = "v12" # Feature engineering based on feature importance
# Version = "v13" # Calibration, SMOTE(k_neighbors=5→1)
# Version = "v14" # Removed the Calibration, SMOTE(k_neighbors=1), pseudo labeling (thresholds:0.7)
# Version = "v15" # Updata anotated data
# Version = "v16" # Remove noisy label(confidence: 0.5)
# Version = "v17" # Modifications with remove noisy label func, Calibration, confidence = y_prob.probability.max()*0.3
# Version = "v18" # SMOTE(k_neighbors=1→2), confidence = y_prob.probability.max()*0.2
# Version = "v19" # SMOTE(k_neighbors=2→3),
# Version = "v20" # Modifications with confidence, Removed the Calibration, SMOTE(k_neighbors=2),
# Version = "v21" # DEBUG = False
# Version = "v22" # minor modifications
# Version = "v23" # TOP100→PCA→UMAP(n_components=3)
# Version = "v24" # TOP100→PCA→UMAP(n_components=10), UMAP(n_components=2→3)
# Version = "v25" # Feature engineering based on Feature importance
# Version = "v26" # Modify pseudo labeling func to exclude low confidence pseudo labels in the TEST data.
# Version = "v27" # LGBMClassifie:clf.predict→clf.predict_proba
# Version = "v28" # Calibration (No calbration:CV:0.06542)
# Version = "v29" # Remove Calibration, is_unbalance': True, SMOTE(k_neighbors=2→3), Modify pseudo labeling func to include low confidence pseudo labels in the TEST data, target_rate *= 1.2
# Version = "v30" # drop_duplicates(keep="last")
# Version = "v31" # target_rate *= 1.1, if Threshold <= 0.2: break, if sum(p_label)*1.5 >= check: break, if sum(p_label) <= check*1.5: break
# Version = "v32" # y_prob.probability.quantile(0.3), if Threshold >= 0.95: break
# Version = "v33" # RankGauss, Scaled by category, SMOTE(k_neighbors=2),
# Version = "v34" # RankGauss apply c-columns, remove TOP100, Add f_diff = lambda x: x - med, Create features
# Version = "v35" # f_div = lambda x: ((x+d)*10 / (abs(med)+d))**2, f_diff = lambda x: ((x-med)*10)**2, select features
# Version = "v36" # Add feature importance func
# Version = "v37" # Remove RankGauss for gene expression, fix feature importance func
Version = "v38" # Add MultiLabel Stratification func, fix index of data before split with "data = data.sort_index(axis='index')""
# Feature engineering based on Feature importance with v36 notebook
DEBUG = True | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Library | import lightgbm as lgb
from lightgbm import LGBMClassifier
import imblearn
from imblearn.over_sampling import SMOTE
from logging import getLogger, INFO, StreamHandler, FileHandler, Formatter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
import random
from sklearn import preprocessing
from sklearn.metrics import log_loss, roc_auc_score
from sklearn.model_selection import StratifiedKFold
from tqdm.notebook import tqdm
import torch
import warnings
warnings.filterwarnings("ignore")
print("lightgbm Version: ", lgb.__version__)
print("imblearn Version: ", imblearn.__version__)
print("numpy Version: ", np.__version__)
print("pandas Version: ", pd.__version__) | lightgbm Version: 2.3.1
imblearn Version: 0.7.0
numpy Version: 1.18.5
pandas Version: 1.1.3
| MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Utils | def get_logger(filename='log'):
logger = getLogger(__name__)
logger.setLevel(INFO)
handler1 = StreamHandler()
handler1.setFormatter(Formatter("%(message)s"))
handler2 = FileHandler(filename=f"{filename}.{Version}.log")
handler2.setFormatter(Formatter("%(message)s"))
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
logger = get_logger()
def seed_everything(seed=777):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Config | if DEBUG:
N_FOLD = 2
Num_boost_round=1000
Early_stopping_rounds=10
Learning_rate = 0.03
else:
N_FOLD = 4
Num_boost_round=10000
Early_stopping_rounds=30
Learning_rate = 0.01
SEED = 42
seed_everything(seed=SEED)
Max_depth = 7 | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Data Loading | train = pd.read_csv("../input/lish-moa/train_features.csv")
test = pd.read_csv("../input/lish-moa/test_features.csv")
train_targets_scored = pd.read_csv("../input/lish-moa/train_targets_scored.csv")
train_targets_nonscored = pd.read_csv("../input/lish-moa/train_targets_nonscored.csv")
sub = pd.read_csv("../input/lish-moa/sample_submission.csv")
# New data file available from 3rd November
drug = pd.read_csv('../input/lish-moa/train_drug.csv')
Targets = train_targets_scored.columns[1:]
Scored = train_targets_scored.merge(drug, on='sig_id', how='left')
Scored
def label_encoding(train: pd.DataFrame, test: pd.DataFrame, encode_cols):
n_train = len(train)
train = pd.concat([train, test], sort=False).reset_index(drop=True)
for f in encode_cols:
try:
lbl = preprocessing.LabelEncoder()
train[f] = lbl.fit_transform(list(train[f].values))
except:
print(f)
test = train[n_train:].reset_index(drop=True)
train = train[:n_train]
return train, test
# Manual annotation by myself
annot = pd.read_csv("../input/moa-annot-data/20201024_moa_sig_list.v2.csv")
annot
annot_sig = []
annot_sig = annot.sig_id.tolist()
print(annot_sig)
train_target = pd.concat([train_targets_scored, train_targets_nonscored], axis=1)
train_target.head() | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Training Utils | def get_target(target_col, annot_sig):
if target_col in annot_sig:
t_cols = []
for t_col in list(annot[annot.sig_id == target_col].iloc[0]):
if t_col is not np.nan:
t_cols.append(t_col)
target = train_target[t_cols]
target = target.sum(axis=1)
#1 or more, replace it with 1.
target = target.where(target < 1, 1)
else:
target = train_targets_scored[target_col]
return target
def Multi_Stratification(df, target_col, target):
_df = df.copy()
sig_id_lst = [list(Scored.sig_id[Scored.drug_id == id_].sample())[0] for id_ in Scored.drug_id.unique()]
# Remove sig_id wih target
del_idx = train[target==1].sig_id.unique()
select_idx = [i for i in sig_id_lst if i not in del_idx]
print(f"neg labels: {len(sig_id_lst)}→ selected neg labels: {len(select_idx)}")
# Select negative target
_df = _df.set_index('sig_id')
_df = _df.loc[select_idx, :]
_df = _df.reset_index(drop=True)
_df["target"] = 0
return _df
#===========================================================
# model
#===========================================================
def run_lgbm(target_col: str):
target = get_target(target_col, annot_sig)
target_rate = target.sum() / len(target)
# Estimate test target rate
#target_rate *= (-0.001*target.sum()+1.1)
Adj_target_rate = (2 * target_rate) / (target.sum()**0.15)
trt = train[target==1].copy().reset_index(drop=True)
trt["target"] = 1
trt = trt.drop("sig_id", axis=1)
logger.info(f"{target_col}, len(trt):{len(trt)}, target_rate:{target_rate:.7f} → Adj_target_rate:{Adj_target_rate:.7f}")
othr = Multi_Stratification(train, target_col, target)
X_train = pd.concat([trt, othr], axis=0, sort=False, ignore_index=True)
y_train = X_train["target"]
X_train = X_train.drop("target", axis=1)
sm = SMOTE(0.1, k_neighbors=3, n_jobs=2, random_state=SEED)
X_train, y_train = sm.fit_sample(X_train, y_train)
X_test = test.drop("sig_id", axis=1)
train_X, train_y, feature_importance_df_ = pseudo_labeling(X_train, y_train, X_test, target_rate, target_col)
y_preds = []
models = []
oof_train = np.zeros((len(train_X),))
score = 0
for fold_, (train_index, valid_index) in enumerate(cv.split(train_X, train_y)):
logger.info(f'len(train_index) : {len(train_index)}')
logger.info(f'len(valid_index) : {len(valid_index)}')
X_tr = train_X.loc[train_index, :]
X_val = train_X.loc[valid_index, :]
y_tr = train_y[train_index]
y_val = train_y[valid_index]
lgb_train = lgb.Dataset(X_tr,
y_tr,
categorical_feature=categorical_cols)
lgb_eval = lgb.Dataset(X_val,
y_val,
reference=lgb_train,
categorical_feature=categorical_cols)
logger.info(f"================================= fold {fold_+1}/{cv.get_n_splits()} {target_col}=================================")
model = lgb.train(params,
lgb_train,
valid_sets=[lgb_train, lgb_eval],
verbose_eval=100,
num_boost_round=Num_boost_round,
early_stopping_rounds=Early_stopping_rounds)
oof_train[valid_index] = model.predict(X_val, num_iteration=model.best_iteration)
y_pred = model.predict(X_test, num_iteration=model.best_iteration)
y_preds.append(y_pred)
models.append(model)
score = log_loss(train_y, oof_train)
logger.info(f"{target_col} logloss: {score}")
logger.info(f"=========================================================================================")
return sum(y_preds) / len(y_preds), score, models, feature_importance_df_
def convert_label(df, conf_0, conf_1, threshold=0.5):
df = df.copy()
Probability = df.iloc[:,0]
# Remove low confidence labels
conf_index = df[(Probability <= conf_0) & (conf_1 <= Probability)].index.values
Probability = Probability.where(Probability < threshold, 1).copy()
p_label = Probability.where(Probability >= threshold, 0).copy()
return p_label, conf_index
classifier_params = {
'max_depth': Max_depth,
'num_leaves': int((Max_depth**2)*0.7),
'n_estimators': Num_boost_round,
'learning_rate': 0.03,
'objective': "binary",
'colsample_bytree':0.4,
'subsample':0.8,
'subsample_freq':5,
'reg_alpha':0.1,
'reg_lambda':0.1,
'random_state':SEED,
'n_jobs':2,
}
#===========================================================
# pseudo_labeling
#===========================================================
def pseudo_labeling(X_train, y_train, X_test, target_rate, target_col, max_iter=3):
X = X_train.copy()
y = y_train.copy()
feature_importance_df = pd.DataFrame()
for iter_ in range(1, max_iter+1):
logger.info(f"================= Pseudo labeling {iter_} / {max_iter} =================")
y_preds = np.zeros((X.shape[0], 2))
y_preds[:, 0] = y.copy()
y_prob = np.zeros((X_test.shape[0]))
X_conf = pd.DataFrame()
y_conf = pd.DataFrame()
_importance_df = pd.DataFrame()
_importance_df["Feature"] = X.columns
for fold_, (train_idx, valid_idx) in enumerate(cv.split(X, y)):
X_tr, X_val = X.loc[train_idx, :], X.loc[valid_idx, :]
y_tr, y_val = y[train_idx], y[valid_idx]
clf = LGBMClassifier(**classifier_params)
clf.fit(X_tr, y_tr,
eval_set=[(X_tr, y_tr), (X_val, y_val)],
eval_metric='logloss',
verbose=100,
early_stopping_rounds=Early_stopping_rounds)
y_preds[valid_idx, 1] = clf.predict_proba(X_val, num_iteration=clf.best_iteration_)[:, 1]
y_prob += clf.predict_proba(X_test, num_iteration=clf.best_iteration_)[:, 1] / N_FOLD
# feature importance with target col
_importance_df["importance"] = clf.feature_importances_
feature_importance_df = pd.concat([feature_importance_df, _importance_df], axis=0)
auc_score = roc_auc_score(y_preds[:, 0], y_preds[:, 1])
logger.info(f"{iter_} / {max_iter} AUC score:{auc_score:.3f}")
y_preds = pd.DataFrame(y_preds, index=X.index, columns=[["Labels", "Preds"]])
if iter_ == 1: Threshold = y_preds.iloc[:, 1].quantile(0.89)
logger.info(f"Threshold: {Threshold}")
y_preds.iloc[:,1] = y_preds.iloc[:,1].where(y_preds.iloc[:,1] < Threshold, 1).copy()
y_preds.iloc[:,1] = y_preds.iloc[:,1].where(y_preds.iloc[:,1] >= Threshold, 0).copy()
y_preds = y_preds.sum(axis=1)
corect_idx = y_preds[y_preds != 1].index.values
X_corect, y_corect = X[X.index.isin(corect_idx)], y[y.index.isin(corect_idx)]
logger.info(f"Remove_noisy_labels: {len(y)-len(y_corect)} → positive_corect_labels: {sum(y_corect)}/{len(y_corect)}")
# Remove low confidence labels
y_prob = pd.DataFrame(y_prob, index=X_test.index, columns=["probability"])
percentile = y_prob.probability.quantile(0.3)
high_conf_0 = min(y_prob.probability.min()*30, percentile)
high_conf_1 = max(y_prob.probability.max()*0.6,Threshold)
logger.info(f"30th percentile: {percentile:.7f}")
p_label, conf_idx = convert_label(y_prob, high_conf_0, high_conf_1, Threshold)
p_label_rate = sum(p_label)/len(p_label)
logger.info(f"p_label_rate: {p_label_rate:.7f} Vs.target_rate: {target_rate:.5f}, Num_p_label: {sum(p_label)}, conf_0:{high_conf_0:.5f}, conf_1:{high_conf_1:.5f}")
# Set the params of threshold based on train labels rate (target_rate).
# target_rate = target.sum() / len(target)
if p_label_rate*3 < target_rate:
check = len(y_prob)*target_rate
for i in range(10):
logger.info(f"Num_p_label: {sum(p_label)}, Expected: {check:.1f}, Adj_threshold_{i+1}: {Threshold:.7f}")
if sum(p_label)*1.5 >= check: break
if (Threshold-0.005) < 0: break
Threshold -= 0.005
high_conf_1 = max(y_prob.probability.max()*0.6,Threshold)
p_label, conf_idx = convert_label(y_prob, high_conf_0, high_conf_1, Threshold)
if p_label_rate > target_rate*3:
check = len(y_prob)*target_rate
for i in range(10):
logger.info(f"Num_p_label: {sum(p_label)}, Expected: {check:.1f}, Adj_threshold_{i+1}: {Threshold:.7f}")
if sum(p_label) <= check*1.5: break
if (Threshold+0.005) > 0.99: break
Threshold += 0.005
high_conf_1 = max(y_prob.probability.max()*0.6,Threshold)
p_label, conf_idx = convert_label(y_prob, high_conf_0, high_conf_1, Threshold)
if iter_ == max_iter:
X_conf = X_test.copy()
else:
X_conf = X_test[X_test.index.isin(conf_idx)].copy()
logger.info(f"threshold:{Threshold:.7f}, positive p_label:{sum(p_label)}/{len(p_label)}, p_label_rate: {sum(p_label)/len(p_label):.7f}")
X = pd.concat([X_corect, X_conf], axis=0, ignore_index=True)
y = pd.concat([y_corect, p_label], axis=0, ignore_index=True)
X = X.drop_duplicates(keep="last").reset_index(drop=True)
y = y[X.index.values].reset_index(drop=True)
logger.info(f"positive y_label:{sum(y)}/{len(y)}, y_label_rate: {sum(y)/len(y):.7f}")
if DEBUG:
show_feature_importance(feature_importance_df, target_col, num=10)
return X, y, feature_importance_df
def show_feature_importance(feature_importance_df, title="all", num=100):
cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:num].index)
best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]
hight = int(num//3.3)
plt.figure(figsize=(8, hight))
sns.barplot(x="importance",
y="Feature",
data=best_features.sort_values(by="importance", ascending=False))
plt.title(f'{title}_Features importance (averaged)')
plt.tight_layout()
plt.savefig(f"./{title}_feature_importance_{Version}.png")
plt.show() | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
PreprocessingWe have to convert some categorical features into numbers in train and test. We can identify categorical features by `pd.DataFrame.select_dtypes`. | train.head()
train.select_dtypes(include=['object']).columns
train, test = label_encoding(train, test, ['cp_type', 'cp_time', 'cp_dose'])
train['WHERE'] = 'train'
test['WHERE'] = 'test'
data = train.append(test)
data = data.reset_index(drop=True)
data
# Select control data
ctl = train[(train.cp_type==0)].copy()
ctl = ctl.reset_index(drop=True)
ctl
# clipping
def outlaier_clip(df):
df = df.copy()
clipping = df.columns[4:6]
for col in clipping:
lower, upper= np.percentile(df[col], [10, 90])
df[col] = np.clip(df[col], lower, upper)
return df
ctl_df = pd.DataFrame(columns=train.columns)
for i in ctl.cp_time.unique():
for j in ctl.cp_dose.unique():
print(len(ctl[(ctl.cp_time==i) & (ctl.cp_dose==j)]))
tmp_ctl = ctl[(ctl.cp_time==i) & (ctl.cp_dose==j)]
tmp_ctl = outlaier_clip(tmp_ctl)
ctl_df = pd.concat([ctl_df, tmp_ctl], axis=0).reset_index(drop=True)
ctl_df
col_list = list(data.columns)[:-1]
data_df = pd.DataFrame(columns=col_list)
Splitdata = []
d = 1e-6
for i in tqdm(data.cp_time.unique()):
for j in data.cp_dose.unique():
select = data[(data.cp_time==i) & (data.cp_dose==j)]
print(len(select))
for k in list(select['WHERE']): Splitdata.append(k)
select = select.drop(columns='WHERE')
med = ctl[(ctl.cp_time==i) & (ctl.cp_dose==j)].iloc[:, 4:].median()
f_div = lambda x: ((x+d)*10 / (abs(med)+d))**3
select_div = select.iloc[:,4:].apply(f_div, axis=1).add_prefix('d_')
tmp_data = pd.concat([select, select_div], axis=1, sort=False)
f_diff = lambda x: ((x-med)*10)**2
select_diff = select.iloc[:,4:].apply(f_diff, axis=1).add_prefix('df_')
tmp_data = pd.concat([tmp_data, select_diff], axis=1, sort=False)
data_df = pd.concat([data_df, tmp_data], axis=0)
data_df
# clipping
clipping = data_df.columns[4:]
for col in tqdm(clipping):
lower, upper = np.percentile(data_df[col], [1, 99])
data_df[col] = np.clip(data_df[col], lower, upper)
data_df
data_df = data_df.replace([np.inf, -np.inf], np.nan)
data_df = data_df.dropna(how='any', axis=1)
data = data_df.copy()
g_list = [col for col in data.columns[4:] if col.startswith("g-")]
c_list = [col for col in data.columns[4:] if col.startswith("c-")]
d_g_list = [col for col in data.columns[4:] if col.startswith("d_g-")]
d_c_list = [col for col in data.columns[4:] if col.startswith("d_c-")]
df_g_list = [col for col in data.columns[4:] if col.startswith("df_g-")]
df_c_list = [col for col in data.columns[4:] if col.startswith("df_c-")]
g_all_list = g_list + d_g_list + df_g_list
c_all_list = c_list + d_c_list + df_c_list
from sklearn.preprocessing import StandardScaler, QuantileTransformer
# Z-score
#scaler = StandardScaler(with_mean=True, with_std=True)
# RankGauss
scaler = QuantileTransformer(output_distribution='normal', random_state=SEED)
size = len(data)  # number of rows
# Apply RankGauss to everything after the g- columns (raw gene expression is left unscaled)
for col in tqdm(data.columns[4+len(g_list):]):
raw = data[col].values.reshape(size, 1)
scaler.fit(raw)
data[col] = scaler.transform(raw).reshape(1, size)[0]
data
std_df = data.iloc[:, 4:].copy()
data_df.cp_type = data_df.cp_type.astype('int16')
data_df.cp_time = data_df.cp_time.astype('int16')
data_df.cp_dose = data_df.cp_dose.astype('int16')
from sklearn.cluster import KMeans
n_clusters = 7
def create_cluster(data, features, kind, n_clusters):
data_ = data[features].copy()
kmeans = KMeans(n_clusters = n_clusters, random_state = SEED).fit(data_)
data[f'clusters_{kind}'] = kmeans.labels_[:data.shape[0]]
return data
def detect_cluster(data, feature_list, kind_list, n_clusters):
for idx, feature in enumerate(tqdm(feature_list)):
data = create_cluster(data, feature, kind=kind_list[idx], n_clusters=n_clusters)
clusters = data.iloc[:, -len(feature_list):].copy()
return clusters
feature_list = (g_list, c_list, d_g_list, d_c_list, df_g_list, df_c_list, g_all_list, c_all_list)
kind_list = ('g', 'c', 'd_g', 'd_c', 'df_g', 'df_c', 'g_all', 'c_all')
clusters = detect_cluster(data, feature_list, kind_list, n_clusters)
clusters
# Count cluster types
for i in tqdm(range(n_clusters-1, -1, -1)):
clusters[f"cnt_{i}"] = clusters.apply(lambda x: (x == i).sum(), axis=1)
clusters
def fe_stats(df, features, kind):
df_ = df.copy()
MAX, MIN = df_[features].max(axis = 1), df_[features].min(axis = 1)
Kurt = df_[features].kurtosis(axis = 1)
Skew = df_[features].skew(axis = 1)
df_[f'{kind}_max'] = MAX
df_[f'{kind}_min'] = MIN
df_[f'{kind}_max_min'] = (MAX * MIN)**2
df_[f'{kind}_kurt'] = Kurt**3
df_[f'{kind}_skew'] = Skew**3
df_[f'{kind}_max_kurt'] = MAX * Kurt
df_[f'{kind}_max_skew'] = MAX * Skew
df_[f'{kind}_kurt_skew'] = Kurt * Skew
df_[f'{kind}_sum'] = (df_[features].sum(axis = 1))**3
df_[f'{kind}_mean'] = (df_[features].mean(axis = 1))**3
df_[f'{kind}_median'] = (df_[features].median(axis = 1))**3
df_[f'{kind}_mad'] = (df_[features].mad(axis = 1))**3
df_[f'{kind}_std'] = (df_[features].std(axis = 1))**3
return df_
def detect_stats(data, feature_list, kind_list):
for idx, feature in enumerate(tqdm(feature_list)):
data = fe_stats(data, feature, kind=kind_list[idx])
stats = data.iloc[:, -9*len(feature_list):].copy()
return stats
stats = detect_stats(data, feature_list, kind_list)
stats
# Add data with sig_id, cp_type, cp_time, and cp_dose
data = pd.concat([data.iloc[:, :4], clusters], axis=1)
data = pd.concat([data, stats], axis=1)
data = pd.concat([data, std_df], axis=1)
data
# Create feature
import itertools
def CreateFeat(df):
def func_product(row):
return (row[col1]) * (row[col2])
def func_division(row):
delta = 1e-6
return (row[col1]+delta) / (row[col2]+delta)
Columns = df.columns
for col1, col2 in tqdm(tuple(itertools.permutations(Columns, 2))):
df[f"{col1}_{col2}_prd"] = df[[col1, col2]].apply(func_product, axis=1)
df[f"{col1}_{col2}_div"] = round(df[[col1, col2]].apply(func_division, axis=1), 0)
print(f"Crated {len(df.columns) - len(Columns)} columns")
return df
# Create feature2
def CreateFeat2(df):
func_list = ("max", "min", "mean", "median", "mad", "var", "std")
Columns = df.columns
for idx, func in enumerate(func_list):
print(f"{idx}/{len(func_list)}: Calucurating... {func}")
for col1, col2 in tqdm(tuple(itertools.permutations(Columns, 2))):
df[f"{col1}_{col2}_{func}"] = df[[col1, col2]].apply(func, axis=1)
print(f"Crated {len(df.columns) - len(Columns)} columns")
return df
#Reduce columens
def ReduceCol(df):
remove_cols = []
Columns = df.columns
for col1, col2 in tqdm(tuple(itertools.permutations(Columns, 2))):
# constant columns
if df[col1].std() == 0: remove_cols.append(col1)
# duplicated columns
if (col1 not in remove_cols) and (col2 not in remove_cols):
x, y = df[col1].values, df[col2].values
if np.array_equal(x, y): remove_cols.append(col1)
df.drop(remove_cols, inplace=True, axis=1)
print(f"Removed {len(remove_cols)} constant & duplicated columns")
return df
# Create feature based on feature importance with v24 notebook
#important_col = []
#tmp = CreateFeat(data[important_col])
#data = pd.concat([data, tmp], axis=1)
# Create feature based on feature importance with v24 notebook
#tmp = CreateFeat2(data[important_col])
#data = pd.concat([data, tmp], axis=1)
#remove duplicate columns
#data = data.loc[:,~data.columns.duplicated()]
#tmp = ReduceCol(data.iloc[:,4:])
#data = pd.concat([data.iloc[:,:4], tmp], axis=1)
#data
# clipping
clipping = data.columns[4:]
for col in clipping:
lower, upper = np.percentile(data[col], [1, 99])
data[col] = np.clip(data[col], lower, upper)
data
data['WHERE'] = Splitdata
data = data.sort_index(axis='index')
Splitdata = data['WHERE']
data
from sklearn.feature_selection import VarianceThreshold
var_thresh = VarianceThreshold(0.99)
data_var_thresh = var_thresh.fit_transform(data.iloc[:, 4:-1])
Remove_columns = np.array(data.columns[4:-1])[var_thresh.get_support()==False]
tmp = pd.DataFrame(data_var_thresh, columns=np.array(data.columns[4:-1])[var_thresh.get_support()==True])
data = pd.concat([data.iloc[:,:4], tmp], axis=1)
print(f"Remove {len(Remove_columns)} columns: {Remove_columns}")
data['WHERE'] = Splitdata
train = data[data['WHERE']=="train"].drop('WHERE', axis=1).reset_index(drop=True)
test = data[data['WHERE']=="test"].drop('WHERE', axis=1).reset_index(drop=True)
# Kolmogorov-Smirnov test applied to the train and test data.
from scipy.stats import ks_2samp
tr, ts = train.iloc[:, 4:], test.iloc[:, 4:]
list_p_value =[ks_2samp(ts[i], tr[i])[1] for i in tqdm(tr.columns)]
Se = pd.Series(list_p_value, index=tr.columns).sort_values()
list_discarded = list(Se[Se < .1].index)
train, test = train.drop(list_discarded, axis=1), test.drop(list_discarded, axis=1)
print(f"Removed {len(list_discarded)} columns") | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Modeling | cv = StratifiedKFold(n_splits=N_FOLD, shuffle=True, random_state=SEED)
params = {
'objective': 'binary',
'metric': 'binary_logloss',
'learning_rate': Learning_rate,
'num_threads': 2,
'verbose': -1,
'max_depth': Max_depth,
'num_leaves': int((Max_depth**2)*0.7),
'feature_fraction':0.4, # randomly select part of features on each iteration
'lambda_l1':0.1,
'lambda_l2':0.1,
'bagging_fraction': 0.8,
'bagging_freq': 5,
}
def select_importance_cols(feature_importance_df, num=10):
best_cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:num].index)
return best_cols
def create_featureimprotance(models, feature_importance_df):
for model in models:
_importance_df = pd.DataFrame()
_importance_df["Feature"] = train.columns[1:]
_importance_df["importance"] = model.feature_importance(importance_type='gain')
feature_importance_df = pd.concat([feature_importance_df, _importance_df], axis=0)
return feature_importance_df
categorical_cols = []
feature_importance_df = pd.DataFrame()
importance_cols_df = pd.DataFrame()
scores = []
models = []
for target_col in tqdm(train_targets_scored.columns[1:]):
_preds, _score, models, _feature_importance_df = run_lgbm(target_col)
sub[target_col] = _preds
scores.append(_score)
if DEBUG:
if _score > 0.02:
importance_cols_df[target_col] = select_importance_cols(_feature_importance_df)
print(importance_cols_df)
feature_importance_df = create_featureimprotance(models, feature_importance_df)
sub.to_csv('submission.csv', index=False)
print(f"CV:{np.mean(scores)}")
if DEBUG:
show_feature_importance(feature_importance_df)
feature_importance_df.to_csv(f'feature_importance_df.{Version}.csv', index=False)
importance_cols_df.to_csv(f'importance_cols_df.{Version}.csv', index=False) | _____no_output_____ | MIT | notebooks/20201103-moa-lgbm-v38.ipynb | KFurudate/kaggle_MoA |
Before adding any more features, let's check the performance with the initial set. | #Use the following predictors
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest regression on initial features'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score)) | Random forest regression on initial features gives £19.64 mean difference and 0.3423 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Let's monitor how these (hopefully) improve as we add more features. | mean_diff_each_step = [mean_diff]
score_each_step = [score]
each_step = ['Initial features'] | _____no_output_____ | MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Year of release
This should have a large bearing on the price, but the years of some releases are unknown. Let's make some assumptions about these to generate the feature. | #Convert the known years to integers
albums['integer year'] = [int(year[-4:]) if not pd.isnull(year) else year for year in albums['year']]
#For the unknown years
for index in albums[albums['year'].isnull()].index:
ave_artist_year = np.round(albums.ix[albums['artist'] == albums.ix[index,'artist'],'integer year'].mean())
ave_label_year = np.round(albums.ix[albums['label'] == albums.ix[index,'label'],'integer year'].mean())
#If the artist has released other albums, take the year to be the artist's mean
if not pd.isnull(ave_artist_year):
albums.ix[index,'integer year'] = ave_artist_year
#Otherwise if the label has released other albums, take the year to be the label's mean
elif not pd.isnull(ave_label_year):
albums.ix[index,'integer year'] = ave_label_year
#Otherwise set the year to be the mean
else:
albums.ix[index,'integer year'] = albums['integer year'].mean()
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest regression with year added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Year of release') | Random forest regression with year added gives £18.29 mean difference and 0.4012 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
So knowing the year helps a lot.
Average rating
A lot of releases also have a rating given by users. | albums[~albums['average rating'].isnull()]['average rating'].hist(bins=100)
plt.xlabel('Average Rating')
plt.ylabel('Count')
plt.title('Distribution of average ratings');
for i in range(1,6):
print(str(i) + '* rated albums have mean price £{0:.2f}'.format(albums.ix[(albums['average rating'] <= i)
& (albums['average rating'] > i-1),
'median price sold'].mean()))
print('Unrated albums have mean price £{0:.2f}'.format(albums.ix[albums['average rating'].isnull(),
'median price sold'].mean())) | 1* rated albums have mean price £38.79
2* rated albums have mean price £32.74
3* rated albums have mean price £36.40
4* rated albums have mean price £24.29
5* rated albums have mean price £37.11
Unrated albums have mean price £41.45
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
These are distributed towards the high end (as expected: we're looking at the most wanted records), but a higher rating does not necessarily correspond to a more expensive record.
Let's first try giving the unrated records the mean rating. | #If no rating is given, set it to be the average
albums['new rating'] = albums['average rating']
albums.ix[albums['new rating'].isnull(),'new rating'] = albums['new rating'].mean()
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'new rating']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Assuming the mean rating for those releases unrated'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score)) | Assuming the mean rating for those releases unrated gives £18.33 mean difference and 0.4007 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
What about setting the unrated records to 0, or 6? | #If no rating is given, set it to 0
albums['new rating'] = albums['average rating']
albums.ix[albums['new rating'].isnull(),'new rating'] = 0
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'new rating']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Assuming a rating of 0 for those releases unrated'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
#If no rating is given, set it to 6
albums['new rating'] = albums['average rating']
albums.ix[albums['new rating'].isnull(),'new rating'] = 6
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'new rating']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Assuming a rating of 6 for those releases unrated'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score)) | Assuming a rating of 6 for those releases unrated gives £18.33 mean difference and 0.4021 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
So assigning a 6 for the unrated items gives the best score. | albums.ix[albums['average rating'].isnull(),'average rating'] = 6
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Average rating') | _____no_output_____ | MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Number of records
Obviously all the records are LPs, but some may be double or even triple, so let's add a feature for the number of records. | albums['format'].unique()
mean_diff = get_mean_diff(np.array(new_albums['median price sold'][new_albums['format']!='Vinyl']),
predictions[np.array(new_albums['format']!='Vinyl')])
score = get_score(np.array(new_albums['median price sold'][new_albums['format']!='Vinyl']),
predictions[np.array(new_albums['format']!='Vinyl')])
print('For boxsets we have £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff, score)) | For boxsets we have £15.45 mean difference and 0.3669 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
So these boxsets are not being served well by the current predictor. | #If the format is 'Vinyl', assume one record
#Otherwise extract the number
albums['number of records'] = [1 if format_str == 'Vinyl'
else [s for s in format_str.split() if s.isdigit()][0] if '×' in format_str
else None
for format_str in albums['format']]
#For the formats labelled just 'box set' look for the number of records in format details
for index in albums[albums['number of records'].isnull()].index:
split_formats = albums.ix[index, 'format details'].split(';')
num_records = [[s for s in format_str.split() if s.isdigit()][0]
for format_str in split_formats if '×' in format_str]
#Assume the maximum number is correct
if len(num_records) != 0:
albums.ix[index,'number of records'] = max(num_records)
#If no number specified, assume 1
else:
albums.ix[index,'number of records'] = 1
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with number of records added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Number of records')
mean_diff = get_mean_diff(np.array(new_albums['median price sold'][new_albums['format']!='Vinyl']),
predictions[np.array(new_albums['format']!='Vinyl')])
score = get_score(np.array(new_albums['median price sold'][new_albums['format']!='Vinyl']),
predictions[np.array(new_albums['format']!='Vinyl')])
print('For boxsets we now have £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff, score)) | For boxsets we now have £14.33 mean difference and 0.4546 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
So adding the number of records doesn't affect the score much, but offers a great improvement for boxsets themselves.
Limited editions
Some records are described as 'limited edition' in either the format details or release notes. This could correlate with higher prices, so let's add a binary variable for it. | #Pick out if limited edition is specified in format details or notes
albums.ix[albums['notes'].isnull(),'notes'] = ''
albums['limited edition'] = [(('Limited Edition' in albums.ix[index,'format details']) or
('Limited Edition' in albums.ix[index,'notes']))
for index in albums.index]
print('Limited edition albums have mean price £{0:.2f}'.format(albums.ix[albums['limited edition'] == 1,
'median price sold'].mean()))
print('Non-limited edition albums have mean price £{0:.2f}'.format(albums.ix[albums['limited edition'] == 0,
'median price sold'].mean()))
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with limited edition added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Limited edition') | Random forest with limited edition added gives £18.08 mean difference and 0.4018 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Seems to make it worse.
Reissues
Reissues are likely to be cheaper, while the original records that have been reissued will be more in demand, and so more expensive. Some reissues are specified in the format details and notes. | #Pick out if reissue is specified in format details or notes
albums.ix[albums['notes'].isnull(),'notes'] = ''
albums['reissue'] = [(('Reissue' in albums.ix[index,'format details']) or ('Reissue' in albums.ix[index,'notes']))
for index in albums.index]
print('Reissue albums have mean price £{0:.2f}'.format(albums.ix[albums['reissue'] == 1,
'median price sold'].mean()))
print('Non-reissue albums have mean price £{0:.2f}'.format(albums.ix[albums['reissue'] == 0,
'median price sold'].mean()))
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition', 'reissue']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with reissue added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score)) | Random forest with reissue added gives £18.03 mean difference and 0.4012 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Not a great difference, and this makes no distinction as to whether a record has itself been reissued. We could perhaps get a better grasp of reissues if we look at the difference in year of release between versions. | #If other versions have been released, find the difference to the earliest and latest versions
albums.ix[albums['years of versions'].isnull(),'years of versions'] = ''
albums['list version years'] = [[int(s) for s in albums.ix[index,'years of versions'].split('; ') if s.isdigit()]
+ [int(albums.ix[index,'integer year'])] for index in albums.index]
albums['difference to earliest version'] = [min(albums.ix[index,'list version years']) - int(albums.ix[index,'integer year'])
for index in albums.index]
albums['difference to latest version'] = [max(albums.ix[index,'list version years']) - int(albums.ix[index,'integer year'])
for index in albums.index]
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition',
'reissue', 'difference to earliest version', 'difference to latest version']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with years to reissues added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Reissues') | Random forest with years to reissues added gives £17.86 mean difference and 0.4075 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Together these improve both the mean difference and score.
Countries
There are lots of different countries, many with few entries. First let's try binary variables for countries with more (>10) entries than the likely min leaf size in the random forest regressor (as otherwise they won't be used in the regressor). | albums.ix[albums['country'].isnull(),'country'] = ''
#Find all the countries
country_list = albums['country'].unique()
#Create a binary variable for each country
for country in country_list:
albums['c_'+country] = 0
#Populate the binary variable
albums.ix[albums['country'] == country, 'c_'+country] = 1
#Ignore the styles with few (<10) entries
c_country_list = []
for country in country_list:
if sum(albums['c_'+country])<10:
albums.drop('c_'+country, axis=1, inplace=True)
else:
c_country_list.append('c_'+country)
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition',
'reissue', 'difference to earliest version', 'difference to latest version'] + c_country_list
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with countries added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Countries') | Random forest with countries added gives £17.44 mean difference and 0.4231 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Now let's try using k-means to cluster together the countries into a few groups, and using these as features. | from sklearn.cluster import KMeans
#Make a dataframe with the price statistics for each country
country_stats = pd.DataFrame(albums.groupby(by = 'country')
.describe()['median price sold']).reset_index().pivot(index = 'country',
columns = 'level_1',
values = 'median price sold')
#Replace the nan stds with 0
country_stats.ix[country_stats['std'].isnull(),'std'] = 0
#scale each column by the median value
country_stats_scaled = pd.DataFrame()
for column in country_stats.columns:
#Take the square root of each value to reduce the effect of outlying, really expensive records
country_stats_scaled[column] = [np.sqrt(country_stats.ix[index,column] / country_stats[column].median())
for index in country_stats.index]
#Cluster with kmeans
km = KMeans(n_clusters=6)
#Cluster on the limits, mean and percentiles
km.fit(country_stats_scaled[['max', 'mean', 'min', '25%', '50%', '75%']].as_matrix())
country_stats['label'] = km.labels_
fig = plt.figure(figsize=(15,3))
c_list = 'rgbmkc'
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
count = 0
for label in country_stats['label'].unique():
x = list(country_stats.ix[country_stats['label'] == label,'min'])
y = list(country_stats.ix[country_stats['label'] == label,'max'])
z = list(country_stats.ix[country_stats['label'] == label,'mean'])
ax1.scatter(z, y, c=c_list[count], marker='o')
ax2.scatter(z, x, c=c_list[count], marker='o')
count = count+1
ax1.set_xlabel('mean')
ax1.set_ylabel('max')
ax1.set_xlim([0,500])
ax1.set_ylim([0,2500])
ax2.set_xlabel('mean')
ax2.set_ylabel('min')
ax2.set_xlim([0,500])
ax2.set_ylim([0,500]);
for i in range(0,6):
print(c_list[i] + ' corresponds to:')
print(list(country_stats[country_stats['label'] == country_stats['label'].unique()[i]].index[:5]))
albums.ix[albums['country'] == 'Mongolia', ['artist','title','label','year','median price sold']] | _____no_output_____ | MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
The six groups are:
1) Red: low mean/min, middling max; uniformly cheap countries: places where major labels release records with fewer small pressings, e.g. Australia, Canada, Cuba, Poland, plus hybrid 'countries' like Europe
2) Green: high mean/min, low max; uniformly expensive countries: places with only a few, sought after releases, e.g. Haiti, Lebanon, Pakistan
3) Blue: middling mean/min, low max; uniformly mid-range countries: places where records are slightly rarer, with local scenes of interest, e.g. Colombia, Ghana, Jamaica, Mexico
4) Magenta: low mean, low min, high max; diversely cheaper countries, major labels releasing records in bulk but also smaller (private?) pressings, e.g. US, UK, Brazil, France, Japan
5) Black: high mean, low min, high max; diversely expensive countries: places with both major label operations, but also local scenes of interest, e.g. Belgium, India, South Africa, Turkey
6) Cyan: Mongolia, or specifically one record of a Soviet variety show by the Bayan Mongol Group; this fits in best with the records in 1) so reassign Mongolia there
These are pretty neat divisions (except for the last one), so we should add these as binary variables. I don't agree with where some of the countries end up, but these have few data points - more data will place them in the 'correct' category.
N.B. This is a bit dodgy as we're using the test data to build the classifier, so perhaps in future we should only cluster on those in the training data. We also should create a more rigorous way of weeding out 'Mongolias': if there's less than a certain number of countries in a cluster, assign them to the next nearest cluster (a sketch of this rule is given after the next code cell). | #Reassign Mongolia
country_stats.ix['Mongolia', 'label'] = country_stats.ix['Lebanon', 'label']
#Create binary variables for each label
count = 1
for label in country_stats['label'].unique():
albums['country label ' + str(count)] = 0
for country in country_stats[country_stats['label'] == label].index:
albums.ix[albums['country'] == country, 'country label ' + str(count)] = 1
count = count + 1
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition',
'reissue', 'difference to earliest version', 'difference to latest version',
'country label 1', 'country label 2', 'country label 3', 'country label 4', 'country label 5']
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with countries added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score)) | Random forest with countries added gives £17.84 mean difference and 0.4017 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
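As suggested above, the manual Mongolia fix could be generalised: any cluster containing fewer than some minimum number of countries gets its members folded into the next nearest centroid. Here is a hedged sketch of that rule (an addition, not part of the original notebook), reusing the fitted km, country_stats and country_stats_scaled from the cells above; min_countries is an arbitrary illustrative threshold.
min_countries = 3
stat_cols = ['max', 'mean', 'min', '25%', '50%', '75%']
#Distance from every country to every cluster centroid (same columns as used for fitting)
distances = km.transform(country_stats_scaled[stat_cols].as_matrix())
cluster_sizes = country_stats['label'].value_counts()
small_clusters = set(cluster_sizes[cluster_sizes < min_countries].index)
for row in range(len(country_stats)):
    if country_stats['label'].iloc[row] in small_clusters:
        #Walk the centroids in order of distance and take the nearest 'big' cluster
        for candidate in np.argsort(distances[row]):
            if candidate not in small_clusters:
                country_stats.iloc[row, country_stats.columns.get_loc('label')] = candidate
                break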
Using the cluster labels isn't as good as the binary country variables.
Genres
Genre may affect price: bebop collectors will probably pay more for originals than smooth jazz collectors, for example. | #Extract all the genres
genre_list = []
for genres in albums['genre'].unique():
for genre in genres.split('; '):
if genre not in genre_list:
genre_list.append(genre)
#Create a binary variable for each genre
for genre in genre_list:
albums['g_'+genre] = 0
#Populate the binary variable
for genres in albums['genre'].unique():
for genre in genres.split('; '):
albums.ix[albums['genre'] == genres, 'g_'+genre] = 1
#Ignore the genres with few (<10) entries
g_genre_list = []
for genre in genre_list:
if sum(albums['g_'+genre])<10:
albums.drop('g_'+genre, axis=1, inplace=True)
else:
print(genre + ': {0}, £{1:.2f}'.format(sum(albums['g_'+genre]),albums.ix[albums['g_'+genre] == 1,
'median price sold'].mean()))
g_genre_list.append('g_'+genre) | Electronic: 2474, £32.80
Jazz: 23124, £35.27
Hip Hop: 365, £24.21
Stage & Screen: 1865, £51.57
Funk / Soul: 6067, £34.71
Rock: 3636, £35.45
Latin: 1763, £37.27
Folk, World, & Country: 1262, £46.08
Pop: 1131, £33.75
Classical: 388, £49.13
Non-Music: 284, £43.94
Reggae: 153, £30.66
Blues: 521, £30.02
Children's: 19, £22.49
Brass & Military: 38, £38.46
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Hip hop records are invariably newer, so less expensive. Stage & Screen is more expensive (perhaps less popular upon initial release, at least in the realms of jazz). Folk, World, & Country will be pressings from more 'exotic' locations, so more expensive. Classical/Non-Music is more expensive for perhaps the same reason as Stage & Screen.
Let's use some of these as features (with >100 entries). | predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition',
'reissue', 'difference to earliest version', 'difference to latest version'] + c_country_list + g_genre_list
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with genres added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Genres') | Random forest with genres added gives £17.45 mean difference and 0.4155 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Little difference.
Styles
Slightly more specific categorisations, but may have a similar effect to genres. | albums.ix[albums['style'].isnull(),'style'] = ''
#Extract all the styles
style_list = []
for styles in albums['style'].unique():
for style in styles.split('; '):
if style not in style_list:
style_list.append(style)
#Create a binary variable for each style
for style in style_list:
albums['s_'+style] = 0
#Populate the binary variable
for styles in albums['style'].unique():
for style in styles.split('; '):
albums.ix[albums['style'] == styles, 's_'+style] = 1
#Ignore the styles with few (<10) entries
s_style_list = []
for style in style_list:
if sum(albums['s_'+style])<10:
albums.drop('s_'+style, axis=1, inplace=True)
else:
print(style + ': {0}, £{1:.2f}'.format(sum(albums['s_'+style]),albums.ix[albums['s_'+style] == 1,
'median price sold'].mean()))
s_style_list.append('s_'+style)
predictors = ['compilation','number for sale', 'number have', 'number of ratings', 'number of tracks',
'number of versions', 'number on label', 'number on label for sale', 'number want',
'integer year', 'average rating', 'number of records', 'limited edition',
'reissue', 'difference to earliest version', 'difference to latest version'] + c_country_list + g_genre_list + s_style_list
#Randomly shuffle the row order
new_albums = albums.sample(frac=1)
#Generate predictions
predictions = get_random_forest_predictions(new_albums, predictors)
#Calculate
mean_diff = get_mean_diff(np.array(new_albums['median price sold']), predictions)
score = get_score(np.array(new_albums['median price sold']), predictions)
print('Random forest with styles added'
' gives £{0:.2f} mean difference and {1:.4f} score'.format(mean_diff,score))
mean_diff_each_step.append(mean_diff)
score_each_step.append(score)
each_step.append('Styles') | Random forest with styles added gives £17.27 mean difference and 0.4179 score
| MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Improves mean difference a little at least. | xticks = list(range(0,len(score_each_step)))
fig = plt.figure(figsize=(12,3))
ax1 = fig.add_subplot(121)
ax1.plot(xticks,mean_diff_each_step,'-or')
ax1.set_title('mean diff')
ax1.set_xticks(xticks)
ax1.set_xticklabels(each_step, rotation=90)
ax1.set_ylabel('mean diff (£)')
ax2 = fig.add_subplot(122)
ax2.plot(xticks,score_each_step,'-ob')
ax2.set_title('score')
ax2.set_xticks(xticks)
ax2.set_xticklabels(each_step, rotation=90)
ax2.set_ylabel('score'); | _____no_output_____ | MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
Although there's obviously some noise in both the mean difference and score associated with the random partitioning of the data into train and test sets, there's still a clear improvement in both measures as features have been added. | albums.to_csv('../data/interim/albums.csv',encoding='utf-8') | _____no_output_____ | MIT | notebooks/02-20_9_16-generating_features.ipynb | sfblake/Record_Price_Predictor |
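To put a rough number on that noise (an added sketch, not part of the original notebook), the shuffle/predict/score step can simply be repeated a few times with the final feature set and the spread reported. It reuses the albums dataframe, the predictors list and the helper functions used throughout this notebook; n_repeats is an arbitrary choice.
n_repeats = 5
diffs, scores = [], []
for _ in range(n_repeats):
    #Reshuffle, refit and rescore
    shuffled = albums.sample(frac=1)
    preds = get_random_forest_predictions(shuffled, predictors)
    diffs.append(get_mean_diff(np.array(shuffled['median price sold']), preds))
    scores.append(get_score(np.array(shuffled['median price sold']), preds))
print('Mean difference £{0:.2f} +/- {1:.2f}, score {2:.4f} +/- {3:.4f}'.format(
    np.mean(diffs), np.std(diffs), np.mean(scores), np.std(scores)))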
Wednesday, 24 June 2020 BaekJoon - 11052 : 카드 구매하기 (Buying Cards) (Python) Problem: https://www.acmicpc.net/problem/11052 Blog: https://somjang.tistory.com/entry/BaekJoon-11052%EB%B2%88-%EC%B9%B4%EB%93%9C-%EA%B5%AC%EB%A7%A4%ED%95%98%EA%B8%B0-Python First attempt | cardNum = int(input())
NC = [0]*(cardNum+1)
cardPrice = [0]+list(map(int, input().split()))
def answer():
NC[0], NC[1] = 0, cardPrice[1]
for i in range(2, cardNum+1):
for j in range(1, i+1):
NC[i] = max(NC[i], NC[i-j]+cardPrice[j]) # either keep the current best, or buy a pack of j cards on top of the best way to buy i-j cards
print(NC[cardNum])
answer() | _____no_output_____ | MIT | DAY 101 ~ 200/DAY139_[BaekJoon] 카드 구매하기 (Python).ipynb | SOMJANG/CODINGTEST_PRACTICE |
This code uses the Brian2 neuromorphic simulator to implement a version of role/filler binding and unbinding based on the paper "High-Dimensional Computing with Sparse Vectors" by Laiho et al 2016. The vector representation is a block structure of slots_per_vector slots, where the number of slots is the vector dimension. In each slot there are a number of possible bit positions, with one bit set per slot. In this implementation the role/filler binding and unbinding operations are realised in Brian2 by representing each slot as a neuron and the time delay of that neuron's spike as the bit position (a minimal sketch of this spike-time encoding is given after the vector-creation cell below). To check that the Brian2 network is performing correctly, the first section of the code computes the expected sparse bound vector. The neuromorphic equivalent is implemented as two Brian2 networks: the first network (net1) implements the role/filler binding, and the second network (net2) implements the role/filler unbinding and the clean-up memory operation, which compares the unbound vector with all the memory vectors to find the best match. The sparse bound vector resulting from net1 is passed to net2 to initiate the unbinding.
Init base vars | # Init base vars
show_bound_vecs_slot_detail = False
slots_per_vector = 100 # This is the number of neurons used to represent a vector
bits_per_slot = 100 # This is the number of bit positions
mem_size = 500 # The number of vectors against which the resulting unbound vector is compared
Num_bound = 5 # The number of vectors that are to be bound
input_delay = bits_per_slot # Time delay between adding cyclically shifted vectors to construct the bound vector is set to 'bits_per_slot' milliseconds.
#NB all timings use milliseconds and we can use a random seed if required.
np.random.seed(54321)
target_neuron = 1
y_low=target_neuron-1 # This is used to select the lowest index of the range of neurons that are to be displayed
y_high=target_neuron+1 # This is used to select the highest index of the range of neurons that are to be displayed
delta = (2*Num_bound) * bits_per_slot #This determines the time period over which the Brian2 simulation is to be run.
| _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
Create a set of sparse VSA vectors
Generate a random matrix (P_matrix) which represents all of the sparse vectors that are to be used. This matrix has columns equal to the number of slots_per_vector in each vector, with the number of rows equal to the memory size (mem_size). | P_matrix = np.random.randint(0, bits_per_slot, size=(mem_size,slots_per_vector))
Role_matrix = P_matrix[::2]
Val_matrix = P_matrix[1::2] | _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
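As an aside, the spike-time encoding described in the introduction can be illustrated on its own. The following is a minimal sketch (an addition, not the notebook's actual net1/net2 circuits): each slot is one neuron of a SpikeGeneratorGroup and the slot's set bit position is encoded as that neuron's spike time in milliseconds, then read back with a SpikeMonitor. It assumes Brian2 is installed and reuses the numpy import (np) and the Role_matrix, slots_per_vector and bits_per_slot defined above.
from brian2 import SpikeGeneratorGroup, SpikeMonitor, Network, ms
# Encode one sparse vector: neuron s fires at a time equal to its bit position (in ms)
vec = Role_matrix[0]
gen = SpikeGeneratorGroup(slots_per_vector, np.arange(slots_per_vector), vec * ms)
mon = SpikeMonitor(gen)
Network(gen, mon).run(bits_per_slot * ms)
# Recover the bit position of each slot from the recorded spike times
decoded = np.array(np.round(mon.t / ms), dtype=int)[np.argsort(mon.i)]
assert np.array_equal(decoded, vec)  # the spike times reproduce the original bit positions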
Demonstration of modulo addition binding | test_target_neuron = target_neuron # Change this to try different columns e.g. 0, 1, 2....
print(f"Showing modulo {bits_per_slot} addition role+filler bind/unbind for target column {test_target_neuron}\n")
for n in range(0,2*Num_bound,2):
bind_val = (P_matrix[n][test_target_neuron]+P_matrix[n+1][test_target_neuron])%bits_per_slot
print(f"\t{P_matrix[n][test_target_neuron]:2d}+{P_matrix[n+1][test_target_neuron]:2d}="
f"{P_matrix[n][test_target_neuron]+P_matrix[n+1][test_target_neuron]:2d} %"
f" {bits_per_slot} = {bind_val:2d} Bind")
unbind_val = (bind_val-P_matrix[n+1][test_target_neuron]) % bits_per_slot
print(f"\t{bind_val:2d}-{P_matrix[n+1][test_target_neuron]:2d}="
f"{bind_val-P_matrix[n+1][test_target_neuron]:2d} %"
f" {bits_per_slot} = {unbind_val:2d} Unbind\n")
| Showing modulo 100 addition role+filler bind/unbind for target column 1
10+51=61 % 100 = 61 Bind
61-51=10 % 100 = 10 Unbind
0+69=69 % 100 = 69 Bind
69-69= 0 % 100 = 0 Unbind
22+77=99 % 100 = 99 Bind
99-77=22 % 100 = 22 Unbind
30+31=61 % 100 = 61 Bind
61-31=30 % 100 = 30 Unbind
27+63=90 % 100 = 90 Bind
90-63=27 % 100 = 27 Unbind
| MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
Empirical calc
This section of the code computes the theoretical values for the sparse vector (which can then be compared with the output of the net1 neuromorphic circuit). It then computes the expected number of bits_per_slot that will align in the clean-up memory operation (which can then be compared with the net2 neuromorphic circuit output).
Create sparse representation of the bound vector | # print the cyclically shifted version of the vectors that are to be bound
np.set_printoptions(threshold=24)
np.set_printoptions(edgeitems=11)
for n in range(0, Num_bound):
print(np.roll(P_matrix[n], n))
np.set_printoptions()
# Init sparse bound vector (s_bound) with zeros
s_bound = np.zeros((slots_per_vector, bits_per_slot)) # Create a slotted vector with one row per slot and one counter per bit position
#Create sparse representation of the bound vector
#We take pairs of vector and bind them together and in each slot
# we cyclically shift the bit position of the filler vector by the
# bit position of the role vector then we add them together to get the vector s_bound
# Do the binding
for n in range(0, Num_bound):
for s in range(0, slots_per_vector): # For each slot
role_pos = Role_matrix[n][s] # Position of the set bit in this role vector for this slot
filler_pos = Val_matrix[n][s] # Position of the set bit in this value vector for this slot
b = (filler_pos+role_pos) % bits_per_slot # Get new 'phase' (bit position) to set in the bound vector's slot
s_bound[s][b] += 1
if show_bound_vecs_slot_detail:
np.set_printoptions(formatter={'int':lambda x: f"{x:2d}"})
print()
fancy_print_vector(s_bound)
np.set_printoptions() | _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
Make s_bound sparse using the argmax function, which finds the bit position with the highest count in each slot. | # Make s_bound sparse using the argmax function, which finds the bit position with the highest count in each slot
np.set_printoptions(threshold=24)
np.set_printoptions(edgeitems=11)
print("\nResultant Sparse vector, value indicates 'SET' bit position in each slot. "
"\n(Note, a value of '0' means bit zero is set).\n")
sparse_bound = np.array([np.argmax(s_bound[s]) for s in range(0,slots_per_vector)])
print(sparse_bound)
print()
np.set_printoptions() |
Resultant Sparse vector, value indicates 'SET' bit position in each slot.
(Note, a value of '0' means bit zero is set).
[81 61 19 21 68 3 12 9 8 62 17 ... 6 15 15 64 1 53 1 1 24 16 9]
| MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
Perform the unbinding
Unbind the sparse_bound vector and compare it with each of the vectors in the P_matrix, counting the number of slots_per_vector that have matching bit positions. This gives the number of spikes that should line up in the clean-up memory operation. For an unrelated value vector each slot matches by chance with probability 1/bits_per_slot, so roughly slots_per_vector/bits_per_slot (here about 1) slots agree by chance, whereas the correct filler should retain on the order of slots_per_vector/Num_bound (here about 20) slots; the hd_threshold of 0.1 used below sits between the two. | np.set_printoptions(threshold=24)
np.set_printoptions(edgeitems=11)
hd_threshold = 0.1
results_set = []
results = None
miss_matches = []
min_match = slots_per_vector
for nn in range(0, len(Role_matrix)):
# for pvec in P_matrix:
pvec = Role_matrix[nn]
results = []
for test_vec in Val_matrix:
unbound_vals = np.array([(sparse_bound[ss] - pvec[ss]) % bits_per_slot for ss in range(0, slots_per_vector)])
match = np.count_nonzero(unbound_vals == test_vec)
results.append(match)
win_indx = np.argmax(results)
max_count = np.max(results)
hd = max(results) / slots_per_vector
# print(nn, end=' ')
if hd > hd_threshold:
print(f"Role_vec_idx[{nn:02d}], Val_match_idx[{win_indx:02d}]:\t{np.array(results)}")
if max_count <= min_match:
min_match = max_count
else:
# store a failed match
miss_matches.append((nn, win_indx, np.array(results)))
results_set.append(results)
np.set_printoptions()
print("\nShowing failed matches:\n")
for fm in miss_matches:
print(f"Role_vec_idx[{fm[0]:02d}], Val_match_idx[{fm[1]:02d}]:\t{fm[2]}") |
Showing failed matches:
Role_vec_idx[05], Val_match_idx[94]: [3 2 1 0 1 0 0 0 2 2 0 ... 0 2 0 1 0 0 1 3 1 1 1]
Role_vec_idx[06], Val_match_idx[116]: [0 0 0 2 0 1 1 0 1 0 0 ... 1 1 1 0 1 2 1 0 0 2 2]
Role_vec_idx[07], Val_match_idx[27]: [0 1 0 1 1 1 3 1 1 2 2 ... 0 0 2 0 1 2 0 3 0 1 3]
Role_vec_idx[08], Val_match_idx[44]: [2 0 1 0 0 0 0 1 0 0 2 ... 2 2 1 1 0 0 3 2 2 0 1]
Role_vec_idx[09], Val_match_idx[159]: [1 2 0 1 0 0 1 1 2 0 1 ... 1 2 0 2 0 0 0 1 0 1 1]
Role_vec_idx[10], Val_match_idx[49]: [0 2 2 0 3 3 1 1 2 1 0 ... 0 1 2 3 2 0 0 2 0 3 0]
Role_vec_idx[11], Val_match_idx[15]: [0 2 2 0 2 1 1 0 2 1 0 ... 2 0 1 0 2 1 2 0 1 0 3]
Role_vec_idx[12], Val_match_idx[42]: [2 2 1 0 0 1 3 1 0 0 1 ... 2 2 1 0 1 1 0 3 0 2 1]
Role_vec_idx[13], Val_match_idx[23]: [0 1 0 1 3 0 0 0 1 1 1 ... 0 0 0 3 0 0 0 1 1 0 0]
Role_vec_idx[14], Val_match_idx[174]: [2 0 0 1 1 0 3 3 1 0 1 ... 1 0 0 1 0 1 2 0 0 0 0]
Role_vec_idx[15], Val_match_idx[40]: [2 2 0 0 1 0 1 2 1 0 2 ... 0 3 2 1 2 1 1 0 2 3 0]
Role_vec_idx[16], Val_match_idx[35]: [0 2 1 1 2 2 1 0 0 1 2 ... 0 0 0 1 2 1 3 2 1 1 0]
Role_vec_idx[17], Val_match_idx[195]: [0 1 0 1 0 3 0 0 4 1 3 ... 1 2 0 1 1 1 0 1 0 2 1]
Role_vec_idx[18], Val_match_idx[81]: [0 1 1 1 1 0 1 1 1 1 0 ... 1 1 1 0 2 1 2 1 1 0 1]
Role_vec_idx[19], Val_match_idx[36]: [0 3 1 0 0 1 2 2 1 1 0 ... 0 1 2 1 1 3 0 1 1 1 2]
Role_vec_idx[20], Val_match_idx[53]: [0 0 1 2 2 1 2 3 2 1 0 ... 2 0 2 2 1 0 1 1 0 1 1]
Role_vec_idx[21], Val_match_idx[184]: [1 0 0 1 0 0 3 1 1 0 1 ... 0 1 2 0 1 0 1 0 0 1 1]
Role_vec_idx[22], Val_match_idx[130]: [2 2 0 1 1 2 2 0 1 3 0 ... 1 0 0 3 1 0 1 2 0 1 1]
Role_vec_idx[23], Val_match_idx[23]: [0 2 2 1 0 2 0 1 0 2 1 ... 1 0 1 0 1 2 1 0 1 0 1]
Role_vec_idx[24], Val_match_idx[201]: [0 1 1 2 1 0 0 1 1 1 1 ... 0 1 2 1 2 1 1 0 1 2 1]
Role_vec_idx[25], Val_match_idx[107]: [1 0 2 1 0 0 0 0 0 3 1 ... 1 0 2 1 0 1 0 2 2 1 1]
Role_vec_idx[26], Val_match_idx[32]: [1 2 2 1 0 1 0 0 2 1 1 ... 0 0 0 0 1 1 3 2 0 3 1]
Role_vec_idx[27], Val_match_idx[11]: [2 0 0 0 0 1 1 1 1 2 3 ... 2 0 1 0 0 0 0 2 2 0 2]
Role_vec_idx[28], Val_match_idx[181]: [1 0 1 1 1 1 1 1 0 2 2 ... 1 2 0 1 2 0 0 1 0 0 0]
Role_vec_idx[29], Val_match_idx[50]: [2 0 0 1 2 0 0 2 1 1 1 ... 1 1 0 1 1 2 2 0 0 0 1]
Role_vec_idx[30], Val_match_idx[29]: [0 0 0 3 0 0 1 1 0 1 0 ... 3 0 1 1 2 0 0 1 2 0 1]
Role_vec_idx[31], Val_match_idx[49]: [0 1 2 0 0 0 0 1 1 0 2 ... 1 1 3 0 0 1 0 0 1 1 2]
Role_vec_idx[32], Val_match_idx[04]: [0 2 2 2 4 0 1 0 2 1 2 ... 1 0 2 0 1 1 1 0 0 1 0]
Role_vec_idx[33], Val_match_idx[175]: [0 1 2 1 0 0 0 0 1 2 1 ... 2 0 1 1 0 0 0 0 0 0 1]
Role_vec_idx[34], Val_match_idx[124]: [3 3 0 0 1 2 2 2 0 1 1 ... 3 1 2 0 1 0 2 1 0 0 1]
Role_vec_idx[35], Val_match_idx[194]: [0 2 1 2 3 1 0 0 1 2 1 ... 1 0 0 0 0 4 1 0 2 0 0]
Role_vec_idx[36], Val_match_idx[162]: [0 0 3 1 2 0 1 0 0 1 1 ... 1 3 0 1 1 0 1 2 0 1 1]
Role_vec_idx[37], Val_match_idx[236]: [3 1 0 3 3 2 1 2 1 0 0 ... 1 1 3 2 2 3 5 0 2 2 0]
Role_vec_idx[38], Val_match_idx[26]: [1 2 0 1 1 1 2 1 2 1 1 ... 1 0 0 1 0 1 0 1 1 1 0]
Role_vec_idx[39], Val_match_idx[70]: [2 3 3 0 0 1 0 0 3 1 3 ... 0 2 1 3 0 1 0 1 0 0 1]
Role_vec_idx[40], Val_match_idx[51]: [1 1 0 1 2 0 1 2 1 1 1 ... 1 0 1 2 0 1 2 0 0 0 1]
Role_vec_idx[41], Val_match_idx[189]: [0 1 1 0 1 4 2 3 2 0 1 ... 1 1 0 1 1 1 1 1 0 1 1]
Role_vec_idx[42], Val_match_idx[54]: [2 0 0 1 1 1 0 1 3 1 0 ... 1 0 0 0 0 2 1 1 0 3 0]
Role_vec_idx[43], Val_match_idx[79]: [1 2 2 2 0 3 2 1 1 1 0 ... 1 2 1 1 2 2 0 2 0 0 1]
Role_vec_idx[44], Val_match_idx[61]: [2 2 4 1 2 0 3 0 2 1 1 ... 0 0 2 1 1 1 4 2 0 1 2]
Role_vec_idx[45], Val_match_idx[25]: [1 0 0 1 1 2 1 0 0 0 2 ... 0 1 0 0 3 0 2 1 3 2 1]
Role_vec_idx[46], Val_match_idx[46]: [1 1 1 1 1 0 0 3 1 1 0 ... 2 0 1 3 2 2 1 0 1 2 0]
Role_vec_idx[47], Val_match_idx[51]: [1 1 1 2 0 1 1 0 2 1 1 ... 1 1 2 3 2 0 1 1 1 0 2]
Role_vec_idx[48], Val_match_idx[88]: [1 1 1 1 1 1 0 4 0 1 1 ... 0 1 2 0 0 0 4 2 1 0 0]
Role_vec_idx[49], Val_match_idx[116]: [1 0 1 1 1 1 0 0 0 0 2 ... 2 0 0 0 1 2 0 0 0 2 1]
Role_vec_idx[50], Val_match_idx[45]: [1 0 1 0 0 1 2 0 2 2 2 ... 1 0 0 1 1 1 1 2 1 0 0]
Role_vec_idx[51], Val_match_idx[14]: [2 1 1 2 1 3 0 3 1 1 1 ... 1 0 1 2 1 0 0 0 1 1 3]
Role_vec_idx[52], Val_match_idx[15]: [0 0 0 1 0 3 0 0 0 2 1 ... 1 2 2 1 0 0 1 1 1 1 1]
Role_vec_idx[53], Val_match_idx[64]: [0 1 1 2 0 1 0 0 0 3 2 ... 0 1 0 1 1 0 2 1 3 0 1]
Role_vec_idx[54], Val_match_idx[138]: [1 1 1 2 1 2 1 1 0 2 1 ... 0 1 0 2 0 0 0 3 1 2 0]
Role_vec_idx[55], Val_match_idx[227]: [1 1 2 0 0 0 2 2 1 1 0 ... 0 1 1 0 0 0 2 1 0 1 1]
Role_vec_idx[56], Val_match_idx[192]: [1 1 2 0 1 1 1 0 0 1 0 ... 0 2 1 2 0 2 0 2 4 1 1]
Role_vec_idx[57], Val_match_idx[142]: [1 2 0 3 0 0 1 1 0 0 1 ... 1 1 1 1 3 1 1 1 0 2 2]
Role_vec_idx[58], Val_match_idx[189]: [2 0 1 2 1 1 1 1 1 2 0 ... 3 0 3 0 0 0 1 0 2 1 1]
Role_vec_idx[59], Val_match_idx[06]: [2 2 0 0 0 0 4 1 1 2 1 ... 2 0 2 1 1 1 1 1 0 4 1]
Role_vec_idx[60], Val_match_idx[37]: [0 0 0 1 0 1 0 1 3 0 1 ... 2 2 1 1 0 0 1 1 0 1 0]
Role_vec_idx[61], Val_match_idx[26]: [2 1 0 0 2 1 1 1 0 3 0 ... 2 2 2 2 1 0 1 3 0 1 1]
Role_vec_idx[62], Val_match_idx[06]: [0 0 1 1 0 1 4 1 0 0 0 ... 0 2 1 2 0 1 2 1 0 2 0]
Role_vec_idx[63], Val_match_idx[18]: [2 0 2 1 1 2 2 0 1 1 3 ... 2 1 1 1 0 0 1 1 2 1 1]
Role_vec_idx[64], Val_match_idx[118]: [1 0 1 2 1 1 0 0 1 0 1 ... 0 1 0 1 2 1 2 1 2 2 3]
Role_vec_idx[65], Val_match_idx[65]: [1 2 2 0 1 2 1 2 1 1 2 ... 4 1 0 2 1 0 0 0 0 1 3]
Role_vec_idx[66], Val_match_idx[25]: [3 1 2 1 1 1 0 0 1 1 0 ... 0 3 0 0 1 1 1 0 2 0 0]
Role_vec_idx[67], Val_match_idx[48]: [3 1 1 0 1 0 1 1 1 1 0 ... 1 1 0 1 2 3 1 1 0 1 1]
Role_vec_idx[68], Val_match_idx[34]: [1 2 3 0 1 2 2 1 0 2 2 ... 0 0 1 2 0 0 1 0 0 3 2]
Role_vec_idx[69], Val_match_idx[67]: [0 1 0 3 1 0 0 0 0 1 0 ... 0 1 1 0 2 1 2 1 1 1 0]
Role_vec_idx[70], Val_match_idx[16]: [2 1 1 1 0 0 1 0 1 2 2 ... 1 3 1 1 1 1 1 3 1 3 2]
Role_vec_idx[71], Val_match_idx[24]: [1 2 1 1 1 1 2 2 1 1 1 ... 0 0 0 1 0 0 0 1 1 0 1]
Role_vec_idx[72], Val_match_idx[32]: [2 3 0 0 0 2 0 0 2 3 0 ... 2 0 1 1 1 1 0 1 1 1 0]
Role_vec_idx[73], Val_match_idx[01]: [0 4 2 3 1 1 0 3 0 2 1 ... 0 1 0 1 1 1 4 2 1 0 0]
Role_vec_idx[74], Val_match_idx[16]: [1 0 1 2 0 0 0 2 1 1 0 ... 0 3 0 1 0 1 0 0 1 1 1]
Role_vec_idx[75], Val_match_idx[54]: [1 2 0 3 2 1 3 0 0 0 2 ... 0 0 2 1 2 1 1 4 0 0 0]
Role_vec_idx[76], Val_match_idx[41]: [1 2 0 0 0 2 1 1 0 1 1 ... 0 0 2 0 4 2 1 2 1 1 1]
Role_vec_idx[77], Val_match_idx[36]: [0 1 0 1 2 2 2 0 1 0 0 ... 1 3 2 0 0 1 0 1 1 1 0]
Role_vec_idx[78], Val_match_idx[82]: [2 2 1 1 1 0 2 0 1 2 3 ... 0 3 0 3 0 1 0 2 1 1 1]
Role_vec_idx[79], Val_match_idx[160]: [0 0 3 1 0 1 0 0 1 3 0 ... 1 2 2 0 0 0 1 1 1 2 0]
Role_vec_idx[80], Val_match_idx[11]: [1 2 1 2 0 3 2 0 2 1 1 ... 3 0 1 1 0 1 1 0 0 2 0]
Role_vec_idx[81], Val_match_idx[18]: [2 1 0 0 0 0 1 2 1 1 1 ... 2 3 1 1 0 2 3 1 1 1 2]
Role_vec_idx[82], Val_match_idx[42]: [0 0 1 1 0 0 0 1 1 4 0 ... 1 1 1 0 1 1 0 2 0 3 1]
Role_vec_idx[83], Val_match_idx[235]: [0 1 1 0 1 2 2 0 1 3 1 ... 0 0 0 0 0 3 0 1 2 3 2]
Role_vec_idx[84], Val_match_idx[43]: [3 1 0 1 1 0 0 2 2 2 0 ... 2 1 1 2 2 3 0 0 0 2 1]
Role_vec_idx[85], Val_match_idx[12]: [1 1 1 1 1 1 0 1 2 0 0 ... 3 1 1 1 3 0 1 0 2 0 1]
Role_vec_idx[86], Val_match_idx[107]: [0 1 0 0 0 1 3 1 2 1 1 ... 1 0 0 1 0 1 0 1 0 1 1]
Role_vec_idx[87], Val_match_idx[155]: [1 1 0 4 4 1 2 2 0 1 1 ... 0 0 2 1 1 2 3 1 1 3 2]
Role_vec_idx[88], Val_match_idx[110]: [1 1 2 1 2 1 1 2 0 0 2 ... 1 2 1 0 1 3 2 1 1 2 1]
Role_vec_idx[89], Val_match_idx[194]: [0 1 1 0 1 0 2 1 2 1 1 ... 2 2 0 1 4 1 1 0 2 0 0]
Role_vec_idx[90], Val_match_idx[246]: [0 1 0 0 1 0 0 0 1 0 2 ... 1 3 2 1 0 1 1 5 1 0 2]
Role_vec_idx[91], Val_match_idx[14]: [0 2 0 2 0 1 1 3 1 2 0 ... 2 2 1 0 0 2 0 1 2 0 0]
Role_vec_idx[92], Val_match_idx[109]: [1 0 0 1 0 2 2 1 1 0 1 ... 0 0 1 1 2 1 1 0 1 0 2]
Role_vec_idx[93], Val_match_idx[222]: [2 0 1 1 0 1 1 1 0 0 1 ... 2 0 1 0 0 1 0 0 1 1 1]
Role_vec_idx[94], Val_match_idx[88]: [1 0 1 1 1 1 1 1 1 0 0 ... 2 0 0 1 0 0 0 0 2 0 0]
Role_vec_idx[95], Val_match_idx[172]: [3 2 0 3 1 3 1 0 1 0 0 ... 0 3 0 2 0 2 3 1 2 1 0]
Role_vec_idx[96], Val_match_idx[37]: [1 3 2 0 1 0 2 2 1 1 0 ... 0 1 2 0 1 0 0 1 1 1 0]
Role_vec_idx[97], Val_match_idx[24]: [0 0 1 0 0 0 0 3 0 2 2 ... 2 1 0 0 3 0 1 2 3 3 1]
Role_vec_idx[98], Val_match_idx[149]: [0 0 2 0 1 0 0 2 2 2 3 ... 1 2 3 0 0 1 2 0 1 2 0]
Role_vec_idx[99], Val_match_idx[86]: [1 1 1 2 1 0 1 1 0 0 2 ... 2 0 0 1 0 2 0 1 1 1 3]
Role_vec_idx[100], Val_match_idx[211]: [0 1 0 1 0 1 0 1 0 0 1 ... 1 5 0 1 0 3 0 3 1 1 2]
Role_vec_idx[101], Val_match_idx[86]: [2 0 1 0 0 0 2 1 0 1 0 ... 0 1 2 0 4 0 2 1 2 1 3]
Role_vec_idx[102], Val_match_idx[33]: [1 2 2 1 2 0 2 0 1 2 0 ... 2 2 0 0 1 0 0 3 0 3 2]
Role_vec_idx[103], Val_match_idx[181]: [4 2 1 0 1 1 0 0 2 0 0 ... 3 2 0 0 1 1 0 1 2 0 0]
Role_vec_idx[104], Val_match_idx[59]: [1 1 2 2 1 0 0 2 2 0 1 ... 1 0 1 2 1 1 1 1 0 0 1]
Role_vec_idx[105], Val_match_idx[34]: [0 3 0 1 2 1 1 2 1 2 2 ... 0 0 1 0 0 1 0 0 2 1 0]
Role_vec_idx[106], Val_match_idx[39]: [1 0 0 0 1 1 1 0 1 0 0 ... 0 0 2 1 1 0 0 0 1 1 0]
Role_vec_idx[107], Val_match_idx[89]: [1 0 0 0 2 1 1 1 1 2 1 ... 1 0 1 1 1 0 1 1 0 0 2]
Role_vec_idx[108], Val_match_idx[122]: [0 1 1 0 1 1 0 2 2 0 1 ... 1 0 0 1 0 2 1 0 1 0 2]
Role_vec_idx[109], Val_match_idx[119]: [0 1 0 1 0 1 0 1 1 1 2 ... 2 1 0 3 0 0 0 2 1 1 0]
Role_vec_idx[110], Val_match_idx[65]: [0 2 1 2 3 2 3 2 2 0 1 ... 1 3 1 2 1 1 2 0 0 0 0]
Role_vec_idx[111], Val_match_idx[100]: [1 2 0 1 4 1 0 3 0 1 3 ... 2 1 1 1 0 1 0 2 1 3 2]
Role_vec_idx[112], Val_match_idx[102]: [1 1 0 0 0 0 0 0 0 0 2 ... 1 1 0 0 1 0 2 0 2 0 0]
Role_vec_idx[113], Val_match_idx[04]: [1 0 0 1 4 1 1 1 1 1 0 ... 0 1 0 0 2 0 3 1 2 0 1]
Role_vec_idx[114], Val_match_idx[235]: [2 1 2 0 0 0 2 3 1 0 2 ... 1 3 1 3 0 1 2 0 1 0 1]
Role_vec_idx[115], Val_match_idx[02]: [0 0 5 2 0 1 1 3 0 0 0 ... 0 0 2 1 1 1 1 2 1 0 3]
Role_vec_idx[116], Val_match_idx[15]: [0 0 1 1 1 0 2 0 2 2 3 ... 1 1 1 0 0 1 0 1 2 1 1]
Role_vec_idx[117], Val_match_idx[37]: [0 0 3 3 0 1 2 3 2 1 2 ... 0 1 0 0 1 0 0 3 0 1 2]
Role_vec_idx[118], Val_match_idx[194]: [1 2 1 0 1 2 0 1 2 1 1 ... 2 0 1 2 0 0 0 2 0 1 1]
Role_vec_idx[119], Val_match_idx[58]: [1 1 2 1 1 0 2 2 1 0 3 ... 2 1 1 0 4 0 0 1 1 3 1]
Role_vec_idx[120], Val_match_idx[39]: [1 0 1 0 0 2 1 1 0 1 1 ... 0 1 0 4 0 0 2 3 1 0 0]
Role_vec_idx[121], Val_match_idx[00]: [4 0 1 4 1 1 0 1 2 2 1 ... 4 1 1 1 0 1 1 0 2 0 4]
Role_vec_idx[122], Val_match_idx[01]: [1 5 1 0 2 1 3 2 1 1 1 ... 1 2 4 0 1 3 0 0 0 1 0]
Role_vec_idx[123], Val_match_idx[67]: [0 2 1 1 2 1 0 0 0 1 1 ... 1 0 0 1 0 3 0 1 1 0 2]
Role_vec_idx[124], Val_match_idx[128]: [0 0 2 0 0 2 0 3 0 2 1 ... 0 1 0 0 3 1 0 1 1 1 2]
Role_vec_idx[125], Val_match_idx[21]: [1 0 0 0 1 1 0 0 1 0 0 ... 1 0 1 0 0 2 1 1 1 2 1]
Role_vec_idx[126], Val_match_idx[57]: [0 1 0 1 0 3 1 1 0 2 0 ... 1 1 1 1 1 4 2 1 0 1 0]
Role_vec_idx[127], Val_match_idx[73]: [2 2 0 0 0 0 2 3 0 0 1 ... 0 1 3 1 1 1 0 0 0 2 2]
Role_vec_idx[128], Val_match_idx[06]: [1 0 0 0 1 0 4 0 0 1 2 ... 1 0 2 2 2 2 3 1 1 1 0]
Role_vec_idx[129], Val_match_idx[168]: [1 1 3 0 1 0 1 1 2 1 2 ... 0 0 0 0 0 1 2 1 1 1 0]
Role_vec_idx[130], Val_match_idx[55]: [0 3 1 1 0 2 1 0 0 3 1 ... 0 1 0 2 0 0 0 2 0 1 0]
Role_vec_idx[131], Val_match_idx[40]: [0 0 0 0 1 2 1 0 0 2 0 ... 1 1 2 1 2 3 0 2 0 2 0]
Role_vec_idx[132], Val_match_idx[23]: [1 0 0 1 0 1 2 1 2 1 0 ... 0 1 4 0 0 1 1 1 1 1 0]
Role_vec_idx[133], Val_match_idx[108]: [1 2 1 1 2 1 2 2 0 1 2 ... 1 2 1 0 0 1 0 1 0 1 2]
Role_vec_idx[134], Val_match_idx[04]: [0 2 1 2 5 0 2 0 1 2 1 ... 1 1 1 1 1 0 0 2 0 0 1]
Role_vec_idx[135], Val_match_idx[181]: [1 3 1 1 1 1 0 2 1 2 1 ... 2 0 1 0 0 1 0 0 0 2 0]
Role_vec_idx[136], Val_match_idx[111]: [0 0 0 2 0 2 2 1 2 0 2 ... 0 2 2 1 1 2 0 0 0 0 1]
Role_vec_idx[137], Val_match_idx[246]: [1 3 0 0 0 2 0 1 0 1 2 ... 1 0 1 1 3 0 1 7 2 0 1]
Role_vec_idx[138], Val_match_idx[57]: [1 0 1 2 2 0 0 1 1 3 1 ... 0 2 0 1 2 0 2 1 0 1 1]
Role_vec_idx[139], Val_match_idx[79]: [2 0 2 3 1 1 0 3 1 1 0 ... 2 1 0 0 1 1 0 0 1 0 1]
Role_vec_idx[140], Val_match_idx[79]: [0 0 0 1 0 2 2 2 4 0 1 ... 1 1 0 2 0 0 1 2 1 1 0]
Role_vec_idx[141], Val_match_idx[185]: [1 1 1 0 2 0 0 1 0 1 2 ... 0 1 0 0 0 3 1 0 0 1 0]
Role_vec_idx[142], Val_match_idx[74]: [1 1 1 1 1 0 1 2 0 2 1 ... 1 1 1 0 0 0 0 0 1 1 2]
Role_vec_idx[143], Val_match_idx[72]: [1 1 2 4 0 1 4 1 2 1 1 ... 1 3 1 3 1 1 0 1 2 1 1]
Role_vec_idx[144], Val_match_idx[25]: [0 2 2 1 0 0 2 0 0 1 1 ... 0 1 1 0 1 1 2 2 1 0 1]
Role_vec_idx[145], Val_match_idx[199]: [1 1 0 1 1 2 1 0 0 1 0 ... 3 1 0 0 0 0 0 1 0 1 0]
Role_vec_idx[146], Val_match_idx[248]: [1 2 0 1 0 0 3 1 0 1 2 ... 1 2 1 0 2 2 1 2 0 5 4]
Role_vec_idx[147], Val_match_idx[59]: [0 2 1 0 1 0 2 1 0 1 0 ... 1 1 1 2 3 2 2 1 2 0 0]
Role_vec_idx[148], Val_match_idx[104]: [3 3 2 1 2 0 0 2 1 0 0 ... 0 2 3 2 2 0 0 0 3 2 1]
Role_vec_idx[149], Val_match_idx[204]: [3 1 0 1 0 2 1 0 0 0 0 ... 1 0 1 1 0 1 0 0 0 0 2]
Role_vec_idx[150], Val_match_idx[156]: [0 1 3 0 0 2 0 2 2 3 0 ... 2 1 0 1 3 2 0 0 0 0 0]
Role_vec_idx[151], Val_match_idx[85]: [1 0 1 0 1 0 1 1 1 0 1 ... 0 0 0 2 0 1 1 1 3 0 3]
Role_vec_idx[152], Val_match_idx[110]: [4 1 1 1 0 0 0 2 0 0 2 ... 3 1 0 1 0 0 0 0 2 0 0]
Role_vec_idx[153], Val_match_idx[84]: [0 0 1 3 1 1 1 0 1 0 0 ... 1 1 2 1 2 0 0 1 1 1 2]
Role_vec_idx[154], Val_match_idx[108]: [1 0 2 0 1 0 1 0 2 3 0 ... 1 3 3 3 1 2 0 1 2 3 1]
Role_vec_idx[155], Val_match_idx[11]: [3 2 0 0 0 1 0 1 0 0 0 ... 2 0 0 1 1 1 2 1 1 1 0]
Role_vec_idx[156], Val_match_idx[232]: [1 0 1 1 0 1 1 0 4 1 1 ... 1 2 1 1 0 1 1 3 1 0 0]
Role_vec_idx[157], Val_match_idx[245]: [2 1 2 1 2 1 2 1 0 0 1 ... 0 3 3 0 1 2 5 2 1 0 1]
Role_vec_idx[158], Val_match_idx[15]: [2 0 0 0 1 2 0 3 1 0 1 ... 0 1 0 1 0 1 0 1 1 4 2]
Role_vec_idx[159], Val_match_idx[63]: [1 1 0 2 1 1 0 1 0 0 2 ... 1 0 1 2 0 1 1 1 1 1 0]
Role_vec_idx[160], Val_match_idx[109]: [2 3 0 0 0 3 2 0 1 1 1 ... 1 2 0 1 1 1 1 3 0 0 0]
Role_vec_idx[161], Val_match_idx[02]: [0 1 3 0 0 1 0 0 2 3 1 ... 1 1 0 1 1 0 2 1 2 1 1]
Role_vec_idx[162], Val_match_idx[221]: [0 3 2 1 2 1 1 1 0 2 1 ... 0 1 3 0 1 2 2 1 0 2 1]
Role_vec_idx[163], Val_match_idx[66]: [1 1 3 1 1 1 1 1 0 1 3 ... 1 0 2 0 1 0 0 0 1 1 3]
Role_vec_idx[164], Val_match_idx[183]: [2 0 1 1 0 0 2 2 0 3 1 ... 0 2 2 2 0 0 2 2 2 1 0]
Role_vec_idx[165], Val_match_idx[132]: [0 1 2 2 0 0 1 1 1 0 1 ... 2 2 2 3 2 1 1 2 0 1 1]
Role_vec_idx[166], Val_match_idx[18]: [2 1 0 2 0 3 0 2 2 1 2 ... 1 4 0 1 0 0 1 1 1 2 3]
Role_vec_idx[167], Val_match_idx[47]: [0 0 2 1 0 1 2 1 0 3 1 ... 0 0 0 0 3 1 1 1 0 0 0]
Role_vec_idx[168], Val_match_idx[25]: [2 1 2 0 0 3 1 4 0 3 1 ... 2 2 0 2 1 1 1 2 1 3 0]
Role_vec_idx[169], Val_match_idx[11]: [0 2 2 1 1 0 0 1 0 1 2 ... 0 0 0 0 0 0 0 2 3 1 1]
Role_vec_idx[170], Val_match_idx[29]: [0 0 1 1 0 3 0 0 0 0 1 ... 1 1 0 1 2 0 1 0 0 0 2]
Role_vec_idx[171], Val_match_idx[30]: [1 2 2 0 1 1 1 0 1 0 1 ... 1 1 0 1 1 1 1 0 1 2 0]
Role_vec_idx[172], Val_match_idx[14]: [1 1 0 0 1 1 0 2 3 2 2 ... 0 1 1 0 1 1 4 1 1 0 2]
Role_vec_idx[173], Val_match_idx[42]: [0 1 0 2 0 0 1 0 2 1 2 ... 0 3 0 0 2 2 2 1 1 3 1]
Role_vec_idx[174], Val_match_idx[47]: [1 0 4 0 2 1 0 2 0 0 1 ... 0 1 1 1 0 0 1 1 1 1 2]
Role_vec_idx[175], Val_match_idx[86]: [3 2 2 1 1 0 0 1 1 1 2 ... 0 0 1 1 0 1 2 1 0 2 2]
Role_vec_idx[176], Val_match_idx[129]: [0 0 0 1 1 0 0 0 1 1 2 ... 0 0 1 0 0 2 0 1 1 1 1]
Role_vec_idx[177], Val_match_idx[193]: [3 1 2 0 1 2 1 0 0 0 1 ... 2 0 1 2 1 0 0 0 1 0 1]
Role_vec_idx[178], Val_match_idx[16]: [0 1 0 1 0 2 0 1 0 1 2 ... 0 0 1 1 0 0 1 0 1 3 1]
Role_vec_idx[179], Val_match_idx[51]: [3 0 0 2 1 2 2 1 1 2 2 ... 2 1 2 1 2 0 0 0 2 0 1]
Role_vec_idx[180], Val_match_idx[138]: [1 2 0 1 2 3 0 3 2 1 1 ... 2 1 1 1 1 0 0 1 1 0 1]
Role_vec_idx[181], Val_match_idx[28]: [0 2 0 0 0 0 0 1 2 1 0 ... 2 1 0 0 0 1 0 0 1 1 1]
Role_vec_idx[182], Val_match_idx[248]: [1 0 0 0 2 2 1 2 2 0 0 ... 0 2 0 2 0 1 1 0 2 5 1]
Role_vec_idx[183], Val_match_idx[88]: [1 1 0 2 0 1 0 0 1 2 0 ... 0 1 0 1 0 2 0 0 0 1 1]
Role_vec_idx[184], Val_match_idx[35]: [0 2 2 0 0 0 3 1 2 0 2 ... 1 0 2 2 2 2 1 0 0 0 3]
Role_vec_idx[185], Val_match_idx[17]: [1 1 2 2 1 0 1 1 1 2 0 ... 0 0 1 2 1 1 0 0 0 1 0]
Role_vec_idx[186], Val_match_idx[244]: [2 1 1 0 2 1 0 0 0 1 0 ... 0 1 1 0 3 5 2 1 1 1 3]
Role_vec_idx[187], Val_match_idx[152]: [2 1 0 1 0 1 0 0 2 2 1 ... 0 1 2 1 3 1 0 1 1 1 3]
Role_vec_idx[188], Val_match_idx[33]: [3 0 2 3 2 0 2 1 1 0 0 ... 1 2 0 2 2 1 3 2 1 2 0]
Role_vec_idx[189], Val_match_idx[202]: [0 0 1 2 0 2 1 0 2 1 1 ... 1 0 2 1 1 0 0 0 1 1 0]
Role_vec_idx[190], Val_match_idx[50]: [4 0 0 2 0 0 1 2 3 0 2 ... 1 0 0 1 2 0 0 1 1 0 2]
Role_vec_idx[191], Val_match_idx[111]: [3 2 4 0 1 1 2 1 1 2 2 ... 1 1 0 1 0 0 0 0 2 1 0]
Role_vec_idx[192], Val_match_idx[150]: [2 0 2 1 0 3 1 0 0 1 2 ... 1 1 0 0 0 0 1 1 1 0 4]
Role_vec_idx[193], Val_match_idx[223]: [0 2 2 1 2 0 1 0 1 0 0 ... 1 1 0 1 0 2 3 3 2 1 4]
Role_vec_idx[194], Val_match_idx[85]: [0 0 0 0 2 0 0 0 1 1 1 ... 1 1 1 2 0 1 4 2 1 0 2]
Role_vec_idx[195], Val_match_idx[127]: [2 2 0 1 0 0 2 0 2 1 2 ... 1 1 1 2 0 0 0 0 0 2 1]
Role_vec_idx[196], Val_match_idx[47]: [1 1 0 0 2 0 0 1 0 0 1 ... 0 0 2 0 2 2 0 2 1 1 2]
Role_vec_idx[197], Val_match_idx[124]: [0 1 4 0 3 0 0 2 1 2 1 ... 0 0 0 0 1 2 1 1 0 0 1]
Role_vec_idx[198], Val_match_idx[95]: [1 2 1 1 2 2 2 2 0 3 1 ... 0 1 2 3 0 1 1 1 1 0 1]
Role_vec_idx[199], Val_match_idx[22]: [0 2 0 1 0 1 0 0 0 0 1 ... 1 1 2 3 1 0 2 0 1 2 1]
Role_vec_idx[200], Val_match_idx[59]: [2 0 1 0 3 2 1 0 0 1 0 ... 1 0 4 0 0 1 1 1 2 1 1]
Role_vec_idx[201], Val_match_idx[14]: [1 2 1 1 2 1 0 0 1 1 0 ... 4 0 0 1 0 0 1 0 1 0 0]
Role_vec_idx[202], Val_match_idx[138]: [0 0 2 1 0 2 2 1 0 2 4 ... 1 1 2 2 1 1 0 2 0 2 1]
Role_vec_idx[203], Val_match_idx[71]: [1 1 1 0 0 1 0 1 0 1 1 ... 0 0 1 0 1 0 0 0 0 0 0]
Role_vec_idx[204], Val_match_idx[66]: [0 0 2 3 0 2 1 0 0 2 2 ... 0 2 2 1 0 2 0 0 2 0 2]
Role_vec_idx[205], Val_match_idx[121]: [2 0 2 0 1 1 3 1 0 0 1 ... 3 1 3 1 0 0 2 1 1 2 3]
Role_vec_idx[206], Val_match_idx[04]: [2 0 1 1 4 2 0 1 1 0 0 ... 1 2 0 0 0 1 1 1 1 0 0]
Role_vec_idx[207], Val_match_idx[128]: [0 4 1 2 1 0 1 1 0 1 0 ... 3 0 0 1 1 3 1 1 0 1 1]
Role_vec_idx[208], Val_match_idx[128]: [2 2 0 2 1 1 2 2 0 1 2 ... 1 1 1 1 0 1 1 2 0 1 1]
Role_vec_idx[209], Val_match_idx[110]: [1 3 1 1 0 0 0 2 0 1 1 ... 0 2 0 1 0 1 1 0 2 3 1]
Role_vec_idx[210], Val_match_idx[51]: [0 0 0 0 1 0 2 0 1 0 2 ... 1 1 2 2 1 2 1 1 0 0 1]
Role_vec_idx[211], Val_match_idx[61]: [0 1 3 0 0 2 1 0 1 0 2 ... 0 1 0 1 3 0 0 4 2 0 0]
Role_vec_idx[212], Val_match_idx[146]: [1 2 0 4 0 0 0 1 1 1 1 ... 0 1 0 0 0 3 1 1 0 0 2]
Role_vec_idx[213], Val_match_idx[104]: [3 1 0 1 2 0 1 2 1 2 0 ... 0 3 1 1 0 0 0 1 0 0 1]
Role_vec_idx[214], Val_match_idx[04]: [0 0 1 0 4 0 0 0 0 1 3 ... 0 0 0 0 1 1 0 1 0 1 0]
Role_vec_idx[215], Val_match_idx[98]: [2 3 0 1 1 2 2 1 2 0 1 ... 1 0 1 2 1 1 1 3 2 0 0]
Role_vec_idx[216], Val_match_idx[18]: [0 1 1 3 0 1 0 1 1 2 2 ... 2 1 2 0 2 0 1 0 1 1 2]
Role_vec_idx[217], Val_match_idx[152]: [0 3 1 1 1 1 1 2 2 2 2 ... 2 0 2 3 2 1 0 0 0 0 1]
Role_vec_idx[218], Val_match_idx[08]: [0 0 0 2 1 3 0 0 5 1 1 ... 1 2 1 1 0 0 2 3 0 3 2]
Role_vec_idx[219], Val_match_idx[27]: [2 0 0 1 1 2 1 1 1 2 1 ... 2 1 1 2 3 3 1 1 0 1 0]
Role_vec_idx[220], Val_match_idx[14]: [1 1 0 2 0 1 1 0 2 2 1 ... 1 1 1 3 0 4 1 0 0 1 1]
Role_vec_idx[221], Val_match_idx[190]: [1 1 1 0 1 1 0 2 0 2 4 ... 1 0 1 1 0 2 0 0 2 1 1]
Role_vec_idx[222], Val_match_idx[92]: [1 0 1 1 1 0 1 2 3 3 2 ... 3 0 0 1 0 0 0 1 0 1 1]
Role_vec_idx[223], Val_match_idx[07]: [0 2 1 1 2 1 0 4 1 0 1 ... 0 1 1 1 0 1 1 3 1 1 2]
Role_vec_idx[224], Val_match_idx[153]: [1 3 0 0 3 0 2 0 0 1 1 ... 1 1 2 1 3 2 2 2 2 4 0]
Role_vec_idx[225], Val_match_idx[120]: [1 1 2 1 2 0 0 0 1 2 1 ... 2 1 3 3 2 1 1 0 1 0 1]
Role_vec_idx[226], Val_match_idx[223]: [2 1 0 0 0 1 1 3 1 1 1 ... 1 1 0 1 0 0 1 2 2 1 2]
Role_vec_idx[227], Val_match_idx[116]: [1 2 2 3 1 1 1 0 0 2 0 ... 0 0 1 1 1 0 2 0 1 2 1]
Role_vec_idx[228], Val_match_idx[00]: [4 1 3 0 0 1 0 0 1 1 0 ... 2 2 0 2 1 0 0 1 0 1 1]
Role_vec_idx[229], Val_match_idx[67]: [2 1 1 0 3 2 1 0 0 0 0 ... 2 1 4 2 3 1 0 0 1 1 1]
Role_vec_idx[230], Val_match_idx[24]: [1 1 2 0 1 0 0 0 2 1 1 ... 1 2 3 1 1 3 0 1 1 1 0]
Role_vec_idx[231], Val_match_idx[108]: [2 0 1 3 2 1 0 0 2 0 1 ... 0 0 0 1 3 0 0 1 0 0 0]
Role_vec_idx[232], Val_match_idx[246]: [0 1 0 0 1 1 1 1 2 1 1 ... 0 3 1 1 0 1 0 4 2 1 0]
Role_vec_idx[233], Val_match_idx[07]: [0 1 2 4 2 1 1 5 0 1 2 ... 0 0 0 1 1 0 1 1 0 0 0]
Role_vec_idx[234], Val_match_idx[15]: [0 0 1 0 1 1 0 1 1 1 0 ... 1 1 2 0 0 1 0 1 0 0 1]
Role_vec_idx[235], Val_match_idx[08]: [0 1 2 0 0 3 3 1 4 1 1 ... 0 2 3 2 2 2 2 3 2 3 1]
Role_vec_idx[236], Val_match_idx[231]: [2 0 2 1 0 2 1 2 0 2 1 ... 0 0 1 2 0 0 2 1 1 1 0]
Role_vec_idx[237], Val_match_idx[140]: [0 2 2 2 1 2 0 1 3 0 0 ... 0 0 1 0 2 0 0 4 2 0 1]
Role_vec_idx[238], Val_match_idx[08]: [0 3 2 0 2 2 1 3 4 3 0 ... 1 0 0 2 0 1 2 1 1 1 2]
Role_vec_idx[239], Val_match_idx[180]: [3 0 1 5 2 1 0 1 0 1 0 ... 1 0 1 1 3 1 4 1 0 2 1]
Role_vec_idx[240], Val_match_idx[224]: [2 0 0 3 1 2 2 1 0 1 1 ... 0 0 0 0 2 2 0 0 2 1 1]
Role_vec_idx[241], Val_match_idx[121]: [1 1 1 1 1 1 1 1 0 2 1 ... 1 0 2 0 5 1 1 1 0 1 3]
Role_vec_idx[242], Val_match_idx[01]: [1 5 3 2 0 1 3 0 1 0 2 ... 0 1 1 0 0 2 0 2 2 0 0]
Role_vec_idx[243], Val_match_idx[20]: [1 0 0 0 0 1 0 1 0 0 0 ... 0 0 0 2 2 0 0 1 0 2 2]
Role_vec_idx[244], Val_match_idx[68]: [2 1 3 0 1 2 1 0 1 0 0 ... 1 1 0 1 0 3 2 3 2 1 1]
Role_vec_idx[245], Val_match_idx[21]: [1 1 2 1 1 3 0 1 0 2 1 ... 0 4 1 1 2 2 1 0 0 1 0]
Role_vec_idx[246], Val_match_idx[52]: [0 1 0 1 0 0 2 1 1 1 2 ... 0 3 1 0 1 1 0 1 0 4 0]
Role_vec_idx[247], Val_match_idx[03]: [1 3 1 4 0 2 0 2 1 2 1 ... 0 0 2 2 0 0 2 2 2 0 1]
Role_vec_idx[248], Val_match_idx[227]: [0 1 1 0 0 1 1 0 2 2 0 ... 4 0 0 1 2 2 1 1 2 1 1]
Role_vec_idx[249], Val_match_idx[136]: [0 0 0 0 2 0 0 2 1 1 1 ... 1 2 2 1 2 1 0 1 2 0 0]
| MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
Binding: Brian 2 Code - SIMPLIFIED CIRCUIT ==================  Generate the time delay data_matrix from the so that the input vector time delay in each slot plus the delay matrix line up at the number of bits_per_slot per slot (e.g. a time delay in slot 0 of the input vector of say 10 will have a corresponding delay of 90 in the corresponding data_matrix so that if this vector is received then the match condition is an input potential to the neuron at 100)---------------------------------------------------------------------------------------------------------------This section of the code implements the role/filler binding in the Brian2 network (net1) | net1=Network()
#We first create an array of time delays which will be used to select the first Num_bound vectors from
# the P_matrix with a time delay (input_delay) between each vector.
#Calculate the array for the input spike generator
array1 = np.ones(mem_size) * slots_per_vector * bits_per_slot
# The input spike generator creates pairs of spikes corresponding to contiguous pairs of vectors from the memory that are
# going to be bound together (i.e., vector_0 & vector_1 then vector_2 and Vector_3 etc.)
for b in range(0,2*Num_bound,2):
array1[b] = (b)*input_delay
array1[b+1] = (b)*input_delay
# Create the corresponding spike generator group.
P = SpikeGeneratorGroup(mem_size,np.arange(mem_size), (array1)*ms)
net1.add(P)
#We now define the set of equation and reset definitions that will be used to generate the neuron action
#potentials and spike reset operations. Note that we make use of the Brian2 refractory operation.
equ1 ='''
dv/dt = (I)/tau : 1
I : 1
tau : second
'''
equ2 = '''
dv/dt = -v/tau : 1 (unless refractory)
tau : second
ts : second
'''
reset1 = '''
I=0.0
v=0.0
'''
reset2 = '''
ts=t
v=0.0
'''
# The G1 neurons perform the addition operation in the two selected vectors. Equ1 is a linearly increasing function
# with a time constant of 2*bits_per_slot*ms (I=1.0). The G1 neuron group is stimulated from the P spike generator group with
# spikes that simultaneously select a role and filler vector using the time delay on the G1 dendrites obtained from the P_matrix (S0.delay)
# On receiving the first spike from either role or filler vector the value of I
# is changed to 0.0 which holds the neuron potential constant until the second spike is received when I = -1.0 and the neuron
# potential decays until the threshold value v<0.0 when it fires to give the required modulus addition. The value of I is
# reset to 1.0 using the spike from the P SpikeGeneratorGroup (via the SP1 synapses) and the next two vectors are added.
G1 = NeuronGroup(slots_per_vector, equ1,
threshold='v < 0.0 or v>=1.0', reset=reset1, method='euler')
G1.v =0.0
G1.I = 1.0
G1.tau = 2 * bits_per_slot * ms
net1.add(G1)
S0 = Synapses(P, G1, 'w : 1',on_pre= 'I = (I-1)')
range_array1 = range(0, slots_per_vector)
for n in range(0,mem_size):
S0.connect(i=n,j=range_array1)
S0.delay = np.reshape(P_matrix, mem_size * slots_per_vector) * ms
net1.add(S0)
SP1 = Synapses(P, G1, 'w : 1',on_pre= 'I=1.0')
for n in range(0,mem_size):
SP1.connect(i=n,j=range_array1)
net1.add(SP1)
#The G2 neurons select the earliest spike across all of the Num_bound cycles by remembering the time of the previous spike
#such that if there is no earlier spike from the G1 neurons this spike is regenerated. The refractory property of the
#neuron is used to suppress any later spike in the current cycle.
G2 = NeuronGroup(slots_per_vector, equ2, threshold='v>=0.5 or (t>=ts-0.01*ms+2*bits_per_slot*ms and t<=ts+0.01*ms+2*bits_per_slot*ms) ', reset=reset2, method='euler', refractory ='t%(2*bits_per_slot*ms)')
G2.v = 0.0
G2.tau = 1*ms
G2.ts = 0.0*ms
net1.add(G2)
S12 = Synapses(G1, G2, 'w : 1', on_pre='v=1.0')
S12.connect(j='i')
net1.add(S12)
# Create the required monitors
SMP = SpikeMonitor(P)
net1.add(SMP)
M1 = StateMonitor(G1, 'v', record=True)
net1.add(M1)
SM1= SpikeMonitor(G1)
net1.add(SM1)
M2 = StateMonitor(G2, 'v', record=True)
net1.add(M2)
M2ts = StateMonitor(G2, 'ts', record=True)
net1.add(M2ts)
SM2= SpikeMonitor(G2)
net1.add(SM2)
# Run Net1 for delta milliseconds
net1.run(delta*ms)
# Obtain the sparse vector timings from the SM2 monitor and print the timings so that they can be compared with the theoretical values.
array2 = np.array([SM2.i,SM2.t/ms])
sub_array2 = array2[0:2, slots_per_vector:]
print()
sorted_sub_array2 = sub_array2[:,sub_array2[0].argsort()].astype(int)
P1_timing = sorted_sub_array2[1]
P1_timing = P1_timing[P1_timing >= 2 * (Num_bound-1) * bits_per_slot] - 2 * (Num_bound - 1) * bits_per_slot
print(len(P1_timing))
print(P1_timing)
# The following plots output from the different monitors
fig, axs = plt.subplots(6,1,figsize=(16,14), gridspec_kw={'height_ratios': [2, 2, 2, 2, 2, 2]})
subplot(6,1,1)
plot(SMP.t/ms, SMP.i,'|')
xlabel('Time (ms)')
ylabel('P Neuron id')
subplot(6,1,2)
plot(M1.t/ms, M1.v[target_neuron].T)
xlabel('Time (ms)')
ylabel('G1 Neuron Voltage')
subplot(6,1,3)
plot(SM1.t/ms, SM1.i,'|')
xlabel('Time (ms)')
ylabel('G1 Neuron id')
plt.ylim(y_low,y_high)
subplot(6,1,4)
plot(M2.t/ms, M2.v[target_neuron].T)
xlabel('Time (ms)')
ylabel('G2 Neuron Voltage')
#plt.xlim(12000,14000)
subplot(6,1,5)
plot(M2ts.t/ms, M2ts.ts[target_neuron].T)
xlabel('Time (ms)')
ylabel('Previous Spike Time')
#plt.xlim(12000,14000)
subplot(6,1,6)
plot(SM2.t/ms, SM2.i,'|')
xlabel('Time (ms)')
ylabel('G2 Neuron id')
#plt.ylim(y_low,y_high)
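# Optional sanity check (added for illustration; not part of the original circuit code).
# Under the description above, the bound-vector spike time in each slot should equal the
# smallest role-delay + filler-delay sum over the bound pairs. This assumes P_matrix is a
# (mem_size x slots_per_vector) integer array of per-slot delays, as used for S0.delay above.
Pm = np.asarray(P_matrix)
pair_sums = Pm[0:2*Num_bound:2, :] + Pm[1:2*Num_bound:2, :]
theoretical_timing = pair_sums.min(axis=0)
print(theoretical_timing)   # theoretical bound timings, one per slot
print(P1_timing)            # spike-derived timings extracted from SM2 above, for comparison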
| _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
UnBinding: Brian 2 Code =====================================  | #--------------------------------------------------------------------------------------------------------
#This section of the code implements the Brian2 neuromorphic circuit which unbinds the vector.
#The bound vector and a selected role vector are processed to give the corresponding 'noisy' filler vector.
# which is then compared to the memory vectors to find the best match (i.e. the clean-up memory operation)
# We first generate the time delay data_matrix which will be used in the 'clean-up memory' so that the input vector
# time delay in each slot plus the delay matrix line up at the number of bits per slot
# (e.g. a time delay in slot 0 of the input vector of say 10 will have a corresponding delay of 90 in the corresponding
# data_matrix so that if this vector is received then the match condition is an input potential to the neuron at 100)
data_matrix = bits_per_slot - P_matrix
net2=Network()
print()
# To pass the sparse vector from Net1 into Net2 we create a SpikeGeneratorGroup that uses the P1_timing from Net1 to generate
# the sparse bound vector which is the input to NeuronGroup G6 (S6).
P1 = SpikeGeneratorGroup(slots_per_vector, np.arange(slots_per_vector), P1_timing * ms)
net2.add(P1)
# We now define the neuron potential equations and resets plus a preset
equ2 = '''
dv/dt = -v/tau : 1
I : 1
tau : second
'''
equ3 ='''
dv/dt = (I)/tau : 1
I : 1
tau : second
'''
reset3 = '''
I=1.0
v=0.0
'''
preset1 = '''
I = 1.0
v= 0.0
'''
# NeuronGroup G7 is a recurrent circuit which simply repeats the sparse bound vector from P1 every 2*bits_per_slot milliseconds (matching S77.delay below)
# and feeds the output vector into the G6 neurongroup (see S7 below)
G7 = NeuronGroup(slots_per_vector, equ2, threshold='v>=1.0', reset='v=0.0', method='euler')
G7.v=0.0
G7.tau = 0.5*ms
SP17 = Synapses(P1, G7, 'w : 1',on_pre= 'v=1.25')
SP17.connect(j='i')
#SP17.delay = bits_per_slot*ms
S77 = Synapses(G7, G7, 'w : 1',on_pre= 'v=1.25')
S77.connect(j='i')
#S77.delay = 3*bits_per_slot*ms
S77.delay = 2 * bits_per_slot * ms
net2.add(G7)
net2.add(SP17)
net2.add(S77)
#Calculate the array for the input spike generator which cycles through the role vectors 0,2,4 etc
array2 = np.ones(mem_size) * slots_per_vector * bits_per_slot
for b in range(0,Num_bound):
#array2[b*2] = (b*3)*input_delay
array2[b*2] = (b*2)*input_delay
P2 = SpikeGeneratorGroup(mem_size,np.arange(mem_size), (array2)*ms)
net2.add(P2) | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
|
The G6 neuron group is stimulated by the P2 spike generator group and by the G7 neuron group. The P2 spike generator selects a role vector using the time delays on the G6 dendrites obtained from the P_matrix (S5.delay), while the G7 neuron group supplies the repeated sparse bound vector. The G6 neurons perform the subtraction operation on the selected vectors. In this case Equ3 is a linearly increasing function with a time constant of bits_per_slot*ms (I=1.0). On receiving the first spike from either the role or the bound vector, the value of I becomes 0.0, which holds the neuron potential constant until the second spike is received; I then returns to 1.0 and the neuron potential continues to increase until the threshold value v>=1.0, when the neuron fires. To give the required modular (wrap-around) result, the value of I is maintained at 1.0 to ensure a second spike is generated. One of these two candidate vectors will have the correct modular timings, so we compare both in the final neuron group stage (G8) to get the best match. | G6 = NeuronGroup(slots_per_vector, equ3, threshold='v>=1.0', reset=reset3, method='euler', refractory ='2*Num_bound*ms')
G6.v =0.0
G6.I = 1.0
G6.tau = bits_per_slot * ms
net2.add(G6)
S5 = Synapses(P2, G6, 'w : 1',on_pre= 'I = (I-1)%2')
range_array2 = range(0, slots_per_vector)
for n in range(0,mem_size):
S5.connect(i=n,j=range_array2)
S5.delay = np.reshape(P_matrix, mem_size * slots_per_vector) * ms
net2.add(S5)
S6 = Synapses(P2, G6, 'w : 1',on_pre= preset1)
for n in range(0,mem_size):
S6.connect(i=n,j=range_array2)
net2.add(S6)
S7 = Synapses(G7, G6, 'w : 1',on_pre= 'I = (I-1)%2')
S7.connect(j='i')
net2.add(S7)
| _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
This final NeuronGroup stage, G8, implements the clean-up memory operation, using the transpose of the data_matrix to set the synaptic delays on the G8 dendrites. We only produce one output spike per match by using the refractory operator to suppress any further spikes. This could be improved to choose the largest matching spike. | G8 = NeuronGroup(mem_size, equ2, threshold='v >= 13.0', reset='v=0.0', method='euler')
G8.v = 1.0
G8.tau = 2.0*ms
net2.add(G8)
range_array3 = range(0,mem_size)
S8 = Synapses(G6, G8, on_pre='v += 1.0')
for n in range(0, slots_per_vector):
S8.connect(i=n,j=range_array3)
data_matrix2 = np.transpose(data_matrix)
S8.delay = np.reshape(data_matrix2, mem_size * slots_per_vector) * ms
net2.add(S8)
# Create the required monitors
SMP1 = SpikeMonitor(P1)
net2.add(SMP1)
SM7 = SpikeMonitor(G7)
net2.add(SM7)
SMP2 = SpikeMonitor(P2)
net2.add(SMP2)
M6 = StateMonitor(G6, 'v', record=True)
net2.add(M6)
SM6 = SpikeMonitor(G6)
net2.add(SM6)
M8 = StateMonitor(G8, 'v', record=True)
net2.add(M8)
SM8 = SpikeMonitor(G8)
net2.add(SM8)
net2.run(((2*Num_bound) * bits_per_slot + 3) * ms)
# Plot the sparse bound vector
plot(SMP1.t/ms, SMP1.i,'|')
xlabel('Time (ms)')
ylabel('P Neuron id')
show()
fig, axs = plt.subplots(6,1,figsize=(16,14), gridspec_kw={'height_ratios': [2, 2, 2, 2, 2, 2]})
# Plot the other monitors
subplot(6,1,1)
plot(SM7.t/ms, SM7.i,'|')
xlabel('Time (ms)')
ylabel('P1 Neuron id')
plt.ylim(0, slots_per_vector)
subplot(6,1,2)
plot(SMP2.t/ms, SMP2.i,'|')
xlabel('Time (ms)')
ylabel('P2 Neuron id')
#plt.xlim(0,2*bits_per_slot*Num_bound)
#plt.xlim(bits_per_slot*Num_bound-100,2*bits_per_slot*(Num_bound+1))
#plt.ylim(y_low,y_high)
subplot(6,1,3)
plot(M6.t/ms, M6.v[0].T)
xlabel('Time (ms)')
ylabel('G6 Neuron Voltage')
#plt.xlim(0,2*bits_per_slot*Num_bound)
#plt.xlim(bits_per_slot*Num_bound-100,2*bits_per_slot*(Num_bound+1))
subplot(6,1,4)
plot(SM6.t/ms, SM6.i,'|')
xlabel('Time (ms)')
ylabel('G6 Neuron id')
subplot(6,1,5)
plot(M8.t/ms, M8.v.T)
xlabel('Time (ms)')
ylabel('G8 Neuron Voltage')
subplot(6,1,6)
plot(SM8.t/ms, SM8.i,'|')
xlabel('Time (ms)')
ylabel('G8 Neuron id') | _____no_output_____ | MIT | Simplified-Role-Filler-net1.ipynb | vsapy/DTIN07 |
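Before moving on, the spiking clean-up above can be cross-checked with a purely numerical analogue. The sketch below is illustrative only and is not part of the Brian2 circuit; it assumes P_matrix is a NumPy array of shape (mem_size, slots_per_vector) and reuses the P1_timing bound-vector timings computed in the binding section.
# Hypothetical NumPy analogue of unbinding followed by clean-up memory (illustrative only)
Pm = np.asarray(P_matrix)
role_idx = 0                                      # unbind with role vector 0, as P2 does on its first cycle
filler_estimate = (np.asarray(P1_timing) - Pm[role_idx]) % bits_per_slot
matches = (Pm == filler_estimate).sum(axis=1)     # per memory vector: slots agreeing with the noisy estimate
print('best match:', int(np.argmax(matches)), 'with', int(matches.max()), 'matching slots')
# Given the pairing used in the binding section, the expected winner for role 0 is memory vector 1.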
Latest point anomaly detection with the Anomaly Detector API
Use this Jupyter notebook to start visualizing anomalies as a batch with the Anomaly Detector API in Python. While you can detect anomalies as a batch, you can also detect the anomaly status of the last data point in the time series. This notebook iteratively sends latest-point anomaly detection requests to the Anomaly Detector API and visualizes the response. The graph created at the end of this notebook will display the following:
* Anomalies found while streaming the data, highlighted.
* Anomaly detection boundaries.
* Anomalies seen in the data, highlighted.
By calling the API on your data's latest points, you can monitor your data as it's created. The following example simulates using the Anomaly Detector API on streaming data. Sections of the example time series are sent to the API over multiple iterations, and the anomaly status of each section's last data point is saved. The data set used in this example has a pattern that repeats roughly every 7 data points (the `period` in the request's JSON file), so for best results, the data set is sent in groups of 29 points (`4 * period` plus an extra data point. See [Best practices for using the Anomaly Detector API](https://docs.microsoft.com/azure/cognitive-services/anomaly-detector/concepts/anomaly-detection-best-practices) for more information). | # To start sending requests to the Anomaly Detector API, paste your subscription key below,
# and replace the endpoint variable with the endpoint for your region.
subscription_key = ''
endpoint_latest = 'https://westus2.api.cognitive.microsoft.com/anomalydetector/v1.0/timeseries/last/detect'
import requests
import json
import pandas as pd
import numpy as np
from __future__ import print_function
import warnings
warnings.filterwarnings('ignore')
# Import library to display results
import matplotlib.pyplot as plt
%matplotlib inline
from bokeh.plotting import figure,output_notebook, show
from bokeh.palettes import Blues4
from bokeh.models import ColumnDataSource,Slider
import datetime
from bokeh.io import push_notebook
from dateutil import parser
from ipywidgets import interact, widgets, fixed
output_notebook()
def detect(endpoint, subscription_key, request_data):
headers = {'Content-Type': 'application/json', 'Ocp-Apim-Subscription-Key': subscription_key}
response = requests.post(endpoint, data=json.dumps(request_data), headers=headers)
if response.status_code == 200:
return json.loads(response.content.decode("utf-8"))
else:
print(response.status_code)
raise Exception(response.text)
def build_figure(result, sample_data, sensitivity):
columns = {'expectedValues': result['expectedValues'], 'isAnomaly': result['isAnomaly'], 'isNegativeAnomaly': result['isNegativeAnomaly'],
'isPositiveAnomaly': result['isPositiveAnomaly'], 'upperMargins': result['upperMargins'], 'lowerMargins': result['lowerMargins']
, 'value': [x['value'] for x in sample_data['series']], 'timestamp': [parser.parse(x['timestamp']) for x in sample_data['series']]}
response = pd.DataFrame(data=columns)
values = response['value']
label = response['timestamp']
anomalies = []
anomaly_labels = []
index = 0
anomaly_indexes = []
p = figure(x_axis_type='datetime', title="Anomaly Detection Result ({0} Sensitivity)".format(sensitivity), width=800, height=600)
for anom in response['isAnomaly']:
if anom == True and (values[index] > response.iloc[index]['expectedValues'] + response.iloc[index]['upperMargins'] or
values[index] < response.iloc[index]['expectedValues'] - response.iloc[index]['lowerMargins']):
anomalies.append(values[index])
anomaly_labels.append(label[index])
anomaly_indexes.append(index)
index = index+1
upperband = response['expectedValues'] + response['upperMargins']
lowerband = response['expectedValues'] -response['lowerMargins']
band_x = np.append(label, label[::-1])
band_y = np.append(lowerband, upperband[::-1])
boundary = p.patch(band_x, band_y, color=Blues4[2], fill_alpha=0.5, line_width=1, legend='Boundary')
p.line(label, values, legend='value', color="#2222aa", line_width=1)
p.line(label, response['expectedValues'], legend='expectedValue', line_width=1, line_dash="dotdash", line_color='olivedrab')
anom_source = ColumnDataSource(dict(x=anomaly_labels, y=anomalies))
anoms = p.circle('x', 'y', size=5, color='tomato', source=anom_source)
p.legend.border_line_width = 1
p.legend.background_fill_alpha = 0.1
show(p, notebook_handle=True) | _____no_output_____ | MIT | ipython-notebook/Latest point detection with the Anomaly Detector API.ipynb | Tapasgt/AnomalyDetector |
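For orientation before the calls below: the request body assembled by detect_anomaly() and the latest-point response fields it reads back have roughly the following shape. The field names come from the code in this notebook; the values here are placeholders for illustration only.
# Illustrative request/response shapes only - nothing here is sent to the service
example_request = {
    'series': [{'timestamp': '2018-03-01T00:00:00Z', 'value': 123.4}],  # 29 points in practice
    'granularity': 'daily',
    'maxAnomalyRatio': 0.25,
    'sensitivity': 95,
}
example_latest_point_response = {
    'expectedValue': 120.0, 'upperMargin': 5.0, 'lowerMargin': 5.0,
    'isAnomaly': False, 'isPositiveAnomaly': False, 'isNegativeAnomaly': False,
}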
Detect latest anomaly of the sample time series. The following cells call the Anomaly Detector API with an example time series data set and different sensitivities for anomaly detection. Varying the sensitivity of the Anomaly Detector API can improve how well the response fits your data. | def detect_anomaly(sensitivity):
sample_data = json.load(open('sample.json'))
points = sample_data['series']
skip_point = 29
result = {'expectedValues': [None]*len(points), 'upperMargins': [None]*len(points),
'lowerMargins': [None]*len(points), 'isNegativeAnomaly': [False]*len(points),
'isPositiveAnomaly':[False]*len(points), 'isAnomaly': [False]*len(points)}
anom_count = 0
for i in range(skip_point, len(points)+1):
single_sample_data = {}
single_sample_data['series'] = points[i-29:i]
single_sample_data['granularity'] = 'daily'
single_sample_data['maxAnomalyRatio'] = 0.25
single_sample_data['sensitivity'] = sensitivity
single_point = detect(endpoint_latest, subscription_key, single_sample_data)
if single_point['isAnomaly'] == True:
anom_count = anom_count + 1
result['expectedValues'][i-1] = single_point['expectedValue']
result['upperMargins'][i-1] = single_point['upperMargin']
result['lowerMargins'][i-1] = single_point['lowerMargin']
result['isNegativeAnomaly'][i-1] = single_point['isNegativeAnomaly']
result['isPositiveAnomaly'][i-1] = single_point['isPositiveAnomaly']
result['isAnomaly'][i-1] = single_point['isAnomaly']
build_figure(result, sample_data, sensitivity)
# 95 sensitivity
detect_anomaly(95)
# 85 sensitivity
detect_anomaly(85) | _____no_output_____ | MIT | ipython-notebook/Latest point detection with the Anomaly Detector API.ipynb | Tapasgt/AnomalyDetector |
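The loop above re-sends the history in 29-point windows to simulate streaming. In a live setting you would typically keep only the most recent window and request a verdict on the newest point as it arrives. The wrapper below is a hypothetical sketch built on the detect() helper defined earlier; check_latest() is not part of the original notebook.
# Hypothetical streaming-style helper around the notebook's detect() function
def check_latest(points, period=7, sensitivity=85):
    # Send the most recent 4*period + 1 points and return the API's verdict on the newest point.
    window = points[-(4 * period + 1):]
    request = {
        'series': window,                # list of {'timestamp': ..., 'value': ...} dicts
        'granularity': 'daily',
        'maxAnomalyRatio': 0.25,
        'sensitivity': sensitivity,
    }
    return detect(endpoint_latest, subscription_key, request)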
A review of 2 decades of correlation, hierarchies, networks and clustering
Aim
The aim of this project is to review state-of-the-art clustering algorithms for financial time series and to study their correlations in complicated networks. This will form the basis of an open toolbox to study correlations, hierarchies, networks and clustering in financial markets.
Methodology adopted in Financial markets:
- We compute log returns using the formula $r_t = \log(1 + R_t)$, where $1 + R_t = \frac{P_i(t)}{P_i(t-1)}$.
- The sample correlation matrix is given by $\Omega = \frac{1}{n} R^T R$, where $R$ is the returns matrix minus the sample mean return for each stock.
- Once we get the correlation matrix, we convert it to distances using $d = \sqrt{2 (1 - \Omega)}$, where $d$ is an n x n distance matrix, symmetric because $\Omega$ is symmetric.
Methodology described in Paper:
- Once we compute the distance matrix, we compute the minimum spanning tree using Kruskal's algorithm. (A NumPy sketch of the distance computation appears after the plotting cell below.)
Implementation
- Python 3.6 is used as the primary language.
- Libraries used: numpy, yfinance to download prices, heapq, itertools & the networkx library for plotting.
- Data structures and classes used:
  - A Graph class is implemented using an adjacency map.
  - A PriorityQueue class is implemented using the heapq module to keep track of the minimum weight in Kruskal's algorithm. This ensures finding the minimum is an O(1) operation; addition and removal are worst case O(floor(m log n)), where m is the number of edges and n is the number of nodes.
  - A Cluster class is implemented using the union-find algorithm to merge clusters in the same group.
  - A MinimumSpanningTree class is implemented with a nested Price class that downloads prices and computes the distances. Calling this library automatically downloads prices from Yahoo and computes their distances; a start and end parameter is all that is needed to get historical prices of 50 shares.
- Share names were obtained from the SP500 index as of Tuesday July 16.
- Redundancy: the networkx library was used to create another graph inside the mst.draw_graph() function. This is mainly used for plotting and visualization of the network structure of the correlation matrix.
- Analysis: when the distance is 0 the correlation is 1 (perfect positive correlation); the higher the distance, the weaker the correlation between the share prices. | # set parameters here
start = '2019-01-01'
end = '2019-12-31'
mst = MinimumSpanningTrees(start=start, end=end)
# create a graph from distance computed from mst
# share prices are the vertices and edges are the distance from two share prices
g = mst.create_graph()
# get the minimum spanning tree from the graph
mst_tree = mst.mst_kruskal(g)
# plot the minimum spanning tree
plt.figure(figsize=(10,6))
mst.draw_graph(mst_tree)
plt.title("MST Using Kruskal's Algorithm of 50 shares of SP500"); | _____no_output_____ | Apache-2.0 | marketlearn/network_spanning_trees/mst_example.ipynb | mrajancsr/QuantEquityManagement |
Observation- All the shares are strongly correlated with SPY which makes sense since SPY is an index that contains all the other shares Note: - Following is the edge weights associated with the spanning tree | mst_tree
def create_graph():
from itertools import combinations
g = Graph()
input_vertices = range(1, 8)
vertices = [g.insert_vertex(v) for v in input_vertices]
g.insert_edge(vertices[0], vertices[1], 28)
g.insert_edge(vertices[1], vertices[2], 16)
g.insert_edge(vertices[2], vertices[3], 12)
g.insert_edge(vertices[3], vertices[4], 22)
g.insert_edge(vertices[4], vertices[5], 25)
g.insert_edge(vertices[5], vertices[0], 10)
g.insert_edge(vertices[3], vertices[6], 18)
g.insert_edge(vertices[4], vertices[6], 24)
g.insert_edge(vertices[1], vertices[6], 14)
return g
g = create_graph()
cluster = Cluster()
pq = PriorityQueue()
position = {}
for v in g.get_vertices():
position[v] = cluster.make_cluster(v)
for e in g.get_edges():
pq.push(e, e.get_value())
tree = []
size = g.count_vertices()
while len(tree) != size - 1 and not pq.is_empty():
weight, _, edge = pq.pop()
u, v = edge.endpoint()
a = cluster.find(position[u])
b = cluster.find(position[v])
if a != b:
tree.append(edge)
cluster.union(a, b)
position[v]._container
# container points to cluster object
position[v]._parent
chk = list(g.get_vertices())
position[chk[0]]._parent, position[chk[1]]._parent # each parent is a different position
position[v]
position[v]._parent == position[v]
w, _, edge = pq.pop()
u,v = edge.endpoint()
a = cluster.find(position[u])
b = cluster.find(position[v])
a
edge
a.get_value(), b.get_value()
a._parent
position[u]
a != b
cluster.union(a,b)
start = '2019-01-01'
end = '2019-12-31'
mst = MinimumSpanningTrees(start=start, end=end)
mst.draw_graph(tree)
type(cluster)
id(v)
cols = df.columns[:5]
data = df.to_dict()
data.keys()
np.array(((3, 4), (5, 6)))
np.vstack([np.fromiter(v[0], dtype=float) for v in chk])
chk = iter([(3, 4), (5, 6)])
| _____no_output_____ | Apache-2.0 | marketlearn/network_spanning_trees/mst_example.ipynb | mrajancsr/QuantEquityManagement |
Lesson 4: Model TrainingAt last, it's time to build our models! It might seem like it took us a while to get here, but professional data scientists actually spend the bulk of their time on the 3 steps leading up to this one: * Exploratory Analysis* Data Cleaning* Feature EngineeringThat's because the biggest jumps in model performance are from **better data**, not from fancier algorithms.This is lengthy and action-packed lesson, so buckle up and let's dive right in! In this lesson...First, we'll load our analytical base table from lesson 3. Then, we'll go through the essential modeling steps:1. [Split your dataset](split)2. [Build model pipelines](pipelines)3. [Declare hyperparameters to tune](hyperparameters)4. [Fit and tune models with cross-validation](fit-tune)5. [Evaluate metrics and select winner](evaluate)Finally, we'll save the best model as a project deliverable! First, let's import libraries, recruit models, and load the analytical base table.Let's import our libraries and load the dataset. It's good practice to keep all of your library imports at the top of your notebook or program. | # NumPy for numerical computing
import numpy as np
# Pandas for DataFrames
import pandas as pd
pd.set_option('display.max_columns', 100)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
# Matplotlib for visualization
from matplotlib import pyplot as plt
# display plots in the notebook
%matplotlib inline
# Seaborn for easier visualization
import seaborn as sns
# Scikit-Learn for Modeling
import sklearn | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Next, let's import 5 algorithms we introduced in the previous lesson. | # Import Elastic Net, Ridge Regression, and Lasso Regression
from sklearn.linear_model import ElasticNet, Ridge, Lasso
# Import Random Forest and Gradient Boosted Trees
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Quick note about this lesson. In this lesson, we'll be relying heavily on Scikit-Learn, which has many helpful functions we can take advantage of. However, we won't import everything right away. Instead, we'll be importing each function from Scikit-Learn as we need it. That way, we can point out where you can find each function.Next, let's load the analytical base table from lesson 3. | # Load cleaned dataset from lesson 3
df = pd.read_csv('project_files/analytical_base_table.csv')
print(df.shape) | (1863, 41)
| MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
1. Split your datasetLet's start with a crucial but sometimes overlooked step: **Splitting** your data.First, let's import the train_test_split() function from Scikit-Learn. | # Function for splitting training and test set
from sklearn.model_selection import train_test_split | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Next, separate your dataframe into separate objects for the target variable (y) and the input features (X). | # Create separate object for target variable
y = df.tx_price
# Create separate object for input features
X = df.drop('tx_price', axis=1) | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Exercise 5.1**First, split X and y into training and test sets using the train_test_split() function.** * **Tip:** Its first two arguments should be X and y.* **Pass in the argument test_size=0.2 to set aside 20% of our observations for the test set.*** **Pass in random_state=1234 to set the random state for replicable results.*** You can read more about this function in the documentation.The function returns a tuple with 4 elements: (X_train, X_test, y_train, y_test). Remember, you can **unpack** it. We've given you a head-start below with the code to unpack the tuple: | # Split X and y into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1234) | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Let's confirm we have the right number of observations in each subset.**Next, run this code to confirm the size of each subset is correct.** | print( len(X_train), len(X_test), len(y_train), len(y_test) ) | 1490 373 1490 373
| MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Next, when we train our models, we can fit them on the X_train feature values and y_train target values.Finally, when we're ready to evaluate our models on our test set, we would use the trained models to predict X_test and evaluate the predictions against y_test.[**Back to Contents**](toc) 2. Build model pipelinesIn lesson 1, 2, and 3, you explored the dataset, cleaned it, and engineered new features. However, sometimes we'll want to preprocess the training data even more before feeding it into our algorithms. First, let's show the summary statistics from our training data. | # Summary statistics of X_train
X_train.describe() | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
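As noted above, once a model has been fit on X_train and y_train it is judged by predicting X_test and scoring against y_test. The snippet below is a minimal illustration using the Ridge regressor imported earlier; it is not the lesson's full pipeline and cross-validation workflow (that comes later), and the metric imports are added here for the example.
# Minimal fit-and-evaluate illustration (default hyperparameters)
from sklearn.metrics import r2_score, mean_absolute_error

ridge = Ridge()
ridge.fit(X_train, y_train)
test_pred = ridge.predict(X_test)
print('R^2:', r2_score(y_test, test_pred))
print('MAE:', mean_absolute_error(y_test, test_pred))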
Next, standardize the training data manually, creating a new X_train_new object. | # Standardize X_train
X_train_new = (X_train - X_train.mean()) / X_train.std() | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
Let's look at the summary statistics for X_train_new to confirm standardization worked correctly.
* How can you tell? | # Summary statistics of X_train_new
X_train_new.describe() | _____no_output_____ | MIT | Day_7/Lesson 4 - Real Estate Model Training.ipynb | SoftStackFactory/DSML_Primer |
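One quick programmatic answer to the question above (added here for illustration): after standardization, every feature in X_train_new should have a mean close to 0 and a standard deviation close to 1.
# Quick check that the manual standardization worked
print(X_train_new.mean().abs().max())                    # should be close to 0
print(X_train_new.std().min(), X_train_new.std().max())  # both should be close to 1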