Dataset columns: markdown (string), code (string), output (string), license (string), path (string), repo_name (string)
### Multiple aggregations
We may also want to apply multiple aggregations, like the mean, max, and min. We can do this with the agg() method and pass a list of aggregation functions as the argument.
annual_summary = data_annual['wlev'].agg([np.mean, np.max, np.min])
print(annual_summary)
annual_summary.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
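For reference, the same agg() pattern also accepts the aggregation names as strings in newer pandas. A toy, self-contained sketch (the tiny DataFrame below is made up for illustration, not the lesson's water-level data):

```python
import pandas as pd

# Small made-up example mirroring the lesson's groupby + agg pattern
toy = pd.DataFrame({'Year': [2000, 2000, 2001, 2001], 'wlev': [3.0, 3.2, 3.1, 3.3]})
summary = toy.groupby('Year')['wlev'].agg(['mean', 'max', 'min'])
print(summary)
```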
### Iterating over groups
In some instances, we may want to iterate over each group. Each group is identified by a key. If we know the group's key, then we can access that group with the get_group() method. For example, for each year, print the mean sea level.
for year in data_annual.groups.keys():
    data_year = data_annual.get_group(year)
    print(year, data_year['wlev'].mean())
2000 3.06743417303 2001 3.05765296804 2002 3.07811187215 2003 3.11298972603 2004 3.1040974832 2005 3.12703618873 2006 3.14205230699 2007 3.0956142955 2008 3.07075714448 2009 3.08053287593
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
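A GroupBy object can also be iterated over directly, yielding (key, group) pairs, which is equivalent to looping over the keys and calling get_group(). A minimal self-contained sketch with a made-up DataFrame:

```python
import pandas as pd

toy = pd.DataFrame({'Year': [2000, 2000, 2001], 'wlev': [3.0, 3.2, 3.1]})
# Each iteration yields the group key and the corresponding sub-DataFrame
for year, group in toy.groupby('Year'):
    print(year, group['wlev'].mean())
```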
We had calculated the annual mean sea level earlier, but this is another way to achieve a similar result.
### Exercise
For each year, plot the monthly mean water level.
### Solution
for year in data_annual.groups.keys():
    data_year = data_annual.get_group(year)
    month_mean = data_year.groupby('Month')['wlev'].apply(np.mean)
    month_mean.plot(label=year)
plt.legend()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
### Multiple groups
We can also group by multiple columns. For example, we might want to group by year and month. That is, a year/month combination defines the group.
data_yearmonth = data.groupby(['Year', 'Month'])
means = data_yearmonth['wlev'].apply(np.mean)
means.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
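Grouping by two columns produces a result indexed by a (Year, Month) MultiIndex. One way to make such a result easier to plot is to unstack one level so each year becomes its own column; a toy sketch with made-up data:

```python
import pandas as pd

toy = pd.DataFrame({'Year': [2000, 2000, 2001, 2001],
                    'Month': [1, 2, 1, 2],
                    'wlev': [3.0, 3.2, 3.1, 3.3]})
means = toy.groupby(['Year', 'Month'])['wlev'].mean()
# unstack('Year') reshapes the MultiIndex Series into a Month-by-Year table
print(means.unstack('Year'))
```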
### Time Series
The x-labels on the plot above are a little bit awkward. A different approach would be to resample the data at a monthly frequency. This can be accomplished by setting the date column as an index. Then we can resample the data at a desired frequency. The resampling method is flexible, but a common choice is the average.
First, we will need to set the index as a DatetimeIndex. Recall the date_index variable we had assigned earlier. We will add this to the dataframe and make it into the dataframe index.
data['date_index'] = date_index
data.set_index('date_index', inplace=True)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
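The lesson's date_index was created earlier in the notebook and is not shown here. As a purely hypothetical sketch of how such an index could be assembled from date-part columns (the column names below are assumptions for illustration):

```python
import pandas as pd

toy = pd.DataFrame({'Year': [2000, 2000], 'Month': [1, 1], 'Day': [1, 1],
                    'Hour': [0, 1], 'wlev': [3.0, 3.2]})
# pd.to_datetime can assemble datetimes from year/month/day/hour columns
date_index = pd.to_datetime(toy[['Year', 'Month', 'Day', 'Hour']])
toy = toy.set_index(pd.DatetimeIndex(date_index))
print(toy.index)
```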
Now we can resample at a monthly frequency and plot.
# Resample to monthly means (older pandas used resample('M', how='mean'))
data_monthly = data['wlev'].resample('M').mean()
data_monthly.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
# Docker Exercise 09
## Getting started with Docker Swarms
Make sure that Swarm is enabled on your Docker Desktop by typing `docker system info` and looking for the message `Swarm: active` (you might have to scroll up a little). If Swarm isn't running, simply type `docker swarm init` at a shell prompt to set it up.
Create the networks:
docker network create --driver overlay --subnet=172.10.1.0/24 ex09-frontend
docker network create --driver overlay --subnet=172.10.2.0/23 ex09-backend
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
### Save the MySQL configuration
Save the following to your `development.env` file.
MYSQL_USER=sys_admin
MYSQL_PASSWORD=sys_password
MYSQL_ROOT_PASSWORD=root_password
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
Create your Docker Swarm configuration
version: "3" networks: ex09-frontend: external: true ex09-backend: external: true services: ex09-db: image: mysql:8.0 command: --default-authentication-plugin=mysql_native_password ports: - "3306:3306" networks: - ex09-backend env_file: - ./development.env ex09-www: image: dockerjames85/php-mysqli-apache:1.1 ports: - "8080:80" networks: - ex09-backend - ex09-frontend depends_on: - ex09-db env_file: - ./development.env deploy: replicas: 5 resources: limits: cpus: "0.1" memory: 100M restart_policy: condition: on-failure ``` ### Deploy the stack
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack deploy -c php-mysqli-apache.yml php-mysqli-apache
### Verify the stack has been deployed
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack ls
### Verify all the containers have been deployed
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack ps php-mysqli-apache
### Verify the load balancers have all the replicas and mapped the ports
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker stack services php-mysqli-apache
### See what containers are on the manager node in the swarm
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
docker ps
### Verify that the stack is working correctly
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
# local node (master)
curl http://localhost:8080
### Destroy and remove the stack
_____no_output_____
Apache-2.0
doc0/Exercise09/Exercise09.ipynb
nsingh216/edu
specs
specs_df.head()
specs_df.shape
specs_df.describe()
specs_df['info'][0]
json.loads(specs_df['args'][0])
specs_df['info'][3]
json.loads(specs_df['args'][3])
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
train
train_df.head()
train_df.shape
train_df.describe()
train_df.event_id.nunique()
train_df.game_session.nunique()
train_df.timestamp.min()
train_df.timestamp.max()
train_df.installation_id.nunique()
train_df.event_count.nunique()
sns.distplot(train_df.event_count)
sns.distplot(np.log(train_df.event_count))
sns.distplot(train_df.game_time)
sns.distplot(np.log1p(train_df.game_time))
train_df.title.value_counts().plot(kind='bar')
sns.countplot(y='title', data=train_df, order=train_df.title.value_counts().index)
sns.countplot(x='type', data=train_df)
sns.countplot(x='world', data=train_df)
train_df.groupby(['title', 'type', 'world'])['event_id'].count().sort_values(ascending=False)
train_df.query('game_session=="901acc108f55a5a1" & event_code==4100')
train_df.query('game_session == "0848ef14a8dc6892"')
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
train labels
train_labels_df.head()
train_labels_df.shape
train_labels_df.game_session.nunique()
train_labels_df.installation_id.nunique()
train_labels_df.query('game_session == "0848ef14a8dc6892"')
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
test
test_df.head()
test_df.shape
test_df.event_id.nunique()
test_df.game_session.nunique()
test_df.installation_id.nunique()
test_df.title.unique()
len(test_df.query('~(title=="Bird Measurer (Assessment)") & event_code==4100'))
len(test_df.query('title=="Bird Measurer (Assessment)" & event_code==4110'))
test_df.query('installation_id == "00abaee7" & event_code==4100')
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
sample submission
sample_submission = pd.read_csv(DATA_DIR / 'sample_submission.csv')
sample_submission.head()
sample_submission.shape
_____no_output_____
MIT
notebooks/EDA.ipynb
wdy06/kaggle-data-science-bowl-2019
(Optional) Cancel existing runs
for run in exp.get_runs():
    print(run.id)
    if run.status == "Running":
        run.cancel()

from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException

cluster_name = "udacityAzureML"

try:
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing compute target')
except ComputeTargetException:
    print('Creating a new compute target...')
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', max_nodes=4)
    # create the cluster
    compute_target = ComputeTarget.create(ws, cluster_name, compute_config)

compute_target.wait_for_completion(show_output=True)
print(compute_target.get_status().serialize())

from azureml.widgets import RunDetails
from azureml.train.sklearn import SKLearn
from azureml.train.hyperdrive.run import PrimaryMetricGoal
from azureml.train.hyperdrive.policy import BanditPolicy
from azureml.train.hyperdrive.sampling import RandomParameterSampling
from azureml.train.hyperdrive.runconfig import HyperDriveConfig
from azureml.train.hyperdrive.parameter_expressions import choice, uniform
import os

ps = RandomParameterSampling(
    {
        '--C': choice(0.001, 0.01, 0.1, 1, 10, 100),
        '--max_iter': choice(50, 100, 200)
    }
)

# Specify a Policy
policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)

if "training" not in os.listdir():
    os.mkdir("./training")

# Create a SKLearn estimator for use with train.py
est = SKLearn(source_directory="./",
              compute_target=compute_target,
              vm_size='STANDARD_D2_V2',
              entry_script="train.py")

# Create a HyperDriveConfig using the estimator, hyperparameter sampler, and policy.
hyperdrive_config = HyperDriveConfig(hyperparameter_sampling=ps,
                                     primary_metric_name='Accuracy',
                                     primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                     policy=policy,
                                     estimator=est,
                                     max_total_runs=20,
                                     max_concurrent_runs=4)

# Submit your hyperdrive run to the experiment and show run details with the widget.
hyperdrive_run = exp.submit(hyperdrive_config)
hyperdrive_run.wait_for_completion(show_output=True)
assert(hyperdrive_run.get_status() == "Completed")

import joblib

best_run = hyperdrive_run.get_best_run_by_primary_metric()
print("Best run metrics :", best_run.get_metrics())
print("Best run details :", best_run.get_details())
print("Best run file names :", best_run.get_file_names())

model = best_run.register_model(model_name='best_model', model_path='outputs/model.joblib')

from azureml.data.dataset_factory import TabularDatasetFactory

# Create TabularDataset using TabularDatasetFactory
data_uri = "https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv"
ds = TabularDatasetFactory.from_delimited_files(path=data_uri)

from train import clean_data

# Use the clean_data function to clean your data.
x, y = clean_data(ds)

from azureml.train.automl import AutoMLConfig

# Set parameters for AutoMLConfig
# NOTE: DO NOT CHANGE THE experiment_timeout_minutes PARAMETER OR YOUR INSTANCE WILL TIME OUT.
# If you wish to run the experiment longer, you will need to run this notebook in your own
# Azure tenant, which will incur personal costs.
automl_config = AutoMLConfig(
    compute_target=compute_target,
    experiment_timeout_minutes=15,
    task='classification',
    primary_metric='accuracy',
    training_data=ds,
    label_column_name='y',
    enable_onnx_compatible_models=True,
    n_cross_validations=2)

# Submit your automl run
automl_run = exp.submit(automl_config, show_output=False)
automl_run.wait_for_completion()

# Retrieve and save your best automl model.
automl_best_run, automl_best_model = automl_run.get_output()
print("Best run metrics :", automl_best_run)
# print("Best run details :", automl_run.get_details())
# print("Best run file names :", best_run.get_file_names())

best_automl_model = automl_run.register_model(model_name='best_automl_model')

print(os.getcwd())

# Delete cluster
compute_target.delete()
_____no_output_____
MIT
udacity-project.ipynb
abhiojha8/Optimizing_ML_Pipeline_Azure
Extending LSTMs: LSTMs with Peepholes and GRUs
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
import csv
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Downloading Stories
Stories are automatically downloaded from https://www.cs.cmu.edu/~spok/grimmtmp/ if not already present on disk. The total size of the stories is around 500 KB. The dataset consists of 100 stories.
url = 'https://www.cs.cmu.edu/~spok/grimmtmp/'

# Create a directory if needed
dir_name = 'stories'
if not os.path.exists(dir_name):
    os.mkdir(dir_name)

def maybe_download(filename):
    """Download a file if not present"""
    print('Downloading file: ', dir_name + os.sep + filename)
    if not os.path.exists(dir_name + os.sep + filename):
        filename, _ = urlretrieve(url + filename, dir_name + os.sep + filename)
    else:
        print('File ', filename, ' already exists.')
    return filename

num_files = 100
filenames = [format(i, '03d') + '.txt' for i in range(1, num_files + 1)]

for fn in filenames:
    maybe_download(fn)

for i in range(len(filenames)):
    file_exists = os.path.isfile(os.path.join(dir_name, filenames[i]))
    assert file_exists
print('%d files found.' % len(filenames))
100 files found.
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Reading data
Data will be stored in a list of lists, where each inner list represents a document and each document is a list of words. We will then break the text into bigrams.
def read_data(filename):
    with open(filename) as f:
        data = tf.compat.as_str(f.read())
        # make all the text lowercase
        data = data.lower()
        data = list(data)
    return data

documents = []
global documents

for i in range(num_files):
    print('\nProcessing file %s' % os.path.join(dir_name, filenames[i]))
    chars = read_data(os.path.join(dir_name, filenames[i]))

    # Breaking the text into bigrams
    two_grams = [''.join(chars[ch_i:ch_i+2]) for ch_i in range(0, len(chars)-2, 2)]

    # Creates a list of lists with the bigrams (outer loop different stories)
    documents.append(two_grams)
    print('Data size (Characters) (Document %d) %d' % (i, len(two_grams)))
    print('Sample string (Document %d) %s' % (i, two_grams[:50]))
Processing file stories\001.txt Data size (Characters) (Document 0) 3667 Sample string (Document 0) ['in', ' o', 'ld', 'en', ' t', 'im', 'es', ' w', 'he', 'n ', 'wi', 'sh', 'in', 'g ', 'st', 'il', 'l ', 'he', 'lp', 'ed', ' o', 'ne', ', ', 'th', 'er', 'e ', 'li', 've', 'd ', 'a ', 'ki', 'ng', '\nw', 'ho', 'se', ' d', 'au', 'gh', 'te', 'rs', ' w', 'er', 'e ', 'al', 'l ', 'be', 'au', 'ti', 'fu', 'l,'] Processing file stories\002.txt Data size (Characters) (Document 1) 4928 Sample string (Document 1) ['ha', 'rd', ' b', 'y ', 'a ', 'gr', 'ea', 't ', 'fo', 're', 'st', ' d', 'we', 'lt', ' a', ' w', 'oo', 'd-', 'cu', 'tt', 'er', ' w', 'it', 'h ', 'hi', 's ', 'wi', 'fe', ', ', 'wh', 'o ', 'ha', 'd ', 'an', '\no', 'nl', 'y ', 'ch', 'il', 'd,', ' a', ' l', 'it', 'tl', 'e ', 'gi', 'rl', ' t', 'hr', 'ee'] Processing file stories\003.txt Data size (Characters) (Document 2) 9745 Sample string (Document 2) ['a ', 'ce', 'rt', 'ai', 'n ', 'fa', 'th', 'er', ' h', 'ad', ' t', 'wo', ' s', 'on', 's,', ' t', 'he', ' e', 'ld', 'er', ' o', 'f ', 'wh', 'om', ' w', 'as', ' s', 'ma', 'rt', ' a', 'nd', '\ns', 'en', 'si', 'bl', 'e,', ' a', 'nd', ' c', 'ou', 'ld', ' d', 'o ', 'ev', 'er', 'yt', 'hi', 'ng', ', ', 'bu'] Processing file stories\004.txt Data size (Characters) (Document 3) 2852 Sample string (Document 3) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' u', 'po', 'n ', 'a ', 'ti', 'me', ' a', 'n ', 'ol', 'd ', 'go', 'at', ' w', 'ho', ' h', 'ad', ' s', 'ev', 'en', ' l', 'it', 'tl', 'e ', 'ki', 'ds', ', ', 'an', 'd\n', 'lo', 've', 'd ', 'th', 'em', ' w', 'it', 'h ', 'al', 'l ', 'th', 'e ', 'lo', 've', ' o'] Processing file stories\005.txt Data size (Characters) (Document 4) 8189 Sample string (Document 4) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' u', 'po', 'n ', 'a ', 'ti', 'me', ' a', 'n ', 'ol', 'd ', 'ki', 'ng', ' w', 'ho', ' w', 'as', ' i', 'll', ' a', 'nd', ' t', 'ho', 'ug', 'ht', ' t', 'o\n', 'hi', 'ms', 'el', 'f ', "'i", ' a', 'm ', 'ly', 'in', 'g ', 'on', ' w', 'ha', 't ', 'mu', 'st', ' b'] Processing file stories\006.txt Data size (Characters) (Document 5) 4369 Sample string (Document 5) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' p', 'ea', 'sa', 'nt', ' w', 'ho', ' h', 'ad', ' d', 'ri', 've', 'n ', 'hi', 's ', 'co', 'w ', 'to', ' t', 'he', ' f', 'ai', 'r,', ' a', 'nd', ' s', 'ol', 'd\n', 'he', 'r ', 'fo', 'r ', 'se', 've', 'n ', 'ta', 'le', 'rs', '. 
', ' o', 'n ', 'th', 'e '] Processing file stories\007.txt Data size (Characters) (Document 6) 5216 Sample string (Document 6) ['th', 'er', 'e ', 'we', 're', ' o', 'nc', 'e ', 'up', 'on', ' a', ' t', 'im', 'e ', 'a ', 'ki', 'ng', ' a', 'nd', ' a', ' q', 'ue', 'en', ' w', 'ho', ' l', 'iv', 'ed', '\nh', 'ap', 'pi', 'ly', ' t', 'og', 'et', 'he', 'r ', 'an', 'd ', 'ha', 'd ', 'tw', 'el', 've', ' c', 'hi', 'ld', 're', 'n,', ' b'] Processing file stories\008.txt Data size (Characters) (Document 7) 6097 Sample string (Document 7) ['li', 'tt', 'le', ' b', 'ro', 'th', 'er', ' t', 'oo', 'k ', 'hi', 's ', 'li', 'tt', 'le', ' s', 'is', 'te', 'r ', 'by', ' t', 'he', ' h', 'an', 'd ', 'an', 'd ', 'sa', 'id', ', ', 'si', 'nc', 'e\n', 'ou', 'r ', 'mo', 'th', 'er', ' d', 'ie', 'd ', 'we', ' h', 'av', 'e ', 'ha', 'd ', 'no', ' h', 'ap'] Processing file stories\009.txt Data size (Characters) (Document 8) 3699 Sample string (Document 8) ['th', 'er', 'e ', 'we', 're', ' o', 'nc', 'e ', 'a ', 'ma', 'n ', 'an', 'd ', 'a ', 'wo', 'ma', 'n ', 'wh', 'o ', 'ha', 'd ', 'lo', 'ng', ' i', 'n ', 'va', 'in', '\nw', 'is', 'he', 'd ', 'fo', 'r ', 'a ', 'ch', 'il', 'd.', ' ', 'at', ' l', 'en', 'gt', 'h ', 'th', 'e ', 'wo', 'ma', 'n ', 'ho', 'pe'] Processing file stories\010.txt Data size (Characters) (Document 9) 5268 Sample string (Document 9) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' m', 'an', ' w', 'ho', 'se', ' w', 'if', 'e ', 'di', 'ed', ', ', 'an', 'd ', 'a ', 'wo', 'ma', 'n ', 'wh', 'os', 'e ', 'hu', 'sb', 'an', 'd\n', 'di', 'ed', ', ', 'an', 'd ', 'th', 'e ', 'ma', 'n ', 'ha', 'd ', 'a ', 'da', 'ug', 'ht', 'er', ', ', 'an'] Processing file stories\011.txt Data size (Characters) (Document 10) 2377 Sample string (Document 10) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' g', 'ir', 'l ', 'wh', 'o ', 'wa', 's ', 'id', 'le', ' a', 'nd', ' w', 'ou', 'ld', ' n', 'ot', ' s', 'pi', 'n,', ' a', 'nd', '\nl', 'et', ' h', 'er', ' m', 'ot', 'he', 'r ', 'sa', 'y ', 'wh', 'at', ' s', 'he', ' w', 'ou', 'ld', ', ', 'sh', 'e ', 'co'] Processing file stories\012.txt Data size (Characters) (Document 11) 7695 Sample string (Document 11) ['ha', 'rd', ' b', 'y ', 'a ', 'gr', 'ea', 't ', 'fo', 're', 'st', ' d', 'we', 'lt', ' a', ' p', 'oo', 'r ', 'wo', 'od', '-c', 'ut', 'te', 'r ', 'wi', 'th', ' h', 'is', ' w', 'if', 'e\n', 'an', 'd ', 'hi', 's ', 'tw', 'o ', 'ch', 'il', 'dr', 'en', '. ', ' t', 'he', ' b', 'oy', ' w', 'as', ' c', 'al'] Processing file stories\013.txt Data size (Characters) (Document 12) 3665 Sample string (Document 12) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' o', 'n ', 'a ', 'ti', 'me', ' a', ' p', 'oo', 'r ', 'ma', 'n,', ' w', 'ho', ' c', 'ou', 'ld', ' n', 'o ', 'lo', 'ng', 'er', '\ns', 'up', 'po', 'rt', ' h', 'is', ' o', 'nl', 'y ', 'so', 'n.', ' ', 'th', 'en', ' s', 'ai', 'd ', 'th', 'e ', 'so', 'n,', ' d'] Processing file stories\014.txt Data size (Characters) (Document 13) 4178 Sample string (Document 13) ['a ', 'lo', 'ng', ' t', 'im', 'e ', 'ag', 'o ', 'th', 'er', 'e ', 'li', 've', 'd ', 'a ', 'ki', 'ng', ' w', 'ho', ' w', 'as', ' f', 'am', 'ed', ' f', 'or', ' h', 'is', ' w', 'is', 'do', 'm\n', 'th', 'ro', 'ug', 'h ', 'al', 'l ', 'th', 'e ', 'la', 'nd', '. 
', ' n', 'ot', 'hi', 'ng', ' w', 'as', ' h'] Processing file stories\015.txt Data size (Characters) (Document 14) 8674 Sample string (Document 14) ['on', 'e ', 'su', 'mm', 'er', "'s", ' m', 'or', 'ni', 'ng', ' a', ' l', 'it', 'tl', 'e ', 'ta', 'il', 'or', ' w', 'as', ' s', 'it', 'ti', 'ng', ' o', 'n ', 'hi', 's ', 'ta', 'bl', 'e\n', 'by', ' t', 'he', ' w', 'in', 'do', 'w,', ' h', 'e ', 'wa', 's ', 'in', ' g', 'oo', 'd ', 'sp', 'ir', 'it', 's,'] Processing file stories\016.txt Data size (Characters) (Document 15) 7018 Sample string (Document 15) ['\tc', 'in', 'de', 're', 'll', 'a\n', 'th', 'e ', 'wi', 'fe', ' o', 'f ', 'a ', 'ri', 'ch', ' m', 'an', ' f', 'el', 'l ', 'si', 'ck', ', ', 'an', 'd ', 'as', ' s', 'he', ' f', 'el', 't ', 'th', 'at', ' h', 'er', ' e', 'nd', '\nw', 'as', ' d', 'ra', 'wi', 'ng', ' n', 'ea', 'r,', ' s', 'he', ' c', 'al'] Processing file stories\017.txt Data size (Characters) (Document 16) 3039 Sample string (Document 16) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' k', 'in', "g'", 's ', 'so', 'n ', 'wh', 'o ', 'wa', 's ', 'se', 'iz', 'ed', ' w', 'it', 'h ', 'a ', 'de', 'si', 're', ' t', 'o ', 'tr', 'av', 'el', '\na', 'bo', 'ut', ' t', 'he', ' w', 'or', 'ld', ', ', 'an', 'd ', 'to', 'ok', ' n', 'o ', 'on', 'e '] Processing file stories\018.txt Data size (Characters) (Document 17) 3020 Sample string (Document 17) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' w', 'id', 'ow', ' w', 'ho', ' h', 'ad', ' t', 'wo', ' d', 'au', 'gh', 'te', 'rs', ' -', ' o', 'ne', ' o', 'f\n', 'wh', 'om', ' w', 'as', ' p', 're', 'tt', 'y ', 'an', 'd ', 'in', 'du', 'st', 'ri', 'ou', 's,', ' w', 'hi', 'ls', 't ', 'th', 'e ', 'ot'] Processing file stories\019.txt Data size (Characters) (Document 18) 2465 Sample string (Document 18) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' m', 'an', ' w', 'ho', ' h', 'ad', ' s', 'ev', 'en', ' s', 'on', 's,', ' a', 'nd', ' s', 'ti', 'll', ' h', 'e ', 'ha', 'd\n', 'no', ' d', 'au', 'gh', 'te', 'r,', ' h', 'ow', 'ev', 'er', ' m', 'uc', 'h ', 'he', ' w', 'is', 'he', 'd ', 'fo', 'r ', 'on'] Processing file stories\020.txt Data size (Characters) (Document 19) 3703 Sample string (Document 19) ['\tl', 'it', 'tl', 'e ', 're', 'd-', 'ca', 'p\n', '\no', 'nc', 'e ', 'up', 'on', ' a', ' t', 'im', 'e ', 'th', 'er', 'e ', 'wa', 's ', 'a ', 'de', 'ar', ' l', 'it', 'tl', 'e ', 'gi', 'rl', ' w', 'ho', ' w', 'as', ' l', 'ov', 'ed', '\nb', 'y ', 'ev', 'er', 'y ', 'on', 'e ', 'wh', 'o ', 'lo', 'ok', 'ed'] Processing file stories\021.txt Data size (Characters) (Document 20) 1924 Sample string (Document 20) ['in', ' a', ' c', 'er', 'ta', 'in', ' c', 'ou', 'nt', 'ry', ' t', 'he', 're', ' w', 'as', ' o', 'nc', 'e ', 'gr', 'ea', 't ', 'la', 'me', 'nt', 'at', 'io', 'n ', 'ov', 'er', ' a', '\nw', 'il', 'd ', 'bo', 'ar', ' t', 'ha', 't ', 'la', 'id', ' w', 'as', 'te', ' t', 'he', ' f', 'ar', 'me', "r'", 's '] Processing file stories\022.txt Data size (Characters) (Document 21) 6561 Sample string (Document 21) ['th', 'er', 'e ', 'wa', 's ', 'on', 'ce', ' a', ' p', 'oo', 'r ', 'wo', 'ma', 'n ', 'wh', 'o ', 'ga', 've', ' b', 'ir', 'th', ' t', 'o ', 'a ', 'li', 'tt', 'le', ' s', 'on', ',\n', 'an', 'd ', 'as', ' h', 'e ', 'ca', 'me', ' i', 'nt', 'o ', 'th', 'e ', 'wo', 'rl', 'd ', 'wi', 'th', ' a', ' c', 'au'] Processing file stories\023.txt
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Building the Dictionaries (Bigrams)
Builds the following. To understand each of these elements, let us also assume the text "I like to go to school".
* `dictionary`: maps a string word to an ID (e.g. {I:0, like:1, to:2, go:3, school:4})
* `reverse_dictionary`: maps an ID to a string word (e.g. {0:I, 1:like, 2:to, 3:go, 4:school})
* `count`: list of (word, frequency) elements (e.g. [(I,1), (like,1), (to,2), (go,1), (school,1)])
* `data`: contains the text we read, where string words are replaced with word IDs (e.g. [0, 1, 2, 3, 2, 4])
It also introduces an additional special token `UNK` to denote words that are too rare to be useful.
def build_dataset(documents):
    chars = []
    # This is going to be a list of lists
    # Where the outer list denotes each document
    # and the inner lists denote words in a given document
    data_list = []

    for d in documents:
        chars.extend(d)
    print('%d Characters found.' % len(chars))

    count = []
    # Get the bigrams sorted by their frequency (highest comes first)
    count.extend(collections.Counter(chars).most_common())

    # Create an ID for each bigram by giving the current length of the dictionary
    # and adding that item to the dictionary
    # Start with 'UNK' that is assigned to too rare words
    dictionary = dict({'UNK': 0})
    for char, c in count:
        # Only add a bigram to the dictionary if its frequency is more than 10
        if c > 10:
            dictionary[char] = len(dictionary)

    unk_count = 0
    # Traverse through all the text we have
    # to replace each string word with the ID of the word
    for d in documents:
        data = list()
        for char in d:
            # If word is in the dictionary use the word ID,
            # else use the ID of the special token "UNK"
            if char in dictionary:
                index = dictionary[char]
            else:
                index = dictionary['UNK']
                unk_count += 1
            data.append(index)
        data_list.append(data)

    reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data_list, count, dictionary, reverse_dictionary

global data_list, count, dictionary, reverse_dictionary, vocabulary_size

# Print some statistics about data
data_list, count, dictionary, reverse_dictionary = build_dataset(documents)
print('Most common words (+UNK)', count[:5])
print('Least common words (+UNK)', count[-15:])
print('Sample data', data_list[0][:10])
print('Sample data', data_list[1][:10])
print('Vocabulary: ', len(dictionary))
vocabulary_size = len(dictionary)
del documents  # To reduce memory.
449177 Characters found. Most common words (+UNK) [('e ', 15229), ('he', 15164), (' t', 13443), ('th', 13076), ('d ', 10687)] Least common words (+UNK) [('rz', 1), ('zi', 1), ('i?', 1), ('\ts', 1), ('".', 1), ('hc', 1), ('sd', 1), ('z ', 1), ('m?', 1), ('\tc', 1), ('oz', 1), ('iq', 1), ('pw', 1), ('tz', 1), ('yr', 1)] Sample data [15, 28, 86, 23, 3, 95, 74, 11, 2, 16] Sample data [22, 156, 25, 37, 82, 185, 43, 9, 90, 19] Vocabulary: 544
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
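A toy, self-contained sketch of the same ID-mapping idea applied to the example sentence from the text (omitting the frequency threshold the real `build_dataset` uses):

```python
import collections

toy_words = "I like to go to school".split()
count = collections.Counter(toy_words).most_common()   # e.g. [('to', 2), ('I', 1), ...]
dictionary = {'UNK': 0}
for word, _ in count:
    dictionary[word] = len(dictionary)
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
# Replace each word with its ID (unknown words would map to UNK)
data = [dictionary.get(w, dictionary['UNK']) for w in toy_words]
print(dictionary, data)
```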
### Generating Batches of Data
The following object generates a batch of data which will be used to train the RNN. More specifically, the generator breaks a given sequence of words into `batch_size` segments. We also maintain a cursor for each segment. So whenever we create a batch of data, we sample one item from each segment and update the cursor of each segment.
class DataGeneratorOHE(object):

    def __init__(self, text, batch_size, num_unroll):
        # Text where a bigram is denoted by its ID
        self._text = text
        # Number of bigrams in the text
        self._text_size = len(self._text)
        # Number of datapoints in a batch of data
        self._batch_size = batch_size
        # Num unroll is the number of steps we unroll the RNN in a single training step
        # This relates to the truncated backpropagation we discuss in Chapter 6 text
        self._num_unroll = num_unroll
        # We break the text in to several segments and the batch of data is sampled by
        # sampling a single item from a single segment
        self._segments = self._text_size // self._batch_size
        self._cursor = [offset * self._segments for offset in range(self._batch_size)]

    def next_batch(self):
        '''
        Generates a single batch of data
        '''
        # Train inputs (one-hot-encoded) and train outputs (one-hot-encoded)
        batch_data = np.zeros((self._batch_size, vocabulary_size), dtype=np.float32)
        batch_labels = np.zeros((self._batch_size, vocabulary_size), dtype=np.float32)

        # Fill in the batch datapoint by datapoint
        for b in range(self._batch_size):
            # If the cursor of a given segment exceeds the segment length
            # we reset the cursor back to the beginning of that segment
            if self._cursor[b] + 1 >= self._text_size:
                self._cursor[b] = b * self._segments

            # Add the text at the cursor as the input
            batch_data[b, self._text[self._cursor[b]]] = 1.0
            # Add the bigram at the next position as the label to be predicted
            batch_labels[b, self._text[self._cursor[b] + 1]] = 1.0
            # Update the cursor
            self._cursor[b] = (self._cursor[b] + 1) % self._text_size

        return batch_data, batch_labels

    def unroll_batches(self):
        '''
        This produces a list of num_unroll batches
        as required by a single step of training of the RNN
        '''
        unroll_data, unroll_labels = [], []
        for ui in range(self._num_unroll):
            data, labels = self.next_batch()
            unroll_data.append(data)
            unroll_labels.append(labels)
        return unroll_data, unroll_labels

    def reset_indices(self):
        '''
        Used to reset all the cursors if needed
        '''
        self._cursor = [offset * self._segments for offset in range(self._batch_size)]

# Running a tiny set to see if things are correct
dg = DataGeneratorOHE(data_list[0][25:50], 5, 5)
u_data, u_labels = dg.unroll_batches()

# Iterate through each data batch in the unrolled set of batches
for ui, (dat, lbl) in enumerate(zip(u_data, u_labels)):
    print('\n\nUnrolled index %d' % ui)
    dat_ind = np.argmax(dat, axis=1)
    lbl_ind = np.argmax(lbl, axis=1)
    print('\tInputs:')
    for sing_dat in dat_ind:
        print('\t%s (%d)' % (reverse_dictionary[sing_dat], sing_dat), end=", ")
    print('\n\tOutput:')
    for sing_lbl in lbl_ind:
        print('\t%s (%d)' % (reverse_dictionary[sing_lbl], sing_lbl), end=", ")
Unrolled index 0 Inputs: e (1), ki (131), d (48), w (11), be (70), Output: li (98), ng (33), au (195), er (14), au (195), Unrolled index 1 Inputs: li (98), ng (33), au (195), er (14), au (195), Output: ve (41), w (169), gh (106), e (1), ti (112), Unrolled index 2 Inputs: ve (41), w (169), gh (106), e (1), ti (112), Output: d (5), ho (62), te (61), al (84), fu (229), Unrolled index 3 Inputs: d (5), ho (62), te (61), al (84), fu (229), Output: a (82), se (58), rs (137), l (57), l, (257), Unrolled index 4 Inputs: a (82), se (58), rs (137), l (57), be (70), Output: ki (131), d (48), w (11), be (70), au (195),
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
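A tiny self-contained illustration of the segment/cursor bookkeeping described above (the numbers here are made up for clarity):

```python
# A "text" of 12 IDs split across a batch of 3 gives segments of length 4,
# so the three cursors start at positions 0, 4 and 8 and each advances one step per batch.
text_size, batch_size = 12, 3
segments = text_size // batch_size
cursors = [offset * segments for offset in range(batch_size)]
print(segments, cursors)   # -> 4 [0, 4, 8]
```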
### Defining the LSTM, LSTM with Peepholes and GRUs
* An LSTM has 5 main components: cell state, hidden state, input gate, forget gate, output gate
* An LSTM with peephole connections introduces several new sets of weights that connect the cell state to the gates
* A GRU has 3 main components: hidden state, reset gate and update gate

### Defining hyperparameters
Here we define several hyperparameters, very similar to the ones we defined in Chapter 6. Additionally, we use dropout, a technique that helps to avoid overfitting.
num_nodes = 128
batch_size = 64
num_unrollings = 50
dropout = 0.2

# Use this in the CSV filename when saving
# when using dropout
filename_extension = ''
if dropout > 0.0:
    filename_extension = '_dropout'
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
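A minimal sketch of how the dropout value is used later in the model, assuming TensorFlow 1.x as in this chapter: dropout = 0.2 becomes keep_prob = 0.8, and the surviving activations are scaled by 1/keep_prob during training.

```python
import numpy as np
import tensorflow as tf

# Apply dropout to a small constant tensor (TF 1.x API)
x = tf.constant(np.ones((2, 4), dtype=np.float32))
dropped = tf.nn.dropout(x, keep_prob=1.0 - 0.2)
with tf.Session() as sess:
    print(sess.run(dropped))   # zeros appear at random; kept values are scaled to 1.25
```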
### Defining Inputs and Outputs
In the code we define three different types of inputs:
* Training inputs (the stories we downloaded) (batch_size > 1, with unrolling)
* Validation inputs (an unseen validation dataset) (batch_size = 1, no unrolling)
* Test input (the new story we are going to generate) (batch_size = 1, no unrolling)
tf.reset_default_graph()

# Training Input data.
train_inputs, train_labels = [], []

# Defining unrolled training inputs
for ui in range(num_unrollings):
    train_inputs.append(tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size],
                                       name='train_inputs_%d' % ui))
    train_labels.append(tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size],
                                       name='train_labels_%d' % ui))

valid_inputs = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
valid_labels = tf.placeholder(tf.float32, shape=[1, vocabulary_size])

# Text generation: batch 1, no unrolling.
test_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Defining Model Parameters and Cell Computation
We define parameters and cell computation functions for all the different variants (LSTM, LSTM with peepholes and GRUs). **Make sure you only run a single cell within this section (either the LSTM, the LSTM with peepholes, or the GRU).**

### Standard LSTM
Here we define the parameters and the cell computation function for a standard LSTM.
# Input gate (i_t) - How much memory to write to cell state
# Connects the current input to the input gate
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.02))
# Connects the previous hidden state to the input gate
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.02))
# Bias of the input gate
ib = tf.Variable(tf.random_uniform([1, num_nodes], -0.02, 0.02))

# Forget gate (f_t) - How much memory to discard from cell state
# Connects the current input to the forget gate
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.02))
# Connects the previous hidden state to the forget gate
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.02))
# Bias of the forget gate
fb = tf.Variable(tf.random_uniform([1, num_nodes], -0.02, 0.02))

# Candidate value (c~_t) - Used to compute the current cell state
# Connects the current input to the candidate
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.02))
# Connects the previous hidden state to the candidate
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.02))
# Bias of the candidate
cb = tf.Variable(tf.random_uniform([1, num_nodes], -0.02, 0.02))

# Output gate - How much memory to output from the cell state
# Connects the current input to the output gate
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.02))
# Connects the previous hidden state to the output gate
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.02))
# Bias of the output gate
ob = tf.Variable(tf.random_uniform([1, num_nodes], -0.02, 0.02))

# Softmax Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], stddev=0.02))
b = tf.Variable(tf.random_uniform([vocabulary_size], -0.02, 0.02))

# Variables saving state across unrollings.
# Hidden state
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
# Cell state
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)

saved_valid_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_valid_state = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)

# Same variables for testing phase
saved_test_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_test_state = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)

algorithm = 'lstm'
filename_to_save = algorithm + filename_extension + '.csv'

# Definition of the cell computation.
def lstm_cell(i, o, state):
    """Create an LSTM cell"""
    input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
    forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
    update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
    state = forget_gate * state + input_gate * tf.tanh(update)
    output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)
    return output_gate * tf.tanh(state), state
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
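In equation form, the `lstm_cell` function above computes, for input $x_t$, previous hidden state $h_{t-1}$ and previous cell state $c_{t-1}$ (with $\odot$ denoting element-wise multiplication, and weight names mirroring the code variables):

$$
\begin{aligned}
i_t &= \sigma(x_t W_{ix} + h_{t-1} W_{im} + b_i)\\
f_t &= \sigma(x_t W_{fx} + h_{t-1} W_{fm} + b_f)\\
\tilde{c}_t &= \tanh(x_t W_{cx} + h_{t-1} W_{cm} + b_c)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t\\
o_t &= \sigma(x_t W_{ox} + h_{t-1} W_{om} + b_o)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$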
### LSTMs with Peephole Connections
We define the parameters and cell computation for an LSTM with peepholes. Note that we are using diagonal peephole connections (for more details refer to the text).
# Parameters:
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
ic = tf.Variable(tf.truncated_normal([1, num_nodes], stddev=0.01))
ib = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Forget gate: input, previous output, and bias.
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
fc = tf.Variable(tf.truncated_normal([1, num_nodes], stddev=0.01))
fb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Memory cell: input, state and bias.
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
cb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Output gate: input, previous output, and bias.
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
oc = tf.Variable(tf.truncated_normal([1, num_nodes], stddev=0.01))
ob = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Softmax Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], stddev=0.01))
b = tf.Variable(tf.random_uniform([vocabulary_size], 0.0, 0.01))

# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_valid_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_valid_state = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_test_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_test_state = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)

algorithm = 'lstm_peephole'
filename_to_save = algorithm + filename_extension + '.csv'

# Definition of the cell computation.
def lstm_with_peephole_cell(i, o, state):
    '''
    LSTM with peephole connections
    Our implementation for peepholes is based on
    https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/43905.pdf
    '''
    input_gate = tf.sigmoid(tf.matmul(i, ix) + state * ic + tf.matmul(o, im) + ib)
    forget_gate = tf.sigmoid(tf.matmul(i, fx) + state * fc + tf.matmul(o, fm) + fb)
    update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
    state = forget_gate * state + input_gate * tf.tanh(update)
    output_gate = tf.sigmoid(tf.matmul(i, ox) + state * oc + tf.matmul(o, om) + ob)
    return output_gate * tf.tanh(state), state
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
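The only change relative to the standard LSTM above is the diagonal (element-wise) peephole terms, the products with `ic`, `fc` and `oc` in the code; note that the output gate peeks at the already-updated cell state $c_t$:

$$
\begin{aligned}
i_t &= \sigma(x_t W_{ix} + c_{t-1} \odot w_{ic} + h_{t-1} W_{im} + b_i)\\
f_t &= \sigma(x_t W_{fx} + c_{t-1} \odot w_{fc} + h_{t-1} W_{fm} + b_f)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(x_t W_{cx} + h_{t-1} W_{cm} + b_c)\\
o_t &= \sigma(x_t W_{ox} + c_t \odot w_{oc} + h_{t-1} W_{om} + b_o)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$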
### Gated Recurrent Units (GRUs)
Finally we define the parameters and cell computations for the GRU cell.
# Parameters:
# Reset gate: input, previous output, and bias.
rx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
rh = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
rb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Hidden State: input, previous output, and bias.
hx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
hh = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
hb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Update gate: input, previous output, and bias.
zx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], stddev=0.01))
zh = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], stddev=0.01))
zb = tf.Variable(tf.random_uniform([1, num_nodes], 0.0, 0.01))
# Softmax Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], stddev=0.01))
b = tf.Variable(tf.random_uniform([vocabulary_size], 0.0, 0.01))

# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_valid_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)
saved_test_output = tf.Variable(tf.zeros([1, num_nodes]), trainable=False)

algorithm = 'gru'
filename_to_save = algorithm + filename_extension + '.csv'

# Definition of the cell computation.
def gru_cell(i, o):
    """Create a GRU cell."""
    reset_gate = tf.sigmoid(tf.matmul(i, rx) + tf.matmul(o, rh) + rb)
    h_tilde = tf.tanh(tf.matmul(i, hx) + tf.matmul(reset_gate * o, hh) + hb)
    z = tf.sigmoid(tf.matmul(i, zx) + tf.matmul(o, zh) + zb)
    h = (1 - z) * o + z * h_tilde
    return h
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
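Written out, the `gru_cell` above computes a reset gate, a candidate hidden state and an update gate, and then interpolates between the old and candidate hidden states:

$$
\begin{aligned}
r_t &= \sigma(x_t W_{rx} + h_{t-1} W_{rh} + b_r)\\
\tilde{h}_t &= \tanh(x_t W_{hx} + (r_t \odot h_{t-1}) W_{hh} + b_h)\\
z_t &= \sigma(x_t W_{zx} + h_{t-1} W_{zh} + b_z)\\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
$$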
### Defining LSTM/GRU/LSTM-Peephole Computations
Here we first use the concise cell function defined above to build the training-time computation, and then the validation and test-time inference logic.
# =========================================================
# Training related inference logic

# Keeps the calculated state outputs in all the unrollings
# Used to calculate loss
outputs = list()

# These two python variables are iteratively updated
# at each step of unrolling
output = saved_output
if algorithm == 'lstm' or algorithm == 'lstm_peephole':
    state = saved_state

# Compute the hidden state (output) and cell state (state)
# recursively for all the steps in unrolling
# Note: there is no cell state for GRUs
for i in train_inputs:
    if algorithm == 'lstm':
        output, state = lstm_cell(i, output, state)
        train_state_update_ops = [saved_output.assign(output),
                                  saved_state.assign(state)]
    elif algorithm == 'lstm_peephole':
        output, state = lstm_with_peephole_cell(i, output, state)
        train_state_update_ops = [saved_output.assign(output),
                                  saved_state.assign(state)]
    elif algorithm == 'gru':
        output = gru_cell(i, output)
        train_state_update_ops = [saved_output.assign(output)]

    output = tf.nn.dropout(output, keep_prob=1.0 - dropout)

    # Append each computed output value
    outputs.append(output)

# calculate the score values
logits = tf.matmul(tf.concat(axis=0, values=outputs), w) + b

# Compute predictions.
train_prediction = tf.nn.softmax(logits)

# Compute training perplexity
train_perplexity_without_exp = tf.reduce_sum(
    tf.concat(train_labels, 0) * -tf.log(tf.concat(train_prediction, 0) + 1e-10)
) / (num_unrollings * batch_size)

# ========================================================================
# Validation phase related inference logic

valid_output = saved_valid_output
if algorithm == 'lstm' or algorithm == 'lstm_peephole':
    valid_state = saved_valid_state

# Compute the LSTM cell output for validation data
if algorithm == 'lstm':
    valid_output, valid_state = lstm_cell(
        valid_inputs, saved_valid_output, saved_valid_state)
    valid_state_update_ops = [saved_valid_output.assign(valid_output),
                              saved_valid_state.assign(valid_state)]
elif algorithm == 'lstm_peephole':
    valid_output, valid_state = lstm_with_peephole_cell(
        valid_inputs, saved_valid_output, saved_valid_state)
    valid_state_update_ops = [saved_valid_output.assign(valid_output),
                              saved_valid_state.assign(valid_state)]
elif algorithm == 'gru':
    valid_output = gru_cell(valid_inputs, valid_output)
    valid_state_update_ops = [saved_valid_output.assign(valid_output)]

valid_logits = tf.nn.xw_plus_b(valid_output, w, b)

# Make sure that the state variables are updated
# before moving on to the next iteration of generation
with tf.control_dependencies(valid_state_update_ops):
    valid_prediction = tf.nn.softmax(valid_logits)

# Compute validation perplexity
valid_perplexity_without_exp = tf.reduce_sum(valid_labels * -tf.log(valid_prediction + 1e-10))

# ========================================================================
# Testing phase related inference logic

# Compute the LSTM cell output for testing data
if algorithm == 'lstm':
    test_output, test_state = lstm_cell(test_input, saved_test_output, saved_test_state)
    test_state_update_ops = [saved_test_output.assign(test_output),
                             saved_test_state.assign(test_state)]
elif algorithm == 'lstm_peephole':
    test_output, test_state = lstm_with_peephole_cell(test_input, saved_test_output, saved_test_state)
    test_state_update_ops = [saved_test_output.assign(test_output),
                             saved_test_state.assign(test_state)]
elif algorithm == 'gru':
    test_output = gru_cell(test_input, saved_test_output)
    test_state_update_ops = [saved_test_output.assign(test_output)]

# Make sure that the state variables are updated
# before moving on to the next iteration of generation
with tf.control_dependencies(test_state_update_ops):
    test_prediction = tf.nn.softmax(tf.nn.xw_plus_b(test_output, w, b))
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Calculating LSTM Loss
We calculate the training loss of the LSTM here. It's a typical cross-entropy loss calculated over all the scores we obtained for training data (`loss`).
# Before calculating the training loss,
# save the hidden state and the cell state to
# their respective TensorFlow variables
with tf.control_dependencies(train_state_update_ops):
    # Calculate the training loss by
    # concatenating the results from all the unrolled time steps
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(
            logits=logits, labels=tf.concat(axis=0, values=train_labels)))
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
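Written out, with $T$ the number of unrollings, $B$ the batch size, $V$ the vocabulary, $y$ the one-hot labels and $\hat{y}$ the softmax output, the averaged cross-entropy computed above is:

$$
\mathcal{L} = -\frac{1}{T B} \sum_{t=1}^{T} \sum_{b=1}^{B} \sum_{v \in V} y_{t,b,v} \,\log \hat{y}_{t,b,v}
$$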
### Operations for Resetting Hidden States
Sometimes the state variables need to be reset (e.g. when starting predictions at the beginning of a new epoch). Since the GRU does not have a cell state, the reset operations are defined conditionally on the chosen algorithm.
if algorithm == 'lstm' or algorithm == 'lstm_peephole':
    # Reset train state
    reset_train_state = tf.group(tf.assign(saved_state, tf.zeros([batch_size, num_nodes])),
                                 tf.assign(saved_output, tf.zeros([batch_size, num_nodes])))
    reset_valid_state = tf.group(tf.assign(saved_valid_state, tf.zeros([1, num_nodes])),
                                 tf.assign(saved_valid_output, tf.zeros([1, num_nodes])))
    # Reset test state. We use imputations in the test state reset
    reset_test_state = tf.group(
        saved_test_output.assign(tf.random_normal([1, num_nodes], stddev=0.01)),
        saved_test_state.assign(tf.random_normal([1, num_nodes], stddev=0.01)))

elif algorithm == 'gru':
    # Reset train state
    reset_train_state = [tf.assign(saved_output, tf.zeros([batch_size, num_nodes]))]
    # Reset valid state
    reset_valid_state = [tf.assign(saved_valid_output, tf.zeros([1, num_nodes]))]
    # Reset test state. We use imputations in the test state reset
    reset_test_state = [saved_test_output.assign(tf.random_normal([1, num_nodes], stddev=0.01))]
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Defining Learning Rate and the Optimizer with Gradient Clipping
Here we define the learning rate and the optimizer we're going to use. We will be using the Adam optimizer, which typically converges faster and needs less learning-rate tuning than plain SGD. Furthermore, we use gradient clipping to prevent gradient explosions.
# Used for decaying learning rate
gstep = tf.Variable(0, trainable=False)

# Running this operation will cause the value of gstep
# to increase, while in turn reducing the learning rate
inc_gstep = tf.assign(gstep, gstep + 1)

# Decays learning rate every time gstep increases
tf_learning_rate = tf.train.exponential_decay(0.001, gstep, decay_steps=1, decay_rate=0.5)

# Adam Optimizer. And gradient clipping.
optimizer = tf.train.AdamOptimizer(tf_learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
# Clipping gradients
gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimizer = optimizer.apply_gradients(zip(gradients, v))
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
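`tf.clip_by_global_norm` rescales the whole list of gradients by a single factor whenever their combined norm exceeds the threshold $\tau = 5$:

$$
g_i \leftarrow g_i \cdot \frac{\tau}{\max\left(\tau, \lVert g \rVert_2\right)}, \qquad
\lVert g \rVert_2 = \sqrt{\textstyle\sum_i \lVert g_i \rVert_2^2}
$$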
### Greedy Sampling to Break the Repetition
Here we write some simple logic to break repetition in the generated text. Specifically, instead of always picking the word with the highest prediction probability, we randomly sample from the top few candidates, with each candidate's chance of being selected proportional to its prediction probability.
def sample(distribution):
    '''Greedy Sampling
    We pick the three best predictions given by the LSTM and sample one of them
    with very high probability of picking the best one'''
    best_inds = np.argsort(distribution)[-3:]
    best_probs = distribution[best_inds] / np.sum(distribution[best_inds])
    best_idx = np.random.choice(best_inds, p=best_probs)
    return best_idx
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
### Running the LSTM to Generate Text
Here we train the model on the available data and generate text using the trained model for several steps. From each document we extract text for `steps_per_document` steps to train the model on. We also report the train perplexity at the end of each step. Finally, we test the model by asking it to generate some new text starting from a randomly picked bigram.

### Learning Rate Decay Logic
Here we define the logic to decrease the learning rate whenever the validation perplexity does not decrease.
# Learning rate decay related
# If valid perplexity does not decrease
# continuously for this many epochs
# decrease the learning rate
decay_threshold = 5
# Keep counting perplexity increases
decay_count = 0
min_perplexity = 1e10

# Learning rate decay logic
def decay_learning_rate(session, v_perplexity):
    global decay_threshold, decay_count, min_perplexity
    # Decay learning rate
    if v_perplexity < min_perplexity:
        decay_count = 0
        min_perplexity = v_perplexity
    else:
        decay_count += 1

    if decay_count >= decay_threshold:
        print('\t Reducing learning rate')
        decay_count = 0
        session.run(inc_gstep)
_____no_output_____
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
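With `decay_steps=1` and `decay_rate=0.5` in the optimizer cell above, each call to `inc_gstep` (triggered after `decay_threshold` validation checks without improvement) halves the learning rate:

$$
\eta = 0.001 \times 0.5^{\,\text{gstep}}
$$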
### Running Training, Validation and Generation
We train the LSTM on existing training data, check the validation perplexity on an unseen chunk of text, and generate a fresh segment of text.
# Some hyperparameters needed for the training process
num_steps = 26
steps_per_document = 100
docs_per_step = 10
valid_summary = 1
train_doc_count = num_files

session = tf.InteractiveSession()

# Capture the behavior of train/valid perplexity over time
train_perplexity_ot = []
valid_perplexity_ot = []

# Initializing variables
tf.global_variables_initializer().run()
print('Initialized Global Variables')

average_loss = 0  # Calculates the average loss every few steps

# We use the first 10 documents that have
# more than 10*steps_per_document bigrams for creating the validation dataset

# Identify the first 10 documents following the above condition
long_doc_ids = []
for di in range(num_files):
    if len(data_list[di]) > 10 * steps_per_document:
        long_doc_ids.append(di)
    if len(long_doc_ids) == 10:
        break

# Generating validation data
data_gens = []
valid_gens = []
for fi in range(num_files):
    # Get all the bigrams if the document id is not in the validation document ids
    if fi not in long_doc_ids:
        data_gens.append(DataGeneratorOHE(data_list[fi], batch_size, num_unrollings))
    # If the document is in the validation doc ids, only get up to the
    # last steps_per_document bigrams and use the last steps_per_document bigrams as validation data
    else:
        data_gens.append(DataGeneratorOHE(data_list[fi][:-steps_per_document], batch_size, num_unrollings))
        # Defining the validation data generator
        valid_gens.append(DataGeneratorOHE(data_list[fi][-steps_per_document:], 1, 1))

feed_dict = {}
for step in range(num_steps):

    for di in np.random.permutation(train_doc_count)[:docs_per_step]:
        doc_perplexity = 0
        for doc_step_id in range(steps_per_document):

            # Get a set of unrolled batches
            u_data, u_labels = data_gens[di].unroll_batches()

            # Populate the feed dict by using each of the data batches
            # present in the unrolled data
            for ui, (dat, lbl) in enumerate(zip(u_data, u_labels)):
                feed_dict[train_inputs[ui]] = dat
                feed_dict[train_labels[ui]] = lbl

            # Running the TensorFlow operations
            _, l, step_perplexity = session.run([optimizer, loss, train_perplexity_without_exp],
                                                feed_dict=feed_dict)

            # Update doc_perplexity variable
            doc_perplexity += step_perplexity

            # Update the average_loss variable
            average_loss += step_perplexity

        # Shows the training progress
        print('(%d).' % di, end='')

        # Resetting hidden state after processing a single document
        # It's still questionable if this adds value in terms of learning
        # On one hand it's intuitive to reset the state when learning a new document
        # On the other hand this approach creates a bias for the state to be zero
        # We encourage the reader to investigate further the effect of resetting the state
        # session.run(reset_train_state)  # resetting hidden state for each document

    session.run(reset_train_state)  # resetting hidden state for each document
    print('')

    # Generate new samples
    if (step + 1) % valid_summary == 0:

        # Compute average loss
        average_loss = average_loss / (valid_summary * docs_per_step * steps_per_document)

        # Print losses
        print('Average loss at step %d: %f' % (step + 1, average_loss))
        print('\tPerplexity at step %d: %f' % (step + 1, np.exp(average_loss)))
        train_perplexity_ot.append(np.exp(average_loss))

        average_loss = 0  # reset loss
        valid_loss = 0  # reset loss

        # Calculate valid perplexity
        for v_doc_id in range(10):
            # Remember we process things as bigrams
            # so we need to divide by 2
            for v_step in range(steps_per_document // 2):
                uvalid_data, uvalid_labels = valid_gens[v_doc_id].unroll_batches()

                # Run validation phase related TensorFlow operations
                v_perp = session.run(
                    valid_perplexity_without_exp,
                    feed_dict={valid_inputs: uvalid_data[0], valid_labels: uvalid_labels[0]}
                )

                valid_loss += v_perp

            session.run(reset_valid_state)

            # Reset validation data generator cursor
            valid_gens[v_doc_id].reset_indices()

        print()
        v_perplexity = np.exp(valid_loss / (steps_per_document * 10.0 // 2))
        print("Valid Perplexity: %.2f\n" % v_perplexity)
        valid_perplexity_ot.append(v_perplexity)

        decay_learning_rate(session, v_perplexity)

        # Generating new text ...
        # We will be generating one segment having 500 bigrams
        # Feel free to generate several segments by changing
        # the value of segments_to_generate
        print('Generated Text after epoch %d ... ' % step)
        segments_to_generate = 1
        chars_in_segment = 500

        for _ in range(segments_to_generate):
            print('======================== New text Segment ==========================')
            # Start with a random word
            test_word = np.zeros((1, vocabulary_size), dtype=np.float32)
            test_word[0, data_list[np.random.randint(0, num_files)][np.random.randint(0, 100)]] = 1.0
            print("\t", reverse_dictionary[np.argmax(test_word[0])], end='')

            # Generating words within a segment by feeding in the previous prediction
            # as the current input in a recursive manner
            for _ in range(chars_in_segment):
                sample_pred = session.run(test_prediction, feed_dict={test_input: test_word})
                next_ind = sample(sample_pred.ravel())
                test_word = np.zeros((1, vocabulary_size), dtype=np.float32)
                test_word[0, next_ind] = 1.0
                print(reverse_dictionary[next_ind], end='')

            print("")
            # Reset test state
            session.run(reset_test_state)
            print('====================================================================')
            print("")

session.close()

# Write the perplexity data to a CSV
with open(filename_to_save, 'wt') as f:
    writer = csv.writer(f, delimiter=',')
    writer.writerow(train_perplexity_ot)
    writer.writerow(valid_perplexity_ot)
Initialized Global Variables (98).(25).(91).(5).(88).(49).(85).(96).(14).(73). Average loss at step 1: 4.500272 Perplexity at step 1: 90.041577 Valid Perplexity: 53.93 Generated Text after epoch 0 ... ======================== New text Segment ========================== her, it the spirit, "one his that to and said the money the spirit, and here, and have, and the gold all wile you that it the morester, and the spirit and had with ith the spirit, and hered hen hen that have the spirit, and the spiras, i will said the spirout. i will said, "i will wout on that to and said, "i wither in the bover, "the spirit, "one the father, "i will said, "the boy, and had to that it to the have to the father, "and here, that had come you came, and here, and here, and, "the spirour wither as the money the spirler the spirit, i must the bected hen the boy that to you with the father to the first had come to the fore the monen the spneit, and have, answered and said, "the gon that and he could hey the money you will sood that in ther, what have the spirit." the gold for to you the more his plaster, "i will the fathen all had come, and wound in the boke, i will had come to the father, it you that then your had then your as you came, and have, and hen the boner to the had ==================================================================== (49).(87).(32).(14).(4).(51).(90).(16).(60).(43). Average loss at step 2: 2.719010 Perplexity at step 2: 15.165307 Valid Perplexity: 38.30 Generated Text after epoch 1 ... ======================== New text Segment ========================== r, but the name, and the queen's allered that is name is the name, the queen was the name, and the queen was hease, and the little hands that he more himself in, and the man came the manikin two man, and the manikin was jumping, he pulled at his left leg so hard that is name in, and the little man whow to her leg were in his the deall the manikin his whole leg the queen's dever had told the names your name, the my the names that he plunged his right the little man came in, and the manikin she knew, that to the name. but the name in the queen, what is my name in his whole his told you thatUNK the devil has told yound the manikin said, is two my name. on the little man came in, and foot so the manikin said, not the manikin said, is that is not not no the little man, and all the little man cantle the nauntribs, of the little man, and then in the little hands and the queen's dever's child, what is you thatUNK the dever has hold. but she had in, and the little man, and in the night, that int ==================================================================== (48).(25).(81).(71).(45).(13).(0).(53).(28).(40). Average loss at step 3: 2.477577 Perplexity at step 3: 11.912361 Valid Perplexity: 32.62 Generated Text after epoch 2 ... ======================== New text Segment ========================== asked his which she put of in two egg-should now will been the must splet down and said, i have done. when he said, i am to humble you can been to the king's heart, and she had driven her the most splendown the king throuhbeard of the king's daughter was too. i wish you will happened to the corner of the king's evil danced, and that the most splet son her for and the king's door this did now began in that her and will be on which your promised that it down on this will down on the maid, i with you had to the cornest. i have been to the heart, and she was laughter and dide once with they down the maid to the comforted the pon the kindly, there too. 
and her and then the prode, who she said to this days that when the maidUNKin-waiting came and put on her to the maidUNKin-waiting came and put on her the most splend and were of the hand that with your father and that the poor and been she was to this days wedding. then the door, which your wife. but he said, be court sprangs the kind's ear ==================================================================== (78).(49).(12).(40).(27).(34).(89).(28).(66).(58). Average loss at step 4: 2.076020 Perplexity at step 4: 7.972671 Valid Perplexity: 50.50 Generated Text after epoch 3 ... ======================== New text Segment ========================== out of which to the ground and broke. then they bought him his eyes for a while, and presently began to gather to the table, and henceforth always let him eat with them, and likewise said nothing if he did spill a little of anything. and they took the old grandfather to the table, and henceforth always let him eat with them, and likewise said nothing if he did spill a little of anything. i am making a little trough, answered the child, for father and mother to eat with them, and likewise said nothing if he did spill a little of anything. the old grandfather to the table while, and presently began to cry. then they took the old grandfather to the table, and henceforth always let him eat with them, for a while, and presently began to cry. then they took the old grandfather to gather to the table, and henceforth always let him eat with them, and likewise said nothing if he did nothing if he did spill a little of anything. i am making a little trough, answered the child, for for a whi ==================================================================== (72).(5).(55).(2).(42).(75).(57).(80).(47).(14). Average loss at step 5: 2.553451 Perplexity at step 5: 12.851376 Valid Perplexity: 23.96 Generated Text after epoch 4 ... ======================== New text Segment ========================== wither, as which, so the king, which should not he was came to the little said me that he tailor, and after her father and bear, the little tailor, when he who had been boil, and they were one of the king, and they were to be dought the little tailor was comforted the tree. then the king, who hans went one boy, and after her father's death became to his have not the will tailor again the two other so low. i smote not liked the little tailor, who was at one of them to the board against the tailor. when the wild boy, and after her father, and then the little tailor was and remadeed, and the little tailor and the little tailor, and tailor half of the tailor standing, and it was thouse, what he had heard the tailor so the little said, the tailor had faller asleep the treat with them, and then it, and they who who was no one of the king was thoubly and they will for him, and then the king, who was caught, but they will forest again, who had heard the tree. the two giants and said than he ==================================================================== (25).(89).(52).(2).(63).(74).(61).(10).(56).(64). Average loss at step 6: 2.086129 Perplexity at step 6: 8.053676 Valid Perplexity: 24.38 Generated Text after epoch 5 ... ======================== New text Segment ========================== e the bridegroom with the nut. immediately came and said, nevery that they were she came to the bride she was ablew on the bride was on their deathe great before in the midst. 
and the bridegroom with the griffin the griffind there, but the bride was in sleep and said, i have been complaints, they were in the money she said, and the nut. immediately the chamber, and there in the chamber, and blood, and, and began to repating there she came the nut. and the bride had been as the bride was again led me, where they seated her where they for sease, but the princess there in the might of them and dragess, and there in the chamber there by that, and there in who had been the bird formed it, there went the princess, who had been complace. then they sat down and said, i have been prepared themselves and but on the chamber the prince went to the red of the princess, who was perfectly safe and they were in the chamber, and but on the chamber, and said, i have been complaints, and said, i will ==================================================================== (18).(96).(40).(95).(54).(2).(52).(37).(44).(55). Average loss at step 7: 2.244664 Perplexity at step 7: 9.437240 Valid Perplexity: 21.08 Generated Text after epoch 6 ... ======================== New text Segment ==========================
MIT
ch8/lstm_extensions.ipynb
PacktPublishing/Natural-Language-Processing-with-TensorFlow
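The generation loop above calls a `sample` helper that is not shown in this excerpt. As a rough sketch (my assumption, not necessarily the book's exact implementation), a categorical sampler over the predicted word distribution could look like this:

```python
import numpy as np

def sample(distribution):
    """Draw a single word index from a predicted probability vector."""
    distribution = np.asarray(distribution, dtype=np.float64)
    distribution = distribution / distribution.sum()  # guard against numerical drift
    return np.random.choice(len(distribution), p=distribution)
```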
Similarity Recommendation
* Collaborative Filtering
* The similarity score is the merchant similarity rank
* The products list is the most sold products in the recent X weeks
* The most valuable products from the `product_values` table were not chosen because they largely overlap with the top products of each merchant
* The most sold products of the target merchant are also excluded
* Avg daily purchase frequency is the count of daily orders of each product in the list
import pandas as pd import numpy as np import datetime import Levenshtein import warnings warnings.filterwarnings("ignore") import ray ray.shutdown() ray.init() target_merchant = '49th Parallel Grocery' all_order_train = pd.read_pickle('../all_order_train.pkl') all_order_test = pd.read_pickle('../all_order_test.pkl') print(all_order_train.shape, all_order_test.shape) all_order_train.head() target_train = all_order_train.loc[all_order_train['merchant'] == target_merchant] target_test = all_order_test.loc[all_order_test['merchant'] == target_merchant] print(target_train.shape, target_test.shape) target_train.head() all_order_train = all_order_train.loc[all_order_train['merchant'] != target_merchant] all_order_test = all_order_test.loc[all_order_test['merchant'] != target_merchant] print(all_order_train.shape, all_order_test.shape) all_order_train.head()
(32355508, 12) (94436, 12)
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Merchant Similarity Score
* Here, I converted the 3 similarity factors (top products, size, name) into 1 score, where a higher score represents higher similarity.
* Compared with sorting by the 3 factors, the single similarity score brings slightly different results.
@ray.remote def get_merchant_data(merchant_df, top=10): merchant_size = merchant_df[['merchant', 'product_id']].astype('str').drop_duplicates()\ .groupby(['merchant'], as_index=False)['product_id']\ .agg('count').reset_index(drop=True).T.to_dict() merchant_data = merchant_size[0] merchant_data['product_ct'] = merchant_data.pop('product_id') top_prod_lst_df = merchant_df[['product_id', 'order_id']].astype('str').drop_duplicates()\ .groupby(['product_id'], as_index=False)['order_id']\ .agg('count').reset_index(drop=True)\ .sort_values(by='order_id', ascending=False)\ .head(n=top) top_prod_lst = list(top_prod_lst_df['product_id'].values) merchant_data['top_prod_lst'] = top_prod_lst return merchant_data @ray.remote def get_merchant_similarity(target_merchant_dct, merchant_dct): prod_similarity = len(set(target_merchant_dct['top_prod_lst']).intersection(set(merchant_dct['top_prod_lst']))) size_similarity = abs(target_merchant_dct['product_ct'] - merchant_dct['product_ct']) name_similarity = Levenshtein.ratio(target_merchant_dct['merchant'], merchant_dct['merchant']) return {'merchant': merchant_dct['merchant'], 'prod_sim': prod_similarity, 'size_sim': size_similarity, 'name_sim': name_similarity} target_merchant_train = get_merchant_data.remote(target_train[['merchant', 'product_id', 'order_id']], top=10) target_merchant_dct = ray.get(target_merchant_train) print(target_merchant_dct) merchant_lst = all_order_train['merchant'].unique() results = [get_merchant_data.remote(all_order_train.loc[all_order_train['merchant']==merchant][['merchant', 'product_id', 'order_id']]) for merchant in merchant_lst] merchant_data_lst = ray.get(results) print(len(merchant_data_lst)) merchant_data_lst[7:9] results = [get_merchant_similarity.remote(target_merchant_train, merchant_dct) for merchant_dct in merchant_data_lst] merchant_similarity_lst = ray.get(results) merchant_similarity_df = pd.DataFrame(merchant_similarity_lst) print(merchant_similarity_df.shape) merchant_similarity_df = merchant_similarity_df.sort_values(by=['prod_sim', 'size_sim', 'name_sim'], ascending=[False, True, False]) merchant_similarity_df.head() prod_sim_min = min(merchant_similarity_df['prod_sim']) prod_sim_max = max(merchant_similarity_df['prod_sim']) size_sim_min = min(merchant_similarity_df['size_sim']) size_sim_max = max(merchant_similarity_df['size_sim']) print(prod_sim_min, prod_sim_max, size_sim_min, size_sim_max) def get_similarity_score(r): similarity = (r['prod_sim'] - prod_sim_min)/(prod_sim_max - prod_sim_min) * (size_sim_max - r['size_sim'])/(size_sim_max - size_sim_min) * r['name_sim'] return round(similarity, 4) merchant_similarity_df['similarity_score'] = merchant_similarity_df.apply(get_similarity_score, axis=1) merchant_similarity_df = merchant_similarity_df.sort_values(by='similarity_score', ascending=False) merchant_similarity_df.head()
_____no_output_____
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Recent Popular Products

Excluding top products of the target merchant.
all_order_train.head() latest_period = 2 # in weeks week_lst = sorted(all_order_train['week_number'].unique())[-latest_period:] week_lst prod_ct_df = all_order_train.loc[all_order_train['week_number'].isin(week_lst)][['product_id', 'product_name', 'order_id']].astype('str').drop_duplicates()\ .groupby(['product_id', 'product_name'], as_index=False)['order_id']\ .agg('count').reset_index(drop=True)\ .sort_values(by='order_id', ascending=False) # remove product_id that's in target merchant's top popular products prod_ct_df = prod_ct_df.loc[~prod_ct_df['product_id'].isin(target_merchant_dct['top_prod_lst'])] prod_ct_df.head() n = 20 product_lst = prod_ct_df['product_id'].values[:n] print(product_lst) print() print(prod_ct_df['product_name'].values[:n])
['49683' '24964' '27966' '22935' '39275' '45007' '28204' '4605' '42265' '44632' '5876' '4920' '40706' '30391' '30489' '8518' '27104' '45066' '5077' '17794'] ['Cucumber Kirby' 'Organic Garlic' 'Organic Raspberries' 'Organic Yellow Onion' 'Organic Blueberries' 'Organic Zucchini' 'Organic Fuji Apple' 'Yellow Onions' 'Organic Baby Carrots' 'Sparkling Water Grapefruit' 'Organic Lemon' 'Seedless Red Grapes' 'Organic Grape Tomatoes' 'Organic Cucumber' 'Original Hummus' 'Organic Red Onion' 'Fresh Cauliflower' 'Honeycrisp Apple' '100% Whole Wheat Bread' 'Carrots']
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Collaborative Filtering
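To summarize what the scoring code below computes (my notation, not from the notebook): each candidate product $p$ receives a similarity-weighted average of its daily purchase frequency over the top similar merchants $m$ that actually sold it,

$$\text{score}(p) = \frac{\sum_{m} \text{sim}(m)\cdot \text{freq}_m(p)}{\sum_{m} \text{sim}(m)},$$

where $\text{freq}_m(p)$ is the average number of daily orders of $p$ at merchant $m$ and $\text{sim}(m)$ is the merchant similarity score computed above.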
merchant_similarity_df.head() all_order_train.head() n_merchant = 10 similar_merchant_lst = merchant_similarity_df['merchant'].values[:n_merchant] merchant_similarity_lst = merchant_similarity_df['similarity_score'].values[:n_merchant] @ray.remote def get_product_score(prod_df, product_id, product_name): total_weighted_frequency = 0.0 total_similarity = 0.0 for i in range(len(similar_merchant_lst)): merchant = similar_merchant_lst[i] tmp_df = prod_df.loc[prod_df['merchant']==merchant] if tmp_df.shape[0] > 0: daily_avg = tmp_df['order_id'].nunique()/tmp_df['purchase_date'].nunique() similarity = merchant_similarity_lst[i] total_similarity += similarity total_weighted_frequency += similarity * daily_avg prod_score = total_weighted_frequency/total_similarity return {'product_id': product_id, 'product_name': product_name, 'prod_score': round(prod_score, 4)} prod_score_lst = [get_product_score.remote(all_order_train.loc[all_order_train['product_id']==int(product_lst[i])][['merchant', 'order_id', 'purchase_date']], product_lst[i], prod_ct_df['product_name'].values[i]) for i in range(len(product_lst))] prod_score_df = pd.DataFrame(ray.get(prod_score_lst)) prod_score_df = prod_score_df.sort_values(by='prod_score', ascending=False) prod_score_df
_____no_output_____
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
Forecasting Recommendations
import pandas as pd import numpy as np from sklearn.metrics import mean_squared_error from math import sqrt import matplotlib.pyplot as plt # the logger here is to remove the warnings about plotly import logging logger = logging.getLogger('fbprophet.plot') logger.setLevel(logging.CRITICAL) from fbprophet import Prophet import warnings warnings.filterwarnings("ignore") sample_train_df1 = pd.read_pickle('../sample_train_df1.pkl') sample_test_df1 = pd.read_pickle('../sample_test_df1.pkl') print(sample_train_df1.shape, sample_test_df1.shape) train1_col = sample_train_df1['purchase_amount'] test1_col = sample_test_df1['purchase_amount'] # Generate logged moving average for both time series sequences ts_log_train1 = np.log(train1_col) ts_moving_avg_train1 = ts_log_train1.rolling(window=4,center=False).mean() ts_log_test1 = np.log(test1_col) ts_moving_avg_test1 = ts_log_test1.rolling(window=4,center=False).mean() ts_moving_avg_train1.head(n=10) ts_ma_train1 = pd.DataFrame(ts_moving_avg_train1.copy()) ts_ma_train1['ds'] = ts_ma_train1.index ts_ma_train1['y'] = ts_moving_avg_train1.values ts_ma_train1.drop(['purchase_amount'], inplace=True, axis=1) print(ts_ma_train1.shape) ts_ma_test1 = pd.DataFrame(ts_moving_avg_test1.copy()) ts_ma_test1['ds'] = ts_ma_test1.index ts_ma_test1['y'] = ts_moving_avg_test1.values ts_ma_test1.drop(['purchase_amount'], inplace=True, axis=1) print(ts_ma_test1.shape) ts_ma_train1.head() latest_period = 14 forecast_period = 7 train = ts_ma_train1.tail(n=latest_period) test = ts_ma_test1.head(n=forecast_period) print(train.shape, test.shape) train.head() prophet_model = Prophet(daily_seasonality = True, yearly_seasonality=False, weekly_seasonality=False, seasonality_mode = 'multiplicative', n_changepoints=5, changepoint_prior_scale=0.05, seasonality_prior_scale=0.1) prophet_model.fit(train) periods = len(test.index) future = prophet_model.make_future_dataframe(periods=periods) forecast = prophet_model.predict(future) print(train.shape, test.shape, forecast.shape) all_ts = train.append(test).dropna() selected_forecast = forecast.loc[forecast['ds'].isin(all_ts.index)] rmse = round(sqrt(mean_squared_error(all_ts['y'].values, selected_forecast['yhat'].values)), 4) print(rmse) forecast.head() exp_forecast = forecast[['ds', 'yhat']] exp_forecast['y_origin'] = np.exp(exp_forecast['yhat']) exp_forecast.head() original_ts = sample_train_df1.iloc[sample_train_df1.index.isin(train.index)][['purchase_amount']] original_ts = original_ts.append(sample_test_df1.iloc[sample_test_df1.index.isin(test.index)][['purchase_amount']]) print(original_ts.shape) plt.figure(figsize=(16,7)) plt.plot(original_ts.index, original_ts, label='Original Values', color='green') plt.plot(exp_forecast['ds'], exp_forecast['y_origin'].values, label='Forecasted Values', color='purple') plt.legend(loc='best') plt.title("Sample 1 - Original Values vs Forecasted Values (Without Recommended Products) - RMSE:" + str(rmse)) plt.show() product_values_df = pd.read_pickle('product_values.pkl') product_values_df.head() product_values_df['product_id'] = product_values_df['product_id'].astype(str) prod_score_sales_df = prod_score_df.merge(product_values_df[['product_id', 'avg_daily_sales']], on='product_id') prod_score_sales_df.head() test_ct = 20 daily_sales_increase = 0 original_ts = sample_train_df1.iloc[sample_train_df1.index.isin(train.index)][['purchase_amount']] original_ts = original_ts.append(sample_test_df1.iloc[sample_test_df1.index.isin(test.index)][['purchase_amount']]) print(original_ts.shape) 
exp_forecast['y_forecast'] = exp_forecast['y_origin'] forecast_ts_train = exp_forecast.head(n=latest_period) forecast_ts_test = exp_forecast.tail(n=forecast_period) for idx, r in prod_score_sales_df.iterrows(): added_daily_sales = r['avg_daily_sales'] forecast_ts_test['y_forecast'] += added_daily_sales daily_sales_increase += added_daily_sales if idx >= test_ct: break forecast_ts = forecast_ts_train.append(forecast_ts_test) print('Total sales increased: ' + str(daily_sales_increase * forecast_period)) plt.figure(figsize=(16,7)) plt.plot(original_ts.index, original_ts, label='Original Values', color='green') plt.plot(exp_forecast['ds'], exp_forecast['y_origin'].values, label='Forecasted Values No Recommendation', color='purple') plt.plot(forecast_ts['ds'], forecast_ts['y_forecast'].values, label='Forecasted Values With Recommendation', color='orange') plt.legend(loc='best') plt.title("Sample 1 - Original Values vs Forecasted Values (With Recommended Products) - Daily Sales Increased: " + str(daily_sales_increase)) plt.show()
Total sales increased: 2162.51
MIT
Bank_Fantasy/Golden_Bridge/recommendation_experiments/similarity_recommendations.ipynb
hanhanwu/Hanhan_Break_the_Limits
1️⃣ **Exercise 1.** Write a function that counts the frequency of occurrence of each word in a text (txt file) and stores that count in a dictionary, where the key is the vowel being considered. **Correction:** "where the key is the WORD being considered"
from collections import Counter def count_palavras(nome_arquivo: str): file = open(f'{nome_arquivo}.txt', 'rt') texto = file.read() palavras = [palavra for palavra in texto.split(' ')] dicionario = dict(Counter(palavras)) # dicionario2 = {i: palavras.count(i) for i in list(set(palavras))} return dicionario nome_arquivo = input('Digite o nome do arquivo de texto: ') dicionario = count_palavras(nome_arquivo) print(dicionario)
Digite o nome do arquivo de texto: teste {'Gostaria': 1, 'de': 2, 'enfatizar': 1, 'que': 1, 'a': 2, 'hegemonia': 1, 'do': 3, 'ambiente': 2, 'político': 1, 'obstaculiza': 1, 'apreciação': 1, 'fluxo': 1, 'informações.': 1}
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
2️⃣ **Exercise 2.** Write a function that deletes from the previous dictionary all words that are 'stopwords'.

See https://gist.github.com/alopes/5358189
stopwords = ['de', 'a', 'o', 'que', 'e', 'do', 'da', 'em', 'um', 'para', 'é', 'com', 'não', 'uma', 'os', 'no', 'se', 'na', 'por', 'mais', 'as', 'dos', 'como', 'mas', 'foi', 'ao', 'ele', 'das', 'tem', 'à', 'seu', 'sua', 'ou', 'ser', 'quando', 'muito', 'há', 'nos', 'já', 'está', 'eu', 'também', 'só', 'pelo', 'pela', 'até', 'isso', 'ela', 'entre', 'era', 'depois', 'sem', 'mesmo', 'aos', 'ter', 'seus', 'quem', 'nas', 'me', 'esse', 'eles', 'estão', 'você', 'tinha', 'foram', 'essa', 'num', 'nem', 'suas', 'meu', 'às', 'minha', 'têm', 'numa', 'pelos', 'elas', 'havia', 'seja', 'qual', 'será', 'nós', 'tenho', 'lhe', 'deles', 'essas', 'esses', 'pelas', 'este', 'fosse', 'dele', 'tu', 'te', 'vocês', 'vos', 'lhes', 'meus', 'minhas', 'teu', 'tua', 'teus', 'tuas', 'nosso', 'nossa', 'nossos', 'nossas', 'dela', 'delas', 'esta', 'estes', 'estas', 'aquele', 'aquela', 'aqueles', 'aquelas', 'isto', 'aquilo', 'estou', 'está', 'estamos', 'estão', 'estive', 'esteve', 'estivemos', 'estiveram', 'estava', 'estávamos', 'estavam', 'estivera', 'estivéramos', 'esteja', 'estejamos', 'estejam', 'estivesse', 'estivéssemos', 'estivessem', 'estiver', 'estivermos', 'estiverem', 'hei', 'há', 'havemos', 'hão', 'houve', 'houvemos', 'houveram', 'houvera', 'houvéramos', 'haja', 'hajamos', 'hajam', 'houvesse', 'houvéssemos', 'houvessem', 'houver', 'houvermos', 'houverem', 'houverei', 'houverá', 'houveremos', 'houverão', 'houveria', 'houveríamos', 'houveriam', 'sou', 'somos', 'são', 'era', 'éramos', 'eram', 'fui', 'foi', 'fomos', 'foram', 'fora', 'fôramos', 'seja', 'sejamos', 'sejam', 'fosse', 'fôssemos', 'fossem', 'for', 'formos', 'forem', 'serei', 'será', 'seremos', 'serão', 'seria', 'seríamos', 'seriam', 'tenho', 'tem', 'temos', 'tém', 'tinha', 'tínhamos', 'tinham', 'tive', 'teve', 'tivemos', 'tiveram', 'tivera', 'tivéramos', 'tenha', 'tenhamos', 'tenham', 'tivesse', 'tivéssemos', 'tivessem', 'tiver', 'tivermos', 'tiverem', 'terei', 'terá', 'teremos', 'terão', 'teria', 'teríamos', 'teriam'] def delete_stopwords(dicionario): for stopword in stopwords: if stopword in dicionario.keys(): dicionario.pop(stopword, None) return dicionario nome_arquivo = input('Digite o nome do arquivo de texto: ') dicionario = count_palavras(nome_arquivo) print(f'\nDicionario: {dicionario}') novo_dicionario = delete_stopwords(dicionario) print(f'\nApos apagar stopwords: {novo_dicionario}')
Digite o nome do arquivo de texto: teste Dicionario: {'Gostaria': 1, 'de': 2, 'enfatizar': 1, 'que': 1, 'a': 2, 'hegemonia': 1, 'do': 3, 'ambiente': 2, 'político': 1, 'obstaculiza': 1, 'apreciação': 1, 'fluxo': 1, 'informações.': 1} Apos apagar stopwords: {'Gostaria': 1, 'enfatizar': 1, 'hegemonia': 1, 'ambiente': 2, 'político': 1, 'obstaculiza': 1, 'apreciação': 1, 'fluxo': 1, 'informações.': 1}
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
3️⃣ **Exercise 3.** Write a program that reads two grades for several students and stores these grades in a dictionary, where the key is the student's name. Data entry must end when an empty string is read as the name. Write a function that returns a student's average, given their name.
def le_notas(dicionario = {}): nome_aluno = input('Digite o nome do aluno: ') if nome_aluno.isalpha() and nome_aluno not in dicionario.keys(): nota1 = float(input('Digite a primeira nota: (somente numeros) ')) nota2 = float(input('Digite a segunda nota: (somente numeros) ')) dicionario[nome_aluno] = [nota1, nota2] le_notas(dicionario) elif nome_aluno in dicionario.keys(): print('Aluno ja adicionado!') le_notas(dicionario) return dicionario def retorna_nota_aluno(dicionario, nome_aluno): return (dicionario[nome_aluno][0] + dicionario[nome_aluno][1]) / 2 dicionario = le_notas() nome_aluno = input('\nDigite o nome do aluno que deseja saber a nota: ') if dicionario and nome_aluno in dicionario.keys(): media = retorna_nota_aluno(dicionario, nome_aluno) print(f'{nome_aluno}: {media}')
Digite o nome do aluno: Larissa Digite a primeira nota: (somente numeros) 1 Digite a segunda nota: (somente numeros) 2 Digite o nome do aluno: Jesus Digite a primeira nota: (somente numeros) 0 Digite a segunda nota: (somente numeros) 0 Digite o nome do aluno: Digite o nome do aluno que deseja saber a nota: Jesus Jesus: 0.0
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
4️⃣ **Exercise 4.** A kart track allows 10 laps for each of 6 racers. Write a program that reads all the times in seconds and stores them in a dictionary, where the key is the racer's name. At the end, say who had the best lap of the race and on which lap; and also give the final classification in order (1st is the champion). The champion is the one with the lowest average time.
def le_tempos_corridas(array_tempos=[], numero_voltas=0): if numero_voltas < 10: tempo_volta = float( input(f'[{numero_voltas+1}] Digite o tempo: (numerico/seg) ')) if tempo_volta > 0: array_tempos.append(tempo_volta) le_tempos_corridas(array_tempos, numero_voltas+1) else: print('# Valor invalido no tempo da volta!') le_tempos_corridas(array_tempos, numero_voltas) return array_tempos def le_corredores(dicionario={}, num_corredores=0): if num_corredores < 6: nome_corredor = input( f'[{num_corredores+1}] Digite o nome do corredor: ') if nome_corredor.isalpha(): array_tempos = le_tempos_corridas(array_tempos=[]) dicionario[nome_corredor] = sorted(array_tempos) le_corredores(dicionario, num_corredores+1) else: print('# Valor invalido no nome do corredor!') le_corredores(dicionario, num_corredores) return dicionario def calc_media_tempos(dicionario): return {corredor: sum(array_tempos)/len(array_tempos) for corredor, array_tempos in dicionario.items()} dicionario = le_corredores() for i in sorted(dicionario, key=dicionario.get): print( f'# {i.capitalize()} teve a melhor volta com duracao de {dicionario[i][0]} segundos!') break dicionario_medias = calc_media_tempos(dicionario) for index, i in enumerate(sorted(dicionario_medias, key=dicionario_medias.get)): print( f'[{index+1} Lugar] {i.capitalize()} com media de {dicionario_medias[i]} segundos!') if index == 2: break
[1] Digite o nome do corredor: Larissa [1] Digite o tempo: (numerico/seg) 10 [2] Digite o tempo: (numerico/seg) 15 [2] Digite o nome do corredor: Jesus [1] Digite o tempo: (numerico/seg) 0 # Valor invalido no tempo da volta! [1] Digite o tempo: (numerico/seg) 1 [2] Digite o tempo: (numerico/seg) 1 # Jesus teve a melhor volta com duracao de 1.0 segundos! [1 Lugar] Jesus com media de 1.0 segundos! [2 Lugar] Larissa com media de 12.5 segundos!
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
6️⃣ **Exercise 6.** Create 10 frozensets with 30 random numbers each, and build a dictionary containing the sum of each of them.
import random def get_random_set(size): return frozenset(random.sample(range(1, 100), size)) def get_random_sets(size, num_sets): return [get_random_set(size) for _ in range(num_sets)] def get_dict_from_sets_sum(sets): return {key: sum(value) for key, value in enumerate(sets)} _sets = get_random_sets(30, 10) _dict = get_dict_from_sets_sum(_sets) print(_dict)
{0: 1334, 1: 1552, 2: 1762, 3: 1387, 4: 1535, 5: 1672, 6: 1422, 7: 1572, 8: 1567, 9: 1562}
MIT
semana-02/lista-exercicio/lista-3/poo2-lista3-larissa_justen.ipynb
larissajusten/ufsc-object-oriented-programming
Creating synthetic samples

After training the synthesizer on the fraudulent events, we are able to generate as many synthetic samples as desired, always keeping in mind that there is a trade-off between the number of records used for model training and privacy.
#Importing the required packages import os from ydata.synthesizers.regular import RegularSynthesizer try: os.mkdir('outputs') except FileExistsError as e: print('Directory already exists')
_____no_output_____
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
Init the synth & sample generation
n_samples = os.environ['NSAMPLES'] model = RegularSynthesizer.load('outputs/synth_model.pkl') synth_data = model.sample(int(n_samples))
INFO: 2022-02-20 23:44:25,790 [SYNTHESIZER] - Start generating model samples.
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
Sending the synthetic samples to the next pipeline stage
OUTPUT_PATH=os.environ['OUTPUT_PATH'] from ydata.connectors.filetype import FileType from ydata.connectors import LocalConnector conn = LocalConnector() #Creating the output with the synthetic sample conn.write_file(synth_data, path=OUTPUT_PATH, file_type = FileType.CSV)
_____no_output_____
MIT
5 - synthetic-data-applications/regular-tabular/credit_card_fraud-balancing/pipeline/sample_synth.ipynb
ydataai/Blog
module name here

> API details.
#hide from nbdev.showdoc import * from fastcore.test import *
_____no_output_____
Apache-2.0
00_core.ipynb
akshaysynerzip/hello_nbdev
This is a function to say hello
#export def say_hello(to): "Say hello to somebody" return f'Hello {to}!' say_hello("Akshay") test_eq(say_hello("akshay"), "Hello akshay!")
_____no_output_____
Apache-2.0
00_core.ipynb
akshaysynerzip/hello_nbdev
Demo: ShiftAmountActivity

The basic steps to set up an OpenCLSim simulation are:
* Import libraries
* Initialise simpy environment
* Define object classes
* Create objects
  * Create sites
  * Create vessels
  * Create activities
* Register processes and run simpy

----

This notebook shows the workings of the ShiftAmountActivity. This activity uses a processor to transfer a specified number of objects from an origin resource, which must have a container, to a destination resource, which must also have a container. In this case it shifts payload from a from_site to vessel01.

NB: The ShiftAmountActivity checks the possible amount of objects which can be transferred, based on the number of objects available in the origin, the number of objects which can be stored in the destination, and the number of objects requested to be transferred. If the number of objects that can actually be transferred is zero, then an exception is raised. These cases have to be prevented by using appropriate events.

0. Import libraries
import datetime, time import simpy import shapely.geometry import pandas as pd import openclsim.core as core import openclsim.model as model import openclsim.plot as plot
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
1. Initialise simpy environment
# setup environment simulation_start = 0 my_env = simpy.Environment(initial_time=simulation_start)
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
2. Define object classes
# create a Site object based on desired mixin classes Site = type( "Site", ( core.Identifiable, core.Log, core.Locatable, core.HasContainer, core.HasResource, ), {}, ) # create a TransportProcessingResource object based on desired mixin classes TransportProcessingResource = type( "TransportProcessingResource", ( core.Identifiable, core.Log, core.ContainerDependentMovable, core.HasResource, core.Processor, ), {}, )
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3. Create objects

3.1. Create site object(s)
# prepare input data for from_site location_from_site = shapely.geometry.Point(4.18055556, 52.18664444) data_from_site = {"env": my_env, "name": "from_site", "geometry": location_from_site, "capacity": 100, "level": 100 } # instantiate from_site from_site = Site(**data_from_site)
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.2. Create vessel object(s)
# prepare input data for vessel_01 data_vessel01 = {"env": my_env, "name": "vessel01", "geometry": location_from_site, "capacity": 5, "compute_v": lambda x: 10 } # instantiate vessel_01 vessel01 = TransportProcessingResource(**data_vessel01)
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
3.3 Create activity/activities
# initialise registry registry = {} shift_amount_activity_data = model.ShiftAmountActivity( env=my_env, name="Shift amount activity", registry=registry, processor=vessel01, origin=from_site, destination=vessel01, amount=100, duration=60, )
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
4. Register processes and run simpy
model.register_processes([shift_amount_activity_data]) my_env.run()
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
5. Inspect results

5.1 Inspect logs

We can now inspect the logs. The model has now shifted cargo from the from_site onto vessel01.
display(plot.get_log_dataframe(shift_amount_activity_data, [shift_amount_activity_data])) display(plot.get_log_dataframe(from_site, [shift_amount_activity_data])) display(plot.get_log_dataframe(vessel01, [shift_amount_activity_data]))
_____no_output_____
MIT
notebooks/03_ShiftAmountActivity.ipynb
TUDelft-CITG/Hydraulic-Infrastructure-Realisation
Early Reinforcement Learning

With the advances of modern computing power, the study of Reinforcement Learning is having a heyday. Machines are now able to learn complex tasks once thought to be solely in the domain of humans, from controlling the [heating and cooling in massive data centers](https://www.technologyreview.com/s/611902/google-just-gave-control-over-data-center-cooling-to-an-ai/) to beating [grandmasters at Starcraft](https://storage.googleapis.com/deepmind-media/research/alphastar/AlphaStar_unformatted.pdf). As magnificent as it may seem today, it had humble roots many decades ago. Seeing how far it's come, it's a wonder to see how far it will go!

Let's take a step back in time to see how these early algorithms developed. Many of these algorithms make sense given the context of when they were created. Challenge yourself and see if you can come up with the same strategies given the right problem. Ok! Time to cozy up for a story.

This is the hero of our story, the gumdrop emoji. It was enjoying a cool winter day building a snowman when suddenly, it slipped and fell onto a frozen lake of death.

The lake can be thought of as a 4 x 4 grid where the gumdrop can move left (0), down (1), right (2) and up (3). Unfortunately, this frozen lake of death has holes of death: if the gumdrop enters one of those squares, it will fall in and meet an untimely demise. To make matters worse, the lake is surrounded by icy boulders that, if the gumdrop attempts to climb them, will have it slip back into its original position. Thankfully, at the bottom right of the lake is a safe ramp that leads to a nice warm cup of hot cocoa.

Set Up

We can try and save the gumdrop ourselves! This is a common game people begin their Reinforcement Learning journey with, and it is included in OpenAI's python package [Gym](https://gym.openai.com/), aptly named [FrozenLake-v0](https://gym.openai.com/envs/FrozenLake-v0/) ([code](https://github.com/openai/gym/blob/master/gym/envs/toy_text/frozen_lake.py)). No time to waste, let's get the environment up and running. Run the cell below to install the needed libraries if they are not installed already.
# Ensure the right version of Tensorflow is installed. !pip install tensorflow==2.5 --user
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
**NOTE**: In the output of the above cell, you may ignore any WARNINGS or ERRORS related to the dependency resolver. If you do get any of the errors mentioned above, please rerun the above cell.
!pip install gym==0.12.5 --user
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
There are [four methods from Gym](http://gym.openai.com/docs/) that are going to be useful to us in order to save the gumdrop.
* `make` allows us to build the environment or game that we can pass actions to
* `reset` will reset an environment to its starting configuration and return the state of the player
* `render` displays the environment for human eyes
* `step` takes an action and returns the player's next state

Let's make, reset, and render the game. The output is an ANSI string with the following characters:
* `S` for starting point
* `F` for frozen
* `H` for hole
* `G` for goal
* A red square indicates the current position

**Note**: Restart the kernel if the above libraries needed to be installed
import gym import numpy as np import random env = gym.make('FrozenLake-v0', is_slippery=False) state = env.reset() env.render()
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
If we print the state we'll get `0`. This is telling us which square we're in. Each square is labeled from `0` to `15` from left to right, top to bottom, like this:

| | | | |
|-|-|-|-|
|0|1|2|3|
|4|5|6|7|
|8|9|10|11|
|12|13|14|15|
print(state)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
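As a quick aside (this snippet is my addition, not part of the original notebook), the state number and the grid position are related by simple integer division, since the lake is 4 squares wide:

```python
state = 9
row, col = divmod(state, 4)    # row 2, column 1 of the 4 x 4 grid
assert state == row * 4 + col  # and back again
print(row, col)
```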
We can make a simple print function to let us know whether it's game won, game over, or game on.
def print_state(state, done): statement = "Still Alive!" if done: statement = "Cocoa Time!" if state == 15 else "Game Over!" print(state, "-", statement)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
We can control the gumdrop ourselves with the `step` method. Run the below cell over and over again trying to move from the starting position to the goal. Good luck!
#0 left #1 down #2 right #3 up # Uncomment to reset the game #env.reset() action = 2 # Change me, please! state, _, done, _ = env.step(action) env.render() print_state(state, done)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Were you able to reach the hot chocolate? If so, great job! There are multiple paths through the maze. One solution is `[1, 1, 2, 2, 1, 2]`. Let's loop through our actions in order to get used to interacting with the environment programmatically.
def play_game(actions): state = env.reset() step = 0 done = False while not done and step < len(actions): action = actions[step] state, _, done, _ = env.step(action) env.render() step += 1 print_state(state, done) actions = [1, 1, 2, 2, 1, 2] # Replace with your favorite path. play_game(actions)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Nice, so we know how to get through the maze, but how do we teach that to the gumdrop? It's just some bytes in an Android phone. It doesn't have our human insight.

We could give it our list of actions directly, but then it would be copying us and not really learning. This was a tricky one for the mathematicians and computer scientists originally trying to solve this problem. How do we teach a machine to do this without human insight?

Value Iteration

Let's turn the clock back on our time machines to 1957 to meet Mr. [Richard Bellman](https://en.wikipedia.org/wiki/Richard_E._Bellman). Bellman started his academic career in mathematics, but due to World War II, left his postgraduate studies at Johns Hopkins to teach electronics as part of the war effort (as chronicled by J. J. O'Connor and E. F. Robertson [here](https://www-history.mcs.st-andrews.ac.uk/Biographies/Bellman.html)). When the war was over, and it came time for him to focus on his next area of research, he became fascinated with [Dynamic Programming](https://en.wikipedia.org/wiki/Dynamic_programming): the idea of breaking a problem down into sub-problems and using recursion to solve the larger problem.

Eventually, his research landed him on [Markov Decision Processes](https://en.wikipedia.org/wiki/Markov_decision_process). These processes are a graphical way of representing how to make a decision based on a current state. States are connected to other states with positive and negative rewards that can be picked up along the way.

Sound familiar at all? Perhaps our Frozen Lake?

In the lake case, each cell is a state. The `H`s and the `G` are a special type of state called a "Terminal State", meaning they can be entered, but they have no leaving connections. What of rewards? Let's say the value of losing our life is the negative opposite of getting to the goal and staying alive. Thus, we can assign the reward of entering a death hole as -1, and the reward of escaping as +1.

Bellman's first breakthrough with this type of problem is now known as Value Iteration ([his original paper](http://www.iumj.indiana.edu/IUMJ/FULLTEXT/1957/6/56038)). He introduced a variable, gamma (γ), to represent discounted future rewards. He also introduced a function of policy (π) that takes a state (s) and outputs a corresponding suggested action (a). The goal is to find the value of a state (V), given the rewards that occur when following an action in a particular state (R).

Gamma, the discount, is the key ingredient here. If my time steps were in days, and my gamma was .9, `$100` would be worth `$100` to me today, `$90` tomorrow, `$81` the day after, and so on. Putting this all together, we get the Bellman Equation (source: [Wikipedia](https://en.wikipedia.org/wiki/Bellman_equation)).

In other words, the value of our current state, `current_values`, is equal to the discount times the value of the next state, `next_values`, given the policy the agent will follow. For now, we'll have our agent assume a greedy policy: it will move towards the state with the highest calculated value. If you're wondering what P is, don't worry, we'll get to that later.

Let's program it out and see it in action! We'll set up an array representing the lake with -1 as the holes, and 1 as the goal. Then, we'll set up an array of zeros to start our iteration.
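The equation referenced above (sourced from Wikipedia in the original notebook) isn't reproduced in this text; for reference, its standard optimality form is:

$$V(s) = \max_{a} \sum_{s'} P(s' \mid s, a)\,\bigl[R(s, a, s') + \gamma\, V(s')\bigr]$$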
LAKE = np.array([[0, 0, 0, 0], [0, -1, 0, -1], [0, 0, 0, -1], [-1, 0, 0, 1]]) LAKE_WIDTH = len(LAKE[0]) LAKE_HEIGHT = len(LAKE) DISCOUNT = .9 # Change me to be a value between 0 and 1. current_values = np.zeros_like(LAKE)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
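To make the discount concrete, here is a tiny sketch (my addition, not a cell from the notebook) of the `$100` example from the text, reusing the `DISCOUNT` of .9 defined above:

```python
reward = 100.0
for day in range(4):
    # day 0: 100.00, day 1: 90.00, day 2: 81.00, day 3: 72.90
    print(f"day {day}: worth {reward * DISCOUNT ** day:.2f}")
```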
The Gym environment class has a handy property for finding the number of states in an environment called `observation_space`. In our case, there are 16 integer states, so it will label it as "Discrete". Similarly, `action_space` will tell us how many actions are available to the agent.

Let's take advantage of these to make our code portable between different lake sizes.
print("env.observation_space -", env.observation_space) print("env.observation_space.n -", env.observation_space.n) print("env.action_space -", env.action_space) print("env.action_space.n -", env.action_space.n) STATE_SPACE = env.observation_space.n ACTION_SPACE = env.action_space.n STATE_RANGE = range(STATE_SPACE) ACTION_RANGE = range(ACTION_SPACE)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
We'll need some sort of function to figure out what the best neighboring cell is. The below function takes a cell of the lake, looks at the current value mapping (to be called with `current_values`), and sees what the value of the adjacent state corresponding to the given `action` is.
def get_neighbor_value(state_x, state_y, values, action): """Returns the value of a state's neighbor. Args: state_x (int): The state's horizontal position, 0 is the lake's left. state_y (int): The state's vertical position, 0 is the lake's top. values (float array): The current iteration's state values. policy (int): Which action to check the value for. Returns: The corresponding action's value. """ left = [state_y, state_x-1] down = [state_y+1, state_x] right = [state_y, state_x+1] up = [state_y-1, state_x] actions = [left, down, right, up] direction = actions[action] check_x = direction[1] check_y = direction[0] is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \ or check_x < 0 or check_x >= LAKE_WIDTH value = values[state_y, state_x] if not is_boulder: value = values[check_y, check_x] return value
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
But this doesn't find the best action, and the gumdrop is going to need that if it wants to greedily get off the lake. The `get_max_neighbor` function we've defined below takes a number corresponding to a cell as `state_number` and the same value mapping as `get_neighbor_value`.
def get_state_coordinates(state_number): state_x = state_number % LAKE_WIDTH state_y = state_number // LAKE_HEIGHT return state_x, state_y def get_max_neighbor(state_number, values): """Finds the maximum valued neighbor for a given state. Args: state_number (int): the state to find the max neighbor for state_values (float array): the respective value of each state for each cell of the lake. Returns: max_value (float): the value of the maximum neighbor. policy (int): the action to take to move towards the maximum neighbor. """ state_x, state_y = get_state_coordinates(state_number) # No policy or best value yet best_policy = -1 max_value = -np.inf # If the cell has something other than 0, it's a terminal state. if LAKE[state_y, state_x]: return LAKE[state_y, state_x], best_policy for action in ACTION_RANGE: neighbor_value = get_neighbor_value(state_x, state_y, values, action) if neighbor_value > max_value: max_value = neighbor_value best_policy = action return max_value, best_policy
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Now, let's write our value iteration code. We'll write a function that carries out one step of the iteration by checking each state and finding its maximum neighbor. The values will be reshaped so that they're in the form of the lake, but the policy will stay as a list of ints. This way, when Gym returns a state, all we need to do is look at the corresponding index in the policy list to tell our agent where to go.
def iterate_value(current_values): """Finds the future state values for an array of current states. Args: current_values (int array): the value of current states. Returns: next_values (int array): The value of states based on future states. next_policies (int array): The recommended action to take in a state. """ next_values = [] next_policies = [] for state in STATE_RANGE: value, policy = get_max_neighbor(state, current_values) next_values.append(value) next_policies.append(policy) next_values = np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH)) return next_values, next_policies next_values, next_policies = iterate_value(current_values)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
This is what our values look like after one step. Right now, it just looks like the lake. That's because we started with an array of zeros for `current_values`, and the terminal states of the lake were loaded in.
next_values
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
And this is what our policy looks like reshaped into the form of the lake. The `-1`'s are terminal states. Right now, the agent will move left in any non-terminal state, because it sees all of those states as equal. Remember, if the gumdrop is along the leftmost side of the lake, and tries to move left, it will slip on a boulder and return to the same position.
np.array(next_policies).reshape((LAKE_HEIGHT ,LAKE_WIDTH))
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
There's one last step to apply the Bellman Equation, the `discount`! We'll multiply our next states by the `discount` and set that to our `current_values`. One loop done!
current_values = DISCOUNT * next_values current_values
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Run the below cell over and over again to see how our values change with each iteration. It should be complete after six iterations when the values no longer change. The policy will also change as the values are updated.
next_values, next_policies = iterate_value(current_values) print("Value") print(next_values) print("Policy") print(np.array(next_policies).reshape((4,4))) current_values = DISCOUNT * next_values
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
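Rather than re-running the cell by hand, the same update can be wrapped in a small convergence loop (my addition, not part of the notebook) that stops once the values settle:

```python
current_values = np.zeros_like(LAKE)  # restart the iteration from scratch
for _ in range(100):  # safety cap; it normally converges in a handful of sweeps
    next_values, next_policies = iterate_value(current_values)
    new_values = DISCOUNT * next_values
    if np.array_equal(new_values, current_values):
        break  # values no longer change, so the policy is final
    current_values = new_values

print(np.array(next_policies).reshape((LAKE_HEIGHT, LAKE_WIDTH)))
```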
Have a completed policy? Let's see it in action! We'll update our `play_game` function to instead take our list of policies. That way, we can start in a random position and still get to the end.
def play_game(policy): state = env.reset() step = 0 done = False while not done: action = policy[state] # This line is new. state, _, done, _ = env.step(action) env.render() step += 1 print_state(state, done) play_game(next_policies)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Phew! Good job, team! The gumdrop made it out alive. So what became of our gumdrop hero? Well, the next day, it was making another snowman and fell onto an even more slippery and deadly lake. Doh! Turns out this story is part of a trilogy. Feel free to move on to the next section after your own sip of cocoa, coffee, tea, or poison of choice.

Policy Iteration

You may have noticed that the first lake was built with the parameter `is_slippery=False`. This time, we're going to switch it to `True`.
env = gym.make('FrozenLake-v0', is_slippery=True) state = env.reset() env.render()
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Hmm, looks the same as before. Let's try applying our old policy and see what happens.
play_game(next_policies)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Was there a game over? There's a small chance that the gumdrop made it to the end, but it's much more likely that it accidentally slipped and fell into a hole. Oh no! We can try repeatedly testing the above code cell over and over again, but it might take a while. In fact, this is a similar roadblock Bellman and his colleagues faced.

How efficient is Value Iteration? On our modern machines, this algorithm ran fairly quickly, but back in 1960, that wasn't the case. Let's say our lake is a long straight line like this:

| | | | | | | |
|-|-|-|-|-|-|-|
|S|F|F|F|F|F|H|

This is the worst case scenario for value iteration. In each iteration, we look at every state (s) and each action per state (a), so one step of value iteration is O(s*a). In the case of our lake line, each iteration only updates one cell. In other words, the value iteration step needs to be run `s` times. In total, that's O(s²a).

Back in 1960, that was computationally heavy, and so [Ronald Howard](https://en.wikipedia.org/wiki/Ronald_A._Howard) developed an alteration of Value Iteration that mildly sacrificed mathematical accuracy for speed.

Here's the strategy: it was observed that the optimal policy often converged before value iteration was complete. To take advantage of this, we'll start with a random policy. When we iterate over our values, we'll use this policy instead of trying to find the maximum neighbor. This has been coded out in `find_future_values` below.
def find_future_values(current_values, current_policies): """Finds the next set of future values based on the current policy.""" next_values = [] for state in STATE_RANGE: current_policy = current_policies[state] state_x, state_y = get_state_coordinates(state) # If the cell has something other than 0, it's a terminal state. value = LAKE[state_y, state_x] if not value: value = get_neighbor_value( state_x, state_y, current_values, current_policy) next_values.append(value) return np.array(next_values).reshape((LAKE_HEIGHT, LAKE_WIDTH))
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
After we've calculated our new values, we'll update the policy (and not the values) based on the maximum neighbor. If there's no change in the policy, then we're done. The below is very similar to our `get_max_neighbor` function. Can you see the differences?
def find_best_policy(next_values): """Finds the best policy given a value mapping.""" next_policies = [] for state in STATE_RANGE: state_x, state_y = get_state_coordinates(state) # No policy or best value yet max_value = -np.inf best_policy = -1 if not LAKE[state_y, state_x]: for policy in ACTION_RANGE: neighbor_value = get_neighbor_value( state_x, state_y, next_values, policy) if neighbor_value > max_value: max_value = neighbor_value best_policy = policy next_policies.append(best_policy) return next_policies
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
To complete the Policy Iteration algorithm, we'll combine the two functions above. Conceptually, we'll be alternating between updating our value function and updating our policy function.
def iterate_policy(current_values, current_policies): """Finds the future state values for an array of current states. Args: current_values (int array): the value of current states. current_policies (int array): a list where each cell is the recommended action for the state matching its index. Returns: next_values (int array): The value of states based on future states. next_policies (int array): The recommended action to take in a state. """ next_values = find_future_values(current_values, current_policies) next_policies = find_best_policy(next_values) return next_values, next_policies
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Next, let's modify the `get_neighbor_value` function to now include the slippery ice. Remember the `P` in the Bellman Equation above? It stands for the probability of ending up in a new state given the current state and action taken. That is, we'll take a weighted sum of the values of all possible states based on our chances to be in those states.

How does the physics of the slippery ice work? For this lake, whenever the gumdrop tries to move in a particular direction, there are three possible positions that it could end up in. It could move where it was intending to go, but it could also end up to the left or right of the direction it was facing. For instance, if it wanted to move right, it could end up on the square above or below it! This is depicted below, with the yellow squares being potential positions after attempting to move right.

Each of these has an equal chance of happening. So since there are three outcomes, they each have about a 33% chance to happen. What happens if we slip in the direction of a boulder? No problem, we'll just end up not moving anywhere. Let's make a function to find what our possible locations could be given a policy and state coordinates.
def get_locations(state_x, state_y, policy): left = [state_y, state_x-1] down = [state_y+1, state_x] right = [state_y, state_x+1] up = [state_y-1, state_x] directions = [left, down, right, up] num_actions = len(directions) gumdrop_right = (policy - 1) % num_actions gumdrop_left = (policy + 1) % num_actions locations = [gumdrop_left, policy, gumdrop_right] return [directions[location] for location in locations]
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
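Put as a formula (my notation, not from the notebook): under the slippery physics, the value the agent attributes to taking action $a$ in state $s$ is an equally weighted average over the three squares it might land on,

$$\text{value}(s, a) = \frac{1}{3}\sum_{s' \in \text{landings}(s,\,a)} V(s'),$$

where $\text{landings}(s, a)$ is the intended square plus the two squares perpendicular to it, with any square blocked by a boulder replaced by the current square.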
Then, we can add it to `get_neighbor_value` to find the weighted value of all the possible states the gumdrop can end up in.
def get_neighbor_value(state_x, state_y, values, policy): """Returns the value of a state's neighbor. Args: state_x (int): The state's horizontal position, 0 is the lake's left. state_y (int): The state's vertical position, 0 is the lake's top. values (float array): The current iteration's state values. policy (int): Which action to check the value for. Returns: The corresponding action's value. """ locations = get_locations(state_x, state_y, policy) location_chance = 1.0 / len(locations) total_value = 0 for location in locations: check_x = location[1] check_y = location[0] is_boulder = check_y < 0 or check_y >= LAKE_HEIGHT \ or check_x < 0 or check_x >= LAKE_WIDTH value = values[state_y, state_x] if not is_boulder: value = values[check_y, check_x] total_value += location_chance * value return total_value
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
For Policy Iteration, we'll start off with a random policy if only because the Gumdrop doesn't know any better yet. We'll reset our current values while we're at it.
current_values = np.zeros_like(LAKE) policies = np.random.choice(ACTION_RANGE, size=STATE_SPACE) np.array(policies).reshape((4,4))
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
As before with Value Iteration, run the cell below multiple times until the policy no longer changes. It should only take 2-3 clicks compared to Value Iteration's 6.
next_values, policies = iterate_policy(current_values, policies) print("Value") print(next_values) print("Policy") print(np.array(policies).reshape((4,4))) current_values = DISCOUNT * next_values
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
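As with Value Iteration, a short driver loop (my addition, not a notebook cell) can repeat this update automatically and stop once the policy no longer changes:

```python
current_values = np.zeros_like(LAKE)
policies = np.random.choice(ACTION_RANGE, size=STATE_SPACE)
for _ in range(100):  # safety cap; it normally stabilizes in a few sweeps
    next_values, next_policies = iterate_policy(current_values, policies)
    current_values = DISCOUNT * next_values
    if np.array_equal(next_policies, policies):
        break  # the policy is stable, so we are done
    policies = next_policies

print(np.array(policies).reshape((LAKE_HEIGHT, LAKE_WIDTH)))
```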
Hmm, does this work? Let's see! Run the cell below to watch the gumdrop slip its way to victory.
play_game(policies)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
So what was the learned strategy here? The gumdrop learned to hug the left wall of boulders until it was down far enough to make a break for the exit. Instead of heading directly for it though, it took advantage of actions that did not have a hole of death in them. Patience is a virtue!

We promised this story was a trilogy, and yes, the next day, the gumdrop fell upon a frozen lake yet again.

Q Learning

Value Iteration and Policy Iteration are great techniques, but what if we don't know how big the lake is? With real world problems, not knowing how many potential states there are is a definite possibility.

Enter [Chris Watkins](http://www.cs.rhul.ac.uk/~chrisw/). Inspired by how animals learn with delayed rewards, he came up with the idea of [Q Learning](http://www.cs.rhul.ac.uk/~chrisw/new_thesis.pdf) as an evolution of [Richard Sutton's](https://en.wikipedia.org/wiki/Richard_S._Sutton) [Temporal Difference Learning](https://en.wikipedia.org/wiki/Temporal_difference_learning). Watkins noticed that animals learn from positive and negative rewards, and that they often make mistakes in order to optimize a skill.

From this emerged the idea of a Q table. In the lake case, it would look something like this.

| |Left|Down|Right|Up|
|-|-|-|-|-|
|0| | | | |
|1| | | | |
|...| | | | |

Here's the strategy: our agent will explore the environment. As the agent observes new states, we'll add more rows to our table. Whenever it moves from one state to the next, we'll update the cell corresponding to the old state based on the Bellman Equation. The agent doesn't need to know what the transition probabilities are. It'll learn the value of these as it experiments.

For Q learning, this works by looking at the row that corresponds to the agent's current state. Then, we'll select the action with the highest value. There are multiple ways to initialize the Q-table, but for us, we'll start with all zeros. In that case, when selecting the best action, we'll randomly select between tied max values. If we don't, the agent will favor certain actions which will limit its exploration.

To be able to handle an unknown number of states, we'll initialize our q_table as one row to represent our initial state. Then, we'll make a dictionary to map new states to rows in the table.
new_row = np.zeros((1, env.action_space.n)) q_table = np.copy(new_row) q_map = {0: 0} def print_q(q_table, q_map): print("mapping") print(q_map) print("q_table") print(q_table) print_q(q_table, q_map)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Our new `get_action` function will help us read the `q_table` and find the best action.First, we'll give the agent the ability to act randomly as opposed to choosing the best known action. This gives it the ability to explore and find new situations. This is done with a random chance to act randomly. So random!When the Gumdrop chooses not to act randomly, it will instead act based on the best action recorded in the `q_table`. Numpy's [argwhere](https://docs.scipy.org/doc/numpy/reference/generated/numpy.argwhere.html) is used to find the indexes with the maximum value in the q-table row corresponding to our current state. Since numpy is often used with higher dimensional data, each index is returned as a list of ints. Our indexes are really one dimensional since we're just looking within a single row, so we'll use [np.squeeze](https://docs.scipy.org/doc/numpy/reference/generated/numpy.squeeze.html) to remove the extra brackets. To randomly select from the indexes, we'll use [np.random.choice](https://docs.scipy.org/doc/numpy-1.14.1/reference/generated/numpy.random.choice.html).
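As a quick illustration of how those three calls combine (using a made-up row of Q-values, not one from our actual table):

```python
import numpy as np

sample_row = np.array([0.0, 0.5, 0.5, 0.1])  # hypothetical Q-values for one state
max_indexes = np.argwhere(sample_row == sample_row.max())  # [[1], [2]]
max_indexes = np.squeeze(max_indexes, axis=-1)             # [1 2]
print(np.random.choice(max_indexes))                       # prints 1 or 2 at random
```

Both tied actions get an equal chance of being picked, which is exactly what keeps the agent from always defaulting to the lowest index.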
def get_action(q_map, q_table, state_row, random_rate): """Find max-valued actions and randomly select from them.""" if random.random() < random_rate: return random.randint(0, ACTION_SPACE-1) action_values = q_table[state_row] max_indexes = np.argwhere(action_values == action_values.max()) max_indexes = np.squeeze(max_indexes, axis=-1) action = np.random.choice(max_indexes) return action
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Here, we'll define how the `q_table` gets updated. We'll apply the Bellman Equation as before, but since there is so much luck involved between slipping and random actions, we'll update our `q_table` as a weighted average between the `old_value` we're updating and the `future_value` based on the best action in the next state. That way, there's a little bit of memory between old and new experiences.
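To make the weighted average concrete, here's a tiny hand calculation with made-up numbers (the 0.5 next-state value and the 0.9 discount are purely illustrative, not values taken from the notebook):

```python
learning_rate = 0.1
old_value = 0.0
future_value = -0.01 + 0.9 * 0.5  # reward + discount * best next Q-value
new_value = old_value + learning_rate * (future_value - old_value)
print(new_value)  # 0.044 -- only a small step toward the new estimate
```

A learning rate of 1 would overwrite the old value entirely, while a rate near 0 would barely budge it.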
def update_q(q_table, new_state_row, reward, old_value): """Returns an updated Q-value based on the Bellman Equation.""" learning_rate = .1 # Change to be between 0 and 1. future_value = reward + DISCOUNT * np.max(q_table[new_state_row]) return old_value + learning_rate * (future_value - old_value)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
We'll update our `play_game` function to take our table and mapping, and at the end, we'll return any updates to them. Once we observe new states, we'll check our mapping and add them to the table if space isn't allocated for them already.

Finally, for every `state` - `action` - `new-state` transition, we'll update the cell in `q_table` that corresponds to the `state` and `action` with the Bellman Equation.

There's a little secret to solving this lake problem, and that's to have a small negative reward when moving between states. Otherwise, the gumdrop will become too afraid of slipping into a death hole to explore beyond the positions it already thinks are safe.
def play_game(q_table, q_map, random_rate, render=False): state = env.reset() step = 0 done = False while not done: state_row = q_map[state] action = get_action(q_map, q_table, state_row, random_rate) new_state, _, done, _ = env.step(action) #Add new state to table and mapping if it isn't there already. if new_state not in q_map: q_map[new_state] = len(q_table) q_table = np.append(q_table, new_row, axis=0) new_state_row = q_map[new_state] reward = -.01 #Encourage exploration. if done: reward = 1 if new_state == 15 else -1 current_q = q_table[state_row, action] q_table[state_row, action] = update_q( q_table, new_state_row, reward, current_q) step += 1 if render: env.render() print_state(new_state, done) state = new_state return q_table, q_map
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Ok, time to shine, gumdrop emoji! Let's do one simulation and see what happens.
# Run to refresh the q_table. random_rate = 1 q_table = np.copy(new_row) q_map = {0: 0} q_table, q_map = play_game(q_table, q_map, random_rate, render=True) print_q(q_table, q_map)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Unless the gumdrop was incredibly lucky, chances are it fell into some death water. Q-learning is markedly different from Value Iteration or Policy Iteration in that it attempts to simulate how an animal learns in unknown situations. Since the layout of the lake is unknown to the Gumdrop, it doesn't know which states are death holes and which ones are safe. Because of this, it's going to make many mistakes before it starts to succeed.

Feel free to run the above cell multiple times to see how the gumdrop steps through trial and error. When you're ready, run the cell below to have the gumdrop play 1000 times.
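One detail worth noticing in the cell below: `random_rate` shrinks by 1% after every game, so by the end of training the gumdrop is almost always exploiting what it has learned instead of exploring. A quick back-of-the-envelope check of that decay (illustration only, not part of the training loop):

```python
print(1 * 0.99 ** 1000)  # ~4.3e-05 -- essentially no random actions left
```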
for _ in range(1000): q_table, q_map = play_game(q_table, q_map, random_rate) random_rate = random_rate * .99 print_q(q_table, q_map) random_rate
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Cats have nine lives, our Gumdrop lived a thousand! Moment of truth. Can it get out of the lake now that it matters?
q_table, q_map = play_game(q_table, q_map, 0, render=True)
_____no_output_____
Apache-2.0
quests/rl/early_rl/early_rl.ipynb
mohakala/training-data-analyst
Web interface `s4_design_sim_tool`

> Interactive web-based user interface for the CMB-S4 reference simulation tool

See the [Documentation](https://cmb-s4.github.io/s4_design_sim_tool/).

If your browser doesn't display the widget input boxes, try reloading the page and **disabling your adblocker**.

For support requests, [open an issue on the `s4_design_sim_tool` repository](https://github.com/CMB-S4/s4_design_sim_tool/issues).
# default_exp ui import ipywidgets as widgets from IPython.display import display w = {} for emission in ["foreground_emission", "CMB_unlensed", "CMB_lensing_signal"]: w[emission] = widgets.BoundedFloatText( value=1, min=0, max=1, step=0.01, description='Weight:', disabled=False ) emission = "CMB_tensor_to_scalar_ratio" w[emission] = widgets.BoundedFloatText( value=3e-3, min=0, max=1, step=1e-5, description=f'r:', disabled=False )
_____no_output_____
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
Sky emission weighting

Each sky emission has a weighting factor between 0 and 1.

Foreground emission

Synchrotron, Dust, Free-free, AME

Websky CIB, tSZ, kSZ
display(w["foreground_emission"])
_____no_output_____
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
Unlensed CMB

Planck cosmological parameters, no tensor modes
display(w["CMB_unlensed"])
_____no_output_____
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
CMB lensing signal

CMB lensed - CMB unlensed:

* 1 for lensed CMB
* 0 for unlensed CMB
* `>0, <1` for residual after de-lensing

For the case of partial de-lensing, keep in mind that lensing is a non-linear process, so this is a very rough approximation; still, it could be useful in some cases, for example low-ell BB modes.
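As a rough sketch of what this weight implies (this is an assumption based on the description above, not the tool's actual implementation, and the map names and numbers are made up):

```python
import numpy as np

cmb_unlensed = np.array([10.0, -5.0, 2.0])  # stand-in for an unlensed CMB map
cmb_lensed = np.array([10.4, -5.2, 1.9])    # stand-in for its lensed counterpart
lensing_weight = 0.3                        # e.g. 30% of the lensing signal left after de-lensing
cmb_total = cmb_unlensed + lensing_weight * (cmb_lensed - cmb_unlensed)
print(cmb_total)  # [10.12 -5.06  1.97]
```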
display(w["CMB_lensing_signal"])
_____no_output_____
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool
CMB tensor to scalar ratio

Value of the `r` cosmological parameter
display(w["CMB_tensor_to_scalar_ratio"])
_____no_output_____
Apache-2.0
06_ui.ipynb
CMB-S4/s4_design_sim_tool