Dataset columns: markdown · code · output · license · path · repo_name
4. Visualize the results

Let's take a look:
for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print()
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))

m = max((n_classes + 1) // 2, 2)
n = 2

fig, cells = plt.subplots(m, n, figsize=(n*8, m*8))
for i in range(m):
    for j in range(n):
        if n*i+j+1 > n_classes: break
        cells[i, j].plot(recalls[n*i+j+1], precisions[n*i+j+1], color='blue', linewidth=1.0)
        cells[i, j].set_xlabel('recall', fontsize=14)
        cells[i, j].set_ylabel('precision', fontsize=14)
        cells[i, j].grid(True)
        cells[i, j].set_xticks(np.linspace(0, 1, 11))
        cells[i, j].set_yticks(np.linspace(0, 1, 11))
        cells[i, j].set_title("{}, AP: {:.3f}".format(classes[n*i+j+1], average_precisions[n*i+j+1]), fontsize=16)
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
5. Advanced use

`Evaluator` objects maintain copies of all relevant intermediate results (predictions, precisions, recalls, etc.), so if you want to experiment with different parameters, e.g. different IoU overlap thresholds, there is no need to recompute the predictions every time you change a parameter. Instead, you only need to update the computation from the point that is affected onwards.

The evaluator's `__call__()` method is just a convenience wrapper that executes its other methods in the correct order. You could call any of these other methods individually, as shown below (but you have to make sure to call them in the correct order).

Note that the example below uses the same evaluator object as above. Say you wanted to compute the Pascal VOC post-2010 'integrate' version of the average precisions instead of the pre-2010 version computed above. The evaluator object still has an internal copy of all the predictions, and since computing the predictions makes up the vast majority of the overall computation time and the predictions aren't affected by changing the average precision computation mode, we skip computing the predictions again and only compute the steps that come after the prediction phase of the evaluation. We could even skip the matching part, since it isn't affected by changing the average precision mode either. In fact, we would only have to call `compute_average_precisions()` and `compute_mean_average_precision()` again, but for the sake of illustration we'll re-do the other computations, too.
evaluator.get_num_gt_per_class(ignore_neutral_boxes=True,
                               verbose=False,
                               ret=False)

evaluator.match_predictions(ignore_neutral_boxes=True,
                            matching_iou_threshold=0.5,
                            border_pixels='include',
                            sorting_algorithm='quicksort',
                            verbose=True,
                            ret=False)

precisions, recalls = evaluator.compute_precision_recall(verbose=True, ret=True)

average_precisions = evaluator.compute_average_precisions(mode='integrate',
                                                          num_recall_points=11,
                                                          verbose=True,
                                                          ret=True)

mean_average_precision = evaluator.compute_mean_average_precision(ret=True)

for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print()
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))
aeroplane     AP    0.822
bicycle       AP    0.874
bird          AP    0.787
boat          AP    0.713
bottle        AP    0.505
bus           AP    0.899
car           AP    0.89
cat           AP    0.923
chair         AP    0.61
cow           AP    0.845
diningtable   AP    0.79
dog           AP    0.899
horse         AP    0.903
motorbike     AP    0.875
person        AP    0.825
pottedplant   AP    0.526
sheep         AP    0.811
sofa          AP    0.83
train         AP    0.906
tvmonitor     AP    0.797

              mAP   0.802
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
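To make the distinction between the two average-precision modes above concrete, here is a small self-contained toy sketch, not tied to the ssd_keras evaluator. It contrasts the pre-2010 11-point interpolation with a simplified 'integrate' computation (the real evaluator also interpolates the precision envelope and uses the actual detections); all values and function names here are illustrative.

```python
import numpy as np

# toy precision-recall points (recall strictly increasing)
recall = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precision = np.array([1.0, 0.9, 0.75, 0.6, 0.45, 0.3])

def ap_11_point(recall, precision):
    # pre-2010 Pascal VOC: average the max precision at recall >= r
    # over 11 equally spaced recall points
    ap = 0.0
    for r in np.linspace(0, 1, 11):
        mask = recall >= r
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 11

def ap_integrate(recall, precision):
    # post-2010 'integrate': area under the monotone precision envelope
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    return np.trapz(envelope, recall)

print(ap_11_point(recall, precision), ap_integrate(recall, precision))
```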
Roboschool simulations of physical robotics with Amazon SageMaker

---

Introduction

Roboschool is an [open source](https://github.com/openai/roboschool/tree/master/roboschool) physics simulator that is commonly used to train RL policies for simulated robotic systems. Roboschool provides 3D visualization of physical systems with multiple joints in contact with each other and their environment.

This notebook will show how to install Roboschool into the SageMaker RL container, and train pre-built robotics applications that are included with Roboschool.

Pick which Roboschool problem to solve

Roboschool defines a [variety](https://github.com/openai/roboschool/blob/master/roboschool/__init__.py) of Gym environments that correspond to different robotics problems. Here we're highlighting a few of them at varying levels of difficulty:

- **Reacher (easy)** - a very simple robot with just 2 joints reaches for a target
- **Hopper (medium)** - a simple robot with one leg and a foot learns to hop down a track
- **Humanoid (difficult)** - a complex 3D robot with two arms, two legs, etc. learns to balance without falling over and then to run on a track

The simpler problems train faster with less computational resources. The more complex problems are more fun.
# Uncomment the problem to work on
roboschool_problem = "reacher"
# roboschool_problem = 'hopper'
# roboschool_problem = 'humanoid'
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Pre-requisites

Imports

To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations.
import sagemaker import boto3 import sys import os import glob import re import subprocess import numpy as np from IPython.display import HTML import time from time import gmtime, strftime sys.path.append("common") from misc import get_execution_role, wait_for_s3_object from docker_utils import build_and_push_docker_image from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Setup S3 bucket

Set up the linkage and authentication to the S3 bucket that you want to use for checkpoints and metadata.
sage_session = sagemaker.session.Session() s3_bucket = sage_session.default_bucket() s3_output_path = "s3://{}/".format(s3_bucket) print("S3 bucket path: {}".format(s3_output_path))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Define Variables

We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).*
# create a descriptive job name
job_name_prefix = "rl-roboschool-" + roboschool_problem
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Configure where training happens

You can run this notebook from a SageMaker notebook instance or from a local notebook instance. In both of these scenarios, you can run the training in either local or SageMaker mode. Local mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`.
# run in local_mode on this machine, or as a SageMaker TrainingJob?
local_mode = False

if local_mode:
    instance_type = "local"
else:
    # If on SageMaker, pick the instance type
    instance_type = "ml.c5.2xlarge"
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Create an IAM role

Either get the execution role when running from a SageMaker notebook instance with `role = sagemaker.get_execution_role()` or, when running from a local notebook instance, use the utils method `role = get_execution_role()` to create an execution role.
try:
    role = sagemaker.get_execution_role()
except:
    role = get_execution_role()

print("Using IAM role arn: {}".format(role))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Install docker for `local` mode

In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker and docker-compose (for local CPU machines) or nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install the dependencies.

Note: you can only run a single local notebook at a time.
# only run from SageMaker notebook instance
if local_mode:
    !/bin/bash ./common/setup.sh
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Build docker container

We must build a custom docker container with Roboschool installed. This takes care of everything:

1. Fetching the base container image
2. Installing Roboschool and its dependencies
3. Uploading the new container image to ECR

This step can take a long time if you are running on a machine with a slow internet connection. If your notebook instance is in SageMaker or EC2 it should take 3-10 minutes depending on the instance type.
%%time

cpu_or_gpu = "gpu" if instance_type.startswith("ml.p") else "cpu"
repository_short_name = "sagemaker-roboschool-ray-%s" % cpu_or_gpu
docker_build_args = {
    "CPU_OR_GPU": cpu_or_gpu,
    "AWS_REGION": boto3.Session().region_name,
}
custom_image_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args)
print("Using ECR image %s" % custom_image_name)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Write the Training Code

The training code is written in the file `train-<roboschool_problem>.py`, which is located in the `/src` directory. It first imports the environment files and the preset files, and then defines the `main()` function.
!pygmentize src/train-{roboschool_problem}.py
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Train the RL model using the Python SDK Script mode

If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The RLEstimator is used for training RL jobs.

1. Specify the source directory where the environment, presets and training code is uploaded.
2. Specify the entry point as the training code.
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
4. Define the training parameters such as the instance count, job name, and S3 path for output.
5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use.
6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
%%time

metric_definitions = RLEstimator.default_metric_definitions(RLToolkit.RAY)

estimator = RLEstimator(
    entry_point="train-%s.py" % roboschool_problem,
    source_dir="src",
    dependencies=["common/sagemaker_rl"],
    image_uri=custom_image_name,
    role=role,
    instance_type=instance_type,
    instance_count=1,
    output_path=s3_output_path,
    base_job_name=job_name_prefix,
    metric_definitions=metric_definitions,
    hyperparameters={
        # Attention scientists!  You can override any Ray algorithm parameter here:
        # "rl.training.config.horizon": 5000,
        # "rl.training.config.num_sgd_iter": 10,
    },
)

estimator.fit(wait=local_mode)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Visualization

RL training can take a long time, so while the training job is running there are a variety of ways we can track its progress. Some intermediate output gets saved to S3 during training, so we'll set up to capture that.
print("Job name: {}".format(job_name)) s3_url = "s3://{}/{}".format(s3_bucket, job_name) intermediate_folder_key = "{}/output/intermediate/".format(job_name) intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key) print("S3 job path: {}".format(s3_url)) print("Intermediate folder path: {}".format(intermediate_url)) tmp_dir = "/tmp/{}".format(job_name) os.system("mkdir {}".format(tmp_dir)) print("Create local folder {}".format(tmp_dir))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Fetch videos of training rollouts

Videos of certain rollouts get written to S3 during training. Here we fetch the last 10 videos from S3, and render the last one.
recent_videos = wait_for_s3_object(
    s3_bucket,
    intermediate_folder_key,
    tmp_dir,
    fetch_only=(lambda obj: obj.key.endswith(".mp4") and obj.size > 0),
    limit=10,
    training_job_name=job_name,
)

last_video = sorted(recent_videos)[-1]  # Pick which video to watch
os.system("mkdir -p ./src/tmp_render/ && cp {} ./src/tmp_render/last_video.mp4".format(last_video))
HTML('<video src="./src/tmp_render/last_video.mp4" controls autoplay></video>')
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Plot metrics for training job

We can see the reward metric of the training as it's running, using algorithm metrics that are recorded in CloudWatch metrics. We can plot this to see the performance of the model over time.
%matplotlib inline

from sagemaker.analytics import TrainingJobAnalytics

if not local_mode:
    df = TrainingJobAnalytics(job_name, ["episode_reward_mean"]).dataframe()
    num_metrics = len(df)
    if num_metrics == 0:
        print("No algorithm metrics found in CloudWatch")
    else:
        plt = df.plot(x="timestamp", y="value", figsize=(12, 5), legend=True, style="b-")
        plt.set_ylabel("Mean reward per episode")
        plt.set_xlabel("Training time (s)")
else:
    print("Can't plot metrics in local mode.")
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Monitor training progress

You can repeatedly run the visualization cells to get the latest videos or see the latest metrics as the training job proceeds.

Evaluation of RL models

We use the last checkpointed model to run evaluation for the RL Agent.

Load checkpointed model

Checkpointed data from the previously trained models will be passed on for evaluation / inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in the SageMaker mode, it needs to be moved to S3 first.
if local_mode:
    model_tar_key = "{}/model.tar.gz".format(job_name)
else:
    model_tar_key = "{}/output/model.tar.gz".format(job_name)

local_checkpoint_dir = "{}/model".format(tmp_dir)

wait_for_s3_object(s3_bucket, model_tar_key, tmp_dir, training_job_name=job_name)

if not os.path.isfile("{}/model.tar.gz".format(tmp_dir)):
    raise FileNotFoundError("File model.tar.gz not found")

os.system("mkdir -p {}".format(local_checkpoint_dir))
os.system("tar -xvzf {}/model.tar.gz -C {}".format(tmp_dir, local_checkpoint_dir))

print("Checkpoint directory {}".format(local_checkpoint_dir))

if local_mode:
    checkpoint_path = "file://{}".format(local_checkpoint_dir)
    print("Local checkpoint file path: {}".format(local_checkpoint_dir))
else:
    checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
    if not os.listdir(local_checkpoint_dir):
        raise FileNotFoundError("Checkpoint files not found under the path")
    os.system("aws s3 cp --recursive {} {}".format(local_checkpoint_dir, checkpoint_path))
    print("S3 checkpoint file path: {}".format(checkpoint_path))

%%time

estimator_eval = RLEstimator(
    entry_point="evaluate-ray.py",
    source_dir="src",
    dependencies=["common/sagemaker_rl"],
    image_uri=custom_image_name,
    role=role,
    instance_type=instance_type,
    instance_count=1,
    base_job_name=job_name_prefix + "-evaluation",
    hyperparameters={
        "evaluate_episodes": 5,
        "algorithm": "PPO",
        "env": "Roboschool%s-v1" % roboschool_problem.capitalize(),
    },
)

estimator_eval.fit({"model": checkpoint_path})
job_name = estimator_eval.latest_training_job.job_name
print("Evaluation job: %s" % job_name)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Visualize the output

Optionally, you can run the steps defined earlier to visualize the output.

Model deployment

Now let us deploy the RL policy so that we can get the optimal action, given an environment observation.
from sagemaker.tensorflow.model import TensorFlowModel

model = TensorFlowModel(model_data=estimator.model_data, framework_version="2.1.0", role=role)

predictor = model.deploy(initial_instance_count=1, instance_type=instance_type)

# Mapping of environments to observation space
observation_space_mapping = {"reacher": 9, "hopper": 15, "humanoid": 44}
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Now let us predict the actions using a dummy observation
# ray 0.8.2 requires all the following inputs
# 'prev_action', 'is_training', 'prev_reward' and 'seq_lens' are placeholders for this example
# they won't affect prediction results
input = {
    "inputs": {
        "observations": np.ones(shape=(1, observation_space_mapping[roboschool_problem])).tolist(),
        "prev_action": [0, 0],
        "is_training": False,
        "prev_reward": -1,
        "seq_lens": -1,
    }
}

result = predictor.predict(input)
result["outputs"]["actions"]
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Clean up endpoint
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
reinforcement_learning/rl_roboschool_ray/rl_roboschool_ray.ipynb
Amirosimani/amazon-sagemaker-examples
Plotting Asteroids
import matplotlib.pyplot as plt import numpy as np import pandas as pd
_____no_output_____
MIT
HW_Plotting.ipynb
UWashington-Astro300/Astro300-A21
Imports
import pandas as pd import numpy as np pd.set_option('display.max_colwidth', None) import re from wordcloud import WordCloud import contractions import matplotlib.pyplot as plt import seaborn as sns plt.style.use('ggplot') plt.rcParams['font.size'] = 15 import nltk from nltk.stem.porter import PorterStemmer from nltk import sent_tokenize, word_tokenize from nltk.corpus import stopwords STOPWORDS = set(stopwords.words('english'))
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Data Load
df_train = pd.read_csv('../Datasets/disaster_tweet/train.csv') df_train.head(20) df_train.tail(20)
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Observations

1. Mixed case
2. Contractions
3. Hashtags and mentions
4. Incorrect spellings
5. Punctuation
6. Websites and URLs

Functions
all_text = ' '.join(list(df_train['text']))

def check_texts(check_item, all_text):
    return check_item in all_text

print(check_texts('<a', all_text))
print(check_texts('<div', all_text))
print(check_texts('<p', all_text))
print(check_texts('#x', all_text))
print(check_texts(':)', all_text))
print(check_texts('<3', all_text))
print(check_texts('heard', all_text))

def remove_urls(text):
    ''' This method takes in text to remove urls and website links, if any'''
    url_pattern = r'(www.|http[s]?://)(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
    text = re.sub(url_pattern, '', text)
    return text

def remove_html_entities(text):
    ''' This method removes html tags'''
    html_entities = r'<.*?>|&([a-z0-9]+|#[0-9]{1,6}|#x[0-9a-f]{1,6});'
    text = re.sub(html_entities, '', text)
    return text

def convert_lower_case(text):
    return text.lower()

def detect_news(text):
    if 'news' in text:
        text = text + ' news'
    return text

def remove_social_media_tags(text):
    ''' This method removes @ and # tags'''
    tag_pattern = r'@([a-z0-9]+)|#'
    text = re.sub(tag_pattern, '', text)
    return text

# Count it before I remove them altogether
def count_punctuations(text):
    getpunctuation = re.findall('[.?"\'`\,\-\!:;\(\)\[\]\\/“”]+?', text)
    return len(getpunctuation)

def preprocess_text(x):
    cleaned_text = re.sub(r'[^a-zA-Z\d\s]+', '', x)
    word_list = []
    for each_word in cleaned_text.split(' '):
        word_list.append(contractions.fix(each_word).lower())
    word_list = [porter_stemmer.stem(each_word.replace('\n', '').strip()) for each_word in word_list]
    word_list = set(word_list) - set(STOPWORDS)
    return " ".join(word_list)

porter_stemmer = PorterStemmer()

df_train['text'] = df_train['text'].apply(remove_urls)
df_train['text'] = df_train['text'].apply(remove_html_entities)
df_train['text'] = df_train['text'].apply(convert_lower_case)
df_train['text'] = df_train['text'].apply(detect_news)
df_train['text'] = df_train['text'].apply(remove_social_media_tags)
df_train['punctuation_count'] = df_train['text'].apply(count_punctuations)
df_train['text'] = df_train['text'].apply(preprocess_text)
df_train['text_tokenized'] = df_train['text'].apply(word_tokenize)
df_train['words_per_tweet'] = df_train['text_tokenized'].apply(len)

df_train
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Tweet Length Analysis
sns.histplot(x='words_per_tweet', hue='target', data=df_train, kde=True) plt.show()
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Punctuation Analysis
sns.countplot(x='target', hue='punctuation_count', data=df_train) plt.legend([]) plt.show()
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
Tweet Text Analysis using WordCloud
real_disaster_tweets = ' '.join(list(df_train[df_train['target'] == 1]['text'])) real_disaster_tweets non_real_disaster_tweets = ' '. join(list(df_train[df_train['target'] == 0]['text'])) wc = WordCloud(background_color="black", max_words=100, width=1000, height=600, random_state=1).generate(real_disaster_tweets) plt.figure(figsize=(15,15)) plt.imshow(wc) plt.axis("off") plt.title("Wordcloud of Tweets about Real Disasters") plt.show() wc = WordCloud(background_color="black", max_words=100, width=1000, height=600, font_step=1, random_state=1).generate(non_real_disaster_tweets) plt.figure(figsize=(15,15)) plt.imshow(wc) plt.axis("off") plt.title("Wordcloud of Tweets NOT about Real Disasters") plt.show()
_____no_output_____
MIT
Week 1/3_Natural_Disaster_Tweets_EDA_Visuals.ipynb
theaslo/NLP_Workshop_WAIA_Sept_2021
For both parties, the proportion of negative tweets is slightly greater than the proportion of positive tweets. Let's create a Word Cloud to identify which words occur frequently in the tweets and try to derive their significance.
bjp_tweets = bjp_df['clean_text']
bjp_string = []
for t in bjp_tweets:
    bjp_string.append(t)
bjp_string = pd.Series(bjp_string).str.cat(sep=' ')

from wordcloud import WordCloud
wordcloud = WordCloud(width=1600, height=800, max_font_size=200).generate(bjp_string)
plt.figure(figsize=(12, 10))
plt.title('BJP Word Cloud')
# matplotlib.pyplot.title(label, fontdict=None, loc='center', pad=None, **kwargs)
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
_____no_output_____
MIT
Exploratory Data Analysis.ipynb
abhishek291994/Twitter-Sentiment-Analysis-for-Indian-Elections
Words like 'JhaSanjay', 'ArvindKejriwal', 'Delhi', and 'Govindraj' occur frequently in our corpus and are highlighted by the Word Cloud.

In the context of the BJP, Arvind Kejriwal is a staunch opponent of the BJP government. Delhi is the capital of India and also a state in which the BJP suffered heavy losses in the previous elections. Hence, winning the polls in Delhi appears to be a major topic of discussion in the BJP-related tweets.

The South Indian states are largely opposed to the BJP government. Clashes between political ideologies have in some cases turned violent, resulting in the death of a BJP supporter from the state of Tamil Nadu. This again was one of the major discussions on Twitter.

The Word Cloud also shows 'https', which indicates that the tweets are not yet cleaned properly. I will further clean the tweets before building the models.
cong_tweets = congress_df['clean_text']
cong_string = []
for t in cong_tweets:
    cong_string.append(t)
cong_string = pd.Series(cong_string).str.cat(sep=' ')

from wordcloud import WordCloud
wordcloud = WordCloud(width=1600, height=800, max_font_size=200).generate(cong_string)
plt.figure(figsize=(12, 10))
plt.title('Congress Word Cloud')
# matplotlib.pyplot.title(label, fontdict=None, loc='center', pad=None, **kwargs)
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
_____no_output_____
MIT
Exploratory Data Analysis.ipynb
abhishek291994/Twitter-Sentiment-Analysis-for-Indian-Elections
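Since the word clouds above surface 'https', one extra cleaning pass for URLs would be needed before rebuilding them. The following is only a minimal sketch of such a step, assuming the same `clean_text` column used above; the helper name `strip_urls` is an illustrative choice, not part of the original notebook.

```python
import re

def strip_urls(text: str) -> str:
    """Remove http(s) links and bare www. links from a tweet."""
    return re.sub(r'(https?://\S+|www\.\S+)', '', text).strip()

# hypothetical usage on the dataframes shown above:
# bjp_df['clean_text'] = bjp_df['clean_text'].apply(strip_urls)
# congress_df['clean_text'] = congress_df['clean_text'].apply(strip_urls)
```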
rocket functions

> ROCKET (RandOm Convolutional KErnel Transform) functions for univariate and multivariate time series using GPU.
#export
from tsai.imports import *
from tsai.data.external import *

#export
from sklearn.linear_model import RidgeClassifierCV
from numba import njit, prange

#export
# Angus Dempster, Francois Petitjean, Geoff Webb
# Dempster A, Petitjean F, Webb GI (2019) ROCKET: Exceptionally fast and
# accurate time series classification using random convolutional kernels.
# arXiv:1910.13051
# changes:
# - added kss parameter to generate_kernels
# - convert X to np.float64

def generate_kernels(input_length, num_kernels, kss=[7, 9, 11], pad=True, dilate=True):
    candidate_lengths = np.array((kss))
    # initialise kernel parameters
    weights = np.zeros((num_kernels, candidate_lengths.max()))  # see note
    lengths = np.zeros(num_kernels, dtype=np.int32)  # see note
    biases = np.zeros(num_kernels)
    dilations = np.zeros(num_kernels, dtype=np.int32)
    paddings = np.zeros(num_kernels, dtype=np.int32)
    # note: only the first *lengths[i]* values of *weights[i]* are used
    for i in range(num_kernels):
        length = np.random.choice(candidate_lengths)
        _weights = np.random.normal(0, 1, length)
        bias = np.random.uniform(-1, 1)
        if dilate:
            dilation = 2 ** np.random.uniform(0, np.log2((input_length - 1) // (length - 1)))
        else:
            dilation = 1
        if pad:
            padding = ((length - 1) * dilation) // 2 if np.random.randint(2) == 1 else 0
        else:
            padding = 0
        weights[i, :length] = _weights - _weights.mean()
        lengths[i], biases[i], dilations[i], paddings[i] = length, bias, dilation, padding
    return weights, lengths, biases, dilations, paddings

@njit(fastmath=True)
def apply_kernel(X, weights, length, bias, dilation, padding):
    # zero padding
    if padding > 0:
        _input_length = len(X)
        _X = np.zeros(_input_length + (2 * padding))
        _X[padding:(padding + _input_length)] = X
        X = _X
    input_length = len(X)
    output_length = input_length - ((length - 1) * dilation)
    _ppv = 0  # "proportion of positive values"
    _max = np.NINF
    for i in range(output_length):
        _sum = bias
        for j in range(length):
            _sum += weights[j] * X[i + (j * dilation)]
        if _sum > 0:
            _ppv += 1
        if _sum > _max:
            _max = _sum
    return _ppv / output_length, _max

@njit(parallel=True, fastmath=True)
def apply_kernels(X, kernels):
    X = X.astype(np.float64)
    weights, lengths, biases, dilations, paddings = kernels
    num_examples = len(X)
    num_kernels = len(weights)
    # initialise output
    _X = np.zeros((num_examples, num_kernels * 2))  # 2 features per kernel
    for i in prange(num_examples):
        for j in range(num_kernels):
            _X[i, (j * 2):((j * 2) + 2)] = \
                apply_kernel(X[i], weights[j][:lengths[j]], lengths[j], biases[j], dilations[j], paddings[j])
    return _X

#hide
X_train, y_train, X_valid, y_valid = get_UCR_data('OliveOil')
seq_len = X_train.shape[-1]
X_train = X_train[:, 0].astype(np.float64)
X_valid = X_valid[:, 0].astype(np.float64)
labels = np.unique(y_train)
transform = {}
for i, l in enumerate(labels):
    transform[l] = i
y_train = np.vectorize(transform.get)(y_train).astype(np.int32)
y_valid = np.vectorize(transform.get)(y_valid).astype(np.int32)
X_train = (X_train - X_train.mean(axis=1, keepdims=True)) / (X_train.std(axis=1, keepdims=True) + 1e-8)
X_valid = (X_valid - X_valid.mean(axis=1, keepdims=True)) / (X_valid.std(axis=1, keepdims=True) + 1e-8)

# only univariate time series of shape (samples, len)
kernels = generate_kernels(seq_len, 10000)
X_train_tfm = apply_kernels(X_train, kernels)
X_valid_tfm = apply_kernels(X_valid, kernels)
classifier = RidgeClassifierCV(alphas=np.logspace(-3, 3, 7), normalize=True)
classifier.fit(X_train_tfm, y_train)
score = classifier.score(X_valid_tfm, y_valid)
test_eq(ge(score, .9), True)

#export
class ROCKET(nn.Module):
    def __init__(self, c_in, seq_len, n_kernels=10000, kss=[7, 9, 11]):
        '''
        ROCKET is a GPU Pytorch implementation of the ROCKET methods generate_kernels
        and apply_kernels that can be used with univariate and multivariate time series.
        Input: is a 3d torch tensor of type torch.float32. When used with univariate TS,
        make sure you transform the 2d to 3d by adding unsqueeze(1).
        c_in: number of channels or features. For univariate c_in is 1.
        seq_len: sequence length
        '''
        super().__init__()
        kss = [ks for ks in kss if ks < seq_len]
        convs = nn.ModuleList()
        for i in range(n_kernels):
            ks = np.random.choice(kss)
            dilation = 2**np.random.uniform(0, np.log2((seq_len - 1) // (ks - 1)))
            padding = int((ks - 1) * dilation // 2) if np.random.randint(2) == 1 else 0
            weight = torch.randn(1, c_in, ks)
            weight -= weight.mean()
            bias = 2 * (torch.rand(1) - .5)
            layer = nn.Conv1d(c_in, 1, ks, padding=2 * padding, dilation=int(dilation), bias=True)
            layer.weight = torch.nn.Parameter(weight, requires_grad=False)
            layer.bias = torch.nn.Parameter(bias, requires_grad=False)
            convs.append(layer)
        self.convs = convs
        self.n_kernels = n_kernels
        self.kss = kss

    def forward(self, x):
        for i in range(self.n_kernels):
            out = self.convs[i](x)
            _max = out.max(dim=-1).values
            _ppv = torch.gt(out, 0).sum(dim=-1).float() / out.shape[-1]
            cat = torch.cat((_max, _ppv), dim=-1)
            output = cat if i == 0 else torch.cat((output, cat), dim=-1)
        return output

#hide
out = create_scripts()
beep(out)
_____no_output_____
Apache-2.0
nbs/010_rocket_functions.ipynb
williamsdoug/timeseriesAI
Dictionaries

We've been learning about *sequences* in Python but now we're going to switch gears and learn about *mappings* in Python. If you're familiar with other languages you can think of these Dictionaries as hash tables.

This section will serve as a brief introduction to dictionaries and consist of:

1.) Constructing a Dictionary
2.) Accessing objects from a dictionary
3.) Nesting Dictionaries
4.) Basic Dictionary Methods

So what are mappings? Mappings are a collection of objects that are stored by a *key*, unlike a sequence that stores objects by their relative position. This is an important distinction, since mappings won't retain order because they have objects defined by a key.

A Python dictionary consists of a key and then an associated value. That value can be almost any Python object.

Constructing a Dictionary

Let's see how we can construct dictionaries to get a better understanding of how they work!
# Make a dictionary with {} and : to signify a key and a value
my_dict = {'key1':'value1','key2':'value2'}

# Call values by their key
my_dict['key2']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
It's important to note that dictionaries are very flexible in the data types they can hold. For example:
my_dict = {'key1':123,'key2':[12,23,33],'key3':['item0','item1','item2']}

# Let's call items from the dictionary
my_dict['key3']

# Can call an index on that value
my_dict['key3'][0]

# Can then even call methods on that value
my_dict['key3'][0].upper()
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
We can affect the values of a key as well. For instance:
my_dict['key1']

# Subtract 123 from the value
my_dict['key1'] = my_dict['key1'] - 123

# Check
my_dict['key1']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
A quick note, Python has a built-in method of doing a self subtraction or addition (or multiplication or division). We could have also used += or -= for the above statement. For example:
# Set the object equal to itself minus 123
my_dict['key1'] -= 123
my_dict['key1']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
We can also create keys by assignment. For instance if we started off with an empty dictionary, we could continually add to it:
# Create a new dictionary
d = {}

# Create a new key through assignment
d['animal'] = 'Dog'

# Can do this with any object
d['answer'] = 42

# Show
d
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
Nesting with Dictionaries

Hopefully you're starting to see how powerful Python is with its flexibility of nesting objects and calling methods on them. Let's see a dictionary nested inside a dictionary:
# Dictionary nested inside a dictionary nested inside a dictionary
d = {'key1':{'nestkey':{'subnestkey':'value'}}}
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
Wow! That's quite the inception of dictionaries! Let's see how we can grab that value:
# Keep calling the keys
d['key1']['nestkey']['subnestkey']
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
A few Dictionary Methods

There are a few methods we can call on a dictionary. Let's get a quick introduction to a few of them:
# Create a typical dictionary
d = {'key1':1,'key2':2,'key3':3}

# Method to return a list of all keys
d.keys()

# Method to grab all values
d.values()

# Method to return tuples of all items (we'll learn about tuples soon)
d.items()
_____no_output_____
MIT
Languages/Python/00 - Python Object and Data Structure/05-Dictionaries.ipynb
Tikam02/ReBuild
Measures of Income Mobility

**Author: Wei Kang, Serge Rey**

Income mobility could be viewed as a reranking phenomenon where regions switch income positions, while it could also be considered to be happening as long as regions move away from their previous income levels. The former is named relative mobility and the latter absolute mobility.

This notebook introduces how to estimate income mobility measures from longitudinal income data using methods in **giddy**. Currently, five summary mobility estimators are implemented in **giddy.mobility**. All of them are Markov-based, meaning that they are closely related to the discrete Markov Chains methods introduced in [Markov Based Methods notebook](Markov Based Methods.ipynb). More specifically, each of them is derived from a transition probability matrix $P$. Whether the final estimate is absolute or relative mobility depends on how the original continuous income data are discretized.

The five Markov-based summary measures of mobility (Formby et al., 2004) are listed below:

| Num | Measure | Symbol |
| --- | :-----: | :----: |
| 1 | $M_P(P) = \frac{m-\sum_{i=1}^m p_{ii}}{m-1}$ | P |
| 2 | $M_D(P) = 1-\lvert\det(P)\rvert$ | D |
| 3 | $M_{L2}(P)=1-\lvert\lambda_2\rvert$ | L2 |
| 4 | $M_{B1}(P) = \frac{m-m \sum_{i=1}^m \pi_i P_{ii}}{m-1}$ | B1 |
| 5 | $M_{B2}(P)=\frac{1}{m-1} \sum_{i=1}^m \sum_{j=1}^m \pi_i P_{ij} \lvert i-j\rvert$ | B2 |

$\pi$ is the initial income distribution. For any transition probability matrix with a quasi-maximal diagonal, all of these mobility measures take values on $[0,1]$. $0$ means immobility and $1$ perfect mobility. If the transition probability matrix takes the form of the identity matrix, every region is stuck in its current state, implying complete immobility. On the contrary, when each row of $P$ is identical, the current state is irrelevant to the probability of moving away to any class. Thus, the transition matrix with identical rows is considered perfectly mobile. The larger the mobility estimate, the more mobile the regional income system is. However, it should be noted that these measures try to reveal mobility patterns from different aspects and are thus not comparable to each other. Actually the mean and variance of these measures are different.

We implemented all of the above five summary mobility measures in a single method `markov_mobility`. A parameter `measure` can be specified to select which measure to calculate. By default, the mobility measure 'P' will be estimated.

```python
def markov_mobility(p, measure="P", ini=None)
```
from giddy import markov, mobility
mobility.markov_mobility?
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
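To make the first three formulas above concrete, here is a minimal numpy sketch that computes them directly from a toy transition matrix. It assumes `P` is a square, row-stochastic array; the matrix values are made up for illustration, and `mobility.markov_mobility` remains the canonical implementation.

```python
import numpy as np

P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])  # toy 3x3 row-stochastic matrix
m = P.shape[0]

M_P = (m - np.trace(P)) / (m - 1)        # Shorrocks' trace measure
M_D = 1 - np.abs(np.linalg.det(P))       # determinant measure

eigvals = np.linalg.eigvals(P)
lambda2 = np.sort(np.abs(eigvals))[-2]   # second largest eigenvalue modulus
M_L2 = 1 - lambda2

print(M_P, M_D, M_L2)
```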
US income mobility example

Similar to the [Markov Based Methods notebook](Markov Based Methods.ipynb), we will demonstrate the usage of the mobility methods by an application to data on per capita incomes observed annually from 1929 to 2009 for the lower 48 US states.
import libpysal
import numpy as np
import mapclassify as mc

income_path = libpysal.examples.get_path("usjoin.csv")
f = libpysal.io.open(income_path)
pci = np.array([f.by_col[str(y)] for y in range(1929, 2010)])  # each column represents a state's income time series 1929-2010
q5 = np.array([mc.Quantiles(y).yb for y in pci]).transpose()   # each row represents a state's income time series 1929-2010
m = markov.Markov(q5)
m.p
/Users/weikang/anaconda3/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result. return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
After acquiring the estimate of the transition probability matrix, we can call the method `markov_mobility` to estimate any of the five Markov-based summary mobility indices.

1. Shorrocks' first mobility measure

$$M_{P} = \frac{m-\sum_{i=1}^m P_{ii}}{m-1}$$

```python
measure = "P"
```
mobility.markov_mobility(m.p, measure="P")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
2. Shorrocks' second mobility measure

$$M_{D} = 1 - |\det(P)|$$

```python
measure = "D"
```
mobility.markov_mobility(m.p, measure="D")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
3. Sommers and Conlisk's mobility measure

$$M_{L2} = 1 - |\lambda_2|$$

```python
measure = "L2"
```
mobility.markov_mobility(m.p, measure = "L2")
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
4. Bartholomew's first mobility measure

$$M_{B1} = \frac{m-m \sum_{i=1}^m \pi_i P_{ii}}{m-1}$$

$\pi$: the initial income distribution

```python
measure = "B1"
```
pi = np.array([0.1,0.2,0.2,0.4,0.1]) mobility.markov_mobility(m.p, measure = "B1", ini=pi)
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
5. Bartholomew's second mobility measure

$$M_{B2} = \frac{1}{m-1} \sum_{i=1}^m \sum_{j=1}^m \pi_i P_{ij} |i-j|$$

$\pi$: the initial income distribution

```python
measure = "B2"
```
pi = np.array([0.1,0.2,0.2,0.4,0.1]) mobility.markov_mobility(m.p, measure = "B2", ini=pi)
_____no_output_____
BSD-3-Clause
notebooks/Mobility measures.ipynb
mikiec84/giddy
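As a complement to the library calls above, the two Bartholomew measures can also be written out directly, which makes the role of the initial distribution $\pi$ and of the double sum in B2 explicit. The sketch below reuses the same toy matrix as the earlier sketch; the values of `P` and `pi` are only illustrative assumptions.

```python
import numpy as np

P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])   # same toy row-stochastic matrix as before
pi = np.array([0.3, 0.4, 0.3])    # assumed initial income distribution
m = P.shape[0]

# B1: penalizes probability mass that stays on the diagonal, weighted by pi
M_B1 = (m - m * np.sum(pi * np.diag(P))) / (m - 1)

# B2: expected class distance moved, weighted by pi
idx = np.arange(m)
M_B2 = np.sum(pi[:, None] * P * np.abs(idx[:, None] - idx[None, :])) / (m - 1)

print(M_B1, M_B2)
```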
FaceNet Keras Demo

This notebook demos the usage of the FaceNet model, and shows how to preprocess images before feeding them into the model.
%load_ext autoreload %autoreload 2 import sys sys.path.append('../') import os import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl from PIL import Image from skimage.transform import resize from skimage.util import img_as_ubyte, img_as_float from sklearn.metrics import pairwise_distances from utils import set_up_environment, prewhiten, maximum_center_crop, l2_normalize from plot.heatmap import heatmap, annotate_heatmap set_up_environment(visible_devices='1')
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
Loading the Model

First you need to download the keras weights from https://github.com/nyoki-mtl/keras-facenet and put the downloaded weights file in the parent directory.
model_path = '../facenet_keras.h5' model = tf.keras.models.load_model(model_path)
WARNING:tensorflow:No training configuration found in save file: the model was *not* compiled. Compile it manually.
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
Preprocessing the Input

This next cell preprocesses the input using Pillow and skimage, both of which can be installed using pip. We center crop the image to avoid scaling issues, then resize the image to 160 x 160, and then we standardize the images using the utils module in this repository.
images = []
images_whitened = []

image_path = '../images/'
image_files = os.listdir(image_path)
image_files = [image_file for image_file in image_files if image_file.endswith('.png')]

for image_file in image_files:
    image = np.array(Image.open(os.path.join(image_path, image_file)))
    image = image[:, :, :3]
    image = maximum_center_crop(image)
    image = np.array(Image.fromarray(image).resize(size=(160, 160)))
    image = img_as_ubyte(image)
    image_whitened = prewhiten(image.astype(np.float32))

    images.append(image)
    images_whitened.append(image_whitened)

mpl.rcParams['figure.dpi'] = 50
fig, axs = plt.subplots(1, len(images), figsize=(5 * len(images), 5))
for i in range(len(images)):
    axs[i].imshow(images[i])
    axs[i].set_title(image_files[i], fontsize=24)
    axs[i].axis('off')
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
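For readers without the repository's `utils` module at hand, the sketch below shows what the two preprocessing helpers are assumed to do here: `maximum_center_crop` takes the largest square center crop, and `prewhiten` applies per-image standardization in the usual FaceNet style. The exact implementation in the repository may differ.

```python
import numpy as np

def maximum_center_crop(image):
    """Crop the largest possible square from the center of an HxWxC image."""
    h, w = image.shape[:2]
    side = min(h, w)
    top = (h - side) // 2
    left = (w - side) // 2
    return image[top:top + side, left:left + side]

def prewhiten(image):
    """Per-image standardization: zero mean, unit (clamped) standard deviation."""
    mean = image.mean()
    std = image.std()
    std_adj = np.maximum(std, 1.0 / np.sqrt(image.size))
    return (image - mean) / std_adj
```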
Computing Embeddings

Finally, we compute the embeddings and pairwise distances of the images. We can see that the model is able to distinguish the same identity from different identities!
image_batch = tf.convert_to_tensor(np.array(images_whitened)) embedding_batch = model.predict(image_batch) normalized_embedding_batch = l2_normalize(embedding_batch) np.sqrt(np.sum(np.square(normalized_embedding_batch[0] - normalized_embedding_batch[1]))) pairwise_distance_matrix = pairwise_distances(normalized_embedding_batch) mpl.rcParams['figure.dpi'] = 150 ax, cbar = heatmap(pairwise_distance_matrix, image_files, image_files, cmap="seismic", cbarlabel="Normalized L2 Distance") texts = annotate_heatmap(ax, valfmt="{x:.2f}") fig.tight_layout()
_____no_output_____
MIT
notebooks/demos/demo_embeddings.ipynb
psturmfels/adversarial_faces
Temporal-Difference Methods

In this notebook, you will write your own implementations of many Temporal-Difference (TD) methods.

While we have provided some starter code, you are welcome to erase these hints and write your code from scratch.

---

Part 0: Explore CliffWalkingEnv

We begin by importing the necessary packages.
import sys import gym import numpy as np import random as rn from collections import defaultdict, deque import matplotlib.pyplot as plt %matplotlib inline import check_test from plot_utils import plot_values
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the code cell below to create an instance of the [CliffWalking](https://github.com/openai/gym/blob/master/gym/envs/toy_text/cliffwalking.py) environment.
env = gym.make('CliffWalking-v0')
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
The agent moves through a $4\times 12$ gridworld, with states numbered as follows:
```
[[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
 [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
 [24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35],
 [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47]]
```
At the start of any episode, state `36` is the initial state. State `47` is the only terminal state, and the cliff corresponds to states `37` through `46`.

The agent has 4 potential actions:
```
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
```
Thus, $\mathcal{S}^+=\{0, 1, \ldots, 47\}$, and $\mathcal{A} =\{0, 1, 2, 3\}$. Verify this by running the code cell below.
print(env.action_space) print(env.observation_space)
Discrete(4) Discrete(48)
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
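Before implementing the TD agents, it can help to step through the environment once by hand. The short sketch below assumes the standard CliffWalking dynamics (reward of -1 per step, -100 for stepping into the cliff, which also sends the agent back to the start); it only illustrates the `reset`/`step` interface used in the functions that follow.

```python
state = env.reset()                              # episode always starts at state 36
print("start state:", state)

next_state, reward, done, info = env.step(1)     # RIGHT from 36 steps into the cliff
print(next_state, reward, done)                  # expect reward -100 and a reset to state 36

next_state, reward, done, info = env.step(0)     # UP is an ordinary move
print(next_state, reward, done)                  # expect state 24 and reward -1
```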
In this mini-project, we will build towards finding the optimal policy for the CliffWalking environment. The optimal state-value function is visualized below. Please take the time now to make sure that you understand _why_ this is the optimal state-value function._**Note**: You can safely ignore the values of the cliff "states" as these are not true states from which the agent can make decisions. For the cliff "states", the state-value function is not well-defined._
# define the optimal state-value function
V_opt = np.zeros((4,12))
V_opt[0:13][0] = -np.arange(3, 15)[::-1]
V_opt[0:13][1] = -np.arange(3, 15)[::-1] + 1
V_opt[0:13][2] = -np.arange(3, 15)[::-1] + 2
V_opt[3][0] = -13

plot_values(V_opt)
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 1: TD Control: Sarsa

In this section, you will write your own implementation of the Sarsa control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
def greedy_eps_action(Q, state, nA, eps):
    if rn.random() > eps:
        return np.argmax(Q[state])
    else:
        return rn.choice(np.arange(nA))

def sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.95, eps_min=1e-2):
    # initialize action-value function (empty dictionary of arrays)
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(nA))
    # initialize performance monitor
    tmp_scores = deque(maxlen=num_episodes)  # track episode scores
    # loop over episodes
    eps = eps_start
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        ## TODO: complete the function
        eps = max(eps_min, eps*eps_decay)
        state = env.reset()
        score = 0
        action = greedy_eps_action(Q, state, nA, eps)
        while True:
            next_state, reward, done, info = env.step(action)
            score += reward
            if not done:
                next_action = greedy_eps_action(Q, next_state, nA, eps)
                this_V = Q[state][action]
                next_V = Q[next_state][next_action]
                Q[state][action] = this_V + alpha*(reward + gamma*next_V - this_V)
                state = next_state
                action = next_action
            if done:
                Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action])
                tmp_scores.append(score)
                break
    return Q
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsa = sarsa(env, 5000, .01)

# print the estimated optimal policy
policy_sarsa = np.array([np.argmax(Q_sarsa[key]) if key in Q_sarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_sarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsa)

# plot the estimated optimal state-value function
V_sarsa = ([np.max(Q_sarsa[key]) if key in Q_sarsa else 0 for key in np.arange(48)])
plot_values(V_sarsa)
Episode 5000/5000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 2: TD Control: Q-learning

In this section, you will write your own implementation of the Q-learning control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
def q_learning(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.95, eps_min=1e-2):
    # initialize empty dictionary of arrays
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    eps = eps_start
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        ## TODO: complete the function
        eps = max(eps_min, eps*eps_decay)
        state = env.reset()
        score = 0
        action = greedy_eps_action(Q, state, nA, eps)
        while True:
            next_state, reward, done, info = env.step(action)
            score += reward
            if not done:
                next_action = greedy_eps_action(Q, next_state, nA, eps)
                this_V = Q[state][action]
                next_V = Q[next_state][next_action]
                Q[state][action] = this_V + alpha*(reward + gamma*max(Q[next_state]) - this_V)
                state = next_state
                action = next_action
            if done:
                Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action])
                break
    return Q
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_sarsamax = q_learning(env, 5000, .01)

# print the estimated optimal policy
policy_sarsamax = np.array([np.argmax(Q_sarsamax[key]) if key in Q_sarsamax else -1 for key in np.arange(48)]).reshape((4,12))
check_test.run_check('td_control_check', policy_sarsamax)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_sarsamax)

# plot the estimated optimal state-value function
plot_values([np.max(Q_sarsamax[key]) if key in Q_sarsamax else 0 for key in np.arange(48)])
Episode 5000/5000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Part 3: TD Control: Expected Sarsa

In this section, you will write your own implementation of the Expected Sarsa control algorithm.

Your algorithm has four arguments:
- `env`: This is an instance of an OpenAI Gym environment.
- `num_episodes`: This is the number of episodes that are generated through agent-environment interaction.
- `alpha`: This is the step-size parameter for the update step.
- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).

The algorithm returns as output:
- `Q`: This is a dictionary (of one-dimensional arrays) where `Q[s][a]` is the estimated action value corresponding to state `s` and action `a`.

Please complete the function in the code cell below.

(_Feel free to define additional functions to help you to organize your code._)
def expected_sarsa(env, num_episodes, alpha, gamma=1.0, eps_start=1, eps_decay=.9, eps_min=1e-2):
    # initialize empty dictionary of arrays
    nA = env.action_space.n
    Q = defaultdict(lambda: np.zeros(env.nA))
    eps = eps_start
    # loop over episodes
    for i_episode in range(1, num_episodes+1):
        # monitor progress
        if i_episode % 100 == 0:
            print("\rEpisode {}/{}".format(i_episode, num_episodes), end="")
            sys.stdout.flush()
        ## TODO: complete the function
        eps = .001
        state = env.reset()
        score = 0
        action = greedy_eps_action(Q, state, nA, eps)
        while True:
            next_state, reward, done, info = env.step(action)
            score += reward
            if not done:
                next_action = greedy_eps_action(Q, next_state, nA, eps)
                this_V = Q[state][action]
                prob_s = np.ones(nA)*eps/nA
                prob_s[np.argmax(Q[next_state])] = 1 - eps + eps/nA
                Q[state][action] = this_V + alpha*(reward + gamma*np.dot(Q[next_state], prob_s) - this_V)
                state = next_state
                action = next_action
            if done:
                Q[state][action] = Q[state][action] + alpha*(reward - Q[state][action])
                break
    return Q
_____no_output_____
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
Use the next code cell to visualize the **_estimated_** optimal policy and the corresponding state-value function. If the code cell returns **PASSED**, then you have implemented the function correctly! Feel free to change the `num_episodes` and `alpha` parameters that are supplied to the function. However, if you'd like to ensure the accuracy of the unit test, please do not change the value of `gamma` from the default.
# obtain the estimated optimal policy and corresponding action-value function
Q_expsarsa = expected_sarsa(env, 10000, 1)

# print the estimated optimal policy
policy_expsarsa = np.array([np.argmax(Q_expsarsa[key]) if key in Q_expsarsa else -1 for key in np.arange(48)]).reshape(4,12)
check_test.run_check('td_control_check', policy_expsarsa)
print("\nEstimated Optimal Policy (UP = 0, RIGHT = 1, DOWN = 2, LEFT = 3, N/A = -1):")
print(policy_expsarsa)

# plot the estimated optimal state-value function
plot_values([np.max(Q_expsarsa[key]) if key in Q_expsarsa else 0 for key in np.arange(48)])
Episode 10000/10000
MIT
temporal-difference/Temporal_Difference.ipynb
kdliaokueida/Deep_Reinforcement_Learning
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
Heading

Heading type 2

This is called ***Markdown***

1. List item
2. List item

In Jupyter, to execute a cell we press:
- Ctrl + Enter

Or, if we want to add a new line of code and execute at the same time:
- Alt + Enter
# If I want to write a comment in a cell, we use the # symbol
# It also works on its own, with no further lines
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
30
1 + 1
print("The result of the operation is: " + suma)
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
Let's see that this returns an error, because we are trying to print a text string together with a numeric value.

The solution would be to transform our numeric value into a string.
print("El resultado de esta operacion es> + str (suma)")
_____no_output_____
Apache-2.0
w1d1_Primer_Notebook_de_Jupyter_en_GoogleColab_120521.ipynb
talitacardenas/The_Bridge_School_DataScience_PT
Assignment

In the Titanic dataset, today we focus on the correlation between variables. Using Titanic_train.csv, first drop the observations with missing values, then answer the following questions.

* Q1: Using a numerical method, determine whether Age and Survived are correlated.
* Q2: Using a numerical method, determine whether Sex and Survived are correlated.
* Q3: Using a numerical method, determine whether Age and Fare are correlated.

* Hints:
  1. Create a new variable Survived_cate and convert its data type to categorical.
  2. Use Survived_cate in place of Survived in the analysis.
  3. First check the data types of these variables, then decide which method to use to judge each pairwise correlation.
# import library import matplotlib.pyplot as plt import numpy as np import pandas as pd from scipy import stats import math import statistics import seaborn as sns from IPython.display import display import pingouin as pg import researchpy %matplotlib inline
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Load the data
df_train = pd.read_csv("Titanic_train.csv")
print(df_train.info())

## Here we make one adjustment: turn Survived into a categorical variable, Survived_cate
df_train['Survived_cate'] = df_train['Survived']
df_train['Survived_cate'] = df_train['Survived_cate'].astype('object')
print(df_train.info())
display(df_train.head(5))
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Q1: Using a numerical method, determine whether Age and Survived are correlated.
## Age is continuous and Survived_cate is categorical, so we use Eta Squared
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Correlation measures cannot be computed with missing values, so we must first either impute the missing values or drop them.
## After selecting the columns, drop the missing values
complete_data = df_train[['Age', 'Survived_cate']].dropna()
display(complete_data)

aov = pg.anova(dv='Age', between='Survived_cate', data=complete_data, detailed=True)
aov

etaSq = aov.SS[0] / (aov.SS[0] + aov.SS[1])
etaSq

def judgment_etaSq(etaSq):
    if etaSq < .01:
        qual = 'Negligible'
    elif etaSq < .06:
        qual = 'Small'
    elif etaSq < .14:
        qual = 'Medium'
    else:
        qual = 'Large'
    return(qual)

judgment_etaSq(etaSq)

g = sns.catplot(x="Survived_cate", y="Age", hue="Survived_cate", data=complete_data, kind="violin")
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Conclusion: Age and survival show no meaningful correlation (on complete_data). Consider whether this feature should go into the model, or whether its characteristics should be examined more closely and a feature transformation applied. Q2: Using numerical methods, is Sex correlated with Survived?
## Sex is categorical and Survived_cate is categorical, so we use Cramér's V
contTable = pd.crosstab(df_train['Sex'], df_train['Survived_cate'])
contTable

df = min(contTable.shape[0], contTable.shape[1]) - 1
df

crosstab, res = researchpy.crosstab(df_train['Survived_cate'], df_train['Sex'], test='chi-square')
#print(res)
print("Cramer's value is", res.loc[2,'results'])
# Here we test independence with a chi-square test, so the "test=" argument is set to chi-square.
# The module picks the statistic from the table itself: Cramér's Phi if it is a 2x2 table, or Cramér's V if it is larger than 2x2.

## Write a helper function to judge the strength of the association
def judgment_CramerV(df, V):
    if df == 1:
        if V < 0.10:
            qual = 'negligible'
        elif V < 0.30:
            qual = 'small'
        elif V < 0.50:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 2:
        if V < 0.07:
            qual = 'negligible'
        elif V < 0.21:
            qual = 'small'
        elif V < 0.35:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 3:
        if V < 0.06:
            qual = 'negligible'
        elif V < 0.17:
            qual = 'small'
        elif V < 0.29:
            qual = 'medium'
        else:
            qual = 'large'
    elif df == 4:
        if V < 0.05:
            qual = 'negligible'
        elif V < 0.15:
            qual = 'small'
        elif V < 0.25:
            qual = 'medium'
        else:
            qual = 'large'
    else:
        if V < 0.05:
            qual = 'negligible'
        elif V < 0.13:
            qual = 'small'
        elif V < 0.22:
            qual = 'medium'
        else:
            qual = 'large'
    return(qual)

judgment_CramerV(df, res.loc[2,'results'])

g = sns.countplot(x="Sex", hue="Survived_cate", data=df_train)
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Both the numerical result and the plot show that survival and sex are strongly associated; to predict survival, sex must definitely be included. Q3: Using numerical methods, is Age correlated with Fare?
## Age is continuous and Fare is continuous, so we use the Pearson correlation coefficient
## After extracting the columns, drop the rows with missing values
complete_data = df_train[['Age', 'Fare']].dropna()
display(complete_data)

# pearsonr returns two values; we only need the first one, the correlation coefficient
corr, _ = stats.pearsonr(complete_data['Age'], complete_data['Fare'])
print(corr)  # the coefficient quantifies how strongly Age and Fare are linearly related

g = sns.regplot(x="Age", y="Fare", color="g", data=complete_data)  # scatter of Age vs Fare with a fitted regression line
_____no_output_____
MIT
Solution/Day_38_Solution.ipynb
YenLinWu/DataScienceMarathon
Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score, validation_curve
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report, plot_roc_curve, plot_confusion_matrix, f1_score
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Loading the data
df = pd.read_csv('../input/heart-disease-uci/heart.csv')
df.head()
df.shape
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
As we can see, this dataset has 303 rows and 14 columns. Exploring our dataset
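Before looking at individual columns, it can help to confirm that the data types are numeric and that no values are missing. A quick check along these lines, not part of the original notebook, is sketched below.
```
# Quick sanity checks: column data types and missing-value counts
print(df.dtypes)
print(df.isna().sum())
```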
df.sex.value_counts(normalize=True)
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
This means we have more females than males in the dataset. Let's plot only the people who have the disease, broken down by sex.
df.sex[df.target==1].value_counts().plot(kind="bar")

# annotating the plot
plt.title("people who got disease by sex")
plt.xlabel("sex")
plt.ylabel("affected");
plt.xticks(rotation = 0);

df.target.value_counts(normalize=True)
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
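To back up the bar chart with numbers, a normalised cross-tabulation of sex against target can be added. This is a suggested extra step, not part of the original notebook.
```
# Proportion with the disease (target = 1) within each sex group
pd.crosstab(df.sex, df.target, normalize='index')
```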
The two classes are almost equally represented. Plotting Heart Disease by Age / Max Heart Rate
sns.scatterplot(x=df.age, y=df.thalach, hue=df.target);

# annotating the plot
plt.title("Heart Disease by Age / Max Heart Rate")
plt.xlabel("Age")
plt.legend(["Disease", "No Disease"])
plt.ylabel("Max Heart Rate");
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Correlation matrix
corr = df.corr()
f, ax = plt.subplots(figsize=(12, 10))
sns.heatmap(corr, annot=True, fmt='.2f', ax=ax);

df.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Modeling
df.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Features / Label
X = df.drop('target', axis=1)
X.head()

y = df.target
y.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Splitting our dataset with 20% held out for testing
np.random.seed(42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
y_train.head()
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Evaluation metrics Function for getting the scores (F1 and accuracy) and plotting the confusion matrix
def getScore(model, X_test, y_test):
    y_pred = model.predict(X_test)
    print('f1_score')
    print(f1_score(y_test, y_pred, average='binary'))
    print('accuracy')
    acc = accuracy_score(y_test, y_pred, normalize=True)
    print(acc)
    print('Confusion Matrix :')
    plot_confusion_matrix(model, X_test, y_test)
    plt.show()
    return acc

np.random.seed(42)
clf = MultinomialNB()
clf.fit(X_train, y_train);
clf_accuracy = getScore(clf, X_test, y_test)
f1_score 0.8524590163934426 accuracy 0.8524590163934426 Confusion Matrix :
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Classification report
print(classification_report(y_test, clf.predict(X_test)))
precision recall f1-score support 0 0.81 0.90 0.85 29 1 0.90 0.81 0.85 32 accuracy 0.85 61 macro avg 0.85 0.85 0.85 61 weighted avg 0.86 0.85 0.85 61
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
ROC curve
plot_roc_curve(clf, X_test, y_test);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Feature importance
clf.coef_

f_dict = dict(zip(X.columns, clf.coef_[0]))
f_data = pd.DataFrame(f_dict, index=[0])
f_data.T.plot.bar(title="Feature Importance", legend=False, figsize=(10,4));
plt.xticks(rotation = 0);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
From this plot we can see which features carry weight in the model and which do not. Cross-validation
cv_precision = np.mean(cross_val_score(MultinomialNB(), X, y, cv=5))
cv_precision
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
GridSearchCV
np.random.seed(42)

param_grid = {
    'alpha': [0.01, 0.1, 0.5, 1.0, 10.0]
}

grid_search = GridSearchCV(estimator = MultinomialNB(),
                           param_grid = param_grid,
                           cv = 10, n_jobs = -1, verbose = 2)
grid_search.fit(X_train, y_train)

best_grid = grid_search.best_params_
print('best grid = ', best_grid)

grid_accuracy = grid_search.score(X_test, y_test)
print('Grid Score = ', grid_accuracy)

best_grid
grid_accuracy
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
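As a small follow-up not in the original notebook, the refitted best model can be pulled out of the grid search and passed through the same getScore helper defined earlier; best_estimator_ is available because GridSearchCV refits on the full training set by default.
```
# Evaluate the best alpha found by the grid search with the same helper as before
best_model = grid_search.best_estimator_
getScore(best_model, X_test, y_test)
```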
Comparing results
model_scores = {'MNB': clf_accuracy, 'grid_search': grid_accuracy}
model_compare = pd.DataFrame(model_scores, index=['accuracy'])
model_compare.T.plot.bar();
plt.xticks(rotation = 0);
_____no_output_____
MIT
MultinomialNB.ipynb
djendara/heart-disease-classification
Data preparation
clean_df = pd.DataFrame(columns=df.columns, index=df.index)
residual_df = pd.DataFrame(columns=df.columns, index=df.index)

for col in df.columns:
    residual, clean = remove_periodic(df[col].tolist(), df.index, frequency_threshold=0.01e12)
    clean_df[col] = clean.tolist()
    residual_df[col] = residual.tolist()

train_df = df[(df.index >= '2010-09-01') & (df.index <= '2011-09-01')]
train_clean_df = clean_df[(clean_df.index >= '2010-09-01') & (clean_df.index <= '2011-09-01')]
train_residual_df = residual_df[(residual_df.index >= '2010-09-01') & (residual_df.index <= '2011-09-01')]

test_df = df[(df.index >= '2010-08-05') & (df.index < '2010-08-06')]
test_clean_df = clean_df[(clean_df.index >= '2010-08-05') & (clean_df.index < '2010-08-06')]
test_residual_df = residual_df[(residual_df.index >= '2010-08-05') & (residual_df.index < '2010-08-06')]

fig = plt.figure(figsize=(12, 8))
plt.plot(test_clean_df.DHHL_3.values, color='blue')
plt.plot(test_df.DHHL_3.values, color='red')
plt.show()
_____no_output_____
MIT
notebooks/180418 - FCM SpatioTemporal.ipynb
cseveriano/spatio-temporal-forecasting
Fuzzy C-Means
from pyFTS.partitioners import FCM
_____no_output_____
MIT
notebooks/180418 - FCM SpatioTemporal.ipynb
cseveriano/spatio-temporal-forecasting
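The notebook stops after the import, so the application of the FCM partitioner is not shown. A plausible next step, assuming pyFTS exposes an FCMPartitioner class that follows the same data/npart keyword convention as the other pyFTS partitioners, might look like the sketch below; treat the class name and arguments as assumptions rather than a confirmed API.
```
# Assumption: the partitioner takes the training series and a number of fuzzy partitions
fcm_partitioner = FCM.FCMPartitioner(data=train_clean_df['DHHL_3'].values, npart=10)
print(fcm_partitioner)  # inspect the generated fuzzy sets
```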
Adagrad from scratch
from mxnet import ndarray as nd

# Adagrad.
def adagrad(params, sqrs, lr, batch_size):
    eps_stable = 1e-7
    for param, sqr in zip(params, sqrs):
        g = param.grad / batch_size
        sqr[:] += nd.square(g)
        div = lr * g / nd.sqrt(sqr + eps_stable)
        param[:] -= div

import mxnet as mx
from mxnet import autograd
from mxnet import gluon
import random

mx.random.seed(1)
random.seed(1)

# Generate data.
num_inputs = 2
num_examples = 1000
true_w = [2, -3.4]
true_b = 4.2
X = nd.random_normal(scale=1, shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(scale=1, shape=y.shape)
dataset = gluon.data.ArrayDataset(X, y)

# Construct data iterator.
def data_iter(batch_size):
    idx = list(range(num_examples))
    random.shuffle(idx)
    for batch_i, i in enumerate(range(0, num_examples, batch_size)):
        j = nd.array(idx[i: min(i + batch_size, num_examples)])
        yield batch_i, X.take(j), y.take(j)

# Initialize model parameters.
def init_params():
    w = nd.random_normal(scale=1, shape=(num_inputs, 1))
    b = nd.zeros(shape=(1,))
    params = [w, b]
    sqrs = []
    for param in params:
        param.attach_grad()
        # Keep one state variable (the accumulated squared gradient) per parameter;
        # without this, zip(params, sqrs) is empty and no parameter ever gets updated.
        sqrs.append(param.zeros_like())
    return params, sqrs

# Linear regression.
def net(X, w, b):
    return nd.dot(X, w) + b

# Loss function.
def square_loss(yhat, y):
    return (yhat - y.reshape(yhat.shape)) ** 2 / 2

%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.dpi']= 120
import matplotlib.pyplot as plt
import numpy as np

def train(batch_size, lr, epochs, period):
    assert period >= batch_size and period % batch_size == 0
    [w, b], sqrs = init_params()
    total_loss = [np.mean(square_loss(net(X, w, b), y).asnumpy())]

    # Epoch starts from 1.
    for epoch in range(1, epochs + 1):
        for batch_i, data, label in data_iter(batch_size):
            with autograd.record():
                output = net(data, w, b)
                loss = square_loss(output, label)
            loss.backward()
            adagrad([w, b], sqrs, lr, batch_size)
            if batch_i * batch_size % period == 0:
                total_loss.append(np.mean(square_loss(net(X, w, b), y).asnumpy()))
        print("Batch size %d, Learning rate %f, Epoch %d, loss %.4e" %
              (batch_size, lr, epoch, total_loss[-1]))
    print('w:', np.reshape(w.asnumpy(), (1, -1)),
          'b:', b.asnumpy()[0], '\n')
    x_axis = np.linspace(0, epochs, len(total_loss), endpoint=True)
    plt.semilogy(x_axis, total_loss)
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.show()

train(batch_size=10, lr=0.9, epochs=3, period=10)
Batch size 10, Learning rate 0.900000, Epoch 1, loss 5.3231e-05 Batch size 10, Learning rate 0.900000, Epoch 2, loss 4.9388e-05 Batch size 10, Learning rate 0.900000, Epoch 3, loss 4.9256e-05 w: [[ 1.99946415 -3.39996123]] b: 4.19967
Apache-2.0
chapter06_optimization/adagrad-scratch.ipynb
sgeos/mxnet_the_straight_dope
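For comparison with the from-scratch version above, MXNet Gluon ships Adagrad as a built-in optimizer, so the same linear regression can be trained with `gluon.Trainer` instead of the hand-written update. The sketch below is an illustrative outline, reusing the data and `data_iter` defined above; it is not part of the original notebook.
```
from mxnet import autograd, gluon

# Define the same linear model with a Gluon layer and let Gluon handle the Adagrad updates
net_gluon = gluon.nn.Dense(1)
net_gluon.initialize()
l2_loss = gluon.loss.L2Loss()
trainer = gluon.Trainer(net_gluon.collect_params(), 'adagrad', {'learning_rate': 0.9})

batch_size = 10
for epoch in range(1, 4):
    for batch_i, data, label in data_iter(batch_size):
        with autograd.record():
            loss = l2_loss(net_gluon(data), label)
        loss.backward()
        trainer.step(batch_size)
    print('Epoch %d finished' % epoch)
```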
Flatten all battles, campaigns, etc.
def _flatten_battles(battles, root=None):
    buttles_to_run = copy(battles)
    records = []
    for name, data in battles.items():
        if 'children' in data:
            children = data.pop('children')
            records.extend(_flatten_battles(children, root=name))
        else:
            data['level'] = 100
            data['name'] = name
            data['parent'] = root
            records.append(data)
    return records

records = {k: _flatten_battles(v, root=k) for k, v in battles.items()}  # fronts
records = {k: pd.DataFrame(json_normalize(v)) for k, v in records.items()}
_____no_output_____
MIT
Chapter11/_json_to_table.ipynb
Drtaylor1701/Learn-Python-by-Building-Data-Science-Applications
Store as CSV
for front, data in records.items():
    data.to_csv(f'./data/{front}.csv', index=None)
_____no_output_____
MIT
Chapter11/_json_to_table.ipynb
Drtaylor1701/Learn-Python-by-Building-Data-Science-Applications
Stanza Dependency Parsing Navigation:* [General Info](info)* [Setting up Stanza for training](setup)* [Preparing Dataset for DEPPARSE](prepare)* [Training a Dependency Parser with Stanza](depparse)* [Using Trained Model for Prediction](predict)* [Prediction and Saving to CONLL-U](save) General Info [`Link to Manual`](https://stanfordnlp.github.io/stanza/index.html) [`Training Page`](https://stanfordnlp.github.io/stanza/training.html)[`Link to GitHub Repository`](https://github.com/stanfordnlp/stanza) (git clone this repo)`Libraries needed:` `corpuscula` (conllu parsing); `stanza` (training); `tqdm` (displaying progress); `junky` (loading datasets); `mordl` (conllu evaluation script).`Pre-Trained Embeddings used in this example:` Recommended vectors are downloaded from [here](https://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11234/1-1989/word-embeddings-conll17.tar?sequence=9&isAllowed=y)(~30GB, 60+ languages)`Pipeline Input:` CONLL-U file.`Pipeline Output:` CONLL-U file with predicitons.`Sample pipeline output:````>>> nlp = stanza.Pipeline('ru', processors='tokenize,pos,lemma,ner,depparse', depparse_model_path='stanza/saved_models/depparse/ru_syntagrus_parser.pt', tokenize_pretokenized=True)>>> doc = nlp(' '.join(test[0]))>>> print(*[f'id: {word.id}\tword: {word.text}\thead id: {word.head}\t\ head: {sent.words[word.head-1].text if word.head > 0 else "root"}\tdeprel: {word.deprel}' for sent in doc.sentences for word in sent.words], sep='\n') id: 1 word: В head id: 3 head: период deprel: caseid: 2 word: советский head id: 3 head: период deprel: amodid: 3 word: период head id: 11 head: составляло deprel: oblid: 4 word: времени head id: 3 head: период deprel: nmodid: 5 word: число head id: 11 head: составляло deprel: nsubjid: 6 word: ИТ head id: 5 head: число deprel: nmodid: 7 word: - head id: 8 head: специалистов deprel: punctid: 8 word: специалистов head id: 6 head: ИТ deprel: apposid: 9 word: в head id: 10 head: Армении deprel: caseid: 10 word: Армении head id: 5 head: число deprel: nmodid: 11 word: составляло head id: 0 head: root deprel: rootid: 12 word: около head id: 14 head: тысяч deprel: caseid: 13 word: десяти head id: 14 head: тысяч deprel: nummodid: 14 word: тысяч head id: 11 head: составляло deprel: oblid: 15 word: . head id: 11 head: составляло deprel: punct``` Setting up Stanza for training
#!pip install stanza
# !pip install -U stanza
_____no_output_____
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Run in terminal.1. Clone Stanza GitHub repository```$git clone https://github.com/stanfordnlp/stanza```2. Move to cloned git repository & download embeddings ({lang}.vectors.xz format)(run in a screen, takes up several hours, depending on the Internet speed). Make sure the vectors are in `/extern_data/word2vec` folder. You will probably need to create this folder and move the downloaded folders with word vectors there manually.```$ cd stanza$ ./scripts/download_vectors.sh ./extern_data/```3. Make sure your `./stanza/scripts/config.sh` is set up like below. Modify if necessary (pay attention to UDBASE and NERBASE).```export UDBASE=./udbaseexport NERBASE=./nerbase Set directories to store processed training/evaluation filesexport DATA_ROOT=./dataexport TOKENIZE_DATA_DIR=$DATA_ROOT/tokenizeexport MWT_DATA_DIR=$DATA_ROOT/mwtexport POS_DATA_DIR=$DATA_ROOT/posexport LEMMA_DATA_DIR=$DATA_ROOT/lemmaexport DEPPARSE_DATA_DIR=$DATA_ROOT/depparseexport ETE_DATA_DIR=$DATA_ROOT/eteexport NER_DATA_DIR=$DATA_ROOT/nerexport CHARLM_DATA_DIR=$DATA_ROOT/charlm Set directories to store external word vector dataexport WORDVEC_DIR=./extern_data/```**NB!** Make sure `WORDVEC_DIR=./extern_data/` if your vectors are in `/extern_data/word2vec` folder.If you leave `WORDVEC_DIR=./extern_data/`, your vectors should be stored in `/extern_data/word2vec/word2vec` folder.4. Download language resources:
import stanza
stanza.download('ru')
Downloading https://raw.githubusercontent.com/stanfordnlp/stanza-resources/master/resources_1.0.0.json: 115kB [00:00, 2.47MB/s] 2020-07-24 11:27:31 INFO: Downloading default packages for language: ru (Russian)... 2020-07-24 11:27:32 INFO: File exists: /home/steysie/stanza_resources/ru/default.zip. 2020-07-24 11:27:38 INFO: Finished downloading models and saved to /home/steysie/stanza_resources.
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Preparing Dataset for DEPPARSE
from corpuscula.corpus_utils import syntagrus, download_ud, Conllu
from corpuscula import corpus_utils
import junky
import corpuscula.corpus_utils as cu
import stanza

# cu.set_root_dir('.')
# !pip install -U junky

corpus_utils.download_syntagrus(root_dir=corpus_utils.get_root_dir(), overwrite=True)
junky.clear_tqdm()

# train, train_heads, train_deprels = junky.get_conllu_fields(syntagrus.train, fields=['HEAD', 'DEPREL'])
# dev, train_heads, dev_deprels = junky.get_conllu_fields(syntagrus.dev, fields=['HEAD', 'DEPREL'])
test, test_heads, test_deprels = junky.get_conllu_fields(syntagrus.test, fields=['HEAD', 'DEPREL'])
Load corpus
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Training a Dependency Parser with Stanza **`STEP 1`**`Input files for DEPPARSE model training should be placed here:` **`{UDBASE}/{corpus}/{corpus_short}-ud-{train,dev,test}.conllu`**, where * **`{UDBASE}`** is `./stanza/udbase/` (specified in `config.sh`), * **`{corpus}`** is full corpus name (e.g. `UD_Russian-SynTagRus` or `UD_English-EWT`, case-sensitive), and * **`{corpus_short}`** is the treebank code, can be [found here](https://stanfordnlp.github.io/stanza/model_history.html) (e.g. `ru_syntagrus`).**`STEP 2`****Important:** Create `./data/depparse/` folder, otherwise the code below will fail to run.**`STEP 3`** To prepare data, run:```$ cd stanza$ ./scripts/prep_depparse_data.sh UD_Russian-SynTagRus gold```The script above prepares the train-dev-test.conllu data which is located in `./udbase/UD_Russian-SynTagRus/`.**`STEP 4`**To start training, run:```$ ./scripts/run_depparse.sh UD_Russian-SynTagRus gold```The model will be saved to `saved_models/depparse/ru_syntagrus_parser.pt`.**`HOW TO USE`** Loading Trained Models to PipelineTo load the model for prediction, when setting up Tagger Pipeline, specify path to the model:```nlp = stanza.Pipeline('ru', processors='tokenize,pos,lemma,ner,depparse', pos_model_path=, lemma_model_path=, ner_model_path=, depparse_model_path=)``` Using Trained Model for Prediction If you want to disable Stanza built-in tokenizer, specify `tokenize_pretokenized=True` parameter in Pipeline.Input should still be a list of strings, but tokens will be separated by spaces, no multi-word tokens will appear.
nlp = stanza.Pipeline('ru', processors='tokenize,pos,lemma,ner,depparse',
                      depparse_model_path='stanza/saved_models/depparse/ru_syntagrus_parser.pt',
                      tokenize_pretokenized=True)

doc = nlp(' '.join(test[0]))

print(*[f'id: {word.id}\tword: {word.text}\thead id: {word.head}\t\
head: {sent.words[word.head-1].text if word.head > 0 else "root"}\tdeprel: {word.deprel}'
        for sent in doc.sentences for word in sent.words], sep='\n')

doc

from collections import OrderedDict

import stanza
from tqdm import tqdm

def stanza_parse(sents,
                 depparse_model='stanza/saved_models/depparse/ru_syntagrus_parser.pt'
                 ):
    sents = [' '.join(sent) for sent in sents]

    nlp = stanza.Pipeline('ru', processors='tokenize,pos,lemma,ner,depparse',
#                           pos_model_path=pos_model,
#                           lemma_model_path=lemma_model,
#                           ner_model_path=ner_model,
                          depparse_model_path=depparse_model,
                          tokenize_pretokenized=True)

    for idx, sent in enumerate(tqdm(sents)):
        doc = nlp(sent)
        res = []
        assert len(doc.sentences) == 1, \
            'ERROR: incorrect lengths of sentences ({}) for sent {}' \
                .format(len(doc.sentences), idx)
        sent = doc.sentences[0]
        tokens, words = sent.tokens, sent.words
        assert len(tokens) == len(words), \
            'ERROR: inconsistent lengths of tokens and words for sent {}' \
                .format(idx)
        for token, word in zip(tokens, words):
            res.append({
                'ID': token.id,
                'FORM': token.text,
                'LEMMA': word.lemma,
                'UPOS': word.upos,
                'XPOS': word.xpos,
                'FEATS': OrderedDict(
                    [(k, v) for k, v in [
                        t.split('=', 1) for t in word.feats.split('|')
                    ]] if word.feats else []
                ),
                'HEAD': str(word.head),
                'DEPREL': word.deprel,
                'DEPS': str(word.head) + ':' + word.deprel,
                'MISC': OrderedDict(
                    [('NE', token.ner[2:])] if token.ner != 'O' else []
                )
            })
        yield res
_____no_output_____
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Prediction and Saving Results to CONLL-U
junky.clear_tqdm()

Conllu.save(stanza_parse(test), 'stanza_syntagrus.conllu', fix=True, log_file=None)
2020-07-28 13:24:45 INFO: Loading these models for language: ru (Russian): ======================================= | Processor | Package | --------------------------------------- | tokenize | syntagrus | | pos | syntagrus | | lemma | syntagrus | | depparse | stanza/sav..._parser.pt | | ner | wikiner | ======================================= 2020-07-28 13:24:45 INFO: Use device: cpu 2020-07-28 13:24:45 INFO: Loading: tokenize 2020-07-28 13:24:45 INFO: Loading: pos 2020-07-28 13:24:46 INFO: Loading: lemma 2020-07-28 13:24:46 INFO: Loading: depparse 2020-07-28 13:24:47 INFO: Loading: ner 2020-07-28 13:24:48 INFO: Done loading processors! 100%|██████████| 6491/6491 [25:39<00:00, 4.22it/s]
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
Inference on Test Corpus
# !pip install mordl
from mordl import conll18_ud_eval

gold_file = 'corpus/_UD/UD_Russian-SynTagRus/ru_syntagrus-ud-test.conllu'
system_file = 'stanza_syntagrus.conllu'

conll18_ud_eval(gold_file, system_file, verbose=True, counts=False)
0%| | 0/6491 [34:50<?, ?it/s]
CC0-1.0
stanza/stanza_depparse.ipynb
steysie/parse-xplore
ISFOG 2020 - Pile driving prediction eventData science techniques are rapidly transforming businesses in a broad range of sectors. While marketing and social applications have received most attention to date, geotechnical engineering can also benefit from data science tools that are now readily available. In the context of the ISFOG2020 conference in Austin, TX, a prediction event is launched which invites geotechnical engineers to share knowledge and gain hands-on experience with machine learning models.This Jupyter notebook shows you how to get started with machine learning (ML) tools and creates a simple ML model for pile driveability. Participants are encouraged to work through this initial notebook to get familiar with the dataset and the basics of ML. 1. Importing packagesThe Python programming language works with a number of packages. We will work with the ```Pandas``` package for data processing, ```Matplotlib``` for data visualisation and ```scikit-learn``` for the ML. We will also make use of the numerical package ```Numpy```. These package come pre-installed with the Anaconda distribution (see installation guide). Each package is extensively documented with online documentation, tutorials and examples. We can import the necessary packages with the following code.Note: Code cells can be executed with Shift+Enter or by using the run button in the toolbar at the top. Note that code cells need to be executed from top to bottom. The order of execution is important.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import sklearn

%matplotlib inline
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
2. Pile driving dataThe dataset is kindly provided by [Cathie Group](http://www.cathiegroup.com). 2.1. Importing dataThe first step in any data science exercise is to get familiar with the data. The data is provided in a csv file (```training_data.csv```). We can import the data with Pandas and display the first five rows using the ```head()``` function.
data = pd.read_csv("/kaggle/input/training_data.csv")  # Store the contents of the csv file in the variable 'data'
data.head()
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
The data has 12 columns, containing PCPT data ($ q_c $, $ f_s $ and $ u_2 $), recorded hammer data (blowcount, normalised hammer energy, normalised ENTHRU and total number of blows), pile data (diameter, bottom wall thickness and pile final penetration). A unique ID identifies the location and $ z $ defines the depth below the mudline.The data has already been resampled to a regular grid with 0.5m grid intervals to facilitate the further data handling.The hammer energy has been normalised using the same reference energy for all piles in this prediction exercise.We can see that there is no driving data in the first five rows (NaN values), this is because driving only started after a given self-weight penetration of the pile. 2.2. Summary statisticsWe can easily create summary statistics of each column using the ```describe()``` function on the data. This gives us the number of elements, mean, standard deviation, minimum, maximum and percentiles of each column of the data.We can see that there are more PCPT data points than hammer data points. This makes sense as there is soil data available above the pile self-weight penetration and below the final pile penetration. The pile data is defined in the self-weight penetration part of the profile, so there are slightly more pile data points than hammer record data points.
data.describe()
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
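Since describe() pools every location together, a per-location view can also be informative, for example counting how many rows were recorded at each location. This is a suggested extra check, not part of the original notebook.
```
# Number of data points recorded at each location
data.groupby("Location ID").size().head()
```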
2.3. PlottingWe can plot the cone tip resistance, blowcount and normalised ENTHRU energy for all locations to show how the data varies with depth. We can generate this plot using the ```Matplotlib``` package.
fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharey=True, figsize=(16,9))

ax1.scatter(data["qc [MPa]"], data["z [m]"], s=5)               # Create the cone tip resistance vs depth plot
ax2.scatter(data["Blowcount [Blows/m]"], data["z [m]"], s=5)    # Create the Blowcount vs depth plot
ax3.scatter(data["Normalised ENTRHU [-]"], data["z [m]"], s=5)  # Create the ENTHRU vs depth plot

# Format the axes (position, labels and ranges)
for ax in (ax1, ax2, ax3):
    ax.xaxis.tick_top()
    ax.xaxis.set_label_position('top')
    ax.grid()
    ax.set_ylim(50, 0)
ax1.set_xlabel(r"Cone tip resistance, $ q_c $ (MPa)")
ax1.set_xlim(0, 120)
ax2.set_xlabel(r"Blowcount (Blows/m)")
ax2.set_xlim(0, 200)
ax3.set_xlabel(r"Normalised ENTRHU (-)")
ax3.set_xlim(0, 1)
ax1.set_ylabel(r"Depth below mudline, $z$ (m)")

# Show the plot
plt.show()
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
The cone resistance data shows that the site mainly consists of sand of varying relative density. In certain profiles, clay is present below 10m. There are also locations with very high cone resistance (>70MPa).The blowcount profile shows that blowcount is relatively well clustered around a generally increasing trend with depth. The normalised ENTHRU energy is also increasing with depth. We can isolate the data for a single location by selecting this data from the dataframe with all data. As an example, we can do this for location EK.
# Select the data where the column 'Location ID' is equal to the location name
location_data = data[data["Location ID"] == "EK"]
_____no_output_____
Apache-2.0
analysis/isfog-2020-linear-model-demo.ipynb
alexandershires/offshore-geo
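The excerpt ends here, but a natural follow-up is to plot the isolated location against the full dataset. The sketch below is a minimal illustration, not part of the provided notebook.
```
# Compare the cone resistance profile at location EK with the full dataset
fig, ax = plt.subplots(figsize=(6, 9))
ax.scatter(data["qc [MPa]"], data["z [m]"], s=5, color='lightgrey', label='All locations')
ax.scatter(location_data["qc [MPa]"], location_data["z [m]"], s=5, color='red', label='Location EK')
ax.set_xlabel(r"Cone tip resistance, $ q_c $ (MPa)")
ax.set_ylabel(r"Depth below mudline, $z$ (m)")
ax.set_ylim(50, 0)
ax.legend()
plt.show()
```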