markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Run the Processing Job using Amazon SageMaker. Next, use the Amazon SageMaker Python SDK to submit a processing job using our custom Python script. Review the Processing Script | !pygmentize preprocess-scikit-text-to-bert-feature-store.py | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Run this script as a processing job. You also need to specify one `ProcessingInput` whose `source` is the Amazon S3 location of the input data and whose `destination` is where the script reads that data from inside the Docker container: `/opt/ml/processing/input`. All local paths inside the processing container must begin with `/opt/ml/processing/`. Also give the `run()` method a `ProcessingOutput`, where the `source` is the path the script writes output data to. For outputs, the `destination` defaults to an S3 bucket that the Amazon SageMaker Python SDK creates for you, following the format `s3://sagemaker-<region>-<account_id>/<processing_job_name>/output/<output_name>/`. You also give the `ProcessingOutput` a value for `output_name`, to make it easier to retrieve these output artifacts after the job is run. The `arguments` parameter of the `run()` method holds the command-line arguments passed to our `preprocess-scikit-text-to-bert-feature-store.py` script. Note that we shard the data using `ShardedByS3Key` to spread the transformations across all worker nodes in the cluster. Track the `Experiment`: We will track every step of this experiment throughout the `prepare`, `train`, `optimize`, and `deploy` phases. Concepts: **Experiment**: A collection of related Trials. Add Trials to an Experiment that you wish to compare together. **Trial**: A description of a multi-step machine learning workflow. Each step in the workflow is described by a Trial Component. There is no relationship between Trial Components such as ordering. **Trial Component**: A description of a single step in a machine learning workflow, for example data cleaning, feature extraction, model training, or model evaluation. **Tracker**: A logger of information about a single TrialComponent. Create the `Experiment` | import time
from smexperiments.experiment import Experiment
timestamp = int(time.time())
experiment = Experiment.create(
experiment_name="Amazon-Customer-Reviews-BERT-Experiment-{}".format(timestamp),
description="Amazon Customer Reviews BERT Experiment",
sagemaker_boto_client=sm,
)
experiment_name = experiment.experiment_name
print("Experiment name: {}".format(experiment_name)) | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Create the `Trial` | import time
from smexperiments.trial import Trial
timestamp = int(time.time())
trial = Trial.create(
trial_name="trial-{}".format(timestamp), experiment_name=experiment_name, sagemaker_boto_client=sm
)
trial_name = trial.trial_name
print("Trial name: {}".format(trial_name)) | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Create the `Experiment Config` | experiment_config = {
"ExperimentName": experiment_name,
"TrialName": trial_name,
"TrialComponentDisplayName": "prepare",
}
print(experiment_name)
%store experiment_name
print(trial_name)
%store trial_name | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Create Feature Store and Feature Group | featurestore_runtime = boto3.Session().client(service_name="sagemaker-featurestore-runtime", region_name=region)
timestamp = int(time.time())
feature_store_offline_prefix = "reviews-feature-store-" + str(timestamp)
print(feature_store_offline_prefix)
feature_group_name = "reviews-feature-group-" + str(timestamp)
print(feature_group_name)
from sagemaker.feature_store.feature_definition import (
FeatureDefinition,
FeatureTypeEnum,
)
feature_definitions = [
FeatureDefinition(feature_name="input_ids", feature_type=FeatureTypeEnum.STRING),
FeatureDefinition(feature_name="input_mask", feature_type=FeatureTypeEnum.STRING),
FeatureDefinition(feature_name="segment_ids", feature_type=FeatureTypeEnum.STRING),
FeatureDefinition(feature_name="label_id", feature_type=FeatureTypeEnum.INTEGRAL),
FeatureDefinition(feature_name="review_id", feature_type=FeatureTypeEnum.STRING),
FeatureDefinition(feature_name="date", feature_type=FeatureTypeEnum.STRING),
FeatureDefinition(feature_name="label", feature_type=FeatureTypeEnum.INTEGRAL),
# FeatureDefinition(feature_name='review_body', feature_type=FeatureTypeEnum.STRING)
]
from sagemaker.feature_store.feature_group import FeatureGroup
feature_group = FeatureGroup(name=feature_group_name, feature_definitions=feature_definitions, sagemaker_session=sess)
print(feature_group) | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
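The feature group is only defined here; it is created inside the processing script. As a rough sketch of what that creation call looks like with the SageMaker SDK (the record identifier and event-time feature names below are assumptions for illustration):

```python
# Sketch: create the feature group and wait for it to become active.
# `role`, `bucket`, and `sess` come from earlier cells; identifier/event-time
# feature names are assumptions.
import time

feature_group.create(
    s3_uri="s3://{}/{}".format(bucket, feature_store_offline_prefix),  # offline store location
    record_identifier_name="review_id",
    event_time_feature_name="date",
    role_arn=role,
    enable_online_store=True,
)

# Poll until the feature group finishes creating
while feature_group.describe().get("FeatureGroupStatus") == "Creating":
    time.sleep(15)
print(feature_group.describe().get("FeatureGroupStatus"))
```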
Set the Processing Job Hyper-Parameters | processing_instance_type = "ml.c5.2xlarge"
processing_instance_count = 2
train_split_percentage = 0.90
validation_split_percentage = 0.05
test_split_percentage = 0.05
balance_dataset = True
max_seq_length = 64 | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Choosing a `max_seq_length` for BERT. Since a smaller `max_seq_length` leads to faster training and lower resource utilization, we want to find the smallest review length that captures `80%` of our reviews. Remember our distribution of review lengths from a previous section?
```
mean       51.683405
std       107.030844
min         1.000000
10%         2.000000
20%         7.000000
30%        19.000000
40%        22.000000
50%        26.000000
60%        32.000000
70%        43.000000
80%        63.000000
90%       110.000000
100%     5347.000000
max      5347.000000
```
Review length `63` represents the `80th` percentile for this dataset. However, it's best to stick with powers-of-2 when using BERT. So let's choose `64` as this is the smallest power-of-2 greater than `63`. Reviews with length > `64` will be truncated to `64`. | from sagemaker.sklearn.processing import SKLearnProcessor
processor = SKLearnProcessor(
framework_version="0.23-1",
role=role,
instance_type=processing_instance_type,
instance_count=processing_instance_count,
env={"AWS_DEFAULT_REGION": region},
max_runtime_in_seconds=7200,
)
from sagemaker.processing import ProcessingInput, ProcessingOutput
processor.run(
code="preprocess-scikit-text-to-bert-feature-store.py",
inputs=[
ProcessingInput(
input_name="raw-input-data",
source=raw_input_data_s3_uri,
destination="/opt/ml/processing/input/data/",
s3_data_distribution_type="ShardedByS3Key",
)
],
outputs=[
ProcessingOutput(
output_name="bert-train", s3_upload_mode="EndOfJob", source="/opt/ml/processing/output/bert/train"
),
ProcessingOutput(
output_name="bert-validation",
s3_upload_mode="EndOfJob",
source="/opt/ml/processing/output/bert/validation",
),
ProcessingOutput(
output_name="bert-test", s3_upload_mode="EndOfJob", source="/opt/ml/processing/output/bert/test"
),
],
arguments=[
"--train-split-percentage",
str(train_split_percentage),
"--validation-split-percentage",
str(validation_split_percentage),
"--test-split-percentage",
str(test_split_percentage),
"--max-seq-length",
str(max_seq_length),
"--balance-dataset",
str(balance_dataset),
"--feature-store-offline-prefix",
str(feature_store_offline_prefix),
"--feature-group-name",
str(feature_group_name),
],
experiment_config=experiment_config,
logs=True,
wait=False,
)
scikit_processing_job_name = processor.jobs[-1].describe()["ProcessingJobName"]
print(scikit_processing_job_name)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/processing-jobs/{}">Processing Job</a></b>'.format(
region, scikit_processing_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix">CloudWatch Logs</a> After About 5 Minutes</b>'.format(
region, scikit_processing_job_name
)
)
)
from IPython.core.display import display, HTML
display(
HTML(
'<b>Review <a target="blank" href="https://s3.console.aws.amazon.com/s3/buckets/{}/{}/?region={}&tab=overview">S3 Output Data</a> After The Processing Job Has Completed</b>'.format(
bucket, scikit_processing_job_name, region
)
)
) | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Monitor the Processing Job | running_processor = sagemaker.processing.ProcessingJob.from_processing_name(
processing_job_name=scikit_processing_job_name, sagemaker_session=sess
)
processing_job_description = running_processor.describe()
print(processing_job_description)
running_processor.wait(logs=False) | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
_Please Wait Until the ^^ Processing Job ^^ Completes Above._ Inspect the Processed Output Data. Take a look at a few rows of the transformed dataset to make sure the processing was successful. | processing_job_description = running_processor.describe()
output_config = processing_job_description["ProcessingOutputConfig"]
for output in output_config["Outputs"]:
if output["OutputName"] == "bert-train":
processed_train_data_s3_uri = output["S3Output"]["S3Uri"]
if output["OutputName"] == "bert-validation":
processed_validation_data_s3_uri = output["S3Output"]["S3Uri"]
if output["OutputName"] == "bert-test":
processed_test_data_s3_uri = output["S3Output"]["S3Uri"]
print(processed_train_data_s3_uri)
print(processed_validation_data_s3_uri)
print(processed_test_data_s3_uri)
!aws s3 ls $processed_train_data_s3_uri/
!aws s3 ls $processed_validation_data_s3_uri/
!aws s3 ls $processed_test_data_s3_uri/ | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Pass Variables to the Next Notebook(s) | %store raw_input_data_s3_uri
%store max_seq_length
%store train_split_percentage
%store validation_split_percentage
%store test_split_percentage
%store balance_dataset
%store feature_store_offline_prefix
%store feature_group_name
%store processed_train_data_s3_uri
%store processed_validation_data_s3_uri
%store processed_test_data_s3_uri
%store | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
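The `%store` magic writes these variables to IPython's storage; a downstream notebook can read them back with `%store -r`, for example:

```python
# In the next notebook: restore the stored variables before using them.
%store -r processed_train_data_s3_uri
%store -r max_seq_length

print(processed_train_data_s3_uri)
print(max_seq_length)
```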
Query The Feature Store | feature_store_query = feature_group.athena_query()
feature_store_table = feature_store_query.table_name
query_string = """
SELECT input_ids, input_mask, segment_ids, label_id, split_type FROM "{}" WHERE split_type='train' LIMIT 5
""".format(
feature_store_table
)
print("Running " + query_string)
feature_store_query.run(
query_string=query_string,
output_location="s3://" + bucket + "/" + feature_store_offline_prefix + "/query_results/",
)
feature_store_query.wait()
feature_store_query.as_dataframe() | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Show the Experiment Tracking Lineage | from sagemaker.analytics import ExperimentAnalytics
import pandas as pd
pd.set_option("max_colwidth", 500)
# pd.set_option("max_rows", 100)
experiment_analytics = ExperimentAnalytics(
sagemaker_session=sess, experiment_name=experiment_name, sort_by="CreationTime", sort_order="Descending"
)
experiment_analytics_df = experiment_analytics.dataframe()
experiment_analytics_df
trial_component_name = experiment_analytics_df.TrialComponentName[0]
print(trial_component_name)
trial_component_description = sm.describe_trial_component(TrialComponentName=trial_component_name)
trial_component_description | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Show SageMaker ML Lineage Tracking Amazon SageMaker ML Lineage Tracking creates and stores information about the steps of a machine learning (ML) workflow, from data preparation to model deployment. Amazon SageMaker Lineage enables events that happen within SageMaker to be traced via a graph structure. The data simplifies generating reports, making comparisons, or discovering relationships between events. For example, you can easily trace both how a model was generated and where it was deployed. The lineage graph is created automatically by SageMaker, and you can also directly create or modify your own graphs. Key Concepts: * **Lineage Graph** - A connected graph tracing your machine learning workflow end to end. * **Artifacts** - Represents a URI addressable object or data. Artifacts are typically inputs or outputs to Actions. * **Actions** - Represents an action taken, such as a computation, transformation, or job. * **Contexts** - Provides a method to logically group other entities. * **Associations** - A directed edge in the lineage graph that links two entities. * **Lineage Traversal** - Starting from an arbitrary point, trace the lineage graph to discover and analyze relationships between steps in your workflow. * **Experiments** - Experiment entities (Experiments, Trials, and Trial Components) are also part of the lineage graph and can be associated with Artifacts, Actions, or Contexts. Show Lineage Artifacts For Our Processing Job | from sagemaker.lineage.visualizer import LineageTableVisualizer
lineage_table_viz = LineageTableVisualizer(sess)
lineage_table_viz_df = lineage_table_viz.show(processing_job_name=scikit_processing_job_name)
lineage_table_viz_df | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Release Resources | %%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
%%javascript
try {
Jupyter.notebook.save_checkpoint();
Jupyter.notebook.session.delete();
}
catch(err) {
// NoOp
} | _____no_output_____ | Apache-2.0 | 06_prepare/02_Prepare_Dataset_BERT_Scikit_ScriptMode_FeatureStore.ipynb | MarcusFra/workshop |
Midterm Exam PROBLEM STATEMENT 1 | a="John Angelo A. Dazo"
b="202012935"
c="19 years old"
d="June 09, 2002"
e="Blk 9 Lot 8 Persan Village, Sanja Mayor, Tanza, Cavite"
f="Programming Logic and Design"
g="1.33"
print("Full Name: "+a)
print("Student Number: "+b)
print("Age: "+c)
print("Birthday: "+d)
print("Address: "+e)
print("Course: "+f)
print("Last Sem GWA: "+g) | Full Name: John Angelo A. Dazo
Student Number: 202012935
Age: 19 years old
Birthday: June 09, 2002
Address: Blk 9 Lot 8 Persan Village, Sanja Mayor, Tanza, Cavite
Course: Programming Logic and Design
Last Sem GWA: 1.33
| Apache-2.0 | Midterm_Exam.ipynb | JohnAngeloDazo/CPEN-21A-ECE-2-1 |
PROBLEM STATEMENT 2 | n=4
answ="Y"
print(bool(2<n)and(n<6)) #a
print(bool(2<n)or(n==6)) #b
print(bool(not(2<n)or(n==6)))#c
print(bool(not(n<6))) #d
print(bool(answ=="Y")or(answ=="y")) #e
print(bool(answ=="Y")and(answ=="y")) #f
print(bool(not(answ=="y"))) #g
print(bool((2<n)and(n==5+1))or(answ=="No")) #h
print(bool((n==2)and(n==7))or(answ=="Y")) #i
print(bool(n==2)and((n==7)or(answ=="Y"))) #j | True
True
False
False
True
False
True
False
True
False
| Apache-2.0 | Midterm_Exam.ipynb | JohnAngeloDazo/CPEN-21A-ECE-2-1 |
PROBLEM STATEMENT 3 | x=2
y=-3
w=7
z=-10
print(x/y) #a
print(w/y/x) #b
print(z/y%x) #c
print(x%-y*w) #d
print(x%y) #e
print(z%w-y/x*5+5) #f
print(9-x%(2+y)) #g
print(z//w) #h
print((2+y)**2) #i
print(w/x*2) #j | -0.6666666666666666
-1.1666666666666667
1.3333333333333335
14
-1
16.5
9
-2
1
7.0
| Apache-2.0 | Midterm_Exam.ipynb | JohnAngeloDazo/CPEN-21A-ECE-2-1 |
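Several of the Problem Statement 3 results hinge on how Python floors integer division toward negative infinity and gives `%` the sign of the divisor; a short illustration:

```python
# Python floors toward negative infinity, and a % b takes the sign of b.
print(-10 // 7)   # -2, because -10/7 = -1.43 is floored down to -2
print(2 % -3)     # -1, since 2 = (-1)*(-3) + (-1)
print(-10 % 7)    # 4,  since -10 = (-2)*7 + 4
```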
Train YOLOv5 on RailSem19. This notebook is based on the [YOLOv5 tutorial](https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb). This notebook is designed to run in Google Colab. --- Setup: Clone YOLOv5, install dependencies and check PyTorch and GPU: | !git clone https://github.com/ultralytics/yolov5 # clone
%cd yolov5
%pip install -qr requirements.txt # install
from yolov5 import utils
display = utils.notebook_init() # checks
%cd ../
!git clone https://github.com/Denbergvanthijs/railsem19_yolov5.git | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
Load Tensorboard and Weights & Biases (both optional): | # Tensorboard (optional)
# %load_ext tensorboard
# %tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
import wandb
wandb.login() | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
Mount personal Google Drive: | from google.colab import drive
drive.mount('/content/drive') | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
Copy training data from Google Drive to local disk: | !mkdir data
!cp ./drive/MyDrive/rs19_person_semseg ./data/rs19_person_semseg -r | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
Training. To train a model with the hyperparameters used in the research paper: | !python ./yolov5/train.py --batch-size 64 --epochs 50 --data ./railsem19_yolov5/data/rs19_person_semseg.yaml --weights yolov5s.pt --single-cls --cache --hyp ./railsem19_yolov5/data/hyp_evolve.yaml | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
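Once training completes, the resulting weights can be used for inference with YOLOv5's `detect.py`; a sketch, where the `runs/train/exp` directory name and the image folder are assumptions (YOLOv5 increments the run folder per training run):

```python
# Sketch: run detection with the best weights from the training run above.
# The exact runs/train/exp path and the --source folder depend on this Colab session.
!python ./yolov5/detect.py --weights ./yolov5/runs/train/exp/weights/best.pt \
    --source ./data/rs19_person_semseg/images/val --conf-thres 0.25
```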
Hyperparameter optimisation. The following will start the hyperparameter optimisation: | !python ./yolov5/train.py --batch-size 64 --epochs 10 --data ./railsem19_yolov5/data/rs19_person_semseg.yaml --weights yolov5s.pt --single-cls --cache --evolve 40 --hyp "./railsem19_yolov5/data/hyp_evolve.yaml" | _____no_output_____ | Apache-2.0 | train_rs19.ipynb | Denbergvanthijs/railsem19_yolov5 |
Importing and Setting Up the Dataset | df = pd.read_csv('all_games.csv', low_memory=False)
df.head()
# Remove uncalibrated games
df = df[df['White Rating'] != '?']
df = df[df['Black Rating'] != '?']
df = df.astype({
'White Rating': int,
'Black Rating': int,
})
df['Rating Diff'] = abs(df['White Rating'] - df['Black Rating'])
df['Avg Rating'] = (df['White Rating'] + df['Black Rating']) / 2 | _____no_output_____ | MIT | jordan_analysis.ipynb | kdesai2018/DSL-final-project |
Rating Analysis | Rating_Ranges = [1200, 1400, 1600, 1800, 2000, 2200]
rating = df['White Rating'].append(df['Black Rating'], ignore_index=True)
bins = [rating.min()] + Rating_Ranges + [rating.max()]
display(rating.value_counts(bins=bins, sort=False))
rating.hist(bins=bins)
display(df['Rating Diff'].value_counts(bins=20, sort=False))
df['Rating Diff'].hist(bins=20)
avg_bins = [df['Avg Rating'].min()] + Rating_Ranges + [df['Avg Rating'].max()]
display(df['Avg Rating'].value_counts(bins=avg_bins, sort=False))
df['Avg Rating'].hist(bins=avg_bins)
display(df['Avg Rating'].value_counts(bins=3, sort=False))
df['Event'].value_counts()
df['Event'].unique() | _____no_output_____ | MIT | jordan_analysis.ipynb | kdesai2018/DSL-final-project |
Opening Analysis | def opening_prune(op):
split = re.split('[:#,]', op)
return split[0].rstrip()
df['Opening Short'] = df['Opening'].apply(opening_prune)
df['Opening Short'].value_counts(normalize=True)
counts = df['Opening Short'].value_counts()
percentage = df['Opening Short'].value_counts(normalize=True).mul(100)
open_stats = pd.DataFrame({'counts': counts, 'percentage':percentage})
print('Overall Most Popular Openings')
open_stats.head(10)
df_q = df[df['Opening Short'] == "Queen's Pawn Game"]
df_q['Opening'].value_counts()
df_k = df[df['Opening Short'] == "King's Pawn Game"]
df_k['Opening'].value_counts()
for event in df['Event'].unique():
df_event = df[df['Event'] == event]
print(f'Popular Openings in {event}')
counts = df_event['Opening Short'].value_counts()
percentage = df_event['Opening Short'].value_counts(normalize=True).mul(100)
open_stats = pd.DataFrame({'counts': counts, 'percentage':percentage})
display(open_stats.head(10)) | Popular Openings in Rated Blitz game
| MIT | jordan_analysis.ipynb | kdesai2018/DSL-final-project |
Rating Breakdown | Rating_Ranges = [1500, 2200, 2200]
df_novice = df[df['Avg Rating'] < 1500]
df_n = pd.DataFrame(df_novice['Opening Short'].value_counts(normalize=True)).reset_index()
df_n[df_n['index'] == 'Scandinavian Defense']
#df_n.loc('Scandinavian Defense')
df_master = df[df['Avg Rating'] > 2200]
df_m = pd.DataFrame(df_master['Opening Short'].value_counts(normalize=True)).reset_index()
df_m[df_m['index'] == 'Scandinavian Defense']
df_mid = df[(df['Avg Rating'] > 1500) & (df['Avg Rating'] < 2200)]
Rating_Ranges = [1200, 1400, 1600, 1800, 2000, 2200, 2500, 2500]
for i in range(len(Rating_Ranges)):
if(i == 0):
df_rate = df[(df['White Rating'] < Rating_Ranges[i]) & (df['Black Rating'] < Rating_Ranges[i])]
print(f'Popular Openings in Rating Range: 0 - {Rating_Ranges[i]}')
elif(i == len(Rating_Ranges) - 1):
df_rate = df[(df['White Rating'] > Rating_Ranges[i]) & (df['Black Rating'] > Rating_Ranges[i])]
print(f'Popular Openings in Rating Range: >{Rating_Ranges[i]}')
else:
df_rate = df[(df['White Rating'] > Rating_Ranges[i-1]) & (df['Black Rating'] > Rating_Ranges[i-1]) &
(df['White Rating'] < Rating_Ranges[i]) & (df['Black Rating'] < Rating_Ranges[i])]
print(f'Popular Openings in Rating Range: {Rating_Ranges[i-1]} - {Rating_Ranges[i]}')
counts = df_rate['Opening Short'].value_counts()
percentage = df_rate['Opening Short'].value_counts(normalize=True).mul(100)
open_stats = pd.DataFrame({'counts': counts, 'percentage':percentage})
open_stats.index.name='Opening'
open_stats = open_stats.reset_index()
display(open_stats.head(10))
# display(open_stats[open_stats['Opening'] == "King's Pawn Game"]) | Popular Openings in Rating Range: 0 - 1200
| MIT | jordan_analysis.ipynb | kdesai2018/DSL-final-project |
Effectiveness of Top Openings in Top Level Play | df_top = df[(df['Avg Rating'] > 2500)]
counts = df_top['Opening Short'].value_counts()
percentage = df_top['Opening Short'].value_counts(normalize=True).mul(100)
top_stats = pd.DataFrame({'counts': counts, 'percentage':percentage})
display(top_stats.head(5))
openings = ['Sicilian Defense', 'French Defense', 'English Opening', 'Nimzo-Larsen Attack', 'Zukertort Opening']
for op in openings:
df_op = df_top[df_top['Opening Short'] == op]
df_results = pd.DataFrame(df_op['Result'].value_counts(normalize=True))
print(f'>2500 ELO: Results for {op}')
display(df_results)
ratings.min()
prev_i = 750
data = []
for i in range(800, 2900, 50):
df_op = df[(df['Opening Short'] == 'Sicilian Defense') & (df['Avg Rating'] > prev_i) & (df['Avg Rating'] < i)]
if df_op.empty:
prev_i = i
continue
df_results = pd.DataFrame(df_op['Result'].value_counts(normalize=True))
try:
data.append({'Rating': i, 'White': df_results.loc['1-0']['Result'], 'Black': df_results.loc['0-1']['Result'], 'Draw': df_results.loc['1/2-1/2']['Result']})
except KeyError:
continue
prev_i = i
df_sicilian = pd.DataFrame(data)
df_sicilian = df_sicilian.set_index('Rating')
df_sicilian['Black/Draw'] = df_sicilian['Black'] + df_sicilian['Draw']
df_sicilian.plot.line()
prev_i = 750
data = []
for i in range(800, 2900, 50):
df_op = df[(df['Opening Short'] == 'Nimzo-Larsen Attack') & (df['Avg Rating'] > prev_i) & (df['Avg Rating'] < i)]
if df_op.empty:
prev_i = i
continue
df_results = pd.DataFrame(df_op['Result'].value_counts(normalize=True))
try:
data.append({'Rating': i, 'White': df_results.loc['1-0']['Result'], 'Black': df_results.loc['0-1']['Result'], 'Draw': df_results.loc['1/2-1/2']['Result']})
except KeyError:
continue
prev_i = i
df_lar = pd.DataFrame(data)
df_lar = df_lar.set_index('Rating')
df_lar.plot.line()
df_lar.plot.line()
plt.suptitle("Nimzo-Larsen Attack: Rating vs Results")
df_sicilian.plot.line()
plt.suptitle("Sicilian Defense: Rating vs Results")
df_sicilian['Black vs White'] = df_sicilian['Black'] - df_sicilian['White']
df['Opening Short'].value_counts(normalize=True)
counts = df['Opening Short'].value_counts()
percentage = df['Opening Short'].value_counts(normalize=True).mul(100)
open_stats = pd.DataFrame({'counts': counts, 'percentage':percentage})
open_stats.index.name='Opening'
open_stats = open_stats.reset_index()
print('Overall Most Popular Openings')
open_stats.head(21)
open_stats[open_stats['Opening'] == 'Nimzo-Larsen Attack'] | _____no_output_____ | MIT | jordan_analysis.ipynb | kdesai2018/DSL-final-project |
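The Sicilian Defense and Nimzo-Larsen Attack rating-band loops above repeat the same logic; a reusable helper, sketched here with the same column names and band boundaries, could replace both:

```python
# Sketch: generalize the per-opening, per-rating-band result breakdown used above.
def results_by_rating(df, opening, start=800, stop=2900, step=50):
    rows = []
    prev = start - step
    for upper in range(start, stop, step):
        band = df[(df['Opening Short'] == opening) &
                  (df['Avg Rating'] > prev) & (df['Avg Rating'] < upper)]
        prev = upper
        if band.empty:
            continue
        results = band['Result'].value_counts(normalize=True)
        rows.append({'Rating': upper,
                     'White': results.get('1-0', 0.0),
                     'Black': results.get('0-1', 0.0),
                     'Draw': results.get('1/2-1/2', 0.0)})
    return pd.DataFrame(rows).set_index('Rating')

# Example usage, mirroring the plots above:
results_by_rating(df, 'Sicilian Defense').plot.line()
```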
prepared by Abuzer Yakaryilmaz This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Vectors: Dot (Scalar) Product. Two vectors can be multiplied with each other in different ways. One of the very basic methods is the dot product. It is also called the scalar product, because the result is a scalar value, e.g., a real number. Consider the following two vectors:$$ u = \myrvector{-3 \\ -2 \\ 0 \\ -1 \\ 4} \mbox{ and } v = \myrvector{-1\\ -1 \\2 \\ -3 \\ 5}.$$ The dot product of $ u $ and $ v $, denoted by $ \dot{u}{v}$, can be defined algorithmically. Pairwise multiplication: the values in the same positions are multiplied with each other. Summation of all pairwise multiplications: then we sum all the results obtained from the pairwise multiplications. We write its Python code below. | # let's define both vectors
u = [-3,-2,0,-1,4]
v = [-1,-1,2,-3,5]
uv = 0; # summation is initially zero
for i in range(len(u)): # iteratively access every pair with the same indices
print("pairwise multiplication of the entries with index",i,"is",u[i]*v[i])
uv = uv + u[i]*v[i] # i-th entries are multiplied and then added to summation
print() # print an empty line
print("The dot product of",u,'and',v,'is',uv) | _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
The pairwise multiplications of entries are $ (-3)\cdot(-1) = 3 $, $ (-2)\cdot(-1) = 2 $, $ 0\cdot 2 = 0 $, $ (-1)\cdot(-3) = 3 $, and, $ 4 \cdot 5 = 20 $. Thus the summation of all pairwise multiplications of entries is $ 3+2+0+3+20 = 28 $.Remark that the dimensions of the given vectors must be the same. Otherwise, the dot product is not defined. Task 1 Find the dot product of the following vectors in Python:$$ v = \myrvector{-3 \\ 4 \\ -5 \\ 6} ~~~~\mbox{and}~~~~ u = \myrvector{4 \\ 3 \\ 6 \\ 5}.$$Your outcome should be $0$. | #
# your solution is here
#
| _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
click for our solution Task 2 Let $ u = \myrvector{ -3 \\ -4 } $ be a 2 dimensional vector.Find $ \dot{u}{u} $ in Python. | #
# your solution is here
#
| _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
click for our solution Notes:As may be observed from Task 2, the length of a vector can be calculated by using its dot product with itself.$$ \norm{u} = \sqrt{\dot{u}{u}}. $$$ \dot{u}{u} $ is $25$, and so $ \norm{u} = \sqrt{25} = 5 $. $ \dot{u}{u} $ automatically accumulates the contribution of each entry to the length. Orthogonal (perpendicular) vectors For simplicity, we consider 2-dimensional vectors.The following two vectors are perpendicular (orthogonal) to each other.The angle between them is $ 90 $ degrees. | # let's find the dot product of v and u
v = [-4,0]
u = [0,-5]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result) | _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
Now, let's check the dot product of the following two vectors: | # we can use the same code
v = [-4,3]
u = [-3,-4]
result = 0;
for i in range(2):
result = result + v[i]*u[i]
print("the dot product of u and v is",result) | _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
The dot product of new $ u $ and $ v $ is also $0$. This is not surprising, because the vectors $u$ and $v$ (in both cases) are orthogonal to each other. Fact: The dot product of two orthogonal (perpendicular) vectors is zero. If the dot product of two vectors is zero, then they are orthogonal to each other. This fact is important, because, as we will see later, orthogonal vectors (states) can be distinguished perfectly. Task 3 Verify that (i) $ u $ is orthogonal to $ -v $, (ii) $ -u $ is orthogonal to $ v $, and (iii) $ -u $ is orthogonal to $ -v $. | # you may consider writing a function in Python for the dot product
#
# your solution is here
#
| _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
click for our solution Task 4 Find the dot product of $ v $ and $ u $ in Python.$$ v = \myrvector{-1 \\ 2 \\ -3 \\ 4} ~~~~\mbox{and}~~~~ u = \myrvector{-2 \\ -1 \\ 5 \\ 2}.$$Find the dot product of $ -2v $ and $ 3u $ in Python.Compare both results. | #
# your solution is here
#
| _____no_output_____ | MIT | math/Math24_Dot_Product.ipynb | HasanIjaz-HB/Quantum-Computing |
Multi step model (simple encoder-decoder). In this notebook, we demonstrate how to: - prepare time series data for training an RNN forecasting model - get data in the required shape for the Keras API - implement an RNN model in Keras to predict the next 3 steps ahead (time *t+1* to *t+3*) in the time series. This model uses a simple encoder-decoder approach in which the final hidden state of the encoder is replicated across each time step of the decoder. - enable early stopping to reduce the likelihood of model overfitting - evaluate the model on a test dataset. The data in this example is taken from the GEFCom2014 forecasting competition [1]. It consists of 3 years of hourly electricity load and temperature values between 2012 and 2014. The task is to forecast future values of electricity load. [1] Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli and Rob J. Hyndman, "Probabilistic energy forecasting: Global Energy Forecasting Competition 2014 and beyond", International Journal of Forecasting, vol. 32, no. 3, pp. 896-913, July-September 2016. | import os
import warnings
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
from collections import UserDict
from IPython.display import Image
%matplotlib inline
from common.utils import load_data, mape, TimeSeriesTensor, create_evaluation_df
pd.options.display.float_format = '{:,.2f}'.format
np.set_printoptions(precision=2)
warnings.filterwarnings("ignore") | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Load data into Pandas dataframe | energy = load_data('data/')
energy.head()
valid_start_dt = '2014-09-01 00:00:00'
test_start_dt = '2014-11-01 00:00:00'
T = 6
HORIZON = 3 | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Create training set containing only the model features | train = energy.copy()[energy.index < valid_start_dt][['load', 'temp']] | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Scale data to be in range (0, 1). This transformation should be calibrated on the training set only. This is to prevent information from the validation or test sets leaking into the training data. | from sklearn.preprocessing import MinMaxScaler
y_scaler = MinMaxScaler()
y_scaler.fit(train[['load']])
X_scaler = MinMaxScaler()
train[['load', 'temp']] = X_scaler.fit_transform(train) | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Use the TimeSeriesTensor convenience class to: 1. Shift the values of the time series to create a Pandas dataframe containing all the data for a single training example 2. Discard any samples with missing values 3. Transform this Pandas dataframe into a numpy array of shape (samples, time steps, features) for input into Keras. The class takes the following parameters: - **dataset**: original time series - **H**: the forecast horizon - **tensor_structure**: a dictionary describing the tensor structure in the form { 'tensor_name' : (range(max_backward_shift, max_forward_shift), [feature, feature, ...] ) } - **freq**: time series frequency - **drop_incomplete**: (Boolean) whether to drop incomplete samples | tensor_structure = {'X':(range(-T+1, 1), ['load', 'temp'])}
train_inputs = TimeSeriesTensor(train, 'load', HORIZON, {'X':(range(-T+1, 1), ['load', 'temp'])})
train_inputs.dataframe.head() | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
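To confirm the tensor structure described above produced the expected arrays, the wrapped numpy tensors can be inspected directly; the `'X'` and `'target'` keys are the ones used later in this notebook, and the shapes should come out as (samples, T, 2) and (samples, HORIZON):

```python
# Inspect the numpy arrays built by TimeSeriesTensor
print(train_inputs['X'].shape)       # expected: (samples, 6, 2)
print(train_inputs['target'].shape)  # expected: (samples, 3)
```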
Construct validation set (keeping T hours from the training set in order to construct initial features) | look_back_dt = dt.datetime.strptime(valid_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1)
valid = energy.copy()[(energy.index >=look_back_dt) & (energy.index < test_start_dt)][['load', 'temp']]
valid[['load', 'temp']] = X_scaler.transform(valid)
valid_inputs = TimeSeriesTensor(valid, 'load', HORIZON, tensor_structure) | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Implement the RNN We will implement an RNN forecasting model with the following structure: | Image('./images/simple_encoder_decoder.png')
from keras.models import Model, Sequential
from keras.layers import GRU, Dense, RepeatVector, TimeDistributed, Flatten
from keras.callbacks import EarlyStopping
LATENT_DIM = 5
BATCH_SIZE = 32
EPOCHS = 10
model = Sequential()
model.add(GRU(LATENT_DIM, input_shape=(T, 2)))
model.add(RepeatVector(HORIZON))
model.add(GRU(LATENT_DIM, return_sequences=True))
model.add(TimeDistributed(Dense(1)))
model.add(Flatten())
model.compile(optimizer='RMSprop', loss='mse')
model.summary()
earlystop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5)
model.fit(train_inputs['X'],
train_inputs['target'],
batch_size=BATCH_SIZE,
epochs=EPOCHS,
validation_data=(valid_inputs['X'], valid_inputs['target']),
callbacks=[earlystop],
verbose=1) | Train on 23368 samples, validate on 1461 samples
Epoch 1/50
23368/23368 [==============================] - 4s 185us/step - loss: 0.0217 - val_loss: 0.0053
Epoch 2/50
23368/23368 [==============================] - 3s 129us/step - loss: 0.0050 - val_loss: 0.0042
Epoch 3/50
23368/23368 [==============================] - 3s 147us/step - loss: 0.0044 - val_loss: 0.0039
Epoch 4/50
23368/23368 [==============================] - 3s 129us/step - loss: 0.0041 - val_loss: 0.0035
Epoch 5/50
23368/23368 [==============================] - 3s 134us/step - loss: 0.0039 - val_loss: 0.0033
Epoch 6/50
23368/23368 [==============================] - 3s 138us/step - loss: 0.0037 - val_loss: 0.0032
Epoch 7/50
23368/23368 [==============================] - 3s 127us/step - loss: 0.0035 - val_loss: 0.0029
Epoch 8/50
23368/23368 [==============================] - 3s 130us/step - loss: 0.0034 - val_loss: 0.0029
Epoch 9/50
23368/23368 [==============================] - 4s 156us/step - loss: 0.0033 - val_loss: 0.0037
Epoch 10/50
23368/23368 [==============================] - 3s 123us/step - loss: 0.0033 - val_loss: 0.0029
Epoch 11/50
23368/23368 [==============================] - 4s 177us/step - loss: 0.0032 - val_loss: 0.0026
Epoch 12/50
23368/23368 [==============================] - 5s 195us/step - loss: 0.0032 - val_loss: 0.0032
Epoch 13/50
23368/23368 [==============================] - 4s 167us/step - loss: 0.0032 - val_loss: 0.0027
Epoch 14/50
23368/23368 [==============================] - 3s 135us/step - loss: 0.0032 - val_loss: 0.0028
Epoch 15/50
23368/23368 [==============================] - 4s 174us/step - loss: 0.0031 - val_loss: 0.0041
Epoch 16/50
23368/23368 [==============================] - 4s 172us/step - loss: 0.0031 - val_loss: 0.0034
| MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
Evaluate the model | look_back_dt = dt.datetime.strptime(test_start_dt, '%Y-%m-%d %H:%M:%S') - dt.timedelta(hours=T-1)
test = energy.copy()[test_start_dt:][['load', 'temp']]
test[['load', 'temp']] = X_scaler.transform(test)
test_inputs = TimeSeriesTensor(test, 'load', HORIZON, tensor_structure)
predictions = model.predict(test_inputs['X'])
predictions
eval_df = create_evaluation_df(predictions, test_inputs, HORIZON, y_scaler)
eval_df.head()
eval_df['APE'] = (eval_df['prediction'] - eval_df['actual']).abs() / eval_df['actual']
eval_df.groupby('h')['APE'].mean()
mape(eval_df['prediction'], eval_df['actual']) | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
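The `mape` helper is imported from `common.utils` and its source is not shown in this section; a minimal sketch of the mean absolute percentage error it reports (the exact scaling, fraction versus percent, follows the helper in `common.utils`) could look like:

```python
# Mean absolute percentage error: mean(|prediction - actual| / |actual|)
def mape_sketch(predictions, actuals):
    return ((predictions - actuals).abs() / actuals.abs()).mean()

# Should broadly agree with mape(eval_df['prediction'], eval_df['actual']) above.
print(mape_sketch(eval_df['prediction'], eval_df['actual']))
```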
Plot actuals vs predictions at each horizon for the first week of the test period. As is to be expected, predictions for one step ahead (*t+1*) are more accurate than those for 2 or 3 steps ahead. | plot_df = eval_df[(eval_df.timestamp<'2014-11-08') & (eval_df.h=='t+1')][['timestamp', 'actual']]
for t in range(1, HORIZON+1):
plot_df['t+'+str(t)] = eval_df[(eval_df.timestamp<'2014-11-08') & (eval_df.h=='t+'+str(t))]['prediction'].values
fig = plt.figure(figsize=(15, 8))
ax = plt.plot(plot_df['timestamp'], plot_df['actual'], color='red', linewidth=4.0)
ax = fig.add_subplot(111)
ax.plot(plot_df['timestamp'], plot_df['t+1'], color='blue', linewidth=4.0, alpha=0.75)
ax.plot(plot_df['timestamp'], plot_df['t+2'], color='blue', linewidth=3.0, alpha=0.5)
ax.plot(plot_df['timestamp'], plot_df['t+3'], color='blue', linewidth=2.0, alpha=0.25)
plt.xlabel('timestamp', fontsize=12)
plt.ylabel('load', fontsize=12)
ax.legend(loc='best')
plt.show() | _____no_output_____ | MIT | 3_RNN_encoder_decoder.ipynb | YangLIN1997/DeepLearningForTimeSeriesForecasting |
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.stats import norm
from sklearn.preprocessing import StandardScaler
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
df=pd.read_csv('/content/drive/MyDrive/DataSet_Main/HouseData.csv')
df.columns | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
|
Analysis Sales Price | # Descriptive statistics summary
df['SalePrice'].describe() | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
- If the mean and the median are the same in the dataset, it means there are no outliers | # Histogram
sns.distplot(df['SalePrice']) | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
- Normal distribution | # Skewness and kurtosis
print('Skewness: %f' % df['SalePrice'].skew())
print('Kurtosis: %f' % df['SalePrice'].kurt())  # use kurt() for kurtosis | Skewness: 1.882876
Kurtosis: 1.882876
| MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
- If skewness is zero, the dataset has no skew - It ranges from 0 to 10 Relationships | # Scatter plot grlivarea/saleprice
var='GrLivArea'
data=pd.concat([df['SalePrice'],df[var]],axis=1) #another way of making dataframe
data.plot.scatter(x=var,y='SalePrice',ylim=(0,800000)) | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
- For removing outliers - If the value is greater than 4000, then remove the row | # Scatter plot totalbsmtsf/saleprice
var='TotalBsmtSF'
data=pd.concat([df['SalePrice'],df[var]],axis=1)
data.plot.scatter(x=var,y='SalePrice',ylim=(0,800000)) | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
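The note above about values greater than 4000 refers to `GrLivArea`; a minimal sketch of dropping those rows (the threshold is taken from the note, and the result is kept in a separate dataframe here):

```python
# Sketch: drop the GrLivArea outliers flagged in the note above (> 4000 sq ft).
df_no_outliers = df.drop(df[df['GrLivArea'] > 4000].index)
print(df.shape, '->', df_no_outliers.shape)
```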
Relationship with categorical variables | # Box plot overallqual/saleprice
var='OverallQual'
data=pd.concat([df['SalePrice'],df[var]],axis=1)
f,ax=plt.subplots(figsize=(10,6))
fig=sns.boxplot(x=var,y='SalePrice',data=data)
fig.axis(ymin=0,ymax=800000);
var='YearBuilt'
data=pd.concat([df['SalePrice'],df[var]],axis=1)
f,ax=plt.subplots(figsize=(16,8))
fig=sns.boxplot(x=var,y='SalePrice',data=data)
fig.axis(ymin=0,ymax=800000)
plt.xticks(rotation=90); | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
- For a dataset that cannot be visualized clearly this way, an alternate method is to use Tableau or Power BI for a better and clearer illustration Correlation | # Correlation Matrix
corrmat=df.corr()
f,ax=plt.subplots(figsize=(12,9))
sns.heatmap(corrmat,vmax=.8,square=True);
# SalePrice correlation matrix
k=10 # number of variables for heatmap
cols=corrmat.nlargest(k,'SalePrice')['SalePrice'].index
cm=np.corrcoef(df[cols].values.T)
sns.set(font_scale=0.6)
hm=sns.heatmap(cm,cbar=True,annot=True,square=True,fmt='.2f',annot_kws={'size':7},yticklabels=cols.values,xticklabels=cols.values)
plt.show()
# Scatterplot
sns.set()
cols=['SalePrice','OverallQual','GrLivArea','GarageCars']
sns.pairplot(df[cols],size=2.5)
plt.show(); | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
Missing Value | # missing data
total=df.isnull().sum().sort_values(ascending=False)
percent=(df.isnull().sum()/df.isnull().count()).sort_values(ascending=False)
missing_data=pd.concat([total,percent],axis=1,keys=['Total','Percent'])
missing_data.head(20) | _____no_output_____ | MIT | EDA_techniques.ipynb | capitallatera/Stat-and-ML |
Individual assignment | import matplotlib.pyplot as plt
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
import numpy as np
x = np.linspace(0, 5, 50)
y1 = 3*x
y2 = [i/0.10 for i in x]
fig, ax = plt.subplots(figsize=(15, 9))
ax.set_title("Probability of passing the cross-platform programming course", fontsize=16)
ax.set_xlabel("Grades", fontsize=14)
ax.set_ylabel("Probability of passing", fontsize=14)
ax.grid(which="major", linewidth=1, color="black")
ax.grid(which="minor", linestyle="--", color="grey", linewidth=0.6)
ax.scatter(x, y1, c="red", label="Probability of passing")
ax.plot(x, y2, label="Probability curve")
ax.legend()
# Subdivisions of the main grid
ax.xaxis.set_minor_locator(AutoMinorLocator())
ax.yaxis.set_minor_locator(AutoMinorLocator())
plt.show() | _____no_output_____ | MIT | ind.ipynb | djamaludinn/cp4 |
Sentiment Analysis with an RNNIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. >Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative. Network ArchitectureThe architecture for this network is shown below.>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data. >**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. We don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg). --- Load in and visualize the data | from google.colab import files
# Upload reviews.txt and labels.txt to google colab
uploaded = files.upload()
import numpy as np
# read data from text files
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
print(reviews[:2000])
print()
print(labels[:20]) | bromwell high is a cartoon comedy . it ran at the same time as some other programs about school life such as teachers . my years in the teaching profession lead me to believe that bromwell high s satire is much closer to reality than is teachers . the scramble to survive financially the insightful students who can see right through their pathetic teachers pomp the pettiness of the whole situation all remind me of the schools i knew and their students . when i saw the episode in which a student repeatedly tried to burn down the school i immediately recalled . . . . . . . . . at . . . . . . . . . . high . a classic line inspector i m here to sack one of your teachers . student welcome to bromwell high . i expect that many adults of my age think that bromwell high is far fetched . what a pity that it isn t
story of a man who has unnatural feelings for a pig . starts out with a opening scene that is a terrific example of absurd comedy . a formal orchestra audience is turned into an insane violent mob by the crazy chantings of it s singers . unfortunately it stays absurd the whole time with no general narrative eventually making it just too off putting . even those from the era should be turned off . the cryptic dialogue would make shakespeare seem easy to a third grader . on a technical level it s better than you might think with some good cinematography by future great vilmos zsigmond . future stars sally kirkland and frederic forrest can be seen briefly .
homelessness or houselessness as george carlin stated has been an issue for years but never a plan to help those on the street that were once considered human who did everything from going to school work or vote for the matter . most people think of the homeless as just a lost cause while worrying about things such as racism the war on iraq pressuring kids to succeed technology the elections inflation or worrying if they ll be next to end up on the streets . br br but what if y
positive
negative
po
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
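The model implementation itself is not included in this excerpt; as a rough sketch of the architecture described above (embedding layer, LSTM cells, sigmoid output kept only for the last time step), with all layer sizes as placeholder assumptions:

```python
# Sketch of the described architecture; hyperparameter values here are placeholders.
import torch.nn as nn

class SentimentRNN(nn.Module):
    def __init__(self, vocab_size, embedding_dim=400, hidden_dim=256, n_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim)   # int tokens -> dense vectors
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
                            batch_first=True, dropout=0.5)
        self.fc = nn.Linear(hidden_dim, 1)                          # one sentiment score per step
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        embeds = self.embedding(x)             # (batch, seq_len, embedding_dim)
        lstm_out, _ = self.lstm(embeds)        # (batch, seq_len, hidden_dim)
        out = self.sigmoid(self.fc(lstm_out))  # (batch, seq_len, 1)
        return out[:, -1, :]                   # keep only the last time step
```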
Data pre-processing. The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit. You can see an example of the reviews data above. Here are the processing steps we'll want to take: >* We'll want to get rid of periods and extraneous punctuation. * Also, you might notice that the reviews are delimited with newline characters `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. * Then I can combine all the reviews back together into one big string. First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. | from string import punctuation
print(punctuation)
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30] | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Encoding the wordsThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`. | # feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
words_counts = Counter(words)
vocab = sorted(words_counts, key=words_counts.get, reverse=True)
vocab_to_int = {word:ii for ii, word in enumerate(vocab, 1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
reviews_ints.append([vocab_to_int[word] for word in review.split()]) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
**Test your code** As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review. | # stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1]) | Unique words: 74072
Tokenized review:
[[21025, 308, 6, 3, 1050, 207, 8, 2138, 32, 1, 171, 57, 15, 49, 81, 5785, 44, 382, 110, 140, 15, 5194, 60, 154, 9, 1, 4975, 5852, 475, 71, 5, 260, 12, 21025, 308, 13, 1978, 6, 74, 2395, 5, 613, 73, 6, 5194, 1, 24103, 5, 1983, 10166, 1, 5786, 1499, 36, 51, 66, 204, 145, 67, 1199, 5194, 19869, 1, 37442, 4, 1, 221, 883, 31, 2988, 71, 4, 1, 5787, 10, 686, 2, 67, 1499, 54, 10, 216, 1, 383, 9, 62, 3, 1406, 3686, 783, 5, 3483, 180, 1, 382, 10, 1212, 13583, 32, 308, 3, 349, 341, 2913, 10, 143, 127, 5, 7690, 30, 4, 129, 5194, 1406, 2326, 5, 21025, 308, 10, 528, 12, 109, 1448, 4, 60, 543, 102, 12, 21025, 308, 6, 227, 4146, 48, 3, 2211, 12, 8, 215, 23]]
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Encoding the labelsOur labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`. | # 1=positive, 0=negative label conversion
encoded_labels = np.array([1 if label=="positive" else 0 for label in labels.split('\n')]) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Removing OutliersAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:1. Getting rid of extremely long or short reviews; the outliers2. Padding/truncating the remaining data so that we have reviews of the same length.Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training. | # outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens))) | Zero-length reviews: 1
Maximum review length: 2514
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`. | print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
review_zero_lenght = [ii for ii in range(len(reviews_ints)) if len(reviews_ints[ii]) <= 1]
reviews_ints = [reviews_ints[ii] for ii in range(len(reviews_ints)) if ii not in review_zero_lenght]
encoded_labels = [encoded_labels[ii] for ii in range(len(encoded_labels)) if ii not in review_zero_lenght]
print('Number of reviews after removing outliers: ', len(reviews_ints)) | Number of reviews before removing outliers: 25001
Number of reviews after removing outliers: 25000
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
--- Padding sequencesTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network. * The data should come from `review_ints`, since we want to feed integers to the network. * Each row should be `seq_length` elements long. * For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. * For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.As a small example, if the `seq_length=10` and an input review is: ```[117, 18, 128]```The resultant, padded sequence should be: ```[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]```**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data. | def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
## implement function
features = np.zeros((len(reviews_ints), seq_length), dtype=int)
for ii, review in enumerate(reviews_ints):
features[ii, -len(review):] = np.array(review)[:seq_length]
return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
## test statements - do not change - ##
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==seq_length, "Each feature row should contain seq_length values."
# print first 10 values of the first 30 batches
print(features[:30,:10]) | [[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[22382 42 46418 15 706 17139 3389 47 77 35]
[ 4505 505 15 3 3342 162 8312 1652 6 4819]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 54 10 14 116 60 798 552 71 364 5]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 1 330 578 34 3 162 748 2731 9 325]
[ 9 11 10171 5305 1946 689 444 22 280 673]
[ 0 0 0 0 0 0 0 0 0 0]
[ 1 307 10399 2069 1565 6202 6528 3288 17946 10628]
[ 0 0 0 0 0 0 0 0 0 0]
[ 21 122 2069 1565 515 8181 88 6 1325 1182]
[ 1 20 6 76 40 6 58 81 95 5]
[ 54 10 84 329 26230 46427 63 10 14 614]
[ 11 20 6 30 1436 32317 3769 690 15100 6]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 40 26 109 17952 1422 9 1 327 4 125]
[ 0 0 0 0 0 0 0 0 0 0]
[ 10 499 1 307 10399 55 74 8 13 30]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0]]
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Training, Validation, TestWith our data in nice shape, we'll split it into training, validation, and test sets.> **Exercise:** Create the training, validation, and test sets. * You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example. * Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9. * Whatever data is left will be split in half to create the validation and *testing* data. | split_frac = 0.8
encoded_labels = np.array(encoded_labels)
## split data into training, validation, and test data (features and labels, x and y)
split_idx = int(len(features)*split_frac)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## print out the shapes of your resultant feature data
print("\t\t\tFeature Shapes: \tLabels Shapes:")
print("Train set: \t\t{} \t\t{}".format(train_x.shape, train_y.shape),
"\nValidation set: \t{} \t\t{}".format(val_x.shape, val_y.shape),
"\nTest set: \t\t{} \t\t{}".format(test_x.shape, test_y.shape)) | Feature Shapes: Labels Shapes:
Train set: (20000, 200) (20000,)
Validation set: (2500, 200) (2500,)
Test set: (2500, 200) (2500,)
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
**Check your work**With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:``` Feature Shapes:Train set: (20000, 200) Validation set: (2500, 200) Test set: (2500, 200)``` --- DataLoaders and BatchingAfter creating training, test, and validation data, we can create DataLoaders for this data by following two steps:1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.2. Create DataLoaders and batch our training, validation, and test Tensor datasets.```train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))train_loader = DataLoader(train_data, batch_size=batch_size)```This is an alternative to creating a generator function for batching our data into full batches. | import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))
# dataloaders
batch_size = 50
# make sure to SHUFFLE your data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
# obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = dataiter.next()
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y) | Sample input size: torch.Size([50, 200])
Sample input:
tensor([[ 1, 224, 2, ..., 8522, 10, 1514],
[ 32, 48, 210, ..., 174, 57, 1],
[ 0, 0, 0, ..., 11, 233, 580],
...,
[ 0, 0, 0, ..., 5, 41, 798],
[ 1, 78, 36, ..., 59, 9260, 32],
[ 11, 20, 14, ..., 2, 841, 4]])
Sample label size: torch.Size([50])
Sample label:
tensor([1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1,
1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1,
0, 1])
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
--- Sentiment Network with PyTorchBelow is where you'll define the network.The layers are as follows:1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defined by a hidden_state size and number of layers3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size4. A sigmoid activation layer which turns all outputs into a value 0-1; return **only the last sigmoid output** as the output of this network. The Embedding LayerWe need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights. The LSTM Layer(s)We'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.Most of the time, your network will have better performance with more layers; between 2-3. Adding more layers allows the network to learn really complex relationships. > **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.Note: `init_hidden` should initialize the hidden and cell state of an lstm layer to all zeros, and move those states to the GPU, if available. | # First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""
Initialize the model by setting up the layers.
"""
super(SentimentRNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define all layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
self.dropout = nn.Dropout(drop_prob)
self.fc = nn.Linear(hidden_dim, output_size)
self.sigmoid = nn.Sigmoid()
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
# embeddings and lstm_out
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# sigmoid function
sig_out = self.sigmoid(out)
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get last batch of labels
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Instantiate the networkHere, we'll instantiate the network. First up, defining the hyperparameters.* `vocab_size`: Size of our vocabulary or the range of values for our input, word tokens.* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).* `embedding_dim`: Number of columns in the embedding lookup table; size of our embeddings.* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.* `n_layers`: Number of LSTM layers in the network. Typically between 1-3> **Exercise:** Define the model hyperparameters. | # Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net) | SentimentRNN(
(embedding): Embedding(74073, 400)
(lstm): LSTM(400, 256, num_layers=2, batch_first=True, dropout=0.5)
(dropout): Dropout(p=0.5)
(fc): Linear(in_features=256, out_features=1, bias=True)
(sigmoid): Sigmoid()
)
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
--- TrainingBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.We also have some data and training hyperparameters:* `lr`: Learning rate for our optimizer.* `epochs`: Number of times to iterate through the training dataset.* `clip`: The maximum gradient value to clip at (to prevent exploding gradients). | # loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses))) | Epoch: 1/4... Step: 100... Loss: 0.635234... Val Loss: 0.656715
Epoch: 1/4... Step: 200... Loss: 0.666213... Val Loss: 0.640142
Epoch: 1/4... Step: 300... Loss: 0.506314... Val Loss: 0.556882
Epoch: 1/4... Step: 400... Loss: 0.515226... Val Loss: 0.541775
Epoch: 2/4... Step: 500... Loss: 0.481418... Val Loss: 0.488891
Epoch: 2/4... Step: 600... Loss: 0.529370... Val Loss: 0.465098
Epoch: 2/4... Step: 700... Loss: 0.426508... Val Loss: 0.489485
Epoch: 2/4... Step: 800... Loss: 0.314929... Val Loss: 0.505303
Epoch: 3/4... Step: 900... Loss: 0.238121... Val Loss: 0.503732
Epoch: 3/4... Step: 1000... Loss: 0.269352... Val Loss: 0.478062
Epoch: 3/4... Step: 1100... Loss: 0.303809... Val Loss: 0.424000
Epoch: 3/4... Step: 1200... Loss: 0.222771... Val Loss: 0.430880
Epoch: 4/4... Step: 1300... Loss: 0.201906... Val Loss: 0.511853
Epoch: 4/4... Step: 1400... Loss: 0.172416... Val Loss: 0.484791
Epoch: 4/4... Step: 1500... Loss: 0.249004... Val Loss: 0.512710
Epoch: 4/4... Step: 1600... Loss: 0.220029... Val Loss: 0.538146
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
--- TestingThere are a few ways to test your network.* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called **inference**. | # Get test data loss and accuracy
test_losses = [] # track loss
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# get predicted outputs
output, h = net(inputs, h)
# calculate loss
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze()) # rounds to the nearest integer
# compare predictions to true label
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc)) | Test loss: 0.525
Test accuracy: 0.811
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Inference on a test reviewYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly! > **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length. | def predict(net, test_review, seq_length=200):
''' Prints out whether a give review is predicted to be
positive or negative in sentiment, using a trained model.
params:
net - A trained net
test_review - a review made of normal text and punctuation
seq_length - the padded length of a review
'''
# tokenize review
test_review = test_review.lower() # lowercase, standardize
review = ''.join([c for c in test_review if c not in punctuation])
words = review.split()
review_ints = []
review_ints.append([vocab_to_int[word] for word in words])
# pad review
features = pad_features(review_ints, seq_length)
# convert to tensor
feature_tensor = torch.from_numpy(features)
# set model to eval
net.eval()
# initialize hidden state
batch_size = feature_tensor.size(0)
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
# get the output from the model
output, h = net(feature_tensor, h)
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
# printing output value, before rounding
print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))
# print custom response
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
# call function
# try negative and positive reviews!
seq_length=200
predict(net, test_review_pos, seq_length)
predict(net, test_review_neg, seq_length) | Prediction value, pre-rounding: 0.993000
Positive review detected!
Prediction value, pre-rounding: 0.005079
Negative review detected.
| MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
Try out test_reviews of your own!Now that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.Later, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app! | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN.ipynb | hsneto/pytorch-challenge |
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from google.colab import files
uploaded_files = files.upload()
#reading the file
census = pd.read_excel('2county2019.xlsx')
census.head(5)
#fixing the names
census['CTYNAME']=census['CTYNAME'].str.replace('County', '')
census['CTYNAME']=census['CTYNAME'].str.replace('Parish', '')
census['CTYNAME']=census['CTYNAME'].str.replace('Census Area', '')
census['CTYNAME']=census['CTYNAME'].str.replace('City and Borough', '')  # remove the longer phrase before 'Borough'
census['CTYNAME']=census['CTYNAME'].str.replace('Borough', '')
census['CTYNAME']=census['CTYNAME'].str.replace('Municipality', '')
census['CTYNAME']=census['CTYNAME'].str.replace('city', '')
#census['CTYNAME']=census['CTYNAME'].str.split(' ',expand=True)[0:-1].str[:-1]
census['CTYNAME']=census['CTYNAME'].str.strip(' ')
census.head(5)
census['CountyState']= census['CTYNAME'].str.cat(census['STNAME'], sep =", ")
census
#percentage of counties with >50% male or >50% female
fem_census = census['BFEM?'].value_counts(normalize=True)
fem_census
from google.colab import files
uploaded_files = files.upload()
#superfund data
superfunds = pd.read_csv("superfunds.csv")
superfunds.head(5)
active = superfunds[superfunds["Status"] == "NPL Site"].shape
proposed = superfunds[superfunds["Status"] == "Deleted NPL Site"].shape
deleted = superfunds[superfunds["Status"] == "Proposed NPL Site"].shape
(active, proposed, deleted)
superfunds['CountyState']= superfunds['County'].str.cat(superfunds['State'], sep =", ")
superfunds.head(5)
superfunds_series = superfunds.set_index('CountyState').squeeze()
superfunds_series.head(5)
census_nodrop = census
#counties with >13% black population
black_census = census[census["BAC_PER"] > 13]
black_census
#number of counties with >50% male or >50% female given >13% black population
black_fem_census1 = black_census['BFEM?'].value_counts()
black_fem_census1
#percentage of counties with >50% male or >50% female given >13% black population
black_fem_census2 = black_census['BFEM?'].value_counts(normalize=True)
black_fem_census2
black_census_series = black_census.set_index('CountyState').squeeze()
black_census_series
nodrop_series = census_nodrop.set_index('CountyState').squeeze()
nodrop_series
superfund_and_blackcensus = black_census_series.join(superfunds_series)
#remove any counties that aren't in the superfund dataset
superfund_and_blackcensus = superfund_and_blackcensus[superfund_and_blackcensus['Site Score'].notna()]
superfund_and_blackcensus.head(5)
superfund_and_nodrop = nodrop_series.join(superfunds_series)
#remove any counties that aren't in the superfund dataset
superfund_and_nodrop = superfund_and_nodrop[superfund_and_nodrop['Site Score'].notna()]
superfund_and_nodrop.head(200)
#number of counties with >50% male or >50% female given >13% black population and superfund site
sex1 = superfund_and_blackcensus['BFEM?'].value_counts()
sex1
#percentage of counties with >50% male or >50% female given >13% black population and superfund site
sex2 = superfund_and_blackcensus['BFEM?'].value_counts(normalize=True)
sex2
superfund_and_nodrop['nonBI'] = superfund_and_nodrop['BAC_PER'] + superfund_and_nodrop['IAC_PER']
superfund_and_nodrop
plt.scatter(superfund_and_nodrop['nonBI'], superfund_and_nodrop['Site Score'], c='green', s=2, label="non Black or Indigenous")
plt.scatter(superfund_and_nodrop['BAC_PER'], superfund_and_nodrop['Site Score'], c='black', s=2, label="Black")
plt.scatter(superfund_and_nodrop['IAC_PER'], superfund_and_nodrop['Site Score'], c='brown', s=2, label="Indigenous")
plt.xlabel("% of residents")
plt.ylabel("superfund site score")
plt.legend()  # display the labels attached to each scatter series
plt.show() | _____no_output_____ | CC-BY-3.0 | Census_Data.ipynb | ertomz/h4bl-superfund-website
|
Notebook for importing Twitter data for Hurricane Sandy. The Twitter dataset is from mdredze. | %matplotlib inline
import sys
import os
sys.path.append(os.path.abspath('../'))
import pandas as pd
import pymongo
import twitterinfrastructure.twitter_sandy as ts
import importlib
importlib.reload(ts)
#os.chdir('../')
print(os.getcwd()) | C:\dev\research\socialsensing\notebooks
| BSD-2-Clause | notebooks/twitter-import.ipynb | jacob-heglund/socialsensing-jh |
Hydrate tweet IDs into tweets using Hydrator.1. Run the following cell to convert the raw mdredze sandy tweet ids file into an interim file of tweet ids in the format necessary to hydrate using Hydrator.1. Use [Hydrator](https://github.com/DocNow/hydrator) to hydrate the "data/interim/sandy-tweetids.txt" file. Hydrating on 03-14-2018 created a 13.3 GB json file with ??? tweets. | # create interim file with only tweet ids for hydration using Hydrator
# (6,554,744 tweet ids, 124.5 MB)
# takes ~1 min (3.1 GHz Intel Core i7, 16 GB 1867 MHz DDR3)
path = "data/raw/release-mdredze.txt"
write_path = "data/interim/sandy-tweetids.txt"
num_tweets = ts.create_hydrator_tweetids(path=path, write_path=write_path,
filter_sandy=False, progressbar=False, verbose=1) | 2019-05-02 13:30:54 : Started converting tweet ids from data/raw/release-mdredze.txt to Hydrator format.
| BSD-2-Clause | notebooks/twitter-import.ipynb | jacob-heglund/socialsensing-jh |
Import hydrated tweets into mongodb database. | # import tweets (4799665 tweets out of 4799665 lines, 12.2 GB total doc size)
# takes ~ 40 mins (3.1 GHz Intel Core i7, 16 GB 1867 MHz DDR3)
# path = 'data/processed/sandy-tweets-20180314.json'
path = 'E:/Work/projects/twitterinfrastructure/data/processed/sandy-tweets-20180314.json'
collection = 'tweets'
db_name = 'sandy'
db_instance = 'mongodb://localhost:27017/'
insert_num = ts.insert_tweets(path, collection=collection, db_name=db_name,
db_instance=db_instance, progressbar=True,
overwrite=True, verbose=1)
| 2019-05-02 13:18:14 : Started inserting tweets from "E:/Work/projects/twitterinfrastructure/data/processed/sandy-tweets-20180314.json" to tweets collection in sandy database.
2019-05-02 13:18:14 : Dropped tweets collection (if exists).
| BSD-2-Clause | notebooks/twitter-import.ipynb | jacob-heglund/socialsensing-jh |
Import taxi_zones GeoJSON into mongodb database.1. Open terminal.1. Change to the twitterinfrastructure project home directory. For example, run the following (based on my directory structure): $ cd Documents/projects/twitterinfrastructure1. Use mongoimport to import the taxi_zones_crs4326_mod.geojson into the database by running the following in terminal (not mongodb shell). Be aware of double dash lines in front of db, collection, file, and jsonArray arguments). $ mongoimport --db sandy --collection taxi_zones --file "data/processed/taxi_zones_crs4326_mod.geojson" --jsonArray1. Run the following cell to create a geosphere index in the taxi_zones collection. | # create geosphere index in taxi_zones collection
db_instance = 'mongodb://localhost:27017/'
db_name = 'sandy'
zones_collection = 'taxi_zones'
#db_name = 'sandy_test'
#zones_collection = 'taxi_zones_test'
client = pymongo.MongoClient(db_instance)
db = client[db_name]
db[zones_collection].create_index([("geometry", pymongo.GEOSPHERE)])
zones = db[zones_collection].find()
print('{count} taxi zones found in imported taxi_zones GeoJSON file.'.format(
count=zones.count())) | 263 taxi zones found in imported taxi_zones GeoJSON file.
| BSD-2-Clause | notebooks/twitter-import.ipynb | jacob-heglund/socialsensing-jh |
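With the geosphere index in place, point-in-polygon lookups against the zones become cheap. Below is a minimal sketch of how a tweet's coordinates might be matched to a taxi zone; the (lon, lat) point is an arbitrary example and the contents of the `properties` field depend on the GeoJSON, so neither is taken from the notebook.
# Hypothetical lookup: which taxi zone contains a given (lon, lat) point?
point = {"type": "Point", "coordinates": [-73.9857, 40.7484]}  # illustrative coordinates
zone = db[zones_collection].find_one(
    {"geometry": {"$geoIntersects": {"$geometry": point}}})
if zone is not None:
    print(zone["properties"])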
Import nyiso_zones GeoJSON into mongodb database.1. Open terminal.1. Change to the twitterinfrastructure project home directory. For example, run the following (based on my directory structure): $ cd Documents/projects/twitterinfrastructure1. Use mongoimport to import the 'nyiso-zones-crs4326-mod.geojson' file into the database by running the following in terminal (not mongodb shell). Be aware of double dash lines in front of db, collection, file, and jsonArray arguments. Make sure you delete any existing nyiso_zones collection in the database (the command will append, not overwrite). This geojson was created by manually querying and copying nyiso zone geojsons from [here](https://services1.arcgis.com/Lsfphzk53dXVltQC/arcgis/rest/services/NYISO_Zones/FeatureServer/0/query?outFields=*&where=1%3D1) (linked from [here](https://hub.arcgis.com/items/3a510da542c74537b268657f63dc2ce4)) to the 'data/raw/nyiso/' directory. Those individual zone geojsons were then combined into the 'nyiso.geojson' file and loaded into qgis3 (version 3.2, using the 'Add Vector Layer' option, individual zones were visualized by adjusting symbology of the layer properties to be categorized). The layer was then exported to a geojson file using qgis3 (with the EPSG:4326 crs). $ mongoimport --db sandy --collection nyiso_zones --file "data/processed/nyiso-zones-crs4326-mod.geojson" --jsonArray1. Run the following cell to create a geosphere index in the nyiso_zones collection and add the properties.zone_id field to each zone in the collection. | # create geosphere index in nyiso_zones collection
db_instance = 'mongodb://localhost:27017/'
db_name = 'sandy'
zones_collection = 'nyiso_zones'
client = pymongo.MongoClient(db_instance)
db = client[db_name]
db[zones_collection].create_index([("geometry", pymongo.GEOSPHERE)])
zones = db[zones_collection].find()
print('{count} nyiso zones found in imported nyiso_zones GeoJSON file.'.format(
count=zones.count()))
# add zone_id to nyiso_zones collection
zones_path = 'data/raw/nyiso/nyiso-zones.csv'
df = pd.read_csv(zones_path)
for abbrev, zone_id in zip(df['abbrev'], df['zone_id']):
db[zones_collection].update_one(
{"properties.Zone": abbrev},
{"$set": {"properties.zone_id": zone_id}}
) | 11 nyiso zones found in imported nyiso_zones GeoJSON file.
| BSD-2-Clause | notebooks/twitter-import.ipynb | jacob-heglund/socialsensing-jh |
Define a Convolutional Neural NetworkCopy the neural network from the Neural Networks section before and modify it to take 3-channel images (instead of 1-channel images as it was defined). | import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5) # reshape
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net() | _____no_output_____ | Apache-2.0 | DeepLearning/pytorch/cifar10_tutorial.ipynb | MikoyChinese/learn |
Define a Loss function and optimizerLet's use a Classification Cross-Entropy loss and SGD with momentum. | import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=1e-3, momentum=0.9) | _____no_output_____ | Apache-2.0 | DeepLearning/pytorch/cifar10_tutorial.ipynb | MikoyChinese/learn |
Train the networkThis is when things start to get interesting. We simply have to loop over our data iterator, and feed the inputs to the network and optimize. | for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data
        # zero the parameter gradients
optimizer.zero_grad()
# Forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999:
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training.')
# save trained model:
PATH = data_dir + 'cifar_net.pth'
torch.save(net.state_dict(), PATH) | _____no_output_____ | Apache-2.0 | DeepLearning/pytorch/cifar10_tutorial.ipynb | MikoyChinese/learn |
Test the network on the test dataWe have trained the network for 2 passes over the training dataset. But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural network outputs, and checking it against the ground-truth. If the prediction is correct, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar. | dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
# Load pre-train model
net = Net()
net.load_state_dict(torch.load(PATH))
outputs = net(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# show the predict accuracy of each class
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i])) | Accuracy of plane : 56 %
Accuracy of car : 57 %
Accuracy of bird : 40 %
Accuracy of cat : 55 %
Accuracy of deer : 45 %
Accuracy of dog : 24 %
Accuracy of frog : 51 %
Accuracy of horse : 60 %
Accuracy of ship : 73 %
Accuracy of truck : 69 %
| Apache-2.0 | DeepLearning/pytorch/cifar10_tutorial.ipynb | MikoyChinese/learn |
Training on GPU. Just like how you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU. Let's first define our device as the first visible cuda device if we have CUDA available: | device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
correct = 0
total = 0
net.to(device)
with torch.no_grad():
for data in testloader:
images, labels = data[0].to(device), data[1].to(device)
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total)) | Accuracy of the network on the 10000 test images: 53 %
| Apache-2.0 | DeepLearning/pytorch/cifar10_tutorial.ipynb | MikoyChinese/learn |
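The cell above only runs evaluation on the GPU, even though the heading is "Training on GPU". A minimal sketch of how the earlier training loop could be moved to the device as well is shown below; it assumes `trainloader`, `criterion`, and `optimizer` from the previous cells and is an illustration rather than part of the original notebook.
# Hedged sketch: the same two-epoch training loop, run on `device`.
net.to(device)
for epoch in range(2):
    for inputs, labels in trainloader:
        # move each mini-batch to the same device as the model
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()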
Copyright 2019 The TensorFlow Authors.Licensed under the Apache License, Version 2.0 (the "License"); | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n |
Adversarial example generation using FGSM. View on TensorFlow.org, Run in Google Colab, View source on GitHub, Download notebook.
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this text exactly matches the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs-l10n](https://github.com/tensorflow/docs-l10n/) GitHub repository. To get involved in translating or reviewing documentation, please email [[email protected]](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This tutorial shows how to create an adversarial example using the FGSM (Fast Gradient Signed Method) attack described in [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572) by Ian Goodfellow *et al*. FGSM was one of the first techniques discovered for attacking neural networks and remains one of the most famous.
What is an adversarial example? An adversarial example is a specialized input created with the purpose of confusing a neural network, causing it to misclassify the sample. Although adversarial examples look almost indistinguishable from ordinary samples to a human, the network fails to identify them correctly. There are several kinds of such attacks; this tutorial covers FGSM, which belongs to the family of white box attacks. A white box attack assumes that the attacker has full access to all parameter values of the target model. The image below shows the most famous adversarial example, the panda from Goodfellow et al.: adding a small, specific perturbation to the original image makes the network misclassify the panda as a gibbon with high confidence. The following sections look at this perturbation step in detail.
FGSM generates adversarial examples using the gradients of the neural network. If the model input is an image, the gradient of the loss with respect to the input image is computed and an image that maximizes that loss is created. The newly generated image is called the adversarial image. The process can be summarized by the following expression:
$$adv\_x = x + \epsilon*\text{sign}(\nabla_xJ(\theta, x, y))$$
The symbols are:
* adv_x : the adversarial image.
* x : the original input image.
* y : the original input label.
* $\epsilon$ : a multiplier that keeps the perturbation small.
* $\theta$ : the model parameters.
* $J$ : the loss function.
An interesting point here is that the gradient is taken with respect to the input image, because the goal of FGSM is to create an image that maximizes the loss. In short, an adversarial example is produced by computing, via the gradient, how much each pixel contributes to the loss, and then adding a perturbation to the pixel values according to that contribution. Each pixel's contribution can be found quickly with the chain rule, which is why the gradient with respect to the input image is used. In addition, because the target model is no longer being trained (so gradients with respect to the network weights are not needed), the model weights do not change. The ultimate goal of FGSM is to confuse a model that has already finished training.
Now let's attack a pretrained model. The model used in this tutorial is [MobileNetV2](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/applications/MobileNetV2), pretrained on [ImageNet](http://www.image-net.org/). | import tensorflow as tf
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rcParams['figure.figsize'] = (8, 8)
mpl.rcParams['axes.grid'] = False | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
Now load the pretrained MobileNetV2 model and the ImageNet class names. | pretrained_model = tf.keras.applications.MobileNetV2(include_top=True,
                                                     weights='imagenet')
pretrained_model.trainable = False
# ImageNet class labels
decode_predictions = tf.keras.applications.mobilenet_v2.decode_predictions
# Helper function to preprocess the image so that it can be fed to MobileNetV2
def preprocess(image):
  image = tf.cast(image, tf.float32)
  image = image/255
  image = tf.image.resize(image, (224, 224))
  image = image[None, ...]
  return image
# Helper function to extract the label from the probability vector
def get_imagenet_label(probs):
  return decode_predictions(probs, top=1)[0][0] | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
Original image. Let's use a sample image of a [Labrador Retriever](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg) by Mirko ([CC-BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/)) to create the adversarial example. As a first step, preprocess the original image so that it can be provided as an input to the MobileNetV2 model. | image_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
image_raw = tf.io.read_file(image_path)
image = tf.image.decode_image(image_raw)
image = preprocess(image)
image_probs = pretrained_model.predict(image) | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
Let's have a look at the image. | plt.figure()
plt.imshow(image[0])
_, image_class, class_confidence = get_imagenet_label(image_probs)
plt.title('{} : {:.2f}% Confidence'.format(image_class, class_confidence*100))
plt.show() | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
Creating the adversarial image. Implementing FGSM: the first step is to create the perturbation that will be added to the original image to produce the adversarial sample. As discussed above, the gradient with respect to the input image is used to create the perturbation. | loss_object = tf.keras.losses.CategoricalCrossentropy()
def create_adversarial_pattern(input_image, input_label):
  with tf.GradientTape() as tape:
    tape.watch(input_image)
    prediction = pretrained_model(input_image)
    loss = loss_object(input_label, prediction)
  # Get the gradient of the loss with respect to the input image.
  gradient = tape.gradient(loss, input_image)
  # Get the sign of the gradient to create the perturbation.
  signed_grad = tf.sign(gradient)
  return signed_grad | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
The resulting perturbation can also be visualized. | # One-hot encode the label of the image.
labrador_retriever_index = 208
label = tf.one_hot(labrador_retriever_index, image_probs.shape[-1])
label = tf.reshape(label, (1, image_probs.shape[-1]))
perturbations = create_adversarial_pattern(image, label)
plt.imshow(perturbations[0]) | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
Let's try different values of the perturbation multiplier epsilon. This simple experiment shows that as epsilon grows it becomes easier to fool the network, but this comes at the cost of the perturbation becoming more and more visible in the image. | def display_images(image, description):
  _, label, confidence = get_imagenet_label(pretrained_model.predict(image))
  plt.figure()
  plt.imshow(image[0])
  plt.title('{} \n {} : {:.2f}% Confidence'.format(description,
                                                   label, confidence*100))
  plt.show()
epsilons = [0, 0.01, 0.1, 0.15]
descriptions = [('Epsilon = {:0.3f}'.format(eps) if eps else 'Input')
                for eps in epsilons]
for i, eps in enumerate(epsilons):
  adv_x = image + eps*perturbations
  adv_x = tf.clip_by_value(adv_x, 0, 1)
  display_images(adv_x, descriptions[i]) | _____no_output_____ | Apache-2.0 | site/ko/tutorials/generative/adversarial_fgsm.ipynb | gmb-ftcont/docs-l10n
plotting a neuron | import numpy as np
import McNeuron
import data_transforms
from keras.models import Sequential
from keras.layers.core import Dense, Reshape, Activation  # Activation is needed by geometry_generator below
from keras.layers.recurrent import LSTM
import matplotlib.pyplot as plt
from copy import deepcopy
import os
from numpy import linalg as LA
%matplotlib inline
import scipy.io
mat = scipy.io.loadmat("/Volumes/Arch/Dropbox/HG-GAN/03-Data/Matlab format/part 1.mat")
#mat = scipy.io.loadmat("/Volumes/Arch/Dropbox/HG-GAN/03-Data/Matlab format/sample.mat")
mat.keys()
tmp = mat['neuron_data'][186][3]
#tmp = mat['N']
neuron = data_transforms.swc_to_neuron(tmp)
tmp = mat['neuron_data'][186][3]
#tmp = mat['N']
neuron = data_transforms.swc_to_neuron(tmp)
neuron2 = data_transforms.downsample_neuron(neuron, number = 20)
McNeuron.visualize.plot_2D(neuron2 ,size = 4)
McNeuron.visualize.plot_2D(neuron ,size = 4)
L = []
for i in range(185):
tmp = mat['neuron_data'][i][3]
#tmp = mat['N']
neuron = data_transforms.swc_to_neuron(tmp)
neuron2 = data_transforms.downsample_neuron(neuron, number = 20)
print i
L.append(neuron2)
L
n = neuron.mesoscale_subsample(150)
#n = neuron.subsample_main_nodes()
McNeuron.visualize.plot_2D(n ,size = 4)
print n.n_node
McNeuron.visualize.plot_2D(neuron ,size = 4) | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Subsampling methods | # self is neuron!
def get_main_points(self):
"""
gets the index of branching points and end points.
"""
(branch_index,) = np.where(self.branch_order[self.n_soma:]==2)
(endpoint_index,) = np.where(self.branch_order[self.n_soma:]==0)
selected_index = np.union1d(branch_index + self.n_soma, endpoint_index + self.n_soma)
selected_index = np.append(range(self.n_soma), selected_index)
return selected_index
def parent_id(self, selected_index):
"""
Gives back the parent id of all the selected_index of the neuron.
Parameters
----------
selected_index: numpy array
the index of nodes
Returns
-------
parent_id: the index of parent of each element in selected_index in this array.
"""
parent_id = np.array([],dtype = int)
for i in selected_index:
p = self.parent_index[i]
while(~np.any(selected_index == p)):
p = self.parent_index[p]
(ind,) = np.where(selected_index==p)
parent_id = np.append(parent_id , ind)
return parent_id
def neuron_with_selected_nodes(self, selected_index):
"""
Gives back a new neuron made up with the selected_index nodes of self.
if node A is parent (or grand parent) of node B in the original neuron, it is the same for the new neuron.
Parameters
----------
selected_index: numpy array
the index of nodes from original neuron for making new neuron
Returns
-------
Neuron: the subsampled neuron
"""
parent = parent_id(self, selected_index)
# making the list of nodes
n_list = []
for i in range(selected_index.shape[0]):
n = McNeuron.Node()
n.xyz = self.nodes_list[selected_index[i]].xyz
n.r = self.nodes_list[selected_index[i]].r
n.type = self.nodes_list[selected_index[i]].type
n_list.append(n)
    # adjusting the children and parents for the nodes.
for i in np.arange(1,selected_index.shape[0]):
j = parent[i]
n_list[i].parent = n_list[j]
n_list[j].add_child(n_list[i])
return McNeuron.Neuron(file_format = 'only list of nodes', input_file = n_list)
def find_sharpest_fork(self, Nodes):
"""
    Looks at all the branching points in the Nodes list, selects those whose two children are both end points,
    and finds the closest such pair of children (by the distance between them).
Parameters
----------
Nodes: list
the list of Node
Returns
-------
sharpest_pair: array
        the closest pair of children
distance: float
Distance of the pair of children
"""
pair_list = []
Dis = np.array([])
for n in Nodes:
if n.parent is not None:
if n.parent.parent is not None:
a = n.parent.children
if(isinstance(a, list)):
if(len(a)==2):
n1 = a[0]
n2 = a[1]
if(len(n1.children) == 0 and len(n2.children) == 0):
pair_list.append([n1 , n2])
dis = LA.norm(a[0].xyz - a[1].xyz,2)
Dis = np.append(Dis,dis)
if(len(Dis)!= 0):
(b,) = np.where(Dis == Dis.min())
sharpest_pair = pair_list[b[0]]
distance = Dis.min()
else:
sharpest_pair = [0,0]
distance = 0.
return sharpest_pair, distance
def find_sharpest_fork_general(self, Nodes):
"""
    Looks at all the branching points in the Nodes list and finds the pair of children that are closest
    to each other (unlike find_sharpest_fork, the children do not need to be end points).
Parameters
----------
Nodes: list
the list of Node
Returns
-------
sharpest_pair: array
        the closest pair of children
distance: float
Distance of the pair of children
"""
pair_list = []
Dis = np.array([])
for n in Nodes:
if n.parent is not None:
if n.parent.parent is not None:
a = n.parent.children
if(isinstance(a, list)):
if(len(a)==2):
n1 = a[0]
n2 = a[1]
pair_list.append([n1 , n2])
dis = LA.norm(a[0].xyz - a[1].xyz,2)
Dis = np.append(Dis,dis)
if(len(Dis)!= 0):
(b,) = np.where(Dis == Dis.min())
sharpest_pair = pair_list[b[0]]
distance = Dis.min()
else:
sharpest_pair = [0,0]
distance = 0.
return sharpest_pair, distance
def remove_pair_replace_node(self, Nodes, pair):
"""
    Removes the pair of nodes and replaces them with a new node. The parent of the new node is the parent of the pair,
    and its location and radius are the means of the removed nodes' locations and radii.
Parameters
----------
Nodes: list
the list of Nodes
pair: array
The index of pair of nodes. the nodes should be end points and have the same parent.
Returns
-------
The new list of Nodes which the pair are removed and a mean node is replaced.
"""
par = pair[0].parent
loc = pair[0].xyz + pair[1].xyz
loc = loc/2
r = pair[0].r + pair[1].r
r = r/2
Nodes.remove(pair[1])
Nodes.remove(pair[0])
n = McNeuron.Node()
n.xyz = loc
n.r = r
par.children = []
par.add_child(n)
n.parent = par
Nodes.append(n)
def remove_pair_adjust_parent(self, Nodes, pair):
"""
    Removes the pair of nodes and adjusts their parent. The parent's location is set to the mean of the locations of the two removed nodes.
Parameters
----------
Nodes: list
the list of Nodes
pair: array
The index of pair of nodes. the nodes should be end points and have the same parent.
Returns
-------
The new list of Nodes which the pair are removed their parent is adjusted.
"""
par = pair[0].parent
loc = pair[0].xyz + pair[1].xyz
loc = loc/2
Nodes.remove(pair[1])
Nodes.remove(pair[0])
par.xyz = loc
par.children = []
def prune_shortest_seg(self):
(endpoint_index,) = np.where(self.branch_order[self.n_soma:]==0)
#for i in endpoint_index:
def random_subsample(self, num):
"""
randomly selects a few nodes from neuron and builds a new neuron with them. The location of these node in the new neuron
is the same as the original neuron and the morphology of them is such that if node A is parent (or grand parent) of node B
in the original neuron, it is the same for the new neuron.
Parameters
----------
num: int
number of nodes to be selected randomly.
Returns
-------
Neuron: the subsampled neuron
"""
# select the index of num nodes randomly.
I = np.arange(self.n_soma, self.n_node)
np.random.shuffle(I)
selected_index = I[0:num]
selected_index = np.union1d(np.arange(self.n_soma), selected_index)
selected_index = selected_index.astype(int)
selected_index = np.unique(np.sort(selected_index))
# making a list of node from the selected nodes
neuron = neuron_with_selected_nodes(self, selected_index)
return neuron
def subsample_main_nodes(self):
"""
subsamples a neuron with its main node only; i.e endpoints and branching nodes.
Returns
-------
Neuron: the subsampled neuron
"""
# select all the main points
selected_index = get_main_points(self)
# Computing the parent id of the selected nodes
neuron = neuron_with_selected_nodes(self, selected_index)
return neuron
def regular_subsample(self, distance):
"""
    Subsamples a neuron from the original neuron. All the main points of the original neuron,
    i.e. endpoints and branching nodes, are kept unchanged, and the distance between two consecutive nodes
    of the subsampled neuron is approximately 'distance'.
    For each segment between two consecutive main points, a few nodes from the segment are added to the selection:
    starting from the far main point and walking along the segment toward the near main point, the first node added
    is the farthest node from the start of the segment whose distance from the start is still less than 'distance'.
    The following nodes are selected in the same way, and this procedure is repeated for every segment.
Parameters
----------
distance: float
        the mean distance between pairs of consecutive nodes.
Returns
-------
Neuron: the subsampled neuron
"""
# Selecting the main points: branching nodes and end nodes
selected_index = get_main_points(self)
    # For each segment between two consecutive main points, a few nodes from the segment are added to the selection.
    # These nodes are chosen so that the neural distance between two consecutive selected nodes is around 'distance'.
    # Specifically, starting from the far main point and walking toward the near main point, the first node added is
    # the farthest node from the start of the segment whose distance from the start is still less than 'distance'.
    # The following nodes are selected in the same way.
for i in selected_index:
upList = np.array([i],dtype = int)
index = self.parent_index[i]
dist = self.distance_from_parent[i]
while(~np.any(selected_index == index)):
upList = np.append(upList,index)
index = self.parent_index[index]
dist = np.append(dist, sum(self.distance_from_parent[upList]))
dist = np.append(0,dist)
(I,) = np.where(np.diff(np.floor(dist/distance))>0)
I = upList[I]
selected_index = np.append(selected_index,I)
selected_index = np.unique(selected_index)
neuron = neuron_with_selected_nodes(self, selected_index)
return neuron
def regular_subsample_with_fixed_number(self, num):
"""
    Gives back a regularly subsampled neuron (regular means that the distance between consecutive nodes is approximately fixed)
    such that the number of nodes is 'num'.
Parameters
----------
num: int
number of nodes on the subsampled neuron
Returns
-------
Neuron: the subsampled neuron
"""
l = sum(self.distance_from_parent)
branch_number = len(np.where(self.branch_order[self.n_soma:] == 2))
distance = l/(num - branch_number)
return regular_subsample(self, distance)
def mesoscale_subsample(self, number):
main_point = self.subsample_main_nodes()
Nodes = main_point.nodes_list
rm = (main_point.n_node - number)/2.
for remove in range(int(rm)):
b, m = find_sharpest_fork(self, Nodes)
remove_pair_adjust_parent(self, Nodes, b)
return McNeuron.Neuron(file_format = 'only list of nodes', input_file = Nodes)
def regular_mesoscale_subsample(self, number):
thresh = 1.
# n = neuron.subsample(thresh)
# while(len(n.nodes_list)>number):
# thresh += 1
# n = neuron.subsample(thresh)
# if(sum(n.branch_order[n.n_soma:]==1)==0):
# break
# neuron = n
Nodes = self.nodes_list
while(len(Nodes)>number):
b, m = find_sharpest_fork_general(self, Nodes)
print m
if(m>0. and m < thresh):
remove_pair_replace_node(self, Nodes, b)
else:
self = McNeuron.Neuron(file_format = 'only list of nodes', input_file = Nodes)
thresh = thresh + 1
self = self.subsample(thresh)
Nodes = self.nodes_list
print thresh
return McNeuron.Neuron(file_format = 'only list of nodes', input_file = Nodes)
self= neuron
(endpoint_index,) = np.where(self.branch_order[self.n_soma:]==0)
for i in endpoint_index:
a = self.nodes_list[i]
b = a.parent
while(len(b.children) ==1):
b = b.parent
print LA.norm(b.xyz - a.xyz,2) | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Testing subsamples | neuron_list = McNeuron.visualize.get_all_path(os.getcwd()+"/Data/Pyramidal/chen")
neuron = McNeuron.Neuron(file_format = 'swc', input_file=neuron_list[19])
# McNeuron.visualize.plot_2D(neuron,size = 4)
# McNeuron.visualize.plot_2D(random_subsample(neuron, 200) ,size = 4)
# McNeuron.visualize.plot_2D(subsample_main_nodes(neuron) ,size = 4)
# McNeuron.visualize.plot_2D(regular_subsample(neuron, distance = 60) ,size = 4)
# McNeuron.visualize.plot_2D(regular_subsample_with_fixed_number(neuron, num = 200) ,size = 4)
#McNeuron.visualize.plot_2D(mesoscale_subsample(neuron, number = 40) ,size = 4)
McNeuron.visualize.plot_2D(regular_mesoscale_subsample(neuron, number = 40) ,size = 4)
neuron.location | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Models | def reducing_data(swc_df, pruning_number=10):
"""
Parameters
----------
swc_df: dataframe
the original swc file
pruning_number: int
number of nodes remaining at the end of pruning
Returns
-------
pruned_df: dataframe
pruned dataframe
"""
L = []
for i in range(len(swc_df)):
L.append(mesoscale_subsample(McNeuron.Neuron(file_format = 'swc', input_file = swc_df[i]), pruning_number))
return L
def separate(list_of_neurons):
"""
Parameters
----------
list_of_neurons: List of Neurons
Returns
-------
geometry: array of shape (n-1, 3)
(x, y, z) coordinates of each shape assuming that soma is at (0, 0, 0)
morphology : array of shape (n-1,)
index of node - index of parent
"""
Geo = list()
Morph = list()
for n in range(len(list_of_neurons)):
neuron = list_of_neurons[n]
Geo.append(neuron.location)
Morph.append(neuron.parent_index)
return Geo, Morph
def geometry_generator(n_nodes=10):
"""
Generator network: fully connected 2-layer network to generate locations
Parameters
----------
n_nodes: int
number of nodes
Returns
-------
model: keras object
number of models
"""
model = Sequential()
model.add(Dense(input_dim=100, output_dim=512))
model.add(Activation('tanh'))
model.add(Dense(input_dim=512, output_dim=512))
model.add(Activation('tanh'))
model.add(Dense(input_dim=512, output_dim=n_nodes * 3))
model.add(Reshape((n_nodes, 3)))
return model
def morphology_generator(n_nodes=10):
"""
    Generator network: recurrent sequence model to generate morphology (parent connectivity), per the comments below
Parameters
----------
n_nodes: int
number of nodes
Returns
-------
model: keras object
number of models
"""
model = Sequential()
# A keras seq to seq model, with the following characteristics:
# input length: 1
# input dimensionality: 100
# some hidden layers for encoding
# some hidden layers for decoding
# output length: n_nodes - 1
# output dimensionality: n_nodes - 1 (there will finally be a softmax on each output node)
return model
for i in range(5):
n_nodes = 10 + 10 * i
subsampled_neuron = mesoscale_subsample(deepcopy(neuron), n_nodes)
print 'Number of nodes: %d' % (n_nodes)
McNeuron.visualize.plot_2D(subsampled_neuron, size = 4)
#McNeuron.visualize.plot_dedrite_tree(subsampled_neuron)
#plt.show()
for i in range(20):
n_nodes = 10 + 10 * i
subsampled_neuron = mesoscale_subsample_2d(deepcopy(neuron), n_nodes)
print subsampled_neuron.n_node
McNeuron.visualize.plot_2D(subsampled_neuron, size = 4, save = str(40+i)+".eps")
subsampled_neuron.n_node | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
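The `morphology_generator` stub above only lists the intended architecture in comments and returns an empty `Sequential` model. One way that commented spec could be realised is sketched below; this is an illustration, not the authors' implementation, and the `RepeatVector`/`TimeDistributed` module paths assume the same Keras generation as the imports already used in this notebook. A noise batch of shape (batch, 100) maps to (batch, n_nodes - 1, n_nodes - 1) rows of parent probabilities.
from keras.layers.core import Dense, Activation, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import LSTM
from keras.models import Sequential

def morphology_generator_sketch(n_nodes=10, noise_dim=100):
    """Hypothetical sequence generator: noise vector -> (n_nodes - 1) softmax rows over candidate parents."""
    model = Sequential()
    # encode the 100-d noise vector
    model.add(Dense(512, input_dim=noise_dim))
    model.add(Activation('tanh'))
    # repeat the code so the decoder emits one step per non-soma node
    model.add(RepeatVector(n_nodes - 1))
    # decode with an LSTM
    model.add(LSTM(256, return_sequences=True))
    # one softmax over candidate parent indices per output step
    model.add(TimeDistributed(Dense(n_nodes - 1)))
    model.add(Activation('softmax'))
    return model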
Showing the geometrical data | tmp = reducing_data(neuron_list[0:20], pruning_number=10)
geo, morph = separate(tmp)
McNeuron.visualize.plot_2D(tmp[0])
plt.scatter(tmp[0].location[0,:],tmp[0].location[1,:])
for n in range(10):
plt.scatter(geo[n][0,:],geo[n][1,:])
plt.show()
McNeuron.visualize.plot_2D(tmp[1]) | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Testing function: separate [works] | geo, morph = separate(tmp)
print morph[0]
print morph[1]
print morph[2]
print geo[6].shape
n = 1
plt.scatter(geo[n][0,:],geo[n][1,:]) | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Testing geometry_generator( ) [works] | neuron = McNeuron.Neuron(file_format = 'swc', input_file=neuron_list[0])
McNeuron.visualize.plot_2D(neuron)
n1 = neuron.subsample(100)
McNeuron.visualize.plot_2D(n1)
McNeuron.visualize.plot_dedrite_tree(n1)
neuron.n_node
n1.n_node
plt.hist(n1.distance_from_parent)
plt.scatter(n1.location[0,:],n1.location[1,:],s = 7)
tmp = mat['N'][2500][3]
tmp[0:3,:] | _____no_output_____ | MIT | HG-GAN-Geometry.ipynb | RoozbehFarhoodi/McNeuron |
Guided Capstone Step 6. Documentation **The Data Science Method** 1. Problem Identification 2. Data Wrangling 3. Exploratory Data Analysis 4. Pre-processing and Training Data Development5. Modeling6. **Documentation** * Review the Results * Finalize Code * Finalize Documentation * Create a Project Report * Create a Slide Deck for the Executive Audience In this guided capstone we are going to revisit many of the actions we took in the previous guided capstone steps. This gives you the opportunity to practice the code you wrote to solve the questions in step 4 and 5. ** Start by loading the necessary packages and printing out our current working directory just to confirm we are in the correct project directory. ** | import os
import pandas as pd
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
os.getcwd() | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
Fit Models with Training Dataset ** Using sklearn fit the model you chose in Guided Capstone 5 on your training dataset. This includes: creating dummy features for states if you need them, scaling the data, and creating train and test splits before fitting the chosen model. Also, remember to generate a model performance score (MAE, or explained variance) based on the testing hold-out data set.** Best Model | ## model4 is the best model: its score is about 93% and it generalizes well (train and test scores are close).
df_1=pd.read_csv(r'/Users/ajesh_mahto/Desktop/capstone_project/data/step_try.csv')
df_1.head()
df_1=df_1.drop('Unnamed: 0',axis=1)
df=pd.get_dummies(df_1, columns=['state'],drop_first=True)
from sklearn.preprocessing import StandardScaler
X=df.drop(['Name','AdultWeekend'], axis=1)
y=df['AdultWeekend']
scaler=StandardScaler()
X_scaled=scaler.fit_transform(X)
y=y.ravel()
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X_train,X_test,y_train,y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model4=LinearRegression()
model4.fit(X_train,y_train)
ypred=model4.predict(X_test)
actual_weekend = pd.DataFrame({'Actual': y_test, 'Predicted': ypred})
actual_weekend
model4.score(X_train,y_train)
model4.score(X_test,y_test)
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
print('Mean Absolute Error:', mean_absolute_error(y_test, ypred))
print('Root Mean Squared Error:', np.sqrt(mean_squared_error(y_test, ypred)))
print('Mean Squared Error:', mean_squared_error(y_test, ypred))
| Mean Absolute Error: 5.162174619564856
Root Mean Squared Error: 6.874864639122517
Mean Squared Error: 47.263763806257174
| MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
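The markdown above asks for scaling before fitting and an explained-variance score, but the cell computes `X_scaled` without using it and reports only error metrics. A small sketch of how those two pieces could be wired in on the same split is shown below; it is an illustration, not part of the original notebook.
from sklearn.pipeline import make_pipeline
from sklearn.metrics import explained_variance_score

# scale inside a pipeline so train and test data are transformed consistently
pipe = make_pipeline(StandardScaler(), LinearRegression())
pipe.fit(X_train, y_train)
print('Explained variance:', explained_variance_score(y_test, pipe.predict(X_test)))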
Review the results ** Now, let's predict the Big Mountain Weekend price with our model in order to provide a recommendation to our managers on how to price the `AdultWeekend` lift ticket. First we need to find the row for Big Mountain resort in our data using string contains or string matching.** | #df[df['Name'].str.contains('Big Mountain')]
df3=df[df['Name'].str.contains('Big Mountain')]
df3 | _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
** Prepare the Big Mountain resort data row as you did in the model fitting stage.** |
features=df3.drop(['Name','AdultWeekend'], axis=1)
| _____no_output_____ | MIT | models/GuidedCapstone_final_documentationStep6HL.ipynb | reetibhagat/big_mountain_resort |
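The section breaks off before the price prediction it describes. A minimal sketch of that final step, assuming `model4`, `X`, and the `features` row prepared above (the column realignment is a precaution so the dummy-encoded columns match the training matrix):
# Hypothetical final step: predict Big Mountain's AdultWeekend price
features = features.reindex(columns=X.columns, fill_value=0)  # align columns with the training features
predicted_price = model4.predict(features)
print('Modeled AdultWeekend price: $%.2f' % predicted_price[0])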