## Build docker container

Since we're working with a custom environment and custom dependencies, we create our own container for training. We:

1. Fetch the base MXNet and Coach container image,
2. Install EnergyPlus and its dependencies on top,
3. Upload the new container image to AWS ECR.
cpu_or_gpu = 'gpu' if instance_type.startswith('ml.p') else 'cpu'
repository_short_name = "sagemaker-hvac-coach-%s" % cpu_or_gpu
docker_build_args = {
    'CPU_OR_GPU': cpu_or_gpu,
    'AWS_REGION': boto3.Session().region_name,
}
custom_image_name = build_and_push_docker_image(repository_short_name, build_args=docker_build_args)
print("Using ECR image %s" % custom_image_name)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Set up the environment

The environment is defined in a Python file called `data_center_env.py`. For SageMaker training jobs, the file is uploaded inside the `/src` directory.

The environment implements the `init()`, `step()` and `reset()` functions that describe how the environment behaves, consistent with the OpenAI Gym interface for defining an environment.

1. `init()` - initialize the environment in a pre-defined state
2. `step()` - take an action on the environment
3. `reset()` - restart the environment on a new episode

A minimal illustrative sketch of such an environment interface is shown after this cell.

## Configure the presets for the RL algorithm

The presets that configure the RL training jobs are defined in the `preset-energy-plus-clipped-ppo.py` file, which is also uploaded as part of the `/src` directory. Using the preset file, you can define agent parameters to select the specific agent algorithm. You can also set the environment parameters, define the schedule and visualization parameters, and define the graph manager. The schedule presets define the number of heat-up steps, periodic evaluation steps, training steps between evaluations, and so on.

All of these can be overridden at run time by specifying the `RLCOACH_PRESET` hyperparameter. Additionally, it can be used to define custom hyperparameters.
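For orientation, here is a minimal sketch of what a Gym-style environment with this interface can look like. The class name, observation layout, and reward logic are illustrative assumptions, not the actual contents of `data_center_env.py`.

```python
import numpy as np
import gym
from gym import spaces

class ToyDataCenterEnv(gym.Env):
    """Illustrative Gym-style environment skeleton (not the real data_center_env.py)."""

    def __init__(self):
        # observation: e.g. [outdoor_temperature, outdoor_humidity, indoor_humidity]
        self.observation_space = spaces.Box(low=-50.0, high=50.0, shape=(3,), dtype=np.float32)
        # action: e.g. [heating_setpoint, cooling_setpoint]
        self.action_space = spaces.Box(low=15.0, high=30.0, shape=(2,), dtype=np.float32)
        self.state = np.zeros(3, dtype=np.float32)

    def reset(self):
        # restart the environment on a new episode
        self.state = self.observation_space.sample()
        return self.state

    def step(self, action):
        # apply the action, advance the simulation one step, and compute a reward
        self.state = self.observation_space.sample()   # placeholder dynamics
        reward = -float(np.abs(action).sum())           # placeholder energy penalty
        done = False
        return self.state, reward, done, {}
```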
!pygmentize src/preset-energy-plus-clipped-ppo.py
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Write the Training Code

The training code is in the file `train-coach.py`, which is uploaded in the `/src` directory. It first imports the environment and preset files, and then defines the `main()` function.
!pygmentize src/train-coach.py
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Train the RL model using the Python SDK script mode

If you are using local mode, the training runs on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The `RLEstimator` is used for training RL jobs.

1. Specify the source directory where the environment, presets and training code are uploaded.
2. Specify the entry point as the training code.
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL container.
4. Define the training parameters such as the instance count, S3 path for output, and job name.
5. Specify the hyperparameters for the RL agent algorithm. The `RLCOACH_PRESET` hyperparameter can be used to specify the RL agent algorithm you want to use.
6. [optional] Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker notebooks.
%%time

estimator = RLEstimator(entry_point="train-coach.py",
                        source_dir='src',
                        dependencies=["common/sagemaker_rl"],
                        image_uri=custom_image_name,
                        role=role,
                        instance_type=instance_type,
                        instance_count=1,
                        output_path=s3_output_path,
                        base_job_name=job_name_prefix,
                        hyperparameters={
                            'save_model': 1
                        })

estimator.fit(wait=local_mode)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Store intermediate training output and model checkpoints

The output from the training job above is stored on S3. The intermediate folder contains GIFs and metadata of the training.
s3_url = "s3://{}/{}".format(s3_bucket, job_name)

if local_mode:
    output_tar_key = "{}/output.tar.gz".format(job_name)
else:
    output_tar_key = "{}/output/output.tar.gz".format(job_name)

intermediate_folder_key = "{}/output/intermediate/".format(job_name)
output_url = "s3://{}/{}".format(s3_bucket, output_tar_key)
intermediate_url = "s3://{}/{}".format(s3_bucket, intermediate_folder_key)

print("S3 job path: {}".format(s3_url))
print("Output.tar.gz location: {}".format(output_url))
print("Intermediate folder path: {}".format(intermediate_url))

tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Visualization

### Plot metrics for training job

We can pull the reward metric of the training and plot it to see the performance of the model over time.
%matplotlib inline
import pandas as pd

csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = os.path.join(intermediate_folder_key, csv_file_name)
wait_for_s3_object(s3_bucket, key, tmp_dir)

csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])

x_axis = 'Episode #'
y_axis = 'Training Reward'

plt = df.plot(x=x_axis, y=y_axis, figsize=(12, 5), legend=True, style='b-')
plt.set_ylabel(y_axis)
plt.set_xlabel(x_axis)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Evaluation of RL models

We use the last checkpointed model to run evaluation for the RL agent.

### Load checkpointed model

Checkpointed data from the previously trained models is passed on for evaluation/inference in the checkpoint channel. In local mode, we can simply use the local directory, whereas in SageMaker mode, it needs to be moved to S3 first.
wait_for_s3_object(s3_bucket, output_tar_key, tmp_dir)

if not os.path.isfile("{}/output.tar.gz".format(tmp_dir)):
    raise FileNotFoundError("File output.tar.gz not found")
os.system("tar -xvzf {}/output.tar.gz -C {}".format(tmp_dir, tmp_dir))

if local_mode:
    checkpoint_dir = "{}/data/checkpoint".format(tmp_dir)
else:
    checkpoint_dir = "{}/checkpoint".format(tmp_dir)

print("Checkpoint directory {}".format(checkpoint_dir))

if local_mode:
    checkpoint_path = 'file://{}'.format(checkpoint_dir)
    print("Local checkpoint file path: {}".format(checkpoint_path))
else:
    checkpoint_path = "s3://{}/{}/checkpoint/".format(s3_bucket, job_name)
    if not os.listdir(checkpoint_dir):
        raise FileNotFoundError("Checkpoint files not found under the path")
    os.system("aws s3 cp --recursive {} {}".format(checkpoint_dir, checkpoint_path))
    print("S3 checkpoint file path: {}".format(checkpoint_path))
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
### Run the evaluation step

Use the checkpointed model to run the evaluation step.
estimator_eval = RLEstimator(entry_point="evaluate-coach.py",
                             source_dir='src',
                             dependencies=["common/sagemaker_rl"],
                             image_uri=custom_image_name,
                             role=role,
                             instance_type=instance_type,
                             instance_count=1,
                             output_path=s3_output_path,
                             base_job_name=job_name_prefix + "-evaluation",
                             hyperparameters={
                                 "RLCOACH_PRESET": "preset-energy-plus-clipped-ppo",
                                 "evaluate_steps": 288 * 2,  # 2 episodes, i.e. 2 days
                             })

estimator_eval.fit({'checkpoint': checkpoint_path})
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
## Model deployment

Since we specified MXNet when configuring the RLEstimator, the MXNet deployment container will be used for hosting.
from sagemaker.mxnet.model import MXNetModel

model = MXNetModel(model_data=estimator.model_data,
                   entry_point='src/deploy-mxnet-coach.py',
                   framework_version='1.8.0',
                   py_version="py37",
                   role=role)

predictor = model.deploy(initial_instance_count=1, instance_type=instance_type)
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
We can test the endpoint with a sample observation, where the current room temperature is high. Since the environment vector was of the form `[outdoor_temperature, outdoor_humidity, indoor_humidity]` and we used observation normalization in our preset, we choose an observation of `[0, 0, 2]`. Since we're deploying a PPO model, our model returns both state value and actions.
action, action_mean, action_std = predictor.predict(np.array([0., 0., 2.]))
action_mean
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
We can see that heating and cooling setpoints are returned from the model, and these can be used to control the HVAC system for efficient energy usage. More training iterations will help improve the model further.

## Clean up endpoint
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
reinforcement_learning/rl_hvac_coach_energyplus/rl_hvac_coach_energyplus.ipynb
P15241328/amazon-sagemaker-examples
Pendulum Data
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import cmocean  # registers the "cmo.*" colormaps used below

full_df = pd.DataFrame()
cols = ['Trial', "tr_mse", 'te_mse']

fname = "./saved-outputs/log_mixedemlp_basic1e-05_equiv1e-05.pkl"
rpp_df = pd.read_pickle(fname)
rpp_df.columns = cols
rpp_df['type'] = 'RPP'
full_df = pd.concat((full_df, rpp_df))

fname = "./saved-outputs/log_mlp_basic0.01_equiv0.0001.pkl"
mlp_df = pd.read_pickle(fname)
mlp_df.columns = cols
mlp_df['type'] = "MLP"
full_df = pd.concat((full_df, mlp_df))

fname = "./saved-outputs/log_emlp_basic0.01_equiv0.0001.pkl"
emlp_df = pd.read_pickle(fname)
emlp_df.columns = cols
emlp_df['type'] = "EMLP"
full_df = pd.concat((full_df, emlp_df))

full_df['log_te_mse'] = np.log(full_df['te_mse'])

cpal = sns.color_palette("cmo.matter", n_colors=3)
fs = 30
alpha = 0.75

fig, ax = plt.subplots(1, 1, dpi=150, figsize=(8, 3))
vlns = sns.boxplot(x='type', y='log_te_mse', data=full_df, palette="Blues", ax=ax)
# for violin in vlns.collections[::2]:
#     violin.set_alpha(alpha)

ax.set_xlabel("", fontsize=fs)
ax.set_ylabel("Log MSE", fontsize=fs)
ax.tick_params("both", labelsize=fs - 2)
ax.set_title("Pendulum; SO(3)", fontsize=fs + 2)
sns.despine()
plt.savefig("./misspec_pendulum.pdf", bbox_inches='tight')
plt.show()
_____no_output_____
BSD-2-Clause
experiments/misspec-symmetry/plotter.ipynb
mfinzi/residual-pathway-priors
# Music Generation Using Deep Learning

## Real World Problem

This case study focuses on generating music automatically using a Recurrent Neural Network (RNN). We do not necessarily have to be music experts in order to generate music: even a non-expert can generate decent-quality music using an RNN. We all like to listen to interesting music, and if there were a way to generate music automatically, particularly music of decent quality, it would be a big leap for the music industry.

Task: Our task is to take some existing music data and train a model on it. The model has to learn the patterns in music that we humans enjoy. Once it learns this, the model should be able to generate new music for us. It cannot simply copy-paste from the training data; it has to understand the patterns of music to generate new music. We do not expect our model to generate music of professional quality, but we want it to generate decent-quality music that is melodious and good to hear.

Now, what is music? In short, music is nothing but a sequence of musical notes. Our input to the model is a sequence of musical events/notes, and our output will be a new sequence of musical events/notes. In this case study we limit ourselves to single-instrument music as this is our first-cut model. In the future, we will extend this to multi-instrument music.

Data sources:
1. http://abc.sourceforge.net/NMD/
2. http://trillian.mit.edu/~jc/music/book/oneills/1850/X/

From the first data source, we have downloaded the first two files:
* Jigs (340 tunes)
* Hornpipes (65 tunes)
import os
import json
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import LSTM, Dropout, TimeDistributed, Dense, Activation, Embedding

data_directory = "../Data/"
data_file = "Data_Tunes.txt"
charIndex_json = "char_to_index.json"
model_weights_directory = '../Data/Model_Weights/'
BATCH_SIZE = 16
SEQ_LENGTH = 64

def read_batches(all_chars, unique_chars):
    length = all_chars.shape[0]
    batch_chars = int(length / BATCH_SIZE)  # 155222/16 = 9701
    for start in range(0, batch_chars - SEQ_LENGTH, 64):  # (0, 9637, 64); one iteration per batch, 151 batches in total
        X = np.zeros((BATCH_SIZE, SEQ_LENGTH))                # (16, 64)
        Y = np.zeros((BATCH_SIZE, SEQ_LENGTH, unique_chars))  # (16, 64, 87)
        for batch_index in range(0, 16):  # each row in a batch
            for i in range(0, 64):        # each column in a batch, i.e. each time-step character in a sequence
                X[batch_index, i] = all_chars[batch_index * batch_chars + start + i]
                # the correct label is the next character in the sequence,
                # i.e. all_chars[batch_index * batch_chars + start + i + 1]
                Y[batch_index, i, all_chars[batch_index * batch_chars + start + i + 1]] = 1
        yield X, Y

def built_model(batch_size, seq_length, unique_chars):
    model = Sequential()
    model.add(Embedding(input_dim=unique_chars, output_dim=512, batch_input_shape=(batch_size, seq_length)))
    model.add(LSTM(256, return_sequences=True, stateful=True))
    model.add(Dropout(0.2))
    model.add(LSTM(128, return_sequences=True, stateful=True))
    model.add(Dropout(0.2))
    model.add(TimeDistributed(Dense(unique_chars)))
    model.add(Activation("softmax"))
    return model

def training_model(data, epochs=80):
    # mapping character to index
    char_to_index = {ch: i for (i, ch) in enumerate(sorted(list(set(data))))}
    print("Number of unique characters in our whole tunes database = {}".format(len(char_to_index)))  # 87

    with open(os.path.join(data_directory, charIndex_json), mode="w") as f:
        json.dump(char_to_index, f)

    index_to_char = {i: ch for (ch, i) in char_to_index.items()}
    unique_chars = len(char_to_index)

    model = built_model(BATCH_SIZE, SEQ_LENGTH, unique_chars)
    model.summary()
    model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

    all_characters = np.asarray([char_to_index[c] for c in data], dtype=np.int32)
    print("Total number of characters = " + str(all_characters.shape[0]))  # 155222

    epoch_number, loss, accuracy = [], [], []

    for epoch in range(epochs):
        print("Epoch {}/{}".format(epoch + 1, epochs))
        final_epoch_loss, final_epoch_accuracy = 0, 0
        epoch_number.append(epoch + 1)

        # read the batches one by one and train the model on each batch
        # (see https://keras.io/models/sequential/ for train_on_batch)
        for i, (x, y) in enumerate(read_batches(all_characters, unique_chars)):
            final_epoch_loss, final_epoch_accuracy = model.train_on_batch(x, y)
            print("Batch: {}, Loss: {}, Accuracy: {}".format(i + 1, final_epoch_loss, final_epoch_accuracy))

        loss.append(final_epoch_loss)
        accuracy.append(final_epoch_accuracy)

        # saving weights after every 10 epochs
        if (epoch + 1) % 10 == 0:
            if not os.path.exists(model_weights_directory):
                os.makedirs(model_weights_directory)
            model.save_weights(os.path.join(model_weights_directory, "Weights_{}.h5".format(epoch + 1)))
            print('Saved Weights at epoch {} to file Weights_{}.h5'.format(epoch + 1, epoch + 1))

    # creating a dataframe and recording all the losses and accuracies at each epoch
    log_frame = pd.DataFrame(columns=["Epoch", "Loss", "Accuracy"])
    log_frame["Epoch"] = epoch_number
    log_frame["Loss"] = loss
    log_frame["Accuracy"] = accuracy
    log_frame.to_csv("../Data/log.csv", index=False)

file = open(os.path.join(data_directory, data_file), mode='r')
data = file.read()
file.close()

if __name__ == "__main__":
    training_model(data)

log = pd.read_csv(os.path.join(data_directory, "log.csv"))
log
_____no_output_____
MIT
LSTM/Music_Generation_Train1.ipynb
AbhilashPal/MuseNet
Make sure the number of minority-class samples (lbl == 1) is smaller than the number of majority-class samples (lbl == -1).
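As a quick sanity check, a sketch like the following can verify the class balance before writing the file. The column name `lbl` and the +1/-1 labels follow the description above, and the DataFrame `df` is assumed to be already loaded.

```python
# count samples per class (labels assumed to be +1 for minority, -1 for majority)
n_minority = (df.lbl == 1).sum()
n_majority = (df.lbl == -1).sum()
print("minority:", n_minority, "majority:", n_majority)

# fail early if the assumption does not hold
assert n_minority < n_majority, "minority class (lbl==1) must be smaller than majority class (lbl==-1)"
```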
sum(df.lbl)
len(df)
df.to_csv('test_men_binary.csv', header=None, sep=',', index=None)
_____no_output_____
BSD-2-Clause
src/datasets/ConvertMultipleClasses2BinaryClasses.ipynb
robertu94/mlsvm
# Dask jobqueue example for NEC Linux cluster

This notebook covers the following aspects, i.e. how to:

* load project- and machine-specific Dask jobqueue configurations
* open, scale and close a default jobqueue cluster
* do an example calculation on larger-than-memory data

## Load jobqueue configuration defaults
import os
os.environ['DASK_CONFIG'] = '.'  # use local directory to look up Dask configurations

import dask.config
dask.config.get('jobqueue')  # prints available jobqueue configurations
_____no_output_____
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
Set up jobqueue cluster ...
import dask_jobqueue

default_cluster = dask_jobqueue.PBSCluster(config_name='nesh-jobqueue-config')
print(default_cluster.job_script())
#!/bin/bash
#PBS -N dask-worker
#PBS -q clmedium
#PBS -l elapstim_req=00:45:00,cpunum_job=4,memsz_job=24gb
#PBS -o dask_jobqueue_logs/dask-worker.o%s
#PBS -e dask_jobqueue_logs/dask-worker.e%s
JOB_ID=${PBS_JOBID%%.*}

/sfs/fs6/home-geomar/smomw260/miniconda3/envs/dask-minimal-20191218/bin/python -m distributed.cli.dask_worker tcp://192.168.31.10:32956 --nthreads 4 --memory-limit 24.00GB --name name --nanny --death-timeout 60 --local-directory /scratch --interface ib0
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
... and the client process
import dask.distributed as dask_distributed

default_cluster_client = dask_distributed.Client(default_cluster)
_____no_output_____
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
Start jobqueue workers
default_cluster.scale(jobs=2)
!qstat
default_cluster_client
_____no_output_____
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
Do calculation on larger than memory data
import dask.array as da

fake_data = da.random.uniform(0, 1, size=(365, 1e4, 1e4), chunks=(365, 500, 500))  # problem specific chunking
fake_data

import time
start_time = time.time()
fake_data.mean(axis=0).compute()
elapsed = time.time() - start_time
print('elapse time ', elapsed, ' in seconds')
elapse time 46.89112448692322 in seconds
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
Close jobqueue cluster and client process
!qstat
default_cluster.close()
default_cluster_client.close()
!qstat
_____no_output_____
MIT
nesh/01_default_cluster_example.ipynb
ExaESM-WP4/Dask-jobqueue-configs
![qiskit_header.png](attachment:qiskit_header.png) _*Qiskit Aqua: Experimenting with Max-Cut problem and Traveling Salesman problem with variational quantum eigensolver*_ The latest version of this notebook is available on https://github.com/Qiskit/qiskit-tutorial.*** ContributorsAntonio Mezzacapo[1], Jay Gambetta[1], Kristan Temme[1], Ramis Movassagh[1], Albert Frisch[1], Takashi Imamichi[1], Giacomo Nannicni[1], Richard Chen[1], Marco Pistoia[1], Stephen Wood[1] Affiliation- [1]IBMQ IntroductionMany problems in quantitative fields such as finance and engineering are optimization problems. Optimization problems lie at the core of complex decision-making and definition of strategies. Optimization (or combinatorial optimization) means searching for an optimal solution in a finite or countably infinite set of potential solutions. Optimality is defined with respect to some criterion function, which is to be minimized or maximized. This is typically called cost function or objective function. **Typical optimization problems**Minimization: cost, distance, length of a traversal, weight, processing time, material, energy consumption, number of objectsMaximization: profit, value, output, return, yield, utility, efficiency, capacity, number of objects We consider here max-cut problems of practical interest in many fields, and show how they can be nmapped on quantum computers. Weighted Max-CutMax-Cut is an NP-complete problem, with applications in clustering, network science, and statistical physics. To grasp how practical applications are mapped into given Max-Cut instances, consider a system of many people that can interact and influence each other. Individuals can be represented by vertices of a graph, and their interactions seen as pairwise connections between vertices of the graph, or edges. With this representation in mind, it is easy to model typical marketing problems. For example, suppose that it is assumed that individuals will influence each other's buying decisions, and knowledge is given about how strong they will influence each other. The influence can be modeled by weights assigned on each edge of the graph. It is possible then to predict the outcome of a marketing strategy in which products are offered for free to some individuals, and then ask which is the optimal subset of individuals that should get the free products, in order to maximize revenues.The formal definition of this problem is the following:Consider an $n$-node undirected graph *G = (V, E)* where *|V| = n* with edge weights $w_{ij}>0$, $w_{ij}=w_{ji}$, for $(i, j)\in E$. A cut is defined as a partition of the original set V into two subsets. The cost function to be optimized is in this case the sum of weights of edges connecting points in the two different subsets, *crossing* the cut. By assigning $x_i=0$ or $x_i=1$ to each node $i$, one tries to maximize the global profit function (here and in the following summations run over indices 0,1,...n-1)$$\tilde{C}(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j).$$In our simple marketing model, $w_{ij}$ represents the probability that the person $j$ will buy a product after $i$ gets a free one. Note that the weights $w_{ij}$ can in principle be greater than $1$, corresponding to the case where the individual $j$ will buy more than one product. Maximizing the total buying probability corresponds to maximizing the total future revenues. In the case where the profit probability will be greater than the cost of the initial free samples, the strategy is a convenient one. 
An extension to this model has the nodes themselves carry weights, which can be regarded, in our marketing model, as the likelihood that a person granted with a free sample of the product will buy it again in the future. With this additional information in our model, the objective function to maximize becomes $$C(\textbf{x}) = \sum_{i,j} w_{ij} x_i (1-x_j)+\sum_i w_i x_i. $$ In order to find a solution to this problem on a quantum computer, one needs first to map it to an Ising Hamiltonian. This can be done with the assignment $x_i\rightarrow (1-Z_i)/2$, where $Z_i$ is the Pauli Z operator that has eigenvalues $\pm 1$. Doing this we find that $$C(\textbf{Z}) = \sum_{i,j} \frac{w_{ij}}{4} (1-Z_i)(1+Z_j) + \sum_i \frac{w_i}{2} (1-Z_i) = -\frac{1}{2}\left( \sum_{i<j} w_{ij} Z_i Z_j +\sum_i w_i Z_i\right)+\mathrm{const},$$where const = $\sum_{i<j}w_{ij}/2+\sum_i w_i/2 $. In other terms, the weighted Max-Cut problem is equivalent to minimizing the Ising Hamiltonian $$ H = \sum_i w_i Z_i + \sum_{i<j} w_{ij} Z_iZ_j.$$Aqua can generate the Ising Hamiltonian for the first profit function $\tilde{C}$. Approximate Universal Quantum Computing for Optimization ProblemsThere has been a considerable amount of interest in recent times about the use of quantum computers to find a solution to combinatorial problems. It is important to say that, given the classical nature of combinatorial problems, exponential speedup in using quantum computers compared to the best classical algorithms is not guaranteed. However, due to the nature and importance of the target problems, it is worth investigating heuristic approaches on a quantum computer that could indeed speed up some problem instances. Here we demonstrate an approach that is based on the Quantum Approximate Optimization Algorithm by Farhi, Goldstone, and Gutman (2014). We frame the algorithm in the context of *approximate quantum computing*, given its heuristic nature. The Algorithm works as follows:1. Choose the $w_i$ and $w_{ij}$ in the target Ising problem. In principle, even higher powers of Z are allowed.2. Choose the depth of the quantum circuit $m$. Note that the depth can be modified adaptively.3. Choose a set of controls $\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$, built using a quantum circuit made of C-Phase gates and single-qubit Y rotations, parameterized by the components of $\boldsymbol\theta$. 4. Evaluate $C(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H|~\psi(\boldsymbol\theta)\rangle = \sum_i w_i \langle\psi(\boldsymbol\theta)~|Z_i|~\psi(\boldsymbol\theta)\rangle+ \sum_{i<j} w_{ij} \langle\psi(\boldsymbol\theta)~|Z_iZ_j|~\psi(\boldsymbol\theta)\rangle$ by sampling the outcome of the circuit in the Z-basis and adding the expectation values of the individual Ising terms together. In general, different control points around $\boldsymbol\theta$ have to be estimated, depending on the classical optimizer chosen. 5. Use a classical optimizer to choose a new set of controls.6. Continue until $C(\boldsymbol\theta)$ reaches a minimum, close enough to the solution $\boldsymbol\theta^*$.7. Use the last $\boldsymbol\theta$ to generate a final set of samples from the distribution $|\langle z_i~|\psi(\boldsymbol\theta)\rangle|^2\;\forall i$ to obtain the answer. It is our belief the difficulty of finding good heuristic algorithms will come down to the choice of an appropriate trial wavefunction. 
For example, one could consider a trial function whose entanglement best aligns with the target problem, or simply make the amount of entanglement a variable. In this tutorial, we will consider a simple trial function of the form$$|\psi(\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$where $U_\mathrm{entangler}$ is a collection of C-Phase gates (fully entangling gates), and $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, where $n$ is the number of qubits and $m$ is the depth of the quantum circuit. The motivation for this choice is that for these classical problems this choice allows us to search over the space of quantum states that have only real coefficients, still exploiting the entanglement to potentially converge faster to the solution.One advantage of using this sampling method compared to adiabatic approaches is that the target Ising Hamiltonian does not have to be implemented directly on hardware, allowing this algorithm not to be limited to the connectivity of the device. Furthermore, higher-order terms in the cost function, such as $Z_iZ_jZ_k$, can also be sampled efficiently, whereas in adiabatic or annealing approaches they are generally impractical to deal with. References:- A. Lucas, Frontiers in Physics 2, 5 (2014)- E. Farhi, J. Goldstone, S. Gutmann e-print arXiv 1411.4028 (2014)- D. Wecker, M. B. Hastings, M. Troyer Phys. Rev. A 94, 022309 (2016)- E. Farhi, J. Goldstone, S. Gutmann, H. Neven e-print arXiv 1703.06199 (2017)
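As an illustration of the trial wavefunction described above, a minimal sketch of an RY-rotation plus entangler circuit can be written directly with `QuantumCircuit`. The notebook itself uses Aqua's `RY` variational form below, so this block is only a hand-rolled approximation of the same idea; the chain-of-CZ entangling pattern is an assumption.

```python
import numpy as np
from qiskit import QuantumCircuit

def trial_circuit(thetas, n_qubits, depth):
    """Build [U_single(theta) U_entangler]^m acting on |+>^n.

    thetas: array of shape (depth, n_qubits) with the Y-rotation angles.
    """
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                  # prepare |+>^n
    for layer in range(depth):
        for i in range(n_qubits - 1):      # entangler: chain of CZ gates
            qc.cz(i, i + 1)
        for i in range(n_qubits):          # single-qubit Y rotations
            qc.ry(thetas[layer, i], i)
    return qc

# example: 4 qubits, depth 2, random initial angles
qc = trial_circuit(np.random.uniform(0, np.pi, size=(2, 4)), n_qubits=4, depth=2)
print(qc.draw())
```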
# useful additional packages
import matplotlib.pyplot as plt
import matplotlib.axes as axes
%matplotlib inline
import numpy as np
import networkx as nx

from qiskit import BasicAer
from qiskit.tools.visualization import plot_histogram

from qiskit.optimization.ising import max_cut, tsp
from qiskit.aqua.algorithms import VQE, ExactEigensolver
from qiskit.aqua.components.optimizers import SPSA
from qiskit.aqua.components.variational_forms import RY
from qiskit.aqua import QuantumInstance
from qiskit.optimization.ising.common import sample_most_likely

# setup aqua logging
import logging
from qiskit.aqua import set_qiskit_aqua_logging
# set_qiskit_aqua_logging(logging.DEBUG)  # choose INFO, DEBUG to see the log
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## [Optional] Set up a token to run the experiment on a real device

If you would like to run the experiment on a real device, you need to set up your account first.

Note: If you have not stored your token yet, use `IBMQ.save_account('MY_API_TOKEN')` to store it first.
from qiskit import IBMQ
# provider = IBMQ.load_account()
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Max-Cut problem
# Generating a graph of 4 nodes
n = 4  # Number of nodes in graph
G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
elist = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0), (2, 3, 1.0)]  # tuple is (i,j,weight) where (i,j) is the edge
G.add_weighted_edges_from(elist)

colors = ['r' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)

# Computing the weight matrix from the random graph
w = np.zeros([n, n])
for i in range(n):
    for j in range(n):
        temp = G.get_edge_data(i, j, default=0)
        if temp != 0:
            w[i, j] = temp['weight']
print(w)
[[0. 1. 1. 1.] [1. 0. 1. 0.] [1. 1. 0. 1.] [1. 0. 1. 0.]]
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## Brute force approach

Try all possible $2^n$ combinations. For $n = 4$, as in this example, one deals with only 16 combinations, but for $n = 1000$ one has 1.071509e+30 combinations, which is impractical to deal with by using a brute force approach.
best_cost_brute = 0
for b in range(2**n):
    x = [int(t) for t in reversed(list(bin(b)[2:].zfill(n)))]
    cost = 0
    for i in range(n):
        for j in range(n):
            cost = cost + w[i, j] * x[i] * (1 - x[j])
    if best_cost_brute < cost:
        best_cost_brute = cost
        xbest_brute = x
    print('case = ' + str(x) + ' cost = ' + str(cost))

colors = ['r' if xbest_brute[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
print('\nBest solution = ' + str(xbest_brute) + ' cost = ' + str(best_cost_brute))
case = [0, 0, 0, 0] cost = 0.0 case = [1, 0, 0, 0] cost = 3.0 case = [0, 1, 0, 0] cost = 2.0 case = [1, 1, 0, 0] cost = 3.0 case = [0, 0, 1, 0] cost = 3.0 case = [1, 0, 1, 0] cost = 4.0 case = [0, 1, 1, 0] cost = 3.0 case = [1, 1, 1, 0] cost = 2.0 case = [0, 0, 0, 1] cost = 2.0 case = [1, 0, 0, 1] cost = 3.0 case = [0, 1, 0, 1] cost = 4.0 case = [1, 1, 0, 1] cost = 3.0 case = [0, 0, 1, 1] cost = 3.0 case = [1, 0, 1, 1] cost = 2.0 case = [0, 1, 1, 1] cost = 3.0 case = [1, 1, 1, 1] cost = 0.0 Best solution = [1, 0, 1, 0] cost = 4.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Mapping to the Ising problem
qubitOp, offset = max_cut.get_operator(w)
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## [Optional] Using DOcplex for mapping to the Ising problem

Using `docplex.get_operator` is a different way to create an Ising Hamiltonian of Max-Cut: it creates the corresponding Ising Hamiltonian from an optimization model of Max-Cut. An example of using `docplex.get_operator` is given below.
from docplex.mp.model import Model
from qiskit.optimization.ising import docplex

# Create an instance of a model and variables.
mdl = Model(name='max_cut')
x = {i: mdl.binary_var(name='x_{0}'.format(i)) for i in range(n)}

# Objective function
max_cut_func = mdl.sum(w[i, j] * x[i] * (1 - x[j]) for i in range(n) for j in range(n))
mdl.maximize(max_cut_func)

# No constraints for Max-Cut problems.
qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Checking that the full Hamiltonian gives the right cost
# Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()

x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))

colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
energy: -1.5 max-cut objective: -4.0 solution: [0. 1. 0. 1.] solution objective: 4.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## Running it on a quantum computer

We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
seed = 10598

spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)

backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)

result = vqe.run(quantum_instance)

x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))

colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)

# run quantum algorithm with shots
seed = 10598

spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)

backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)

result = vqe.run(quantum_instance)

x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('time:', result['eval_time'])
print('max-cut objective:', result['energy'] + offset)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))

plot_histogram(result['eigvecs'][0])

colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
energy: -1.5 time: 11.74726128578186 max-cut objective: -4.0 solution: [0 1 0 1] solution objective: 4.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
# Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp_docplex, k=1)
result = ee.run()

x = sample_most_likely(result['eigvecs'][0])
print('energy:', result['energy'])
print('max-cut objective:', result['energy'] + offset_docplex)
print('solution:', max_cut.get_graph_solution(x))
print('solution objective:', max_cut.max_cut_value(x, w))

colors = ['r' if max_cut.get_graph_solution(x)[i] == 0 else 'b' for i in range(n)]
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, pos=pos)
energy: -1.5 max-cut objective: -4.0 solution: [0. 1. 0. 1.] solution objective: 4.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Traveling Salesman ProblemIn addition to being a notorious NP-complete problem that has drawn the attention of computer scientists and mathematicians for over two centuries, the Traveling Salesman Problem (TSP) has important bearings on finance and marketing, as its name suggests. Colloquially speaking, the traveling salesman is a person that goes from city to city to sell merchandise. The objective in this case is to find the shortest path that would enable the salesman to visit all the cities and return to its hometown, i.e. the city where he started traveling. By doing this, the salesman gets to maximize potential sales in the least amount of time. The problem derives its importance from its "hardness" and ubiquitous equivalence to other relevant combinatorial optimization problems that arise in practice. The mathematical formulation with some early analysis was proposed by W.R. Hamilton in the early 19th century. Mathematically the problem is, as in the case of Max-Cut, best abstracted in terms of graphs. The TSP on the nodes of a graph asks for the shortest *Hamiltonian cycle* that can be taken through each of the nodes. A Hamilton cycle is a closed path that uses every vertex of a graph once. The general solution is unknown and an algorithm that finds it efficiently (e.g., in polynomial time) is not expected to exist.Find the shortest Hamiltonian cycle in a graph $G=(V,E)$ with $n=|V|$ nodes and distances, $w_{ij}$ (distance from vertex $i$ to vertex $j$). A Hamiltonian cycle is described by $N^2$ variables $x_{i,p}$, where $i$ represents the node and $p$ represents its order in a prospective cycle. The decision variable takes the value 1 if the solution occurs at node $i$ at time order $p$. We require that every node can only appear once in the cycle, and for each time a node has to occur. This amounts to the two constraints (here and in the following, whenever not specified, the summands run over 0,1,...N-1)$$\sum_{i} x_{i,p} = 1 ~~\forall p$$$$\sum_{p} x_{i,p} = 1 ~~\forall i.$$For nodes in our prospective ordering, if $x_{i,p}$ and $x_{j,p+1}$ are both 1, then there should be an energy penalty if $(i,j) \notin E$ (not connected in the graph). The form of this penalty is $$\sum_{i,j\notin E}\sum_{p} x_{i,p}x_{j,p+1}>0,$$ where it is assumed the boundary condition of the Hamiltonian cycles $(p=N)\equiv (p=0)$. However, here it will be assumed a fully connected graph and not include this term. The distance that needs to be minimized is $$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}.$$Putting this all together in a single objective function to be minimized, we get the following:$$C(\textbf{x})=\sum_{i,j}w_{ij}\sum_{p} x_{i,p}x_{j,p+1}+ A\sum_p\left(1- \sum_i x_{i,p}\right)^2+A\sum_i\left(1- \sum_p x_{i,p}\right)^2,$$where $A$ is a free parameter. One needs to ensure that $A$ is large enough so that these constraints are respected. One way to do this is to choose $A$ such that $A > \mathrm{max}(w_{ij})$.Once again, it is easy to map the problem in this form to a quantum computer, and the solution will be found by minimizing a Ising Hamiltonian.
# Generating a graph of 3 nodes
n = 3
num_qubits = n ** 2
ins = tsp.random_tsp(n)

G = nx.Graph()
G.add_nodes_from(np.arange(0, n, 1))
colors = ['r' for node in G.nodes()]
pos = {k: v for k, v in enumerate(ins.coord)}

default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
print('distance\n', ins.w)
distance [[ 0. 25. 19.] [25. 0. 27.] [19. 27. 0.]]
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Brute force approach
from itertools import permutations

def brute_force_tsp(w, N):
    a = list(permutations(range(1, N)))
    last_best_distance = 1e10
    for i in a:
        distance = 0
        pre_j = 0
        for j in i:
            distance = distance + w[j, pre_j]
            pre_j = j
        distance = distance + w[pre_j, 0]
        order = (0,) + i
        if distance < last_best_distance:
            best_order = order
            last_best_distance = distance
            print('order = ' + str(order) + ' Distance = ' + str(distance))
    return last_best_distance, best_order

best_distance, best_order = brute_force_tsp(ins.w, ins.dim)
print('Best order from brute force = ' + str(best_order) + ' with total distance = ' + str(best_distance))

def draw_tsp_solution(G, order, colors, pos):
    G2 = G.copy()
    n = len(order)
    for i in range(n):
        j = (i + 1) % n
        G2.add_edge(order[i], order[j])
    default_axes = plt.axes(frameon=True)
    nx.draw_networkx(G2, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)

draw_tsp_solution(G, best_order, colors, pos)
order = (0, 1, 2) Distance = 71.0 Best order from brute force = (0, 1, 2) with total distance = 71.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Mapping to the Ising problem
qubitOp, offset = tsp.get_operator(ins)
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## [Optional] Using DOcplex for mapping to the Ising problem

Using `docplex.get_operator` is a different way to create an Ising Hamiltonian of TSP: it creates the corresponding Ising Hamiltonian from an optimization model of TSP. An example of using `docplex.get_operator` is given below.
# Create an instance of a model and variables
mdl = Model(name='tsp')
x = {(i, p): mdl.binary_var(name='x_{0}_{1}'.format(i, p)) for i in range(n) for p in range(n)}

# Objective function
tsp_func = mdl.sum(ins.w[i, j] * x[(i, p)] * x[(j, (p + 1) % n)] for i in range(n) for j in range(n) for p in range(n))
mdl.minimize(tsp_func)

# Constraints
for i in range(n):
    mdl.add_constraint(mdl.sum(x[(i, p)] for p in range(n)) == 1)
for p in range(n):
    mdl.add_constraint(mdl.sum(x[(i, p)] for i in range(n)) == 1)

qubitOp_docplex, offset_docplex = docplex.get_operator(mdl)
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
Checking that the full Hamiltonian gives the right cost
# Making the Hamiltonian in its full form and getting the lowest eigenvalue and eigenvector
ee = ExactEigensolver(qubitOp, k=1)
result = ee.run()

print('energy:', result['energy'])
print('tsp objective:', result['energy'] + offset)

x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)
energy: -600035.5 tsp objective: 71.0 feasible: True solution: [0, 1, 2] solution objective: 71.0
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
## Running it on a quantum computer

We run the optimization routine using a feedback loop with a quantum computer that uses trial functions built with Y single-qubit rotations, $U_\mathrm{single}(\theta) = \prod_{i=1}^n Y(\theta_{i})$, and entangler steps $U_\mathrm{entangler}$.
seed = 10598

spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)

backend = BasicAer.get_backend('statevector_simulator')
quantum_instance = QuantumInstance(backend, seed_simulator=seed, seed_transpiler=seed)

result = vqe.run(quantum_instance)

print('energy:', result['energy'])
print('time:', result['eval_time'])
# print('tsp objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)

# run quantum algorithm with shots
seed = 10598

spsa = SPSA(max_trials=300)
ry = RY(qubitOp.num_qubits, depth=5, entanglement='linear')
vqe = VQE(qubitOp, ry, spsa)

backend = BasicAer.get_backend('qasm_simulator')
quantum_instance = QuantumInstance(backend, shots=1024, seed_simulator=seed, seed_transpiler=seed)

result = vqe.run(quantum_instance)

print('energy:', result['energy'])
print('time:', result['eval_time'])
# print('tsp objective:', result['energy'] + offset)
x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))

plot_histogram(result['eigvecs'][0])
draw_tsp_solution(G, z, colors, pos)
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
[Optional] Checking that the full Hamiltonian made by ```docplex.get_operator``` gives the right cost
ee = ExactEigensolver(qubitOp_docplex, k=1)
result = ee.run()

print('energy:', result['energy'])
print('tsp objective:', result['energy'] + offset_docplex)

x = sample_most_likely(result['eigvecs'][0])
print('feasible:', tsp.tsp_feasible(x))
z = tsp.get_tsp_solution(x)
print('solution:', z)
print('solution objective:', tsp.tsp_value(z, ins.w))
draw_tsp_solution(G, z, colors, pos)

import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
_____no_output_____
Apache-2.0
qiskit/advanced/aqua/optimization/max_cut_and_tsp.ipynb
gvvynplaine/qiskit-iqx-tutorials
**Read Later:** Module documentation: https://pytorch.org/docs/stable/generated/torch.nn.Module.html

# A Gentle Introduction to ``torch.autograd``

``torch.autograd`` is PyTorch's automatic differentiation engine that powers neural network training. In this section, you will get a conceptual understanding of how autograd helps a neural network train.

## Background

Neural networks (NNs) are a collection of nested functions that are executed on some input data. These functions are defined by *parameters* (consisting of weights and biases), which in PyTorch are stored in tensors.

Training a NN happens in two steps:

**Forward Propagation**: In forward prop, the NN makes its best guess about the correct output. It runs the input data through each of its functions to make this guess.

**Backward Propagation**: In backprop, the NN adjusts its parameters proportionate to the error in its guess. It does this by traversing backwards from the output, collecting the derivatives of the error with respect to the parameters of the functions (*gradients*), and optimizing the parameters using gradient descent. For a more detailed walkthrough of backprop, check out this video from 3Blue1Brown.

## Usage in PyTorch

Let's take a look at a single training step. For this example, we load a pretrained resnet18 model from ``torchvision``. We create a random data tensor to represent a single image with 3 channels, and height & width of 64, and its corresponding ``label`` initialized to some random values.
import torch, torchvision

model = torchvision.models.resnet18(pretrained=True)
data = torch.rand(1, 3, 64, 64)
labels = torch.rand(1, 1000)
print(data.size(), labels.size())
torch.Size([1, 3, 64, 64]) torch.Size([1, 1000])
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Next, we run the input data through each of the model's layers to make a prediction. This is the **forward pass**.
prediction = model(data)  # forward pass
print(prediction.size())
torch.Size([1, 1000])
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
We use the model's prediction and the corresponding label to calculate the error (``loss``).The next step is to backpropagate this error through the network.Backward propagation is kicked off when we call ``.backward()`` on the error tensor.Autograd then calculates and stores the gradients for each model parameter in the parameter's ``.grad`` attribute.
loss = (prediction - labels).sum()
loss.backward()  # backward pass
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Next, we load an optimizer, in this case SGD with a learning rate of 0.01 and momentum of 0.9. We register all the parameters of the model in the optimizer; `model.parameters()` gives access to all of the model's parameters.
optim = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Finally, we call ``.step()`` to initiate gradient descent. The optimizer adjusts each parameter by its gradient stored in ``.grad``.
optim.step() #gradient descent
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
At this point, you have everything you need to train your neural network. The sections below detail the workings of autograd - feel free to skip them.

---

## Differentiation in Autograd

Let's take a look at how ``autograd`` collects gradients. We create two tensors ``a`` and ``b`` with ``requires_grad=True``. This signals to ``autograd`` that every operation on them should be tracked.
import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
We create another tensor ``Q`` from ``a`` and ``b``.\begin{align}Q = 3a^3 - b^2\end{align}
Q = 3*a**3 - b**2
print(Q)
tensor([-12., 65.], grad_fn=<SubBackward0>)
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Let's assume ``a`` and ``b`` to be parameters of an NN, and ``Q`` to be the error. In NN training, we want gradients of the error w.r.t. parameters, i.e.

\begin{align}\frac{\partial Q}{\partial a} = 9a^2\end{align}

\begin{align}\frac{\partial Q}{\partial b} = -2b\end{align}

When we call ``.backward()`` on ``Q``, autograd calculates these gradients and stores them in the respective tensors' ``.grad`` attribute.

We need to explicitly pass a ``gradient`` argument in ``Q.backward()`` because it is a vector. ``gradient`` is a tensor of the same shape as ``Q``, and it represents the gradient of Q w.r.t. itself, i.e.

\begin{align}\frac{dQ}{dQ} = 1\end{align}

Equivalently, we can also aggregate Q into a scalar and call backward implicitly, like ``Q.sum().backward()``.
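As a quick illustration of that last point, the following sketch (assuming fresh tensors ``a`` and ``b`` as defined above) shows the scalar-aggregation variant, which avoids passing an explicit ``gradient`` argument.

```python
import torch

a = torch.tensor([2., 3.], requires_grad=True)
b = torch.tensor([6., 4.], requires_grad=True)
Q = 3*a**3 - b**2

# summing Q gives a scalar, so .backward() needs no explicit gradient argument
Q.sum().backward()
print(a.grad)  # tensor([36., 81.]) == 9*a**2
print(b.grad)  # tensor([-12., -8.]) == -2*b
```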
external_grad = torch.tensor([1., 1.])
Q.backward(gradient=external_grad)
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Gradients are now deposited in ``a.grad`` and ``b.grad``
# check if collected gradients are correct
print(a.grad)
print(9*a**2)
print(9*a**2 == a.grad)
print(-2*b == b.grad)
tensor([36., 81.]) tensor([36., 81.], grad_fn=<MulBackward0>) tensor([True, True]) tensor([True, True])
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Optional Reading - Vector Calculus using ``autograd``^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Mathematically, if you have a vector valued function$\vec{y}=f(\vec{x})$, then the gradient of $\vec{y}$ withrespect to $\vec{x}$ is a Jacobian matrix $J$:\begin{align}J = \left(\begin{array}{cc} \frac{\partial \bf{y}}{\partial x_{1}} & ... & \frac{\partial \bf{y}}{\partial x_{n}} \end{array}\right) = \left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{1}}{\partial x_{n}}\\ \vdots & \ddots & \vdots\\ \frac{\partial y_{m}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\end{align}Generally speaking, ``torch.autograd`` is an engine for computingvector-Jacobian product. That is, given any vector $\vec{v}$, compute the product$J^{T}\cdot \vec{v}$If $\vec{v}$ happens to be the gradient of a scalar function $l=g\left(\vec{y}\right)$:\begin{align}\vec{v} = \left(\begin{array}{ccc}\frac{\partial l}{\partial y_{1}} & \cdots & \frac{\partial l}{\partial y_{m}}\end{array}\right)^{T}\end{align}then by the chain rule, the vector-Jacobian product would be thegradient of $l$ with respect to $\vec{x}$:\begin{align}J^{T}\cdot \vec{v}=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}}\\ \vdots & \ddots & \vdots\\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\left(\begin{array}{c} \frac{\partial l}{\partial y_{1}}\\ \vdots\\ \frac{\partial l}{\partial y_{m}} \end{array}\right)=\left(\begin{array}{c} \frac{\partial l}{\partial x_{1}}\\ \vdots\\ \frac{\partial l}{\partial x_{n}} \end{array}\right)\end{align}This characteristic of vector-Jacobian product is what we use in the above example;``external_grad`` represents $\vec{v}$. Computational Graph~~~~~~~~~~~~~~~~~~~Conceptually, autograd keeps a record of data (tensors) & all executedoperations (along with the resulting new tensors) in a directed acyclicgraph (DAG) consisting of`Function `__objects. In this DAG, leaves are the input tensors, roots are the outputtensors. By tracing this graph from roots to leaves, you canautomatically compute the gradients using the chain rule.In a forward pass, autograd does two things simultaneously:- run the requested operation to compute a resulting tensor, and- maintain the operation’s *gradient function* in the DAG.The backward pass kicks off when ``.backward()`` is called on the DAGroot. ``autograd`` then:- computes the gradients from each ``.grad_fn``,- accumulates them in the respective tensor’s ``.grad`` attribute, and- using the chain rule, propagates all the way to the leaf tensors.Below is a visual representation of the DAG in our example. In the graph,the arrows are in the direction of the forward pass. The nodes represent the backward functionsof each operation in the forward pass. The leaf nodes in blue represent our leaf tensors ``a`` and ``b``... figure:: /_static/img/dag_autograd.pngNote**DAGs are dynamic in PyTorch** An important thing to note is that the graph is recreated from scratch; after each ``.backward()`` call, autograd starts populating a new graph. This is exactly what allows you to use control flow statements in your model; you can change the shape, size and operations at every iteration if needed.Exclusion from the DAG^^^^^^^^^^^^^^^^^^^^^^``torch.autograd`` tracks operations on all tensors which have their``requires_grad`` flag set to ``True``. 
For tensors that don't require gradients, setting this attribute to ``False`` excludes them from the gradient computation DAG.

The output tensor of an operation will require gradients even if only a single input tensor has ``requires_grad=True``.
x = torch.rand(5, 5)
y = torch.rand(5, 5)
z = torch.rand((5, 5), requires_grad=True)

a = x + y
print(f"Does `a` require gradients? : {a.requires_grad}")
b = x + z
print(f"Does `b` require gradients?: {b.requires_grad}")
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
In a NN, parameters that don't compute gradients are usually called **frozen parameters**. It is useful to "freeze" part of your model if you know in advance that you won't need the gradients of those parameters (this offers some performance benefits by reducing autograd computations).

Another common use case where exclusion from the DAG is important is finetuning a pretrained network. In finetuning, we freeze most of the model and typically only modify the classifier layers to make predictions on new labels. Let's walk through a small example to demonstrate this. As before, we load a pretrained resnet18 model, and freeze all the parameters.
from torch import nn, optim

model = torchvision.models.resnet18(pretrained=True)

# Freeze all the parameters in the network
for param in model.parameters():
    param.requires_grad = False
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Let's say we want to finetune the model on a new dataset with 10 labels. In resnet, the classifier is the last linear layer ``model.fc``. We can simply replace it with a new linear layer (unfrozen by default) that acts as our classifier.
model.fc = nn.Linear(512, 10)
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
Now all parameters in the model, except the parameters of ``model.fc``, are frozen.The only parameters that compute gradients are the weights and bias of ``model.fc``.
# Optimize only the classifier
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
_____no_output_____
MIT
official_tutorial/lesson2b_autograd_tutorial_deep_learning_usage.ipynb
zhennongchen/pytorch-tutorial
# Thai2Vec Classification Using ULMFit

This notebook demonstrates how to use the [ULMFit model](https://arxiv.org/abs/1801.06146) implemented by `thai2vec` for text classification. We use [Wongnai Challenge: Review Rating Prediction](https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction) as our benchmark, as it is the only sizeable and publicly available text classification dataset at the time of writing (June 21, 2018). It has 39,999 reviews for training and validation, and 6,203 reviews for testing. Our workflow is as follows:

* Perform 75/15 train-validation split
* Minimal text cleaning and tokenization using the `newmm` engine of `pyThaiNLP`
* Get embeddings of the Wongnai dataset from all the data available (train and test sets)
* Load pretrained Thai Wikipedia embeddings; for those embeddings which exist only in the Wongnai dataset, we use the average of the Wikipedia embeddings instead
* Train a language model based on all the data available in the Wongnai dataset
* Replace the top and train the classifier on the training set by gradual unfreezing

We achieved validation perplexity of 35.75113 and validation micro F1 score of 0.598 for five-label classification. Micro F1 scores for the public and private leaderboards are 0.61451 and 0.60925 respectively (supposedly we could train further with the 15% validation set we did not use), which are state-of-the-art as of the time of writing (June 21, 2018). The FastText benchmark has a performance of 0.50483 and 0.49366 for the public and private leaderboards respectively.

![alt text](../wongnai_data/misc/submission.jpg "Submission")
![alt text](../wongnai_data/misc/leaderboard.jpg "Leaderboards")

## Imports
%reload_ext autoreload %autoreload 2 %matplotlib inline import re import html import numpy as np import dill as pickle from IPython.display import Image from IPython.core.display import HTML from collections import Counter from sklearn.model_selection import train_test_split from fastai.text import * from pythainlp.tokenize import word_tokenize from utils import * DATA_PATH='/home/ubuntu/Projects/new2vec/wongnai_data/' RAW_PATH = f'{DATA_PATH}raw/' MODEL_PATH = f'{DATA_PATH}models/' raw_files = !ls {RAW_PATH}
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Train/Validation Sets We use data from [Wongnai Challenge: Review Rating Prediction](https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction). The training data consists of 39,999 restaurant reviews from an unknown number of reviewers, labeled one to five stars, with the schema `(label,review)`. We use an 85/15 train-validation split. The test set has 6,203 reviews from the same number of reviewers. No-information accuracy is 46.9%.
raw_train = pd.read_csv(f'{RAW_PATH}w_review_train.csv',sep=';',header=None) raw_train = raw_train.iloc[:,[1,0]] raw_train.columns = ['label','review'] raw_test = pd.read_csv(f'{RAW_PATH}test_file.csv',sep=';') submission = pd.read_csv(f'{RAW_PATH}sample_submission.csv',sep=',') print(raw_train.shape) raw_train.head() cnt = Counter(raw_train['label']) cnt #baseline cnt.most_common(1)[0][1] / raw_train.shape[0] raw_test.head() submission.head() #test df raw_test = pd.read_csv(f'{RAW_PATH}test_file.csv',sep=';') raw_test.head() df_tst = pd.DataFrame({'label':raw_test['reviewID'],'review':raw_test['review']}) df_tst.to_csv(f'{DATA_PATH}test.csv', header=False, index=False) #train/validation/train_language_model split df_trn, df_val = train_test_split(raw_train, test_size = 0.15, random_state = 1412) df_lm = pd.concat([df_trn,df_tst]) df_lm.to_csv(f'{DATA_PATH}train_lm.csv', header=False, index=False) df_trn.to_csv(f'{DATA_PATH}train.csv', header=False, index=False) df_val.to_csv(f'{DATA_PATH}valid.csv', header=False, index=False) df_trn.shape,df_val.shape,df_lm.shape
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Language Modeling Text Processing We first determine the vocab for the reviews, then train a language model based on our training set. We perform the following minimal text processing:* The token `xbos` is used to mark the start of a text, since we will be chaining the texts together for the language model training. * `pyThaiNLP`'s `newmm` word tokenizer is used to tokenize the texts.* `tkrep` is used to replace repetitive characters, so that for example `อร่อยมากกกกกก` becomes `อรอ่ยมาtkrep6ก`
max_vocab = 60000 min_freq = 2 df_lm = pd.read_csv(f'{DATA_PATH}train_lm.csv',header=None,chunksize=30000) df_val = pd.read_csv(f'{DATA_PATH}valid.csv',header=None,chunksize=30000) trn_lm,trn_tok,trn_labels,itos_cls,stoi_cls,freq_trn = numericalizer(df_trn) val_lm,val_tok,val_labels,itos_cls,stoi_cls,freq_val = numericalizer(df_val,itos_cls) # np.save(f'{MODEL_PATH}trn_tok.npy', trn_tok) # np.save(f'{MODEL_PATH}val_tok.npy', val_tok) # np.save(f'{MODEL_PATH}trn_lm.npy', trn_lm) # np.save(f'{MODEL_PATH}val_lm.npy', val_lm) # pickle.dump(itos_cls, open(f'{MODEL_PATH}itos_cls.pkl', 'wb')) tok_lm = np.load(f'{MODEL_PATH}tok_lm.npy') tok_val = np.load(f'{MODEL_PATH}tok_val.npy') tok_lm[:1] #numericalized tokenized texts trn_lm[:1] #index to token itos_cls[:10]
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
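The `tkrep` rule itself lives in the project's `utils` module (imported earlier) and is not shown in this notebook, so the snippet below is only a rough sketch of the idea described above — collapse a run of repeated characters into a `tkrep<count>` marker — and may differ from the actual implementation.

```python
import re

def tkrep_sketch(text):
    """Collapse runs of 3 or more identical characters into 'tkrep<count><char>'.
    Illustration only; the real rule is defined in the project's utils module."""
    return re.sub(r"(.)\1{2,}", lambda m: f"tkrep{len(m.group(0))}{m.group(1)}", text)

print(tkrep_sketch("อร่อยมากกกกกก"))  # -> 'อร่อยมาtkrep6ก'
```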
Load Pretrained Language Model Instead of starting from random weights, we import the language model pretrained on Wikipedia (see `pretrained_wiki.ipynb`). For words that appear only in the Wongnai dataset but not in Wikipedia, we start with the average of all Wikipedia embeddings instead. The max vocab size is set at 60,000 and the minimum frequency at 2, consistent with the pretrained model. We ended up with 19,998 embeddings for all reviews.
em_sz = 300 vocab_size = len(itos_cls) wgts = torch.load(f'{MODEL_PATH}thwiki_model2.h5', map_location=lambda storage, loc: storage) itos_pre = pickle.load(open(f'{MODEL_PATH}itos_pre.pkl','rb')) stoi_pre = collections.defaultdict(lambda:-1, {v:k for k,v in enumerate(itos_pre)}) #pretrained weights wgts = merge_wgts(em_sz, wgts, itos_pre, itos_cls)
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
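`merge_wgts` (from the fastai helpers used in the cell above) performs the merge described here; the following is a stripped-down numpy sketch of that behaviour — copy the Wikipedia row when the token is known, otherwise fall back to the mean of all Wikipedia embeddings — not the actual implementation.

```python
import numpy as np

def merge_embeddings_sketch(wiki_emb, itos_pre, itos_cls):
    """Rows for known tokens are copied from the Wikipedia matrix; unknown tokens
    get the mean Wikipedia embedding. Illustration of the idea only."""
    stoi_pre = {tok: i for i, tok in enumerate(itos_pre)}
    row_mean = wiki_emb.mean(axis=0)
    new_emb = np.zeros((len(itos_cls), wiki_emb.shape[1]), dtype=wiki_emb.dtype)
    for i, tok in enumerate(itos_cls):
        j = stoi_pre.get(tok, -1)
        new_emb[i] = wiki_emb[j] if j >= 0 else row_mean
    return new_emb

# Tiny toy check: "b" keeps its row, the unseen token gets the mean [0.5, 0.5].
wiki = np.array([[1.0, 0.0], [0.0, 1.0]])
print(merge_embeddings_sketch(wiki, ["a", "b"], ["b", "unseen"]))
```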
Train Language Model
em_sz,nh,nl = 300,1150,3 wd=1e-7 bptt=70 bs=60 opt_fn = partial(optim.Adam, betas=(0.8, 0.99)) weight_factor = 0.7 drops = np.array([0.25, 0.1, 0.2, 0.02, 0.15])*weight_factor #data loader trn_dl = LanguageModelLoader(np.concatenate(trn_lm), bs, bptt) val_dl = LanguageModelLoader(np.concatenate(val_lm), bs, bptt) md = LanguageModelData(path=DATA_PATH, pad_idx=1, n_tok=vocab_size, trn_dl=trn_dl, val_dl=val_dl, bs=bs, bptt=bptt) #model fitter learner= md.get_model(opt_fn, em_sz, nh, nl, dropouti=drops[0], dropout=drops[1], wdrop=drops[2], dropoute=drops[3], dropouth=drops[4]) learner.metrics = [accuracy] #load the saved models + new embeddings learner.model.load_state_dict(wgts) #find optimal learning rate learner.lr_find2(start_lr = 1e-6, end_lr=0.1) learner.sched.plot() #optimal learning rate lr=3e-3 lr #train while frozen once to warm up learner.freeze_to(-1) learner.fit(lr, 1, wds=wd, use_clr=(20,5), cycle_len=1) #use_clr (peak as ratio of lr, iterations to grow and descend e.g. 10 is 1/10 grow and 9/10 descend) learner.unfreeze() learner.fit(lr, 1, wds=wd, use_clr=(20,5), cycle_len=5) learner.sched.plot_loss() learner.save('wongnai_lm') learner.save_encoder('wongnai_enc')
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Classification With the language model trained on the Wongnai dataset, we use its embeddings to initialize the review classifier. We train the classifier using discriminative learning rates, slanted triangular learning rates, gradual unfreezing, and a few other tricks detailed in the [ULMFit paper](https://arxiv.org/abs/1801.06146). We have found that training only the last two layers of the model seems to be the right balance for this dataset. Load Tokenized Texts and Labels
#load csvs and tokenizer max_vocab = 60000 min_freq = 2 df_trn = pd.read_csv(f'{DATA_PATH}train.csv',header=None,chunksize=30000) df_val = pd.read_csv(f'{DATA_PATH}valid.csv',header=None,chunksize=30000) df_tst = pd.read_csv(f'{DATA_PATH}test.csv',header=None,chunksize=30000) trn_cls,trn_tok,trn_labels,itos_cls,stoi_cls,freq = numericalizer(df_trn) val_cls,val_tok,val_labels,itos_cls,stoi_cls,freq = numericalizer(df_val,itos_cls) tst_cls,tst_tok,tst_labels,itos_cls,stoi_cls,freq = numericalizer(df_tst,itos_cls) #get labels trn_labels = np.squeeze(trn_labels) val_labels = np.squeeze(val_labels) tst_labels = np.squeeze(tst_labels) min_lbl = trn_labels.min() trn_labels -= min_lbl val_labels -= min_lbl
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
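As a small numeric illustration of the discriminative learning rates mentioned above (the same layout appears in the classifier training cell further down): each deeper layer group gets its learning rate divided by a constant factor, here 2.6.

```python
import numpy as np

lr, lrm = 1e-2, 2.6
lrs = np.array([lr / (lrm ** 4), lr / (lrm ** 3), lr / (lrm ** 2), lr / lrm, lr])
print(lrs)  # roughly [2.2e-04, 5.7e-04, 1.5e-03, 3.8e-03, 1.0e-02]
```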
Create Data Loader
#dataset object bs = 60 trn_ds = TextDataset(trn_cls, trn_labels) val_ds = TextDataset(val_cls, val_labels) tst_ds = TextDataset(tst_cls, tst_labels) #sampler trn_samp = SortishSampler(trn_cls, key=lambda x: len(trn_cls[x]), bs=bs//2) val_samp = SortSampler(val_cls, key=lambda x: len(val_cls[x])) tst_samp = SortSampler(tst_cls, key=lambda x: len(tst_cls[x])) #data loader trn_dl = DataLoader(trn_ds, bs//2, transpose=True, num_workers=1, pad_idx=1, sampler=trn_samp) val_dl = DataLoader(val_ds, bs, transpose=True, num_workers=1, pad_idx=1, sampler=val_samp) tst_dl = DataLoader(tst_ds, bs, transpose=True, num_workers=1, pad_idx=1, sampler=tst_samp) md = ModelData(DATA_PATH, trn_dl, val_dl, tst_dl)
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Train Classifier
#parameters weight_factor = 0.5 drops = np.array([0.25, 0.1, 0.2, 0.02, 0.15])*weight_factor bptt = 70 em_sz = 300 nh = 1150 nl = 3 vocab_size = len(itos_cls) nb_class=int(trn_labels.max())+1 opt_fn = partial(optim.Adam, betas=(0.7, 0.99)) bs = 60 wd = 1e-7 #classifier model # em_sz*3 for max, mean, just activations m = get_rnn_classifer(bptt, max_seq=1000, n_class=nb_class, n_tok=vocab_size, emb_sz=em_sz, n_hid=nh, n_layers=nl, pad_token=1, layers=[em_sz*3, 50, nb_class], drops=[drops[4], 0.1], dropouti=drops[0], wdrop=drops[1], dropoute=drops[2], dropouth=drops[3]) #get learner learner = RNN_Learner(md, TextModel(to_gpu(m)), opt_fn=opt_fn) learner.reg_fn = partial(seq2seq_reg, alpha=2, beta=1) learner.clip=25. learner.metrics = [accuracy] #load encoder trained earlier learner.load_encoder('wongnai_enc') #find learning rate learner.lr_find2() learner.sched.plot() #set learning rate lr=1e-2 lrm = 2.6 lrs = np.array([lr/(lrm**4), lr/(lrm**3), lr/(lrm**2), lr/lrm, lr]) #train last layer learner.freeze_to(-1) learner.fit(lrs, 1, wds=wd, cycle_len=10, use_clr=(10,10)) learner.save('last_layer') #train last two layers learner.load('last_layer') learner.freeze_to(-2) learner.fit(lrs, 1, wds=wd, cycle_len=3, use_clr=(8,3)) learner.save('last_two_layers')
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Validation Performance
learner.load('last_two_layers') #get validation performance probs,y= learner.predict_with_targs() preds = np.argmax(np.exp(probs),1) Counter(preds) Counter(y) from sklearn.metrics import confusion_matrix from sklearn.metrics import fbeta_score most_frequent = np.array([4]*len(preds)) print(f'Baseline Micro F1: {fbeta_score(y,most_frequent,1,average="micro")}') print(f'Micro F1: {fbeta_score(y,preds,1,average="micro")}') cm = confusion_matrix(y,preds) plot_confusion_matrix(cm,classes=[1,2,3,4,5])
Baseline Micro F1: 0.176 Micro F1: 0.5976666666666667 Confusion matrix, without normalization [[ 14 36 13 1 1] [ 13 80 150 22 2] [ 5 36 945 814 28] [ 0 0 344 2210 230] [ 0 1 22 696 337]]
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Submission
probs,y= learner.predict_with_targs(is_test=True) preds = np.argmax(np.exp(probs),1) + 1 Counter(preds) submit_df = pd.DataFrame({'a':y,'b':preds}) submit_df.columns = ['reviewID','rating'] submit_df.head() submit_df.to_csv(f'{DATA_PATH}valid10_2layers_newmm.csv',index=False)
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Benchmark with FastText We used [fastText](https://github.com/facebookresearch/fastText)'s own [pretrained embeddings](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) and relatively "default" settings in order to benchmark our results. This gave us micro-averaged F1 scores of 0.50483 and 0.49366 for the public and private leaderboards respectively. ![alt text](../wongnai_data/misc/fasttext.jpg "fastText") Data Preparation
df_trn = pd.read_csv(f'{DATA_PATH}train.csv',header=None) df_val = pd.read_csv(f'{DATA_PATH}valid.csv',header=None) df_tst = pd.read_csv(f'{DATA_PATH}test.csv', header=None) train_set = [] for i in range(df_trn.shape[0]): label = df_trn.iloc[i,0] line = df_trn.iloc[i,1].replace('\n', ' ') train_set.append(f'__label__{label} {line}') train_doc = '\n'.join(train_set) with open(f'{DATA_PATH}train.txt','w') as f: f.write(train_doc) valid_set = [] for i in range(df_val.shape[0]): label = df_val.iloc[i,0] line = df_val.iloc[i,1].replace('\n', ' ') valid_set.append(f'__label__{label} {line}') valid_doc = '\n'.join(valid_set) with open(f'{DATA_PATH}valid.txt','w') as f: f.write(valid_doc) test_set = [] for i in range(df_tst.shape[0]): label = df_tst.iloc[i,0] line = df_tst.iloc[i,1].replace('\n', ' ') test_set.append(f'__label__{label} {line}') test_doc = '\n'.join(test_set) with open(f'{DATA_PATH}test.txt','w') as f: f.write(test_doc)
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Train FastText
!/home/ubuntu/theFastText/fastText-0.1.0/fasttext supervised -input '{DATA_PATH}train.txt' -pretrainedVectors '{MODEL_PATH}wiki.th.vec' -epoch 10 -dim 300 -wordNgrams 2 -output '{MODEL_PATH}fasttext_model' !/home/ubuntu/theFastText/fastText-0.1.0/fasttext test '{MODEL_PATH}fasttext_model.bin' '{DATA_PATH}valid.txt' preds = !/home/ubuntu/theFastText/fastText-0.1.0/fasttext predict '{MODEL_PATH}fasttext_model.bin' '{DATA_PATH}test.txt'
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Submission
submit_df = pd.DataFrame({'a':[i+1 for i in range(len(preds))],'b':preds}) submit_df.columns = ['reviewID','rating'] submit_df['rating'] = submit_df['rating'].apply(lambda x: x.split('__')[2]) submit_df.head() submit_df.to_csv(f'{DATA_PATH}fasttext.csv',index=False)
_____no_output_____
MIT
notebook/ulmfit_wongnai.ipynb
titipata/thai2vec
Generate and Save Results from Different ANN Methods
import numpy as np import pandas as pd import os import json import ast import random import tensorflow as tf import tensorflow_addons as tfa from bpmll import bp_mll_loss import sklearn_json as skljson from sklearn.model_selection import train_test_split from sklearn import metrics import sys os.chdir('C:\\Users\\rober\\OneDrive\\Documents\\STAT 6500\\Project\\NewsArticleClassification\\codes\\ANN Results') ## Set working directory ## to be 'ANN Results' sys.path.append('../ThresholdFunctionLearning') ## Append path to the ThresholdFunctionLearning directory to the interpreters ## search path from threshold_learning import predict_test_labels_binary ## Import the 'predict_test_labels_binary()' function from the from threshold_learning import predict_labels_binary ## threshold_learning library
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Models on Reduced Dataset (each instance has at least one label)
## Load the reduced tfidf dataset file_object = open('../BP-MLL Text Categorization/tfidf_trainTest_data_reduced.json',) tfidf_data_reduced = json.load(file_object) X_train_hasLabel = np.array(tfidf_data_reduced['X_train_hasLabel']) X_test_hasLabel = np.array(tfidf_data_reduced['X_test_hasLabel']) Y_train_hasLabel = np.array(tfidf_data_reduced['Y_train_hasLabel']) Y_test_hasLabel = np.array(tfidf_data_reduced['Y_test_hasLabel'])
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Feed-Forward Cross-Entropy Network
## Start by defining and compiling the cross-entropy loss network (bpmll used later) tf.random.set_seed(123) num_labels = 13 model_ce_FF = tf.keras.models.Sequential([ tf.keras.layers.Dense(32, activation = 'relu'), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr=0.0001) metric = tfa.metrics.HammingLoss(mode = 'multilabel', threshold = 0.5) model_ce_FF.compile(optimizer = optim_func, loss = 'binary_crossentropy', metrics = metric ) tf.random.set_seed(123) history_ce_FF_lr001 = model_ce_FF.fit(X_train_hasLabel, Y_train_hasLabel, epochs = 100, validation_data = (X_test_hasLabel, Y_test_hasLabel), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_ce_FF_lr001_df = pd.DataFrame(history_ce_FF_lr001.history) #with open("Reduced Data Eval Metrics/Cross Entropy Feed Forward/history_ce_FF_lr0001.json", "w") as outfile: # history_ce_FF_lr001_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_ce_FF.predict(X_train_hasLabel) Y_test_pred = model_ce_FF.predict(X_test_hasLabel) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train_hasLabel, Y_test_pred, t_range) ce_FF_withThreshold = metrics.hamming_loss(Y_test_hasLabel, test_labels_binary)
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
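`predict_test_labels_binary` comes from the project's `threshold_learning` module (in the `ThresholdFunctionLearning` directory) and its internals are not shown here. As a hedged illustration of the general idea — pick a cutoff on the training predictions that minimizes Hamming loss, then binarize the test predictions with it — a much simpler global-threshold version could look like the sketch below; the real helper learns a threshold function and may behave differently.

```python
import numpy as np
from sklearn.metrics import hamming_loss

def simple_global_threshold(y_train_pred, y_train_true, y_test_pred, t_range=(0.0, 1.0), n_steps=101):
    """Illustration only: a single global cutoff chosen on the training predictions."""
    candidates = np.linspace(t_range[0], t_range[1], n_steps)
    losses = [hamming_loss(y_train_true, (y_train_pred >= t).astype(int)) for t in candidates]
    best_t = candidates[int(np.argmin(losses))]
    return (y_test_pred >= best_t).astype(int), best_t

# Toy check with separable scores: any cutoff between 0.3 and 0.7 gives zero Hamming loss.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(20, 3))
scores = y_true * 0.7 + rng.random((20, 3)) * 0.3
binary, t = simple_global_threshold(scores, y_true, scores)
print(t, hamming_loss(y_true, binary))
```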
Feed-Forward BP-MLL Network
## Start by defining and compiling the bp-mll loss network tf.random.set_seed(123) model_bpmll_FF = tf.keras.models.Sequential([ tf.keras.layers.Dense(32, activation = 'relu'), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr = 0.0001) model_bpmll_FF.compile(optimizer = optim_func, loss = bp_mll_loss, metrics = metric ) tf.random.set_seed(123) history_bpmll_FF_lr001 = model_bpmll_FF.fit(X_train_hasLabel, Y_train_hasLabel, epochs = 100, validation_data = (X_test_hasLabel, Y_test_hasLabel), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_bpmll_FF_lr001_df = pd.DataFrame(history_bpmll_FF_lr001.history) #with open("Reduced Data Eval Metrics/BPMLL Feed Forward/history_bpmll_FF_lr0001.json", "w") as outfile: # history_bpmll_FF_lr001_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_bpmll_FF.predict(X_train_hasLabel) Y_test_pred = model_bpmll_FF.predict(X_test_hasLabel) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train_hasLabel, Y_test_pred, t_range) bpmll_FF_withThreshold = metrics.hamming_loss(Y_test_hasLabel, test_labels_binary)
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
BPMLL Bidirectional LSTM Recurrent Network
## Load the pre-processed data file_object_reduced = open('../RNN Text Categorization/RNN_data_dict_reduced.json',) RNN_data_dict_reduced = json.load(file_object_reduced) RNN_data_dict_reduced = ast.literal_eval(RNN_data_dict_reduced) train_padded_hasLabel = np.array(RNN_data_dict_reduced['train_padded_hasLabel']) test_padded_hasLabel = np.array(RNN_data_dict_reduced['test_padded_hasLabel']) Y_train_hasLabel = np.array(RNN_data_dict_reduced['Y_train_hasLabel']) Y_test_hasLabel = np.array(RNN_data_dict_reduced['Y_test_hasLabel']) ## Define the bidirectional LSTM RNN architecture tf.random.set_seed(123) num_labels = 13 max_length = 100 num_unique_words = 2711 model_bpmll_biLSTM = tf.keras.models.Sequential([ tf.keras.layers.Embedding(num_unique_words, 32, input_length = max_length), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16, return_sequences = False, return_state = False)), #tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr = 0.0001) model_bpmll_biLSTM.compile(loss = bp_mll_loss, optimizer = optim_func, metrics = metric) tf.random.set_seed(123) history_bpmll_RNN_lr001 = model_bpmll_biLSTM.fit(train_padded_hasLabel, Y_train_hasLabel, epochs = 100, validation_data = (test_padded_hasLabel, Y_test_hasLabel), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_bpmll_RNN_lr001_df = pd.DataFrame(history_bpmll_RNN_lr001.history) #with open("Reduced Data Eval Metrics/BPMLL RNN/history_bpmll_RNN_lr0001.json", "w") as outfile: # history_bpmll_RNN_lr001_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_bpmll_biLSTM.predict(train_padded_hasLabel) Y_test_pred = model_bpmll_biLSTM.predict(test_padded_hasLabel) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train_hasLabel, Y_test_pred, t_range) bpmll_RNN_withThreshold = metrics.hamming_loss(Y_test_hasLabel, test_labels_binary)
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Cross-Entropy Bidirectional LSTM Recurrent Network
## Define the bidirectional LSTM RNN architecture tf.random.set_seed(123) num_labels = 13 max_length = 100 num_unique_words = 2711 model_ce_biLSTM = tf.keras.models.Sequential([ tf.keras.layers.Embedding(num_unique_words, 32, input_length = max_length), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16, return_sequences = False, return_state = False)), #tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr = 0.0001) model_ce_biLSTM.compile(loss = 'binary_crossentropy', optimizer = optim_func, metrics = metric) tf.random.set_seed(123) history_ce_RNN_lr001 = model_ce_biLSTM.fit(train_padded_hasLabel, Y_train_hasLabel, epochs = 100, validation_data = (test_padded_hasLabel, Y_test_hasLabel), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_ce_RNN_lr001_df = pd.DataFrame(history_ce_RNN_lr001.history) #with open("Reduced Data Eval Metrics/Cross Entropy RNN/history_ce_RNN_lr0001.json", "w") as outfile: # history_ce_RNN_lr001_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_ce_biLSTM.predict(train_padded_hasLabel) Y_test_pred = model_ce_biLSTM.predict(test_padded_hasLabel) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train_hasLabel, Y_test_pred, t_range) ce_RNN_withThreshold = metrics.hamming_loss(Y_test_hasLabel, test_labels_binary) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Collect the test set hamming losses for the models ## with learned threshold functions into a df and write to .json file val_hamming_loss_withThreshold_lr001_df = pd.DataFrame({'ce_FF_lr0001' : ce_FF_withThreshold, 'bpmll_FF_lr0001' : bpmll_FF_withThreshold, 'ce_RNN_lr0001' : ce_RNN_withThreshold, 'bpmll_RNN_lr0001' : bpmll_RNN_withThreshold}, index = [0]) #with open("Reduced Data Eval Metrics/val_hamming_loss_withThreshold_lr0001.json", "w") as outfile: # val_hamming_loss_withThreshold_lr001_df.to_json(outfile) val_hamming_loss_withThreshold_lr001_df
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Models on Full Dataset (some instances have no labels)
## Load the full tfidf dataset file_object = open('../BP-MLL Text Categorization/tfidf_trainTest_data.json',) tfidf_data_full = json.load(file_object) X_train = np.array(tfidf_data_full['X_train']) X_test = np.array(tfidf_data_full['X_test']) Y_train = np.array(tfidf_data_full['Y_train']) Y_test = np.array(tfidf_data_full['Y_test'])
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Feed-Forward Cross-Entropy Network
## Use same architecture as the previous cross-entropy feed-forward network and train on full dataset tf.random.set_seed(123) num_labels = 13 model_ce_FF_full = tf.keras.models.Sequential([ tf.keras.layers.Dense(32, activation = 'relu'), tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr = 0.0001) model_ce_FF_full.compile(optimizer = optim_func, loss = 'binary_crossentropy', metrics = metric ) tf.random.set_seed(123) history_ce_FF_lr001_full = model_ce_FF_full.fit(X_train, Y_train, epochs = 100, validation_data = (X_test, Y_test), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_ce_FF_lr001_full_df = pd.DataFrame(history_ce_FF_lr001_full.history) #with open("Full Data Eval Metrics/Cross Entropy Feed Forward/history_ce_FF_lr0001_full.json", "w") as outfile: # history_ce_FF_lr001_full_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_ce_FF_full.predict(X_train) Y_test_pred = model_ce_FF_full.predict(X_test) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train, Y_test_pred, t_range) ce_FF_full_withThreshold = metrics.hamming_loss(Y_test, test_labels_binary)
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
LSTM Recurrent Network
## Load the pre-processed data file_object = open('../RNN Text Categorization/RNN_data_dict.json',) RNN_data_dict = json.load(file_object) RNN_data_dict = ast.literal_eval(RNN_data_dict) train_padded = np.array(RNN_data_dict['train_padded']) test_padded = np.array(RNN_data_dict['test_padded']) Y_train = np.array(RNN_data_dict['Y_train']) Y_test = np.array(RNN_data_dict['Y_test']) ## Define the LSTM RNN architecture tf.random.set_seed(123) num_labels = 13 model_LSTM_full = tf.keras.models.Sequential([ tf.keras.layers.Embedding(num_unique_words, 32, input_length = max_length), tf.keras.layers.LSTM(16, return_sequences = False, return_state = False), #tf.keras.layers.Dropout(0.5), tf.keras.layers.Dense(num_labels, activation = 'sigmoid') ]) optim_func = tf.keras.optimizers.Adam(lr = 0.0001) model_LSTM_full.compile(loss = 'binary_crossentropy', optimizer = optim_func, metrics = metric) tf.random.set_seed(123) history_ce_RNN_lr001_full = model_LSTM_full.fit(train_padded, Y_train, epochs = 100, validation_data = (test_padded, Y_test), verbose=2) ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Convert training history to dataframe and write to a .json file history_ce_RNN_lr001_full_df = pd.DataFrame(history_ce_RNN_lr001_full.history) #with open("Full Data Eval Metrics/Cross Entropy RNN/history_ce_RNN_lr0001_full.json", "w") as outfile: # history_ce_RNN_lr001_full_df.to_json(outfile) ## Learn a threshold function and save the test error for use in future DF Y_train_pred = model_LSTM_full.predict(train_padded) Y_test_pred = model_LSTM_full.predict(test_padded) t_range = (0, 1) test_labels_binary, threshold_function = predict_test_labels_binary(Y_train_pred, Y_train, Y_test_pred, t_range) ce_RNN_full_withThreshold = metrics.hamming_loss(Y_test, test_labels_binary) ce_RNN_full_withThreshold ## (CAUTION: DO NOT OVERWRITE EXISTING FILES) -- Collect the test set hamming losses for the models ## with learned threshold functions into a df and write to .json file val_hamming_loss_withThreshold_lr001_df = pd.DataFrame({'ce_FF_full_lr0001' : ce_FF_full_withThreshold, 'ce_RNN_full_lr0001' : ce_RNN_full_withThreshold}, index = [0]) #with open("Full Data Eval Metrics/val_hamming_loss_withThreshold_lr0001.json", "w") as outfile: # val_hamming_loss_withThreshold_lr001_df.to_json(outfile) val_hamming_loss_withThreshold_lr001_df
_____no_output_____
MIT
codes/ANN Results/Generate_ANN_Results.ipynb
architdatar/NewsArticleClassification
Azure Content Moderator API Reference: https://docs.microsoft.com/en-us/azure/cognitive-services/Content-Moderator/api-reference
import requests subscription_key = 'YOUR_SUBSCRIPTION_KEY' endpoint = 'YOUR_ENDPOINT_URL' request_url = f'{endpoint}/contentmoderator/moderate/v1.0/ProcessText/Screen' headers = { 'Content-Type': 'text/plain', 'Ocp-Apim-Subscription-Key': subscription_key, } params = { 'classify': True, } body = 'Is this a crap email [email protected], phone: 6657789887, IP: 255.255.255.255, 1 Microsoft Way, Redmond, WA 98052' response = requests.post(request_url, data=body, headers=headers, params=params) response.json()
_____no_output_____
MIT
evaluate-text-with-azure-cognitive-language-services/content-moderator.ipynb
zkan/azure-ai-engineer-associate-workshop
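A hedged sketch of pulling a few fields out of the Screen response above. The key names (`PII`, `Terms`, `Classification`) follow the API reference linked at the top of this section, but they should be checked against the JSON actually returned, since the schema can change between API versions; the snippet reuses the `response` object from the previous cell.

```python
result = response.json()

pii = result.get('PII') or {}
print('Emails:', [e.get('Text') for e in pii.get('Email', [])])
print('Phones:', [p.get('Text') for p in pii.get('Phone', [])])
print('IP addresses:', [ip.get('Text') for ip in pii.get('IPA', [])])

print('Flagged terms:', [t.get('Term') for t in (result.get('Terms') or [])])

classification = result.get('Classification') or {}
print('Review recommended:', classification.get('ReviewRecommended'))
```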
Data Analytic Boot Camp - ETL Project How does a restaurant's inspection score compare to its Yelp customer review rating? We always rely on the applications on our digital devices to look for highly rated restaurants. However, do the restaurants rated highly by customers also provide a clean and healthy food environment for their customers? This project tries to answer this question by building an ETL flow. Resources: - Restaurant Inspection Scores, San Francisco Department of Public Health (after cleaning the data, n = 54314) - Customer-Based Rating Scores, Yelp API (after cleaning the data, n = 4049) Note: Yelp partnered with the local city government to develop the Local Inspector Value-Entry Specification (LIVES) system. However, the system is partnered with other local web developers and has no link to the Yelp database.
# Dependencies import pandas as pd import os import csv import requests import json import numpy as np from config_1 import ykey # Database Connection Dependencies import sqlalchemy from sqlalchemy.ext.automap import automap_base from sqlalchemy.orm import Session from sqlalchemy import create_engine, inspect import sqlite3 # Import Matplot Lib import matplotlib from matplotlib import style style.use('seaborn') import matplotlib.pyplot as plt # Loading the CSV file csvpath = os.path.join(".", "Resources", "Restaurant_Scores_-_LIVES_Standard.csv") inspection_scores = pd.read_csv(csvpath) # Rename the headers of the dataframe for merging in the database inspection_scores = inspection_scores.rename(index=str, columns={"business_name":"name", "business_address":"address"}) inspection_scores["zip"] = inspection_scores["business_postal_code"].astype(str) inspection_scores['phone'] = inspection_scores['business_phone_number'].astype(str) # Count the unique value of business id in the data inspection_scores['business_id'].value_counts() inspection_scores['business_id'].nunique() # Make the business name and address lower case for merging in the database inspection_scores['name'] = inspection_scores['name'].str.lower() inspection_scores['address'] = inspection_scores['address'].str.lower() # Modify the zip code and business phone number for referencing the business in the database inspection_scores["zip"] = inspection_scores["zip"].str[:5] inspection_scores['phone'] = inspection_scores['phone'].map(lambda x: str(x)[:-2]) # Drop the unnecessary data in the dataframe inspection_df = inspection_scores.drop(['business_postal_code', 'business_latitude', 'business_longitude', 'business_location', 'business_phone_number', 'Neighborhoods', 'Police Districts', 'Supervisor Districts', 'Fire Prevention Districts', 'Zip Codes', 'Analysis Neighborhoods'], axis=1) # Check the dataframe after cleaning the data print(len(inspection_df)) inspection_df.head() # Cleaning the zip codes in the data frame and create a list for API request zip_codes = inspection_scores["zip"].unique() zip_codes = zip_codes[zip_codes != "CA"] zip_codes = zip_codes[zip_codes != "Ca"] zip_codes = zip_codes[zip_codes != "0"] zip_codes = zip_codes[zip_codes != "941"] zip_codes = zip_codes.tolist() del zip_codes[7] zip_codes = [int(i) for i in zip_codes] print(zip_codes) # Save the cleaned data frame as a csv file for later use inspection_df.to_csv("Resources/Restaurant_Scores_-_LIVES_Standard_Cleaned.csv", index=False, header=True)
_____no_output_____
MIT
ETL_Project_Completed.ipynb
NormanLo4319/ETL-Project
Yelp API Request We tried two ways to extract data from the Yelp API: 1. Search by city location (San Francisco) 2. Search by zip codes This project uses method 1 because method 2 creates a bunch of duplicates that are difficult to clean later on.
# Testing Yelp API request for extracting the business related data # Yelp API key is stored in ykey headers = {"Authorization": "bearer %s" % ykey} endpoint = "https://api.yelp.com/v3/businesses/search" name = [] rating = [] review_count = [] address = [] city = [] state = [] zip_ = [] phone = [] # Define the parameters params = {"term": "restaurants", "location": "San Francisco", "radius": 5000, "categories": "food", "limit": 50, "offset":0} print(params) for j in range(0, 50): try: # Make a request to the Yelp API response = requests.get(url = endpoint, params = params, headers = headers) data_response = response.json() # Add the total counts of fast food stores to "total" # print(json.dumps(data_response, indent=4, sort_keys=True)) print(data_response["businesses"][j]["name"]) name.append(data_response["businesses"][j]["name"]) print(data_response["businesses"][j]["rating"]) rating.append(data_response["businesses"][j]["rating"]) print(data_response["businesses"][j]["review_count"]) review_count.append(data_response["businesses"][j]["review_count"]) print(data_response["businesses"][j]["location"]["address1"]) address.append(data_response["businesses"][j]["location"]["address1"]) print(data_response["businesses"][j]["location"]["city"]) city.append(data_response["businesses"][j]["location"]["city"]) print(data_response["businesses"][j]["location"]["state"]) state.append(data_response["businesses"][j]["location"]["state"]) print(data_response["businesses"][j]["location"]["zip_code"]) zip_.append(data_response["businesses"][j]["location"]["zip_code"]) print(data_response["businesses"][j]["phone"]) phone.append(data_response["businesses"][j]["phone"]) except KeyError: print("no restaurant found!") # Print out the responses print(data_response['businesses'][1]['name']) print(data_response['businesses'][1]['rating']) print(data_response['businesses'][1]['price']) print(data_response['businesses'][1]['location']['address1']) print(data_response['businesses'][1]['location']['state']) print(data_response['businesses'][1]['location']['city']) print(data_response['businesses'][1]['location']['zip_code']) # Extract data from Yelp API by location = San Francisco # Yelp API key is stored in ykey headers = {"Authorization": "bearer %s" % ykey} endpoint = "https://api.yelp.com/v3/businesses/search" name = [] rating = [] review_count = [] address = [] city = [] state = [] zip_ = [] phone = [] # Sending 100 requests and each request will return 50 restaurants for i in range(0, 100): for j in range(50): try: # Define the parameters params = {"term": "restaurants", "location": "San Francisco", "radius": 40000, "categories": "food", "limit": 50, "offset":(i*5)} print(params) # Make a request to the Yelp API response = requests.get(url = endpoint, params = params, headers = headers) data_response = response.json() # Add the total counts of fast food stores to "total" print(data_response["businesses"][j]["name"]) name.append(data_response["businesses"][j]["name"]) print(data_response["businesses"][j]["rating"]) rating.append(data_response["businesses"][j]["rating"]) print(data_response["businesses"][j]["review_count"]) review_count.append(data_response["businesses"][j]["review_count"]) print(data_response["businesses"][j]["location"]["address1"]) address.append(data_response["businesses"][j]["location"]["address1"]) print(data_response["businesses"][j]["location"]["city"]) city.append(data_response["businesses"][j]["location"]["city"]) print(data_response["businesses"][j]["location"]["state"]) 
state.append(data_response["businesses"][j]["location"]["state"]) print(data_response["businesses"][j]["location"]["zip_code"]) zip_.append(data_response["businesses"][j]["location"]["zip_code"]) print(data_response["businesses"][j]["phone"]) phone.append(data_response["businesses"][j]["phone"]) except KeyError: print("no restaurant found!") # print(json.dumps(data, indent=4, sort_keys=True)) # Extract data from Yelp API by location = zip code # Yelp API key is stored in ykey headers = {"Authorization": "bearer %s" % ykey} endpoint = "https://api.yelp.com/v3/businesses/search" name = [] rating = [] review_count = [] address = [] city = [] state = [] zip_ = [] phone = [] for k in zip_codes: for i in range(0, 20): for j in range(50): try: # Define the parameters params = {"term": "restaurants", "location": k, "radius": 5000, "categories": "food", "limit": 50, "offset":(i*5)} print(params) # Make a request to the Yelp API response = requests.get(url = endpoint, params = params, headers = headers) data_response = response.json() # Add the total counts of fast food stores to "total" print(data_response["businesses"][j]["name"]) name.append(data_response["businesses"][j]["name"]) print(data_response["businesses"][j]["rating"]) rating.append(data_response["businesses"][j]["rating"]) print(data_response["businesses"][j]["review_count"]) review_count.append(data_response["businesses"][j]["review_count"]) print(data_response["businesses"][j]["location"]["address1"]) address.append(data_response["businesses"][j]["location"]["address1"]) print(data_response["businesses"][j]["location"]["city"]) city.append(data_response["businesses"][j]["location"]["city"]) print(data_response["businesses"][j]["location"]["state"]) state.append(data_response["businesses"][j]["location"]["state"]) print(data_response["businesses"][j]["location"]["zip_code"]) zip_.append(data_response["businesses"][j]["location"]["zip_code"]) print(data_response["businesses"][j]["phone"]) phone.append(data_response["businesses"][j]["phone"]) except KeyError: print("no restaurant found!") # print(json.dumps(data, indent=4, sort_keys=True)) # Assign keys to the json data keys = {"name":name, "rating":rating, "reviews":review_count, "address":address, "city":city, "state":state, "zip_code":zip_, "phone":phone} print(len(name)) # Create a data frame for the json data yelp_df = pd.DataFrame(keys) yelp_df.head() # Save the Yelp API data into a csv file for future work yelp_df.to_csv("Resources/yelp_api.csv", index=False, header=True) # Cleaning the data for merging in the database yelp_df['zip_code'].dtype yelp_df['name'] = yelp_df['name'].str.lower() yelp_df['address'] = yelp_df['address'].str.lower() yelp_df['zip_code'] = yelp_df['zip_code'].astype(str) yelp_df['phone'] = yelp_df['phone'].astype(str) yelp_df['zip'] = yelp_df['zip_code'].map(lambda x: str(x)[:-2]) yelp_df['phone'] = yelp_df['phone'].map(lambda x: str(x)[:-2]) yelp_df = yelp_df.drop(['zip_code'], axis=1) yelp_df.head() # print(len(yelp_df))
_____no_output_____
MIT
ETL_Project_Completed.ipynb
NormanLo4319/ETL-Project
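Had the zip-code search (method 2) been used, nearby zip codes would return overlapping businesses, so an explicit de-duplication step would be needed. A minimal sketch of that clean-up on the frame built above (illustration only; the notebook sticks with the single city-wide search):

```python
before = len(yelp_df)
yelp_dedup = yelp_df.drop_duplicates(subset=['name', 'address', 'zip'], keep='first').reset_index(drop=True)
print(f"{before} rows -> {len(yelp_dedup)} rows after dropping duplicate listings")
```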
Storing the data to the SQLite database: There are two ways to store the data in the SQLite database, 1. Using the pandas method "dataframe.to_sql()" 2. Creating a metadata base and appending data from the data frames to specific tables in the database. This project uses the second method because the to_sql() method does not let the database create a primary key for the data. Storing the data to MySQL: The data is also stored in a MySQL database and the SQL commands are saved in a separate file
# Import SQL Alchemy from sqlalchemy import create_engine # Import and establish Base for which classes will be constructed from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() # Import modules to declare columns and column data types from sqlalchemy import Column, Integer, String, Float # Loading the csv file back to pandas dataframe inspect_csvpath = os.path.join(".", "Resources", "Restaurant_Scores_-_LIVES_Standard.csv") inspection_scores = pd.read_csv(inspect_csvpath) yelp_csvpath = os.path.join(".", "Resources", "yelp_api.csv") yelp_0 = pd.read_csv(yelp_csvpath) # Create the inspection class class inspection(Base): __tablename__ = 'inspection' id = Column(Integer, primary_key=True) business_id = Column(Integer) name = Column(String(255)) address = Column(String(255)) business_city = Column(String(255)) business_state = Column(String(255)) inspection_id = Column(String(255)) inspection_date = Column(String(255)) inspection_score = Column(Float) inspection_type = Column(String(500)) violation_id = Column(String(255)) violation_description = Column(String(800)) risk_category = Column(String(255)) zip = Column(String(255)) phone = Column(String(255)) # Create a connection to a SQLite database engine = create_engine('sqlite:///ELT_Project.db') Base.metadata.create_all(engine) # To push the objects made and query the server we use a Session object from sqlalchemy.orm import Session session = Session(bind=engine) # Appending the dataframe into database for i in range(len(inspection_df['name'])): inspect = inspection(business_id = inspection_scores['business_id'][i], address = inspection_scores['address'][i], business_city = inspection_scores['business_city'][i], business_state = inspection_scores['business_state'][i], inspection_id = inspection_scores['inspection_id'][i], inspection_date = inspection_scores['inspection_date'][i], inspection_score = inspection_scores['inspection_score'][i], inspection_type = inspection_scores['inspection_type'][i], violation_id = inspection_scores['violation_id'][i], violation_description = inspection_scores['violation_description'][i], risk_category = inspection_scores['risk_category'][i], zip = inspection_scores['zip'][i], phone = inspection_scores['phone'][i]) session.add(inspect) session.commit() # Create the inspection class class yelp(Base): __tablename__ = 'yelp' id = Column(Integer, primary_key=True) name = Column(String(255)) rating = Column(Float) reviews = Column(Integer) address = Column(String(255)) city = Column(String(255)) state = Column(String(255)) phone = Column(String(255)) zip = Column(String(255)) # Appending the dataframe into database for j in range(len(yelp_df['name'])): y = yelp(name = yelp_df['name'][j], rating = yelp_df['rating'][j], reviews = yelp_df['reviews'][j], address = yelp_df['address'][j], city = yelp_df['city'][j], state = yelp_df['state'][j], phone = yelp_df['phone'][j], zip = yelp_df['zip'][j]) print(y) session.add(y) session.commit() # Checking the data in the database engine engine.execute("SELECT * FROM inspection").fetchall() engine.execute("SELECT * FROM yelp").fetchall() # Checking the table names inspector = inspect(engine) inspector.get_table_names() # checking the header names in the inspection scores table columns = inspector.get_columns('inspection') for i in columns: print(i['name'], i['type']) # Checking the header names in the yelp rating table columns = inspector.get_columns('yelp') for j in columns: print(j["name"], j["type"])
_____no_output_____
MIT
ETL_Project_Completed.ipynb
NormanLo4319/ETL-Project
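For comparison, method 1 would be a one-liner per table with pandas' `to_sql()`; as noted above, it does not set up an auto-incrementing primary key, which is why the declarative `Base` approach was used. A minimal sketch, written to a separate demo database file so the project database above is not touched (the frames `inspection_df` and `yelp_df` are assumed to still be in memory):

```python
import sqlite3

with sqlite3.connect('ELT_Project_to_sql_demo.db') as conn:
    # Bulk-insert the cleaned frames; no primary key column is created this way.
    inspection_df.to_sql('inspection_plain', conn, if_exists='replace', index=False)
    yelp_df.to_sql('yelp_plain', conn, if_exists='replace', index=False)
```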
Analysis on restaurant inspection scores and customer-based rating Using matplotlib for plotting the joined data. After joining the data, only 122 businesses can be matched by business name and zip code
joined_df = pd.merge(inspection_df, yelp_df, on=['name', 'zip']) # joined_df.head(100) # print(len(joined_df)) joined_df['name'].nunique() # Cleaning the data in the joined data frame joined_df = joined_df.dropna(subset=['inspection_score']) joined_df = joined_df.drop_duplicates(subset='business_id', keep='first') joined_df.head(10) # Plotting the inspection scores from the joined data frame ax_1 = joined_df.plot(x='name', y='inspection_score', style='o', title="Inspection Scores") fig_1 = ax_1.get_figure() fig_1.savefig('./Images/Inspection_Scores.png') # Plotting the yelp rating from the joined data frame ax_2 = joined_df.plot(x='name', y='rating', style='^', title="Yelp Rating") fig_2 = ax_2.get_figure() fig_2.savefig('./Images/Yelp_Scores.png') # Plotting the inspection scores and yelp rating plt.scatter(joined_df['rating'], joined_df['inspection_score']) plt.title("Inspection Scores Vs. Yelp Rating") plt.xlabel("Yelp Rating (Scale: 0 - 5)") plt.ylabel("Inspection Score") plt.savefig('./Images/inspection_scores_vs_yelp_rating.png')
_____no_output_____
MIT
ETL_Project_Completed.ipynb
NormanLo4319/ETL-Project
Challenge 1 (Desafio 1) - Choose a more descriptive title that conveys the right message
UF = "Unidade da Federação" Ano_Mes = "2008/Ago" ax = dados.plot(x = UF, y = Ano_Mes, kind = "bar", figsize = (9, 6)) ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:,.2f}")) plt.title("Despesas em procedimentos hospitalares do SUS \n {} - Processados em {}".format(UF, Ano_Mes)) plt.show()
_____no_output_____
MIT
notebooks/modulo_01/aula_02_primeiras_visualizacoes_de_dados.ipynb
daviramalho/Bootcamp-DS2-Alura
Challenge 01.2 (Desafio 01.2) - Do the same analysis for the most recent month you have.
UF = "Unidade da Federação" Ano_Mes = "2021/Mar" ax = dados.plot(x = UF, y = Ano_Mes, kind = "bar", figsize = (9, 6)) ax.yaxis.set_major_formatter(ticker.StrMethodFormatter("{x:,.2f}")) plt.title("Despesas em procedimentos hospitalares do SUS \n {} - Processados em {}".format(UF, Ano_Mes)) #plt.savefig("../../reports/figures/modulo_01/desafio01_2.jpg", dpi=150, bbox_inches="tight") plt.show()
_____no_output_____
MIT
notebooks/modulo_01/aula_02_primeiras_visualizacoes_de_dados.ipynb
daviramalho/Bootcamp-DS2-Alura
LESSON 02 (AULA 02) - First Data Visualizations
dados[["2008/Ago", "2008/Set"]].head() dados.mean() colunas_usaveis = dados.mean().index.tolist() colunas_usaveis.insert(0, "Unidade da Federação") colunas_usaveis usaveis = dados[colunas_usaveis] usaveis.head() usaveis = usaveis.set_index("Unidade da Federação") usaveis.head() usaveis["2019/Ago"].head() usaveis.loc["12 Acre"] usaveis.plot(figsize = (12, 6)) usaveis.T.head() usaveis.T.plot(figsize = (12,6)) plt.show() usaveis.T.tail() usaveis = usaveis.drop("Total", axis = 1) usaveis.head() usaveis.T.plot(figsize = (12,6)) plt.show()
_____no_output_____
MIT
notebooks/modulo_01/aula_02_primeiras_visualizacoes_de_dados.ipynb
daviramalho/Bootcamp-DS2-Alura
CHALLENGE 02.1 (DESAFIO 02.1) - Reposition the legend outside the chart
estados = "Todos os Estados" ano_selecionado = "2007/Ago a 2021/Mar" ax2 = usaveis.T.plot(figsize = (12,6)) ax2.legend(loc = 6, bbox_to_anchor = (1, 0.5)) ax2.yaxis.set_major_formatter(ticker.StrMethodFormatter("R$ {x:,.2f}")) plt.title("Despesas em procedimentos hospitalares do SUS por local de internação \n {} - Processados em {}".format(estados, ano_selecionado)) #plt.savefig("../../reports/figures/modulo_01/desafio02_1.jpg", dpi=150, bbox_inches="tight") plt.show()
_____no_output_____
MIT
notebooks/modulo_01/aula_02_primeiras_visualizacoes_de_dados.ipynb
daviramalho/Bootcamp-DS2-Alura
CHALLENGE 02.2 (DESAFIO 02.2) - Plot the line chart with only 5 states of your choice
estados = "PA, MG, CE, RS e SP" ano_selecionado = "2007/Ago a 2021/Mar" usaveis_selecionados = usaveis.loc[["15 Pará", "31 Minas Gerais", "23 Ceará", "43 Rio Grande do Sul", "35 São Paulo"]] ax3 = usaveis_selecionados.T.plot(figsize = (12,6)) ax3.legend(loc = 6, bbox_to_anchor = (1, 0.5)) ax3.yaxis.set_major_formatter(ticker.StrMethodFormatter("R$ {x:,.2f}")) plt.title("Despesas em procedimentos hospitalares do SUS por local de internação \n {} - Processados em {}".format(estados, ano_selecionado)) #plt.savefig("../../reports/figures/modulo_01/desafio02_2.jpg", dpi=150, bbox_inches="tight") plt.show()
_____no_output_____
MIT
notebooks/modulo_01/aula_02_primeiras_visualizacoes_de_dados.ipynb
daviramalho/Bootcamp-DS2-Alura
---
md = webdriver.Chrome() # 오픈 md.get('https://cloud.google.com/vision/') # 해당 주소로 이동 md.set_window_size(900,700) # size setting md.execute_script("window.scrollTo(0, 1000);") # 브라우저 스크롤 이동
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
Save the current window handle
main = md.current_window_handle
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
Open a new tab (without changing focus)
md.execute_script("window.open('https://www.google.com');") windows = md.window_handles # 윈도우 체크 windows
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
switch_to_window : change focus
md.switch_to_window(windows[1]) md.get('https://www.naver.com') md.switch_to_window(main) md.execute_script('location.reload()') #새로고침
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
control alert
md.execute_script('alert("selenium test")') alert = md.switch_to_alert() print(alert.text) alert.accept() md.execute_script('alert("selenium test")') md.switch_to_alert().accept() md.execute_script("confirm('confirm?')") # alert = md.switch_to_alert() print(alert.text) # alert.accept() alert.dismiss()
confirm?
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
input key & button
md.switch_to_window(windows[1]) md.find_element_by_css_selector('#query').send_keys('test') md.find_element_by_css_selector(".ico_search_submit").click()
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
close driver
md.close() # one for one for i in md.window_handles: md.switch_to_window(i) md.close()
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
--- file upload https://visual-recognition-demo.ng.bluemix.net https://cloud.google.com/vision/
cr = webdriver.Chrome() cr.get('https://cloud.google.com/vision/')
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
When working with an iframe, the focus must be switched into it
iframe = cr.find_element_by_css_selector('#vision_demo_section > iframe ') cr.switch_to_frame(iframe) # Switch back default content # cr.switch_to_default_content() #아이프레임 밖으로 포커스 이동 path = !pwd #현재 디렉토리위치 리스트 print(type(path), path) file_path = path[0] + "/screenshot_element.png" cr.find_element_by_css_selector('#input').send_keys(file_path) # 파입 업로드 cr.find_element_by_css_selector('#safeSearchAnnotation').click()
_____no_output_____
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
Print the score for each Safe Search category
a = cr.find_elements_by_css_selector('#card div.row.style-scope.vs-safe') for i in a: print(i.text)
Adult Very Unlikely Spoof Unlikely Medical Very Unlikely Violence Very Unlikely Racy Very Unlikely
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
--- Run everything at once
driver = webdriver.Chrome() driver.get('https://cloud.google.com/vision/') iframe = driver.find_element_by_css_selector("#vision_demo_section iframe") driver.switch_to_frame(iframe) file_path = path[0] + "/screenshot_element.png" driver.find_element_by_css_selector("#input").send_keys(file_path) time.sleep(15) # 이미지를 업로드하고 데이터를 분석하는 시간 driver.find_element_by_css_selector("#safeSearchAnnotation").click() a = driver.find_elements_by_css_selector('#card div.row.style-scope.vs-safe') for i in a: print(i.text) driver.close()
Adult Very Unlikely Spoof Unlikely Medical Very Unlikely Violence Very Unlikely Racy Very Unlikely
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
Run while checking that the element exists
def check_element(driver, selector): try: driver.find_element_by_css_selector(selector) return True except: return False driver = webdriver.Chrome() driver.get('https://cloud.google.com/vision/') iframe = driver.find_element_by_css_selector("#vision_demo_section iframe") driver.switch_to_frame(iframe) file_path = path[0] + "/screenshot_element.png" driver.find_element_by_css_selector("#input").send_keys(file_path) selector = '#card div.row.style-scope.vs-safe' sec, limit_sec = 0, 10 while True: sec += 1 print("{}sec".format(sec)) time.sleep(1) # element 확인 if check_element(driver, selector): driver.find_element_by_css_selector("#safeSearchAnnotation").click() a = driver.find_elements_by_css_selector('#card div.row.style-scope.vs-safe') for i in a: print(i.text) driver.close() break; # limit_sec가 넘어가면 에러 처리 if sec + 1 > limit_sec: print("error") driver.close() break;
1sec 2sec 3sec 4sec Adult Very Unlikely Spoof Unlikely Medical Very Unlikely Violence Very Unlikely Racy Very Unlikely
MIT
Past/DSS/Programming/Scraping/180220_selenium.ipynb
Moons08/TIL
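Selenium also ships an explicit-wait helper that replaces the manual one-second polling loop above; it blocks until the element appears or a timeout is hit. This is a sketch of that alternative (not used in the original notebook), reusing the Google Vision demo page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get('https://cloud.google.com/vision/')
try:
    # Wait up to 10 seconds for the demo iframe to be present in the DOM.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, '#vision_demo_section iframe'))
    )
    print('iframe is present')
except Exception:
    print('timed out waiting for the iframe')
finally:
    driver.close()
```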
Precious function
U.shape def get_dfU(U, b,step): d = U.T.shape[1] dfU = pd.DataFrame(U) dim = [i for i in range(d)] dfU[[str(i)+"_follower" for i in list(dfU.columns)]] = dfU[dfU.columns] for k in range(d): dfU_temp = dfU.copy() dfU_temp[k] = dfU_temp[k].apply(lambda x: x+step if x<1 else x-step ) find_in = list(dfU[dim].apply(lambda x: list(np.around(x, 3)),axis=1)) dfU[str(k)+"_follower"] = dfU_temp[dim].apply(lambda x: list(np.around(x, 3)), axis =1).apply(lambda x: find_in.index(x)) dfU['b']=pd.DataFrame(b).apply(np.array,axis=1) for i in range(d): dfU['beta_'+str(i)]=(dfU.loc[list(dfU[str(i)+"_follower"])][['b']].reset_index(drop=True) - dfU[['b']])/step beta = ['beta_'+str(i) for i in range(2)] dfU['beta'] = dfU[beta].apply(lambda x : np.vstack(x), axis = 1) return dfU xeval = np.array([1, 883.99, #883.99**2 ]) xeval.shape xeval.shape ==(d,) df = get_dfU(U, b,step) y_hat = df['beta'].apply(lambda x : np.matmul(x, xeval)) y_1_hat = y_hat.apply(lambda x: x[0]) y_2_hat = y_hat.apply(lambda x: x[1]) y_1_hat = np.abs(y_1_hat) y_2_hat = np.abs(y_2_hat) np.mean(y_1_hat + y_2_hat)
_____no_output_____
BSD-3-Clause
.ipynb_checkpoints/Python Implementation-checkpoint.ipynb
DatenBiene/Vector_Quantile_Regression
Building a RF model for $\alpha _x$
from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestRegressor rscl_df = pd.DataFrame(rscl_data, columns=[f"Feat_{i}" for i in range(300)]) rscl_df.head() model = RandomForestRegressor(n_estimators=40, max_features='sqrt', min_samples_split=5, n_jobs=-1) X_train, X_test, y_train, y_test = train_test_split(rscl_df, y_classes['αx'], test_size=0.2, random_state=2, stratify = params['stencil_type']) model.fit(X_train, y_train) model.score(X_test ,y_test) fi = pd.DataFrame(np.array([rscl_df.columns,model.feature_importances_]).T, columns=['timestamp','fi']) fi.sort_values('fi', ascending=False, inplace=True) fi.head(10) fi.plot('timestamp', 'fi', figsize=(10,6), legend=False); def plot_fi(fi): return fi.plot('timestamp', 'fi', 'barh', legend=False) plot_fi(fi[:60]); to_keep = fi[fi.fi>0.005].timestamp; len(to_keep) df_keep = rscl_df[to_keep].copy() X_train, X_test, y_train, y_test = train_test_split(df_keep, y_classes['αx'], test_size=0.2, random_state=2, stratify = params['stencil_type']) model2 = RandomForestRegressor(n_estimators=40, max_features="sqrt", n_jobs=-1, oob_score=True) model2.fit(X_train, y_train) model2.score(X_test, y_test) fi2 = pd.DataFrame(np.array([df_keep.columns, model2.feature_importances_]).T, columns=['timestamp','fi']) fi2.sort_values('fi', ascending=False, inplace=True) fi2.plot('timestamp', 'fi', figsize=(10,6)) fig,ax = plt.subplots(figsize=(16,14)) fi2.plot('timestamp', 'fi', 'barh', legend=False, ax=ax)
_____no_output_____
MIT
Week7/july19_hierarchial_clustering.ipynb
Anantha-Rao12/NMR-quantumstates-GSOC21
Removing redundant features
import scipy from scipy.cluster import hierarchy as hc corr = np.round(scipy.stats.spearmanr(df_keep).correlation, 4) corr_condensed = hc.distance.squareform(1-corr) z = hc.linkage(corr_condensed, method='average') fig = plt.figure(figsize=(18,14)) dendrogram = hc.dendrogram(z, labels=df_keep.columns, orientation='left', leaf_font_size=16) plt.show()
_____no_output_____
MIT
Week7/july19_hierarchial_clustering.ipynb
Anantha-Rao12/NMR-quantumstates-GSOC21
Let's try removing some of these related features to see if the model can be simplified without impacting the accuracy.
def get_oob(df): m = RandomForestRegressor(n_estimators=30, min_samples_leaf=5, max_features=0.6, n_jobs=-1, oob_score=True) x, _ = split_vals(df, n_trn) m.fit(x, y_train) return m.oob_score_ !pip install pdpbox from pdpbox import pdp from plotnine import *
_____no_output_____
MIT
Week7/july19_hierarchial_clustering.ipynb
Anantha-Rao12/NMR-quantumstates-GSOC21
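`split_vals` and `n_trn` in the cell above come from the fastai course helpers and are not defined anywhere in this notebook, so `get_oob` will not run as written. A self-contained variant of the same check — drop one member of each correlated pair found in the dendrogram and compare out-of-bag scores — could look like the sketch below; the two feature names are placeholders and should be replaced with columns picked from the dendrogram.

```python
from sklearn.ensemble import RandomForestRegressor

def oob_without(df, y, drop_cols=()):
    m = RandomForestRegressor(n_estimators=30, min_samples_leaf=5,
                              max_features=0.6, n_jobs=-1, oob_score=True)
    m.fit(df.drop(columns=list(drop_cols)), y)
    return m.oob_score_

baseline = oob_without(df_keep, y_classes['αx'])
for col in ['Feat_0', 'Feat_1']:  # placeholder names -- pick real ones from the dendrogram
    print(col, oob_without(df_keep, y_classes['αx'], [col]), 'vs baseline', baseline)
```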
Examine sample file from shipboard real-time processed ADCPThis is a pre-cruise examination of the data file to see how to truncate it for sending to shore during the cruise.
import xarray as xr import numpy as np import matplotlib.pyplot as plt import matplotlib import cftime datapath = '../data/raw/shipboard_adcp_initial_look/' file = datapath + 'wh300.nc' ds = xr.open_dataset(file,drop_variables=['amp','pg','pflag','num_pings','tr_temp']) ds ds2=ds.sel(time=slice("2021-09-06", "2021-09-07")) ds2 ds2.to_netcdf(datapath + 'wh300_last_day.nc') fig = plt.figure() plt.contourf(ds2.u) # plt.plot(ds2.uship)
_____no_output_____
MIT
code/truncate_shipboard_adcp.ipynb
jtomfarrar/S-MODE_analysis
METAS uncLib https://www.metas.ch/metas/en/home/fabe/hochfrequenz/unclib.html
from metas_unclib import * import matplotlib.pyplot as plt from sigfig import round %matplotlib inline use_mcprop(n=100000) #use_linprop() def uncLib_PlotHist(mcValue, xLabel='Value / A.U.', yLabel='Probability', title='Histogram of value', bins=1001, coverage=0.95): hObject = mcValue.net_object hValues = [float(bi) for bi in hObject.values] y,x,_ = plt.hist(hValues, bins=bins, density=True) plt.xlabel(xLabel) plt.title(title) plt.ylabel(yLabel) # stat over all coverage_interval=[np.mean(hValues), np.percentile(hValues, ((1.0-coverage)/2.0) * 100), np.percentile(hValues, (coverage+((1.0-coverage)/2.0)) * 100)] plt.axvline( coverage_interval[0]) plt.axvline( coverage_interval[1]) plt.axvline( coverage_interval[2]) outString = round(str(coverage_interval[0]), uncertainty=str((coverage_interval[2]-coverage_interval[1])/2)) plt.text( coverage_interval[2], max(y)/2, outString) plt.show() return [[y,x], coverage_interval]
_____no_output_____
CC0-1.0
empir19nrm02/Jupyter/IBudgetMETAS.ipynb
AndersThorseth/empir19nrm02
Measurement Uncertainty Simplest Possible Example Define the parameter for the calibration factor
k_e = ufloat(0.01, 0.0000045) k_e
_____no_output_____
CC0-1.0
empir19nrm02/Jupyter/IBudgetMETAS.ipynb
AndersThorseth/empir19nrm02
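A minimal usage sketch for the calibration factor defined above, with a purely hypothetical instrument reading (the value and uncertainty below are made up for illustration): propagate it through a simple model `E = y_reading / k_e` and inspect the Monte Carlo histogram with the `uncLib_PlotHist` helper defined earlier.

```python
y_reading = ufloat(2.5, 0.002)   # hypothetical reading, not a real measurement
E = y_reading / k_e              # quantity of interest propagated through the model

print(get_value(E), get_stdunc(E))  # best estimate and standard uncertainty
uncLib_PlotHist(E, xLabel='E / A.U.', title='Histogram of E = y_reading / k_e')
```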