Dataset columns: markdown (string, 0 to 1.02M chars), code (string, 0 to 832k chars), output (string, 0 to 1.02M chars), license (string, 3 to 36 chars), path (string, 6 to 265 chars), repo_name (string, 6 to 127 chars)
Training

Train in two stages:

1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones for which we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.

2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers="all"` to train all layers.
# Train the head branches # Passing layers="heads" freezes all layers except the head # layers. You can also pass a regular expression to select # which layers to train by name pattern. model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE, epochs=1, layers='heads') # Fine tune all layers # Passing layers="all" trains all layers. You can also # pass a regular expression to select which layers to # train by name pattern. #model.train(dataset_train, dataset_val, # learning_rate=config.LEARNING_RATE / 10, # epochs=2, # layers="all") # Save weights # Typically not needed because callbacks save after every epoch # Uncomment to save manually # model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5") # model.keras_model.save_weights(model_path)
_____no_output_____
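As an aside, the comments above mention selecting layers by a name pattern. A minimal sketch of that option, assuming the same matterport-style `model.train()` API used in this notebook (the regex below is similar to what the predefined `'heads'` shortcut expands to in that implementation):

```python
# Hypothetical illustration: select the randomly initialized head layers by
# matching their names with a regular expression instead of the 'heads' shortcut.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            epochs=1,
            layers=r"(mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)")
```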
MIT
train_t_shirts.ipynb
lumstery/maskrcnn
Detection
class InferenceConfig(ShapesConfig): GPU_COUNT = 1 IMAGES_PER_GPU = 1 inference_config = InferenceConfig() # Recreate the model in inference mode model = modellib.MaskRCNN(mode="inference", config=inference_config, model_dir=MODEL_DIR) # Get path to saved weights # Either set a specific path or find last trained weights # model_path = os.path.join(ROOT_DIR, ".h5 file name here") model_path = model.find_last()[1] # Load trained weights (fill in path to trained weights here) assert model_path != "", "Provide path to trained weights" print("Loading weights from ", model_path) model.load_weights(model_path, by_name=True) # Test on a random image image_id = random.choice(dataset_val.image_ids) original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) log("original_image", original_image) log("image_meta", image_meta) log("gt_class_id", gt_class_id) log("gt_bbox", gt_bbox) log("gt_mask", gt_mask) visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id, dataset_train.class_names, figsize=(8, 8)) results = model.detect([original_image], verbose=1) r = results[0] visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'], dataset_val.class_names, r['scores'], figsize=(8, 8))
Processing 1 images image shape: (128, 128, 3) min: 5.00000 max: 255.00000 molded_images shape: (1, 128, 128, 3) min: -115.70000 max: 151.10000 image_metas shape: (1, 10) min: 0.00000 max: 128.00000
MIT
train_t_shirts.ipynb
lumstery/maskrcnn
Evaluation
# Compute VOC-Style mAP @ IoU=0.5 # Running on 10 images. Increase for better accuracy. image_ids = np.random.choice(dataset_val.image_ids, 10) APs = [] for image_id in image_ids: # Load image and ground truth data image, image_meta, gt_class_id, gt_bbox, gt_mask =\ modellib.load_image_gt(dataset_val, inference_config, image_id, use_mini_mask=False) molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0) # Run object detection results = model.detect([image], verbose=0) r = results[0] # Compute AP AP, precisions, recalls, overlaps =\ utils.compute_ap(gt_bbox, gt_class_id, gt_mask, r["rois"], r["class_ids"], r["scores"], r['masks']) APs.append(AP) print("mAP: ", np.mean(APs))
mAP: 1.0
MIT
train_t_shirts.ipynb
lumstery/maskrcnn
Ex2 - Filtering and Sorting Data

Check out the [Euro 12 Exercises Video Tutorial](https://youtu.be/iqk5d48Qisg) to watch a data scientist go through the exercises. This time we are going to pull data directly from the internet.

Step 1. Import the necessary libraries
import pandas as pd
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv). Step 3. Assign it to a variable called euro12.
euro12 = pd.read_csv('https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/02_Filtering_%26_Sorting/Euro12/Euro_2012_stats_TEAM.csv', sep=',') euro12
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 4. Select only the Goals column.
euro12.Goals
_____no_output_____
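Equivalently, bracket indexing selects the same column, and it also works for column names that contain spaces:

```python
# Same selection as euro12.Goals, using label-based bracket indexing
euro12['Goals']
```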
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 5. How many teams participated in Euro 2012?
euro12.shape[0]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 6. What is the number of columns in the dataset?
euro12.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 16 entries, 0 to 15 Data columns (total 35 columns): Team 16 non-null object Goals 16 non-null int64 Shots on target 16 non-null int64 Shots off target 16 non-null int64 Shooting Accuracy 16 non-null object % Goals-to-shots 16 non-null object Total shots (inc. Blocked) 16 non-null int64 Hit Woodwork 16 non-null int64 Penalty goals 16 non-null int64 Penalties not scored 16 non-null int64 Headed goals 16 non-null int64 Passes 16 non-null int64 Passes completed 16 non-null int64 Passing Accuracy 16 non-null object Touches 16 non-null int64 Crosses 16 non-null int64 Dribbles 16 non-null int64 Corners Taken 16 non-null int64 Tackles 16 non-null int64 Clearances 16 non-null int64 Interceptions 16 non-null int64 Clearances off line 15 non-null float64 Clean Sheets 16 non-null int64 Blocks 16 non-null int64 Goals conceded 16 non-null int64 Saves made 16 non-null int64 Saves-to-shots ratio 16 non-null object Fouls Won 16 non-null int64 Fouls Conceded 16 non-null int64 Offsides 16 non-null int64 Yellow Cards 16 non-null int64 Red Cards 16 non-null int64 Subs on 16 non-null int64 Subs off 16 non-null int64 Players Used 16 non-null int64 dtypes: float64(1), int64(29), object(5) memory usage: 4.4+ KB
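For a single number rather than the full `info()` summary, the second element of `shape` gives the column count directly:

```python
# (rows, columns) -> take the number of columns
euro12.shape[1]
```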
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 7. View only the columns Team, Yellow Cards and Red Cards and assign them to a dataframe called discipline
# filter only giving the column names discipline = euro12[['Team', 'Yellow Cards', 'Red Cards']] discipline
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 8. Sort the teams by Red Cards, then by Yellow Cards.
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 9. Calculate the mean Yellow Cards given per Team
round(discipline['Yellow Cards'].mean())
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 10. Filter teams that scored more than 6 goals
euro12[euro12.Goals > 6]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 11. Select the teams that start with G
euro12[euro12.Team.str.startswith('G')]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 12. Select the first 7 columns
# use .iloc to slice via the position of the passed integers # : means all rows, 0:7 means positions 0 up to (but not including) 7 euro12.iloc[: , 0:7]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 13. Select all columns except the last 3.
# use negative to exclude the last 3 columns euro12.iloc[: , :-3]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Step 14. Present only the Shooting Accuracy from England, Italy and Russia
# .loc is another way to slice, using the labels of the columns and indexes euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
_____no_output_____
BSD-3-Clause
02_Filtering_&_Sorting/Euro12/Exercises_with_Solutions.ipynb
arscool3/pandas_exercises
Data Processing
df = pd.read_csv('../data/num_data.csv') dataset = df dataset.shape def return_rmse(test,predicted): rmse = math.sqrt(mean_squared_error(test, predicted)) return rmse data_size = dataset.shape[0] train_size=int(data_size * 0.6) test_size = 100 valid_size = data_size - train_size - test_size training_set = dataset[:train_size].iloc[:,4:16].values valid_set = dataset[train_size:train_size+valid_size].iloc[:,4:16].values test_set = dataset[data_size-test_size:].iloc[:,4:16].values y = dataset.iloc[:,4].values y = y.reshape(-1,1) n_feature = training_set.shape[1] y.shape # Scaling the dataset sc = MinMaxScaler(feature_range=(0,1)) training_set_scaled = sc.fit_transform(training_set) valid_set_scaled = sc.fit_transform(valid_set) test_set_scaled = sc.fit_transform(test_set) sc_y = MinMaxScaler(feature_range=(0,1)) y_scaled = sc_y.fit_transform(y) # split a multivariate sequence into samples position_of_target = 4 def split_sequences(sequences, n_steps_in, n_steps_out): X_, y_ = list(), list() for i in range(len(sequences)): # find the end of this pattern end_ix = i + n_steps_in out_end_ix = end_ix + n_steps_out-1 # check if we are beyond the dataset if out_end_ix > len(sequences): break # gather input and output parts of the pattern seq_x, seq_y = sequences[i:end_ix, :], sequences[end_ix-1:out_end_ix, position_of_target] X_.append(seq_x) y_.append(seq_y) return np.array(X_), np.array(y_) n_steps_in = 12 n_steps_out = 12 X_train, y_train = split_sequences(training_set_scaled, n_steps_in, n_steps_out) X_valid, y_valid = split_sequences(valid_set_scaled, n_steps_in, n_steps_out) X_test, y_test = split_sequences(test_set_scaled, n_steps_in, n_steps_out) LSTM_4 = Sequential() LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],n_feature), activation='tanh')) LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],n_feature), activation='tanh')) LSTM_4.add(LSTM(units=50, return_sequences=True, input_shape=(X_train.shape[1],n_feature), activation='tanh')) LSTM_4.add(LSTM(units=50, activation='tanh')) LSTM_4.add(Dense(units=n_steps_out)) # Compiling the RNNs adam = optimizers.Adam(lr=0.01) LSTM_4.compile(optimizer=adam,loss='mean_squared_error') RnnModelDict = {'LSTM_4': LSTM_4} rmse_df = pd.DataFrame(columns=['Model', 'train_rmse', 'valid_rmse', 'train_time']) # RnnModelDict = {'LSTM_GRU': LSTM_GRU_reg} for model in RnnModelDict: regressor = RnnModelDict[model] print('training start for', model) start = time.process_time() regressor.fit(X_train,y_train,epochs=50,batch_size=1024) train_time = round(time.process_time() - start, 2) print('results for training set') y_train_pred = regressor.predict(X_train) # plot_predictions(y_train,y_train_pred) train_rmse = return_rmse(y_train,y_train_pred) print('results for valid set') y_valid_pred = regressor.predict(X_valid) # plot_predictions(y_valid,y_valid_pred) valid_rmse = return_rmse(y_valid,y_valid_pred) # print('results for test set - 24 hours') # y_test_pred24 = regressor.predict(X_test_24) # plot_predictions(y_test_24,y_test_pred24) # test24_rmse = return_rmse(y_test_24,y_test_pred24) one_df = pd.DataFrame([[model, train_rmse, valid_rmse, train_time]], columns=['Model', 'train_rmse', 'valid_rmse', 'train_time']) rmse_df = pd.concat([rmse_df, one_df]) # save the rmse results # rmse_df.to_csv('../rmse_24h_plus_time.csv') # history = regressor.fit(X_train, y_train, epochs=50, batch_size=1024, validation_data=(X_valid, y_valid), # verbose=2, shuffle=False) # # plot history # plt.figure(figsize=(30, 15)) # 
plt.plot(history.history['loss'], label='Training') # plt.plot(history.history['val_loss'], label='Validation') # plt.xlabel('Epochs') # plt.ylabel('Loss') # plt.legend() # plt.show() # Transform back and plot y_train_origin = y[:train_size-46] y_valid_origin = y[train_size:train_size+valid_size] y_train_pred = regressor.predict(X_train) y_train_pred_origin = sc_y.inverse_transform(y_train_pred) y_valid_pred = regressor.predict(X_valid) y_valid_pred_origin = sc_y.inverse_transform(y_valid_pred) _y_train_pred_origin = y_train_pred_origin[:, 0:1] _y_valid_pred_origin = y_valid_pred_origin[:, 0:1] plt.figure(figsize=(20, 8)); plt.plot(pd.to_datetime(valid_original.index), valid_original, alpha=0.5, color='red', label='Actual PM2.5 Concentration',) plt.plot(pd.to_datetime(valid_original.index), y_valid_pred_origin[:,0:1], alpha=0.5, color='blue', label='Predicted PM2.5 Concentation') plt.title('PM2.5 Concentration Prediction') plt.xlabel('Time') plt.ylabel('PM2.5 Concentration') plt.legend() plt.show() sample = 500 plt.figure(figsize=(20, 8)); plt.plot(pd.to_datetime(valid_original.index[-500:]), valid_original[-500:], alpha=0.5, color='red', label='Actual PM2.5 Concentration',) plt.plot(pd.to_datetime(valid_original.index[-500:]), y_valid_pred_origin[:,11:12][-500:], alpha=0.5, color='blue', label='Predicted PM2.5 Concentation') plt.title('PM2.5 Concentration Prediction') plt.xlabel('Time') plt.ylabel('PM2.5 Concentration') plt.legend() plt.show()
_____no_output_____
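As a quick sanity check of the windowing logic, here is a small sketch run on a toy array. It assumes `split_sequences` and `position_of_target` from the cell above are in scope; the names and sizes here are illustrative only:

```python
import numpy as np

# Toy multivariate series: 40 time steps, 10 features (column 4 is the target)
toy = np.arange(400, dtype=float).reshape(40, 10)

# 12 input steps predict 12 output steps; windows advance one step at a time
X_toy, y_toy = split_sequences(toy, n_steps_in=12, n_steps_out=12)
print(X_toy.shape, y_toy.shape)  # expected: (18, 12, 10) (18, 12)
```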
MIT
notebooks/LSTM4_pred_plots_12h.ipynb
harryli18/hybrid-rnn-models
Author: Dipjyoti Das (https://www.linkedin.com/in/dipjyotidas)

This script is used to train classifiers on the dataset, and all the classifiers are saved as pickle files. The pickled classifiers are used in the sentiment_analysis.py file.

Import all the libraries
import nltk import random #from nltk.corpus import movie_reviews from nltk.classify.scikitlearn import SklearnClassifier import pickle from sklearn.naive_bayes import MultinomialNB, BernoulliNB from sklearn.linear_model import LogisticRegression, SGDClassifier from sklearn.svm import SVC, LinearSVC, NuSVC from nltk.classify import ClassifierI from statistics import mode from nltk.tokenize import word_tokenize class VoteClassifier(ClassifierI): def __init__(self, *classifiers): self._classifiers = classifiers def classify(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) return mode(votes) def confidence(self, features): votes = [] for c in self._classifiers: v = c.classify(features) votes.append(v) choice_votes = votes.count(mode(votes)) conf = choice_votes / len(votes) return conf short_pos = open("data/positive.txt","r").read() short_neg = open("data/negative.txt","r").read() # using POS -parts of speech tag - allow only specific words #pos - tuple- word, parts of speech all_words = [] documents = [] # j is adject, r is adverb, and v is verb #allowed_word_types = ["J","R","V"] allowed_word_types = ["J"] # allowing only Adjectives for p in short_pos.split('\n'): documents.append((p, "pos") ) words = word_tokenize(p) pos = nltk.pos_tag(words) for w in pos: if w[1][0] in allowed_word_types: # w - tuple, not getting Nouns, commas all_words.append(w[0].lower()) for p in short_neg.split('\n'): documents.append((p, "neg")) words = word_tokenize(p) pos = nltk.pos_tag(words) for w in pos: if w[1][0] in allowed_word_types: all_words.append(w[0].lower()) # pickle and store documents # pickled algos - folder created to store all the pickled objects : save_documents = open("pickled_algos/documents.pickle", "wb") pickle.dump(documents, save_documents) save_documents.close() all_words = nltk.FreqDist(all_words) word_features = list(all_words.keys())[:5000] # pickle and store word features save_word_features = open("pickled_algos/word_features5k.pickle","wb") pickle.dump(word_features, save_word_features) save_word_features.close() def find_features(document): words = word_tokenize(document) features = {} for w in word_features: features[w] = (w in words) return features featuresets = [(find_features(rev), category) for (rev, category) in documents] # Pickle and store - featuresets : occupies space of 300 MB, don't store it as pickle object #save_featuresets = open("pickled_algos/featuresets.pickle", "wb") #pickle.dump(featuresets, save_featuresets) #save_featuresets.close() random.shuffle(featuresets) print(len(featuresets)) # Train and Test set: testing_set = featuresets[10000:] training_set = featuresets[:10000] ## List of Classifiers : ## Naive Bayes classifier: classifier = nltk.NaiveBayesClassifier.train(training_set) print("Original Naive Bayes Algo accuracy percent:", (nltk.classify.accuracy(classifier, testing_set))*100) classifier.show_most_informative_features(15) ## pickle and store - Naive Bayes classifier save_classifier = open("pickled_algos/originalnaivebayes5k.pickle","wb") pickle.dump(classifier, save_classifier) save_classifier.close() ## MNB classifier : MNB_classifier = SklearnClassifier(MultinomialNB()) MNB_classifier.train(training_set) print("MNB_classifier accuracy percent:", (nltk.classify.accuracy(MNB_classifier, testing_set))*100) # pickle and store MNB classifier: save_classifier = open("pickled_algos/MNB_classifier5k.pickle","wb") pickle.dump(MNB_classifier, save_classifier) save_classifier.close() ## BernoulliNB classifier: 
BernoulliNB_classifier = SklearnClassifier(BernoulliNB()) BernoulliNB_classifier.train(training_set) print("BernoulliNB_classifier accuracy percent:", (nltk.classify.accuracy(BernoulliNB_classifier, testing_set))*100) #pickle and store BernoulliNB classifier: save_classifier = open("pickled_algos/BernoulliNB_classifier5k.pickle","wb") pickle.dump(BernoulliNB_classifier, save_classifier) save_classifier.close() ## Logistic Regression classifier: LogisticRegression_classifier = SklearnClassifier(LogisticRegression()) LogisticRegression_classifier.train(training_set) print("LogisticRegression_classifier accuracy percent:", (nltk.classify.accuracy(LogisticRegression_classifier, testing_set))*100) #pickle and store Logistic Regression classifier: save_classifier = open("pickled_algos/LogisticRegression_classifier5k.pickle","wb") pickle.dump(LogisticRegression_classifier, save_classifier) save_classifier.close() ## LinearSVC classifier LinearSVC_classifier = SklearnClassifier(LinearSVC()) LinearSVC_classifier.train(training_set) print("LinearSVC_classifier accuracy percent:", (nltk.classify.accuracy(LinearSVC_classifier, testing_set))*100) # pickle and store LinearSVC classifier: save_classifier = open("pickled_algos/LinearSVC_classifier5k.pickle","wb") pickle.dump(LinearSVC_classifier, save_classifier) save_classifier.close() ## SGDC classifier: SGDC_classifier = SklearnClassifier(SGDClassifier()) SGDC_classifier.train(training_set) print("SGDClassifier accuracy percent:",nltk.classify.accuracy(SGDC_classifier, testing_set)*100) # pickle and store SGDC classifier: save_classifier = open("pickled_algos/SGDC_classifier5k.pickle","wb") pickle.dump(SGDC_classifier, save_classifier) save_classifier.close() ## Can't pickle the Voted Classifier - class of its own
10664 Original Naive Bayes Algo accuracy percent: 73.64457831325302 Most Informative Features wonderful = True pos : neg = 21.8 : 1.0 engrossing = True pos : neg = 19.7 : 1.0 generic = True neg : pos = 16.9 : 1.0 mediocre = True neg : pos = 16.9 : 1.0 inventive = True pos : neg = 15.7 : 1.0 routine = True neg : pos = 14.9 : 1.0 flat = True neg : pos = 14.9 : 1.0 refreshing = True pos : neg = 14.4 : 1.0 boring = True neg : pos = 13.8 : 1.0 warm = True pos : neg = 13.1 : 1.0 intimate = True pos : neg = 11.7 : 1.0 realistic = True pos : neg = 11.7 : 1.0 stale = True neg : pos = 11.6 : 1.0 mindless = True neg : pos = 11.6 : 1.0 delicate = True pos : neg = 11.0 : 1.0 MNB_classifier accuracy percent: 74.24698795180723 BernoulliNB_classifier accuracy percent: 74.3975903614458 LogisticRegression_classifier accuracy percent: 73.49397590361446 LinearSVC_classifier accuracy percent: 72.13855421686746
MIT
sentiment_analysis/training_classfiers.ipynb
dipjyotidas/NLP
We only need to run up to this cell once. The sentiment analysis module uses the saved pickle objects, and it also has the voting classifier and the sentiment function. The module is saved as sentiment_analysis.py. After importing the sentiment analysis module, we can use it to check whether any text's sentiment is positive or negative, along with the confidence level. Examples:
import sentiment_analysis as s # referencing the sentiment function of the sentiment_analysis.py script # Example - Pass through our own positive review print(s.sentiment("This movie was awesome! The story was great and performances were amazing, I really liked it!")) # Example - Pass through a negative review print(s.sentiment("This movie was junk. No story at all and acting sucked. Horrible movie, 1/10")) ## Both are at 100% confidence level
_____no_output_____
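For reference, here is a hypothetical sketch of roughly how `s.sentiment()` could be assembled from the pieces defined in the training notebook above; the actual `sentiment_analysis.py` module may differ in detail:

```python
import pickle

def _load(path):
    # Load one pickled classifier saved by the training notebook
    with open(path, "rb") as f:
        return pickle.load(f)

# These paths match the ones written above; VoteClassifier and find_features
# are assumed to be defined as in the training notebook. An odd number of
# voters avoids ties in mode().
voted_classifier = VoteClassifier(
    _load("pickled_algos/originalnaivebayes5k.pickle"),
    _load("pickled_algos/MNB_classifier5k.pickle"),
    _load("pickled_algos/BernoulliNB_classifier5k.pickle"),
    _load("pickled_algos/LogisticRegression_classifier5k.pickle"),
    _load("pickled_algos/LinearSVC_classifier5k.pickle"),
)

def sentiment(text):
    # Returns (label, confidence), e.g. ('pos', 1.0)
    feats = find_features(text)
    return voted_classifier.classify(feats), voted_classifier.confidence(feats)
```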
MIT
sentiment_analysis/training_classfiers.ipynb
dipjyotidas/NLP
Physically Based Rendering

VTK 9 introduced Physically Based Rendering (PBR) and we have exposed that functionality in PyVista. Read the [blog about PBR](https://blog.kitware.com/vtk-pbr/) for more details.

PBR is only supported for `pyvista.PolyData` and can be triggered via the `pbr` keyword argument of `add_mesh`. Also use the `metallic` and `roughness` arguments for further control.

Let's show off this functionality by rendering a high quality mesh of a statue as though it were metallic.
import pyvista as pv from pyvista import examples # Load the statue mesh mesh = examples.download_nefertiti() mesh.rotate_x(-90.) # rotate to orient with the skybox # Download skybox cubemap = examples.download_sky_box_cube_map()
_____no_output_____
MIT
locale/examples/02-plot/pbr.ipynb
tkoyama010/pyvista-doc-translations
Let's render the mesh with a base color of "linen" to give it a metal-looking finish.
p = pv.Plotter() p.add_actor(cubemap.to_skybox()) p.set_environment_texture(cubemap) # For reflecting the environment off the mesh p.add_mesh(mesh, color='linen', pbr=True, metallic=0.8, roughness=0.1, diffuse=1) # Define a nice camera perspective cpos = [(-313.40, 66.09, 1000.61), (0.0, 0.0, 0.0), (0.018, 0.99, -0.06)] p.show(cpos=cpos)
_____no_output_____
MIT
locale/examples/02-plot/pbr.ipynb
tkoyama010/pyvista-doc-translations
Show the variation of the metallic and roughness parameters. Plot with metallic increasing from left to right and roughness increasing from bottom to top.
colors = ['red', 'teal', 'black', 'orange', 'silver'] p = pv.Plotter() p.set_environment_texture(cubemap) for i in range(5): for j in range(6): sphere = pv.Sphere(radius=0.5, center=(0.0, 4 - i, j)) p.add_mesh(sphere, color=colors[i], pbr=True, metallic=i/4, roughness=j/5) p.view_vector((-1, 0, 0), (0, 1, 0)) p.show()
_____no_output_____
MIT
locale/examples/02-plot/pbr.ipynb
tkoyama010/pyvista-doc-translations
Combine custom lighting and physically based rendering.
# download louis model mesh = examples.download_louis_louvre() mesh.rotate_z(140) plotter = pv.Plotter(lighting=None) plotter.set_background('black') plotter.add_mesh(mesh, color='linen', pbr=True, metallic=0.5, roughness=0.5, diffuse=1) # setup lighting light = pv.Light((-2, 2, 0), (0, 0, 0), 'white') plotter.add_light(light) light = pv.Light((2, 0, 0), (0, 0, 0), (0.7, 0.0862, 0.0549)) plotter.add_light(light) light = pv.Light((0, 0, 10), (0, 0, 0), 'white') plotter.add_light(light) # plot with a good camera position plotter.camera_position = [(9.51, 13.92, 15.81), (-2.836, -0.93, 10.2), (-0.22, -0.18, 0.959)] cpos = plotter.show()
_____no_output_____
MIT
locale/examples/02-plot/pbr.ipynb
tkoyama010/pyvista-doc-translations
Becquerel Overview

This notebook demonstrates some of the main features and functionalities of `becquerel`:

1. [`bq.Spectrum`](1.-bq.Spectrum)
   - [Constructor](1.1-From-scratch)
   - [Energy Calibration Models](1.2-Energy-Calibration-Models)
   - [File IO](1.3-From-File)
   - [Background Subtraction](1.4-Background-Subtraction)
   - [Rebinning](1.5-Rebinning)
   - [Scaling](1.6-Scaling)
   - [Peak Finding + Auto Calibration](1.7-Automatic-Calibration)
2. [Nuclear Data](2.-Nuclear-Data)
   - [`bq.Element`](2.1-bq.Element)
   - [`bq.Isotope`](2.2-bq.Isotope)
   - [`bq.IsotopeQuantity`](2.3-bq.IsotopeQuantity)
   - [`bq.materials`](2.4-bq.materials)
   - [`bq.nndc`](2.5-bq.nndc)
   - [`bq.xcom`](2.6-bq.xcom)

For more details on particular features please see the other notebooks in this directory as noted. In addition, a few practical examples of using `becquerel` are given in the [misc notebook](./misc.ipynb).
%pylab inline import pandas as pd import becquerel as bq from pprint import pprint np.random.seed(0)
Populating the interactive namespace from numpy and matplotlib
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1. `bq.Spectrum`

The core class in `bq` is `Spectrum`. This class contains a variety of tools for handling **single spectrum** data. Further details can be found in the [spectrum notebook](./spectrum.ipynb) and the [spectrum plotting notebook](./plotting.ipynb).
bq.Spectrum?
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.1 From scratch
c, _ = np.histogram(np.random.poisson(50, 1000), bins=np.arange(101)) spec = bq.Spectrum(counts=c, livetime=60.) spec spec.plot(xmode='channels'); try: spec.plot(xmode='energy') except bq.PlottingError as e: print('ERROR:', e) plt.close('all')
ERROR: Spectrum is not calibrated, however x axis was requested as energy
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.2 Energy Calibration Models

All calibrations are instances of `Calibration`, which stores an arbitrary scalar function and its relevant parameters. Further details can be found in the [energycal notebook](./energycal.ipynb).
chlist = (40, 80) kevlist = (661.7, 1460.83) cal = bq.Calibration.from_points("p[0] + p[1] * x", chlist, kevlist, rng=(-1e3, 1e5)) print(cal.params) spec.apply_calibration(cal) print(spec) spec.plot(xmode='keV'); # New spec c, _ = np.histogram(np.random.poisson(50, 1000), bins=np.arange(101)) spec2 = bq.Spectrum(counts=c, livetime=60.) spec2 spec2.calibrate_like(spec) spec2
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.3 From File

`becquerel` currently provides parsers for:

- `SPE`
- `SPC`
- `CNF`
spec = bq.Spectrum.from_file('../tests/samples/1110C NAA cave pottery.Spe') spec spec.is_calibrated spec.plot(yscale='log', linewidth=0.5, ymode='counts'); %%capture spec = bq.Spectrum.from_file('../tests/samples/01122014152731-GT01122014182338-GA37.4963000N-GO122.4633000W.cnf') spec %%capture spec = bq.Spectrum.from_file('../tests/samples/Alcatraz14.Spc') spec
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.4 Background Subtraction
spec = bq.Spectrum.from_file('../tests/samples/1110C NAA cave pottery.Spe') print(spec) bkg = bq.Spectrum.from_file('../tests/samples/1110C NAA cave background May 2017.spe') print(bkg) bkgsub = spec - bkg print('Total pottery countrate: {:6.3f}'.format(np.sum(spec.cps))) print('Total background countrate: {:6.3f}'.format(np.sum(bkg.cps))) print('Total subtracted countrate: {:6.3f}'.format(np.sum(bkgsub.cps))) fig, ax = plt.subplots(1, figsize=(12, 6)) ax = spec.plot(color='firebrick', linewidth=0.5, yscale='log', ax=ax, label='Measurement', ymode='cps') bkgsub.plot(ax=ax, color='dodgerblue', linewidth=0.5, label='Measurement - Background', ymode='cps') bkg.plot(ax=ax, color='olive', linewidth=0.5, label='Background', ymode='cps') ax.set_ylim(bottom=1e-5) ax.set_title('Background Subtraction') ax.legend(); # Is there any Tl-208 in the background-subtracted spectrum? fig, ax = plt.subplots(1, figsize=(12, 6)) ax = spec.plot(color='firebrick', linewidth=1, yscale='linear', ax=ax, label='Measurement', ymode='cps') bkg.plot(ax=ax, color='olive', linewidth=1, label='Background', ymode='cps') bkgsub.plot(ax=ax, color='dodgerblue', linewidth=1, label='Measurement - Background', ymode='cps') ax.set_ylim(bottom=1e-5) ax.set_title('Background Subtraction') ax.legend() plt.xlim(2600, 2630) plt.ylim(0, 0.0008);
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.5 Rebinning

- deterministic (interpolation): `interpolation`
- stochastic (convert to listmode): `listmode`

Further details can be found in the [rebinning notebook](./rebinning.ipynb).
spec = bq.Spectrum.from_file('../tests/samples/1110C NAA cave pottery.Spe') bkg = bq.Spectrum.from_file('../tests/samples/1110C NAA cave background May 2017.spe') bkg_rebin = bkg.rebin(np.linspace(0., 3000., 16000)) try: bkgsub = spec - bkg_rebin except bq.SpectrumError as e: print('ERROR:', e) spec_rebin = spec.rebin_like(bkg_rebin) spec_rebin - bkg_rebin
becquerel/core/spectrum.py:815: SpectrumWarning: Subtraction of counts-based specta, spectra have been converted to CPS warnings.warn(
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.6 Scaling

Multiplication or division will be applied to the data of the spectrum. The following decimates a spectrum by dividing by 10:
spec = bq.Spectrum.from_file('../tests/samples/1110C NAA cave background May 2017.spe') spec_div = spec / 10 print(spec_div)
SpeFile: Reading file ../tests/samples/1110C NAA cave background May 2017.spe becquerel.Spectrum start_time: None stop_time: None realtime: None livetime: None is_calibrated: True num_bins: 16384 gross_counts: (1.0529+/-0.0010)e+05 gross_cps: None filename: None
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
One might however want to decimate a spectrum in a way consistent with Poisson statistics. For that there is the `downsample` method:
spec_downsample = spec.downsample(10, handle_livetime='reduce') print(spec_downsample) ax = spec_div.plot(label='div', ymode='counts', yscale='log') spec_downsample.plot(ax=ax, label='downsample', ymode='counts') ax.legend() ax.set_xlim(600, 620) ax.set_ylim(1.3, 5e1);
becquerel.Spectrum start_time: None stop_time: None realtime: None livetime: 43781.7 is_calibrated: True num_bins: 16384 gross_counts: (1.0484+/-0.0033)e+05 gross_cps: 2.395+/-0.008 filename: None
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1.7 Automatic Calibration

There are utilities in Becquerel for automatically finding peaks in a raw spectrum and matching them to a list of energies as a first pass at a full calibration. Further details can be found in the [autocal notebook](./autocal.ipynb).

Let's load an uncalibrated sodium iodide spectrum that has Cobalt-60 and background lines:
spec = bq.Spectrum.from_file('../tests/samples/digibase_5min_30_1.spe') fig, ax = plt.subplots(1, figsize=(12, 6)) spec.plot(ax=ax, linewidth=0.5, xmode='channels', yscale='log') plt.xlim(0, len(spec)); # filter the spectrum kernel = bq.GaussianPeakFilter(400, 20, 3) finder = bq.PeakFinder(spec, kernel) finder.find_peaks(min_snr=10, xmin=50) cal = bq.AutoCalibrator(finder) plt.figure(figsize=(10, 5)) plt.title('Signal-to-noise ratio of spectral lines after filter') cal.peakfinder.plot() plt.tight_layout() # perform calibration cal.fit( [1173.2, 1332.5, 1460.8, 2614.5], gain_range=[5., 7.], de_max=100., ) spec.apply_calibration(cal.cal) fig, ax = plt.subplots(1, figsize=(12, 6)) spec.plot(ax=ax, linewidth=0.5, xmode='energy', yscale='log') for erg in cal.fit_energies: plt.plot([erg, erg], [1e-1, 1e4], 'r-', alpha=0.5) plt.text(erg, 1e4, '{:.1f} keV'.format(erg), rotation=90) plt.xlim(0, 3000);
found best gain: 6.371703 keV/channel
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2. Nuclear Data

2.1 `bq.Element`
e1 = bq.Element('Cs') e2 = bq.Element(55) e3 = bq.Element('55') print(e1, e2, e3) print(e1 == e2 == e3) print('{:%n(%s) Z=%z}'.format(e1)) pprint(e1.__dict__, width=10)
Cesium(Cs) Z=55 Cesium(Cs) Z=55 Cesium(Cs) Z=55 True Cesium(Cs) Z=55 {'Z': 55, 'atomic_mass': 132.91, 'name': 'Cesium', 'symbol': 'Cs'}
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2.2 `bq.Isotope`

Further examples of `Isotope` and `IsotopeQuantity` can be found in the [isotopes notebook](./isotopes.ipynb).
i1 = bq.Isotope('Cs-137') i2 = bq.Isotope('137CS') i3 = bq.Isotope('Cs', 137) i4 = bq.Isotope('Cesium-137') i5 = bq.Isotope('137CAESIUM') print(i1, i2, i3, i4, i5) print(i1 == i2 == i3 == i4 == i5)
Cs-137 Cs-137 Cs-137 Cs-137 Cs-137 True
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
Isotope names and properties
iso = bq.Isotope('Tc-99m') print(iso) print('{:%n(%s)-%a%m Z=%z}'.format(iso)) pprint(iso.__dict__) print('half-life: {:.2f} hr'.format(iso.half_life / 3600))
Tc-99m Technetium(Tc)-99m Z=43 {'A': 99, 'M': 1, 'N': 56, 'Z': 43, 'atomic_mass': 98, 'm': 'm', 'name': 'Technetium', 'symbol': 'Tc'} half-life: 6.01 hr
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
More isotope properties such as half-life, stability, and natural abundance are available:
for a in range(39, 42): iso = bq.Isotope('Potassium', a) print('') print('Isotope: {}'.format(iso)) print(' Spin-parity: {}'.format(iso.j_pi)) if iso.abundance is not None: print(' Abundance: {:.2f}%'.format(iso.abundance)) print(' Stable? {}'.format(iso.is_stable)) if not iso.is_stable: print(' Half-life: {:.3e} years'.format(iso.half_life / 365.25 / 24 / 3600)) print(' Decay modes: {}'.format(iso.decay_modes))
Isotope: K-39 Spin-parity: 3/2+ Abundance: 93.26+/-0.00% Stable? True Isotope: K-40 Spin-parity: 4- Abundance: 0.01+/-0.00% Stable? False Half-life: 1.248e+09 years Decay modes: (['EC', 'B-'], [10.72, 89.28]) Isotope: K-41 Spin-parity: 3/2+ Abundance: 6.73+/-0.00% Stable? True
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2.3 `bq.IsotopeQuantity`

Source activity on a given date

Here's a check source's activity on today's date:
ba133_chk = bq.IsotopeQuantity('ba133', date='2013-05-01', uci=10.02) ba133_chk.uci_now()
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
Or for another date:
ba133_chk.uci_at('2018-02-16')
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2.4 `bq.materials`

Access the [NIST X-ray mass attenuation coefficients database](https://www.nist.gov/pml/x-ray-mass-attenuation-coefficients) for [elements](https://physics.nist.gov/PhysRefData/XrayMassCoef/tab1.html) and [compounds](https://physics.nist.gov/PhysRefData/XrayMassCoef/tab2.html), and also data from the [PNNL Materials Compendium](https://compendium.cwmd.pnnl.gov).
mat_data = bq.materials.fetch_materials() pprint(list(mat_data.keys())) pprint(mat_data['Air, Dry (near sea level)'], indent=4)
{ 'density': 0.001205, 'formula': '-', 'source': 'NIST ' '(http://physics.nist.gov/PhysRefData/XrayMassCoef/tab2.html)', 'weight_fractions': [ 'C 0.000124', 'N 0.755268', 'O 0.231781', 'Ar 0.012827']}
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2.5 `bq.nndc`

Tools to query the [National Nuclear Data Center databases](https://www.nndc.bnl.gov/nudat2/) to obtain decay radiation, branching ratios, and many other types of nuclear data. Further details and examples can be found in the [nndc notebook](./nndc.ipynb) and the [nndc_chart_of_nuclides notebook](./nndc_chart_of_nuclides.ipynb).

Here are the gamma-ray lines above 5% branching ratio from Co-60:
rad = bq.nndc.fetch_decay_radiation(nuc='Co-60', type='Gamma', i_range=(5, None)) cols = ['Z', 'Element', 'A', 'Decay Mode', 'Radiation', 'Radiation Energy (keV)', 'Radiation Intensity (%)', 'Energy Level (MeV)'] display(rad[cols]) # NNDC nuclear wallet cards are used by bq.Isotope but can be accessed directly like this: data = bq.nndc.fetch_wallet_card( z_range=(19, 19), a_range=(37, 44), elevel_range=(0, 0), # ground states only ) display(data)
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
2.6 `bq.xcom`

The [NIST XCOM photon cross sections database](https://www.nist.gov/pml/xcom-photon-cross-sections-database) can be [queried](https://physics.nist.gov/PhysRefData/Xcom/html/xcom1.html) in `becquerel`. Further details can be found in the [xcom notebook](./xcom.ipynb).

For example, here is how to access the cross section data for an element (Pb):
# query XCOM by element symbol data = bq.xcom.fetch_xcom_data('Pb', e_range_kev=[10., 3000.]) plt.figure() for field in ['total_w_coh', 'total_wo_coh', 'coherent', 'incoherent', 'photoelec', 'pair_nuc', 'pair_elec']: plt.semilogy(data.energy, data[field], label=field) plt.xlim(0, 3000) plt.xlabel('Energy (keV)') plt.ylabel(r'Attenuation coefficient [cm$^2$/g]') plt.legend();
_____no_output_____
BSD-3-Clause-LBNL
examples/overview.ipynb
werthm/becquerel
1. set_index 2. reset_index
import pandas as pd df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/08. Data Sets/Pandas.csv") df df.index df.set_index('day') # df.set_index('day',inplace=True) df df.reset_index(inplace=True) df df.set_index('event',inplace=True) df
_____no_output_____
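A note on the design choice above: `set_index` returns a new DataFrame by default, so either reassign the result or pass `inplace=True`. A minimal sketch (assuming `df` still has the `day` column as loaded above):

```python
df_by_day = df.set_index('day')    # df itself is unchanged
df.set_index('day', inplace=True)  # df now uses 'day' as its index
df.reset_index(inplace=True)       # move 'day' back to an ordinary column
```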
CNRI-Python
7_Change_Index and Column_Header/2_Set_Index.ipynb
sureshmecad/Pandas
---
#url = "C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame" df = pd.read_csv('C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame/IMDB-Movie-Data.csv') df.head(3) df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame/IMDB-Movie-Data.csv", index_col=0) df.head(3) df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame/IMDB-Movie-Data.csv", index_col=1) df.head(3) df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame/IMDB-Movie-Data.csv", index_col=2) df.head(3)
_____no_output_____
CNRI-Python
7_Change_Index and Column_Header/2_Set_Index.ipynb
sureshmecad/Pandas
We're loading this dataset from a CSV and designating the movie titles to be our index.
df = pd.read_csv("C:/Users/deepusuresh/Documents/Data Science/01. Python/3. PANDAS/1. Data Frame/IMDB-Movie-Data.csv", index_col="Title") df.head(3)
_____no_output_____
CNRI-Python
7_Change_Index and Column_Header/2_Set_Index.ipynb
sureshmecad/Pandas
![DataCamp logo](https://camo.githubusercontent.com/37701d42d9ffe96d33524b7fb8d965d2f7ae8380/68747470733a2f2f6769746875622e636f6d2f6461746163616d702f707974686f6e2d6c6976652d747261696e696e672d74656d706c6174652f626c6f622f6d61737465722f6173736574732f6461746163616d702e7376673f7261773d54727565)

Visualizing Big Data in R (Answers)

by Richie Cotton

Prelude

Install these additional R packages.
rlib <- "~/lib" dir.create(rlib) .libPaths(rlib) library(remotes) install_version("hexbin", "1.28.1", lib = rlib, upgrade = "never") install_version("fst", "0.9.2", lib = rlib, upgrade = "never") install_version("ggcorrplot", "0.1.3", lib = rlib, upgrade = "never") install_version("trelliscopejs", "0.2.5", lib = rlib, upgrade = "never") install_version("plotly", "4.9.2.1", lib = rlib, upgrade = "never")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Chapter 1: Too many points - point size, transparency, transformation

Here, you'll look at the LA home prices dataset to explore ways of reducing overplotting.

Learning objectives

- Understands that one cause of overplotting in scatter plots is simply that there are too many points.
- Can apply point size adjustments, transparency, and axis scale transformations to reduce overplotting problems.
- Can draw and interpret a hex plot.

Loading the packages

You need a way to import a CSV file, `tibble` and `ggplot2`.
# Load tibble, ggplot2, and a CSV reader library(readr) # or library(data.table) library(tibble) library(ggplot2)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Exploring the dataset

The dataset is here. Run this cell!
data_file <- "https://raw.githubusercontent.com/datacamp/Visualizing-Big-Data-in-R-live-training/master/data/LAhomes.csv"
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Read in the dataset from `data_file`, assigning the result to `la_homes`. Explore it (using whichever functions you like).
# Read data_file from CSV la_homes <- read_csv(data_file) # Explore it glimpse(la_homes)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
- `price` is the sale price of the home, in USD.
- `sqft` is the area of the home in square feet (about 0.1 square meters).

Using `la_homes`, draw a scatter plot of `price` versus `sqft`.
# Using la_homes, plot price vs. sqft with point layer ggplot(la_homes, aes(sqft, price)) + geom_point()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Changing point size

Notice that points in the plot are heavily overplotted in the bottom left corner.

Redraw the basic scatter plot, changing the point size to `0.5`.
# Draw same scatter plot, with size 0.5 ggplot(la_homes, aes(sqft, price)) + geom_point(size = 0.5)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Redraw the basic scatter plot, changing the point shape to be "pixel points".
# Draw same scatter plot, with pixel shape ggplot(la_homes, aes(sqft, price)) + geom_point(shape = ".")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Using transparency

Redraw the basic scatter plot, changing the transparency level of points to `0.25`. Set a white background by using ggplot2's black and white theme.
# Draw same scatter plot, with transparency 0.25 and black & white theme ggplot(la_homes, aes(sqft, price)) + geom_point(alpha = 0.25) + theme_bw()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Transform the axes

Most of the points are stuck in the bottom-left corner. Transform the x and y axes to spread the points more evenly throughout the plot.

Redraw the basic scatter plot, applying a log10 transformation to the x and y scales.
# Draw same scatter plot, with log-log scales ggplot(la_homes, aes(sqft, price)) + geom_point() + scale_x_log10() + scale_y_log10()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Redraw the scatter plot using all three tricks at once.

- Set the point size to `0.5`.
- Set the point transparency to `0.25`.
- Use log10 transformations for the x and y scales.
- Use the black and white theme.
# Draw same scatter plot, with all 3 tricks ggplot(la_homes, aes(sqft, price)) + geom_point(size = 0.5, alpha = 0.25) + scale_x_log10() + scale_y_log10() + theme_bw()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Hex plots

Draw a hex plot of `price` versus `sqft`.
# Using la_homes, plot price vs. sqft with hex layer ggplot(la_homes, aes(sqft, price)) + geom_hex()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Redraw the hex plot, applying log10 transformations to the x and y scales.
# Draw same hex plot, with log-log scales ggplot(la_homes, aes(sqft, price)) + geom_hex() + scale_x_log10() + scale_y_log10()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Which statement about the trend is true?

- [ ] Price increases roughly linearly with area.
- [ ] Price increases roughly linearly with log area.
- [ ] Log price increases roughly linearly with area.
- [x] Log price increases roughly linearly with log area.

Which statement about the overplotting is true?

- [ ] The majority of the houses are found in the region of darkest blues on the hex plot.
- [x] The majority of the houses are found in the region of lightest blues on the hex plot.
- [ ] The hex plot tells us nothing about where the majority of the houses are found.

Chapter 2: Aligned values - jittering

Here you'll take another look at overplotting in the LA homes dataset.

Learning objectives

- Understands that one cause of overplotting in scatter plots is low-precision, integer, or categorical variables taking exactly the same values.
- Can apply jittering, transparency, and scale transformations to solve the problem.

Loading the packages

You'll need `readr`, `dplyr` and `ggplot2`. Just run this code.
library(readr) library(dplyr) library(ggplot2)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Importing and exploring the data

The dataset is here. Run this chunk!
data_file <- "https://raw.githubusercontent.com/datacamp/Visualizing-Big-Data-in-R-live-training/master/data/LAhomes.csv"
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Import the LA homes dataset from `data_file`, assigning to `la_homes`.
# Import data_file la_homes <- read_csv(data_file)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
- `bed` contains the number of bedrooms in the home.

Take a look at the distinct values in `bed` using `distinct()`.
# Look at the distinct values of the bed column la_homes %>% distinct(bed)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Notice that the number of bedrooms is always an integer and sometimes zero.

Scatter plots of price vs. bedrooms

Using `la_homes`, draw a scatter plot of `price` versus `bed`.
# Using la_homes, plot price vs. bed with a point layer ggplot(la_homes, aes(bed, price)) + geom_point()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw the same plot again, this time jittering the points along the x-axis.

- Use a maximum jitter distance of `0.4` in the x direction.
- Don't jitter in the y direction.
# Draw the previous plot but jitter points with width 0.4 ggplot(la_homes, aes(bed, price)) + geom_jitter(width = 0.4)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Most of the points are near the bottom of the plot.

Draw the same jittered plot again, this time using a log10 transformation on the y-scale.
# Draw the previous plot but use a log y-axis ggplot(la_homes, aes(bed, price)) + geom_jitter(width = 0.4) + scale_y_log10()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Scatter plots of bathrooms vs. bedrooms

- `bath` contains the number of bathrooms in the home.

Take a look at the distinct values in `bath` using `distinct()`.
# Look at the distinct values of the bath column la_homes %>% distinct(bath)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Notice that the dataset includes half and quarter bathrooms (whatever they are).

Draw a scatter plot of `bath` versus `bed`.
# Using la_homes, plot bath vs. bed with a point layer ggplot(la_homes, aes(bed, bath)) + geom_point()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw the same plot again, this time jittering the points.

- Use a maximum jitter distance of `0.4` in the x direction.
- Use a maximum jitter distance of `0.05` in the y direction.
# Using la_homes, plot price vs. bed with a jittered point layer ggplot(la_homes, aes(bed, bath)) + geom_jitter(width = 0.4, height = 0.05)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Filtering and transformation

There are three homes with 10 or more bedrooms. These constitute outliers, and for the purpose of drawing nicer plots, we're going to remove them.

Filter `la_homes` for rows where `bed` is less than `10`, assigning to `la_homes10`. Count the number of rows you removed to check you've done it correctly.
# Filter for bed less than 10 la_homes10 <- la_homes %>% filter(bed < 10) # Calculate the number of outliers you removed nrow(la_homes) - nrow(la_homes10)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw the same jittered scatter plot again, this time using the filtered dataset (`la_homes10`). As before, use a jitter width of `0.4` and a jitter height of `0.05`.
# Draw the previous plot, but with the la_homes10 dataset ggplot(la_homes10, aes(bed, bath)) + geom_jitter(width = 0.4, height = 0.05)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Most of the points are towards the bottom left of the plot.

Draw the same jittered scatter plot again, this time applying square-root transformations to the x and y scales.
# Draw the previous plot but with sqrt-sqrt scales ggplot(la_homes10, aes(bed, bath)) + geom_jitter(width = 0.4, height = 0.05) + scale_x_sqrt() + scale_y_sqrt()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Refine the plot one more time by making the points transparent.

Draw the previous plot again, setting the transparency level to 0.25 (and using a black and white theme).
ggplot(la_homes10, aes(bed, bath)) + geom_jitter(width = 0.4, height = 0.05, alpha = 0.25) + scale_x_sqrt() + scale_y_sqrt() + theme_bw()
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Chapter 3: Too many variables - correlation heatmaps

Here you'll look at a dataset on Scotch whisky preferences.

Learning objectives

- Can draw a correlation heatmap.
- Can use hierarchical clustering to order cells in a correlation heatmap.
- Can adjust the color scale in a correlation heatmap.
- Can interpret a correlation heatmap.

Loading the packages

You'll need `fst`, `tibble`, `ggplot2`, and `ggcorrplot`. Just run this code.
library(fst) library(tibble) library(ggplot2) library(ggcorrplot)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Get the dataset

The dataset is a modified version of `bayesm::Scotch`.

- See https://www.rdocumentation.org/packages/bayesm/topics/Scotch for details.
- Each observation is a survey response indicating the brands of Scotch consumed in the last year.

Run this to download the data file.
download.file( "https://github.com/datacamp/Visualizing-Big-Data-in-R-live-training/raw/master/data/scotch.fst", "scotch.fst" )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Import the dataset from `scotch.fst` and assign to `scotch`.
# Import from scotch.fst scotch <- read_fst("scotch.fst") # Explore the dataset, however you wish glimpse(scotch)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw a basic correlation heatmap

Calculate the correlation matrix for `scotch`, assigning to `correl`.
# Calculate the correlation matrix correl <- cor(scotch)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw a correlation heatmap of it (no customization).
# Draw a correlation heatmap ggcorrplot(correl)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Drop redundant cells

Draw the previous plot again, this time only showing the upper triangular portion of the correlation matrix.
# Draw a correlation heatmap of the upper triangular portion ggcorrplot(correl, type = "upper")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Use hierarchical clustering

Draw the previous plot again, this time using hierarchical clustering to reorder cells.
# Draw a correlation heatmap of the upper triangular portion ggcorrplot(correl, type = "upper", hc.order = TRUE)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Override the color scale

Set the diagonal values in the correlation matrix to `NA`, then calculate the range of the correlation matrix.
# Set the diagonals of correl to NA diag(correl) <- NA # Calculate the range of correl (removing NAs) range(correl, na.rm = TRUE)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
We have both positive and negative correlations, so this is a slightly trickier situation than in the slides. We want a symmetric color scale centered on zero.

Define the limits of the color scale.

- Calculate the `max`imum `abs`olute correlation (removing NAs). Assign to `max_abs_correl`.
- Add some padding to `max_abs_correl` (any small number). Assign to `max_abs_correl_padded`.
- Define the scale limits as the vector (`-max_abs_correl_padded`, `max_abs_correl_padded`).
# Calculate the largest absolute correlation (removing NAs) max_abs_correl <- max(abs(correl), na.rm = TRUE) # Add some padding max_abs_correl_padded <- max_abs_correl + 0.02 # Define limits from -max_abs_correl_padded to max_abs_correl_padded scale_limits <- c(-max_abs_correl_padded, max_abs_correl_padded)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Draw the previous plot again, this time overriding the fill color scale.

- Add `scale_fill_gradient2()`.
- Pass the scale limits.
- Set the `high` argument to `"red"`.
- Set the `mid` argument to `"white"`.
- Set the `low` argument to `"blue"`.
# Draw a correlation heatmap of the upper triangular portion # Override the fill scale to use a 2-way gradient ggcorrplot(correl, type = "upper", hc.order = TRUE) + scale_fill_gradient2( limits = scale_limits, high = "red", mid = "white", low = "blue" )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Interpreting correlation heatmaps

Drinkers of Glenfiddich are most likely to also drink which other whisky?

- [ ] Scoresby rare
- [ ] J & B
- [x] Glenlivet
- [ ] Black & White
- [ ] Chivas Regal

Drinkers of Knockando are most likely to also drink which other whisky?

- [ ] Dewar's White Label
- [ ] Johnny Walker Red Label
- [ ] Johnny Walker Black Label
- [x] Macallan
- [ ] Chivas Regal

Chapter 4: Too many facets - trelliscope plots

Here, you'll explore the 30 stocks in the Dow Jones Industrial Average (DJIA).

Learning objectives

- Can convert a ggplot into a trelliscope plot.
- Can use common arguments to control the appearance of a trelliscope plot.
- Can use the interactive filter and sort tools to interpret a trelliscope plot.

Load the packages

You'll need `fst`, `tibble`, `ggplot2`, and `trelliscopejs`. Just run this code.
library(fst) library(tibble) library(ggplot2) library(trelliscopejs)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Get the dataset

Run this to download the data file.
download.file( "https://github.com/datacamp/Visualizing-Big-Data-in-R-live-training/raw/master/data/dow_stock_prices.fst", "dow_stock_prices.fst" )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Import the DJIA data from `dow_stock_prices.fst`, assigning to `dow_stock_prices`. Explore it however you wish.
# Import the dataset from dow_stock_prices.fst dow_stock_prices <- read_fst("dow_stock_prices.fst") # Explore the dataset, however you wish glimpse(dow_stock_prices)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
- `symbol`: The stock ticker symbol (unique ID for company).
- `company`: Human-readable company name.
- `sector`: Business sector that the company participates in.
- `date`: Date for which the price and volume data was calculated.
- `volume`: Number of shares traded on `date`.
- `adjusted`: Price of 1 share, after adjusting for dividends and splits.
- `relative`: Price of 1 share, relative to the maximum of `adjusted` over the time period.
- `date_of_max_price`: For each stock, the date when the maximum share price was first achieved.
- `date_of_min_price`: For each stock, the date when the minimum share price was first achieved.

Take a look at the range of the dates in the dataset.
# Get the range of the dates in dow_stock_prices range(dow_stock_prices$date)
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
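To make the `relative` column concrete, here is a minimal sketch of how it could be derived from `adjusted`. This is an illustration rather than course code, and it assumes `dplyr` is installed (it is not loaded elsewhere in this notebook).

library(dplyr)

# For each stock, divide each day's adjusted price by that stock's
# maximum adjusted price over the period, giving a value in (0, 1]
dow_stock_prices %>%
  group_by(symbol) %>%
  mutate(relative_check = adjusted / max(adjusted)) %>%
  ungroup()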
From ggplot2 to trelliscopejsUsing `dow_stock_prices`, draw a line plot of `relative` versus `date`, faceted by `symbol`.
# Using dow_stock_prices, plot relative vs. date
# as a line plot
# faceted by symbol
ggplot(dow_stock_prices, aes(date, relative)) +
  geom_line() +
  facet_wrap(vars(symbol))
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Redraw the previous plot, this time as a trelliscope plot (no customization). - Set the `path` argument to `"trelliscope/basic"`.
# Same plot as before, using trelliscope
ggplot(dow_stock_prices, aes(date, relative)) +
  geom_line() +
  facet_trelliscope(
    vars(symbol),
    path = "trelliscope/basic"
  )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Run this next line to open the plot in a new browser tab.
# Browse for the plot URL
browseURL("trelliscope/basic/index.html")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Improving the plotWe can improve on the previous plot by customizing it.Redraw the previous plot, with the following changes.- Set the `path` argument to `"trelliscope/improved"`.- Set the plot title to `Dow Jones Industrial Average`.- Set the plot description to `Share prices 2017-01-01 to 2020-01-01`.- Arrange the panels in `5` rows of `2` columns per page. - Increase the width of each panel to `1200` pixels.
# Draw the same plot again, customizing the display
# Set path, name, desc, nrow, ncol, width
ggplot(dow_stock_prices, aes(date, relative)) +
  geom_line() +
  facet_trelliscope(
    vars(symbol),
    path = "trelliscope/improved",
    name = "Dow Jones Industrial Average",
    desc = "Share prices 2017-01-01 to 2020-01-01",
    nrow = 5,
    ncol = 2,
    width = 1200
  )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Open the plot in a new browser tab.
# Browse for the plot URL
browseURL("trelliscope/improved/index.html")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
LabelsAdd the `company` to the labels shown on each panel. Filtering Which `sector` contains the most companies?- [ ] Health Care- [x] Information Technology- [ ] Consumer Staples- [ ] Industrials- [ ] Financials Which `Energy` sector company began 2020 with a lower share price than it began 2017?- [ ] CVX (Chevron)- [x] XOM (Exxon Mobil) How many companies had a maximum price more than double the minimum price during the time period? That is, how many companies had a relative minimum less than `0.5`?- [ ] 4- [ ] 5- [x] 6- [ ] 7 Sorting Based on mean daily volume of trades, which company was the 3rd most traded during the time period?- [ ] AAPL (Apple)- [ ] MSFT (Microsoft)- [ ] CSCO (Cisco Systems)- [ ] PFE (Pfizer)- [x] INTC (Intel) Which company's median relative price during the time period was lowest?- [x] AAPL (Apple)- [ ] MSFT (Microsoft)- [ ] V (Verizon Communications)- [ ] MRK (Merck & Co.)- [ ] PG (Procter & Gamble) (the filtering and sorting answers can also be checked numerically with the dplyr sketch after this cell) Free scalesThe relative share prices were plotted to make it easier to compare performance between companies. If you want to plot the non-normalized `adjusted` prices, you need to give each panel its own y-axis.Redraw the previous plot, with these changes.- Set the `path` argument to `"trelliscope/yscale"`.- On the y-axis, plot `adjusted` rather than `relative`.- Give each panel a free y-axis scale (while keeping the x-axis scales the same).
# This time plot adjusted vs. date
# Use a free y-scale
ggplot(dow_stock_prices, aes(date, adjusted)) +
  geom_line() +
  facet_trelliscope(
    vars(symbol),
    path = "trelliscope/yscale",
    name = "Dow Jones Industrial Average",
    desc = "Share prices 2017-01-01 to 2020-01-01",
    nrow = 5,
    ncol = 2,
    width = 1200,
    scales = c("same", "free")
  )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
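The filtering and sorting questions above can also be checked numerically instead of interactively in the viewer. A minimal sketch, assuming `dplyr` is installed (it is not loaded in this notebook):

library(dplyr)

# Companies whose share price more than doubled over the period,
# i.e. whose relative minimum is below 0.5
dow_stock_prices %>%
  group_by(symbol) %>%
  summarize(min_relative = min(relative)) %>%
  filter(min_relative < 0.5)

# Rank the companies by mean daily volume of trades
dow_stock_prices %>%
  group_by(symbol) %>%
  summarize(mean_volume = mean(volume)) %>%
  arrange(desc(mean_volume))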
Open the plot in a new browser tab.
# Browse for the plot URL
browseURL("trelliscope/yscale/index.html")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Which company, at its maximum, had the highest price for 1 share?- [x] BA (Boeing)- [ ] GS (Goldman Sachs Group)- [ ] UNH (UnitedHealth Group)- [ ] AAPL (Apple)- [ ] MMM (3M) (a numeric check of this question appears after the plotly example below) Interactive plotting with plotlyBy using plotly to create the panels, each panel becomes interactive. Hover over the line to see the values of individual points.Redraw the previous plot, using plotly to create the panels.- Set the `path` argument to `"trelliscope/plotly"`.
# Redraw the last plot using plotly for panels
ggplot(dow_stock_prices, aes(date, adjusted)) +
  geom_line() +
  facet_trelliscope(
    vars(symbol),
    path = "trelliscope/plotly",
    name = "Dow Jones Industrial Average",
    desc = "Share prices 2017-01-01 to 2020-01-01",
    nrow = 5,
    ncol = 2,
    width = 1200,
    scales = c("same", "free"),
    as_plotly = TRUE
  )
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
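As promised above, the highest-share-price question can also be verified directly from the data. Another minimal sketch, again assuming `dplyr` is available:

library(dplyr)

# Highest adjusted price reached by each stock, largest first
dow_stock_prices %>%
  group_by(symbol, company) %>%
  summarize(max_price = max(adjusted), .groups = "drop") %>%
  arrange(desc(max_price))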
Open the plot in a new browser tab.
# Browse for the plot URL
browseURL("trelliscope/plotly/index.html")
_____no_output_____
MIT
notebooks/visualizing-big-data-in-r-codealong-answers.ipynb
marsanul/visualizacionbigdata
Image a cube's vertices in grayscale: 0 is black, and higher intensities are lighter. Here the intensities are ranges (distances from the camera), so the foreground has a smaller range and therefore appears darker. Optical axis +Z (default). Centered.
from camera_model_distort import Camera_model
import attitude_utils as attu
import optics_utils as optu
import itertools
import numpy as np  # used throughout; assumed to be imported elsewhere in the original notebook
from time import time

ap = attu.Euler_attitude()

# Build a grid of object points at z = 400
object_locations = optu.make_cube(10., 1.0*np.asarray([0, 0, 200]))
object_locations = optu.make_grid()
print(object_locations.shape)
z = np.expand_dims(400*np.ones(object_locations.shape[0]), axis=1)
print(z.shape)
object_locations = np.hstack((object_locations, z))
#print(object_locations)

# Intensity of each object is its range from the agent (minus an offset),
# so nearer points render darker
agent_location = 1.0*np.asarray([0, 0, 100])
object_intensities = np.linalg.norm(object_locations - agent_location, axis=1) - 50

fov = np.pi/4
yaw = 0.0
pitch = 0.0
roll = 0.0
agent_q = np.asarray([yaw, pitch, roll])

C_cb = optu.rotate_optical_axis(0.0, 0.0, 0.0)
r_cb = np.asarray([0, 0, 0])

# Candidate radial (K) and tangential (P) distortion coefficient sets
k = -0.9
p = 0.1
K0 = np.zeros(3)
K1 = [2.0, 0.5, 0.0]
K2 = [5.0, 10.0, 0.0]
K3 = [20., 40., 0.0]
K4 = [50, 100, 0]
#K3 = [-1.0,-5.0,0.0]
K = K3
P0 = np.zeros(2)
P1 = [0.1, 0.1]
P = P0

cm = Camera_model(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb, slant=20.0, fov=fov, debug=False,
                  p1=P[0], p2=P[1], k1=K[0], k2=K[1], k3=K[2])
t0 = time()
pix1 = cm.get_pixel_coords(agent_location, agent_q, object_locations, object_intensities)
t1 = time()
print('ET: ', t1 - t0)
cm.render(agent_location, agent_q, object_locations, object_intensities)
Euler321 Attitude (738, 2) (738, 1) Euler321 Attitude Overriding focal length using FOV: 0.7853981633974483 13.361957121094465 K: [[133.61957121 20. 50. ] [ 0. 133.61957121 50. ] [ 0. 0. 1. ]] C_cb: [[ 1. 0. -0.] [ 0. 1. 0.] [ 0. 0. 1.]] t: ET: 0.004895210266113281
MIT
Imaging/Test_distort.ipynb
CHEN-yongquan/Asteroid_CPO_seeker
Positive Roll, Image should move down in FOV
object_locations = optu.make_cube(10., 1.0*np.asarray([0, 0, 200]))
agent_location = 1.0*np.asarray([0, 0, 100])
object_intensities = np.linalg.norm(object_locations - agent_location, axis=1) - 50

fov = np.pi/4
yaw = 0.0
pitch = 0.0
roll = np.pi/16
agent_q = np.asarray([yaw, pitch, roll])

C_cb = optu.rotate_optical_axis(0.0, 0.0, 0.0)
r_cb = np.asarray([0, 0, 0])

cm = Camera_model(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb, fov=fov, debug=False)
t0 = time()
pix1 = cm.get_pixel_coords(agent_location, agent_q, object_locations, object_intensities)
t1 = time()
print('ET: ', t1 - t0)
cm.render(agent_location, agent_q, object_locations, object_intensities)
Euler321 Attitude Overriding focal length using FOV: 0.7853981633974483 13.361957121094465 K: [[133.61957121 0. 50. ] [ 0. 133.61957121 50. ] [ 0. 0. 1. ]] C_cb: [[ 1. 0. -0.] [ 0. 1. 0.] [ 0. 0. 1.]] t: pixel_locs.shape: (8, 2) ET: 0.0016019344329833984 pixel_locs.shape: (8, 2) (8, 2) (8,) (8,)
MIT
Imaging/Test_distort.ipynb
CHEN-yongquan/Asteroid_CPO_seeker
Negative Pitch, Image should move right
object_locations = optu.make_cube(10., 1.0*np.asarray([0, 0, 200]))
agent_location = 1.0*np.asarray([0, 0, 100])
object_intensities = np.linalg.norm(object_locations - agent_location, axis=1) - 50

fov = np.pi/4
yaw = 0.0
pitch = -np.pi/16
roll = 0.0
agent_q = np.asarray([yaw, pitch, roll])

C_cb = optu.rotate_optical_axis(0.0, 0.0, 0.0)
r_cb = np.asarray([0, 0, 0])

cm = Camera_model(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb, fov=fov, debug=False)
t0 = time()
pix1 = cm.get_pixel_coords(agent_location, agent_q, object_locations, object_intensities)
t1 = time()
print('ET: ', t1 - t0)
cm.render(agent_location, agent_q, object_locations, object_intensities)
Euler321 Attitude Overriding focal length using FOV: 0.7853981633974483 13.361957121094465 K: [[133.61957121 0. 50. ] [ 0. 133.61957121 50. ] [ 0. 0. 1. ]] C_cb: [[ 1. 0. -0.] [ 0. 1. 0.] [ 0. 0. 1.]] t: pixel_locs.shape: (8, 2) ET: 0.0051801204681396484 pixel_locs.shape: (8, 2) (8, 2) (8,) (8,)
MIT
Imaging/Test_distort.ipynb
CHEN-yongquan/Asteroid_CPO_seeker
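The roll and pitch cells above assert how the image should move; the same behaviour falls out of a bare pinhole model, independent of this repo's `Camera_model`. The sketch below is a simplified illustration only: the Euler321 convention, the intrinsics, and the sign conventions are assumptions and may not match `attitude_utils`/`optics_utils` exactly.

import numpy as np

def euler321_dcm(yaw, pitch, roll):
    # Passive 3-2-1 (yaw-pitch-roll) rotation: world frame -> camera frame
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, sy, 0], [-sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, sr], [0, -sr, cr]])
    return Rx @ Ry @ Rz

# Intrinsics roughly matching the printed K above (f ~ 133.6, principal point at (50, 50))
f, cx, cy0 = 133.62, 50.0, 50.0
K = np.array([[f, 0, cx], [0, f, cy0], [0, 0, 1]])

cam_pos = np.array([0.0, 0.0, 100.0])
vertex = np.array([0.0, 0.0, 200.0])   # a cube vertex straight ahead of the camera

def project(yaw, pitch, roll):
    p_cam = euler321_dcm(yaw, pitch, roll) @ (vertex - cam_pos)
    u, v, w = K @ p_cam
    return u / w, v / w                 # pixel coordinates

print(project(0.0, 0.0, 0.0))           # centered: roughly (50, 50)
print(project(0.0, 0.0, np.pi / 16))    # positive roll: v grows, i.e. the point moves down
print(project(0.0, -np.pi / 16, 0.0))   # negative pitch: u grows, i.e. the point moves right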
Positive Yaw should rotate image
object_locations = optu.make_cube(10., 1.0*np.asarray([0, 0, 200]))
agent_location = 1.0*np.asarray([0, 0, 100])
object_intensities = np.linalg.norm(object_locations - agent_location, axis=1) - 50

fov = np.pi/4
yaw = np.pi/8
pitch = 0.0
roll = 0.0
agent_q = np.asarray([yaw, pitch, roll])

C_cb = optu.rotate_optical_axis(0.0, 0.0, 0.0)
r_cb = np.asarray([0, 0, 0])

cm = Camera_model(attitude_parameterization=ap, C_cb=C_cb, r_cb=r_cb, fov=fov, debug=False)
t0 = time()
pix1 = cm.get_pixel_coords(agent_location, agent_q, object_locations, object_intensities)
t1 = time()
print('ET: ', t1 - t0)
cm.render(agent_location, agent_q, object_locations, object_intensities)

print(np.linspace(-100, 100, 11))
print(np.linspace(-4, 4, 9))
x = np.linspace(-7, 7, 15)
p = np.linspace(-3, 3, 7)
for i in range(p.shape[0]):
    line = np.stack((x, p[i]*np.ones_like(x)))
    print(line)
[[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [-3. -3. -3. -3. -3. -3. -3. -3. -3. -3. -3. -3. -3. -3. -3.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [-2. -2. -2. -2. -2. -2. -2. -2. -2. -2. -2. -2. -2. -2. -2.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [-1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1. -1.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]] [[-7. -6. -5. -4. -3. -2. -1. 0. 1. 2. 3. 4. 5. 6. 7.] [ 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.]]
MIT
Imaging/Test_distort.ipynb
CHEN-yongquan/Asteroid_CPO_seeker
Two interesting tables: `seattlecrimeincidents` (Seattle crime incidents from the first half of 2015) and `census_data`.
# running a simple SQL command
%sql select * from seattlecrimeincidents limit 10;

# Show specific columns
%sql select "Offense Type",latitude,longitude from seattlecrimeincidents limit 10;

%%sql
-- select rows
select "Offense Type", latitude, longitude, month
from seattlecrimeincidents
where "Offense Type" ='THEFT-BICYCLE' and month = 1

%%sql
select count(*) from seattlecrimeincidents;

%%sql
select count(*) from seattlecrimeincidents

%%sql
select count(*) from
    (select "Offense Type", latitude, longitude, month
     from seattlecrimeincidents
     where "Offense Type" ='THEFT-BICYCLE' and month = 1) as small_table

# use max, min functions
%%sql
select min(latitude) as min_lat, max(latitude) as max_lat,
       min(longitude) as min_long, max(longitude) as max_long
from seattlecrimeincidents;

%%sql
select year, count(*) from seattlecrimeincidents
group by year
order by year ASC;

%%sql
select distinct year from seattlecrimeincidents;
17 rows affected.
BSD-2-Clause
save/13-Structured-Query-Language/ClassNotes.ipynb
ecl95/LectureNotes
    1
   212
  32123
 4321234
543212345
num = 5
for i in range(1, num + 1):
    # leading spaces so the rows align on the right
    for j in range(1, num - i + 1):
        print(end=" ")
    # count down from i to 1 ...
    for j in range(i, 0, -1):
        print(j, end="")
    # ... then back up from 2 to i
    for j in range(2, i + 1):
        print(j, end="")
    print()
    1
   212
  32123
 4321234
543212345
Apache-2.0
notebooks/pyramid_pattern_1.ipynb
neso613/python_coding
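For comparison, here is a minimal alternative sketch (not from the original notebook) that builds each row as a string; the count-down then count-up structure of each row is a bit more explicit this way.

num = 5
for i in range(1, num + 1):
    left = "".join(str(d) for d in range(i, 0, -1))   # i, i-1, ..., 1
    right = "".join(str(d) for d in range(2, i + 1))  # 2, 3, ..., i
    print((left + right).rjust(num + i - 1))          # left-pad so the right edges align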