Dataset schema: markdown (string, length 0 to 1.02M), code (string, 0 to 832k), output (string, 0 to 1.02M), license (string, 3 to 36), path (string, 6 to 265), repo_name (string, 6 to 127).
Random Forest Classifier

The Random Forest (or Random Decision Forest) is a supervised machine-learning algorithm used for classification, regression, and other tasks, built on decision trees. The Random Forest classifier builds a set of decision trees, each from a randomly selected subset of the training set, and then collects the votes from the individual trees to decide the final prediction.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report  # needed for the report below

clf = RandomForestClassifier(n_estimators=100)
clf.fit(x_train, y_train)

print("Accuracy Score:", str(round(clf.score(x_test, y_test), 4) * 100) + '%')
print('\nClassification Random Forest:\n', classification_report(y_test, clf.predict(x_test)))
Classification Random Forest:
               precision    recall  f1-score   support

           0       0.75      0.64      0.69      3292
           1       0.76      0.84      0.80      4417

    accuracy                           0.76      7709
   macro avg       0.75      0.74      0.74      7709
weighted avg       0.76      0.76      0.75      7709
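To make the "collects the votes" idea above concrete, here is a small sketch (not part of the original notebook) that polls the fitted trees directly; it assumes the binary 0/1 labels used in this notebook and reuses `clf` and `x_test` from the cell above. Note that for its own predictions scikit-learn averages the trees' class probabilities rather than counting hard votes, but the result is usually very close.

```python
import numpy as np

# Each fitted sub-tree casts a 0/1 "vote" for every test sample...
tree_votes = np.array([tree.predict(x_test) for tree in clf.estimators_])
# ...and the majority vote gives the ensemble's class.
majority_vote = (tree_votes.mean(axis=0) >= 0.5).astype(int)
print("Agreement with clf.predict():", np.mean(majority_vote == clf.predict(x_test)))
```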
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
Now let's visualize our classification report:
plot_classification_report(classification_report(y_test, clf.predict(x_test)), title='Random Forest Classification Report')
_____no_output_____
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
SVM Model

Support vector machines (SVMs) are a set of supervised learning methods used for classification, regression, and outlier detection.

The advantages of support vector machines are:
+ Effective in high-dimensional spaces.
+ Still effective in cases where the number of dimensions is greater than the number of samples.
+ Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient.
+ Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels.

The disadvantages of support vector machines include:
- If the number of features is much greater than the number of samples, avoiding over-fitting through a careful choice of kernel function and regularization term is crucial.
- SVMs do not directly provide probability estimates; these are calculated using an expensive five-fold cross-validation (see Scores and probabilities, below).

You can read more about it [here](https://en.wikipedia.org/wiki/Support-vector_machine).
from sklearn import svm

SVM = svm.SVC(C=1.0, kernel='linear', degree=3, gamma='auto', probability=True)
SVM.fit(x_train, y_train)

print("Accuracy Score:", str(round(SVM.score(x_test, y_test), 4) * 100) + '%')
print('\nClassification SVM:\n', classification_report(y_test, SVM.predict(x_test)))

plot_classification_report(classification_report(y_test, SVM.predict(x_test)), title='SVM Classification Report')
_____no_output_____
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
Comparison of Models

A useful tool when predicting the probability of a binary outcome is the Receiver Operating Characteristic curve, or ROC curve. It is a plot of the false positive rate (x-axis) versus the true positive rate (y-axis) for a number of different candidate threshold values between 0.0 and 1.0. Put another way, it plots the false alarm rate versus the hit rate.

The true positive rate is calculated as the number of true positives divided by the sum of the number of true positives and the number of false negatives. It describes how good the model is at predicting the positive class when the actual outcome is positive.

The false positive rate is calculated as the number of false positives divided by the sum of the number of false positives and the number of true negatives. It is also called the false alarm rate, as it summarizes how often a positive class is predicted when the actual outcome is negative.

To make this clear:
* Smaller values on the x-axis of the plot indicate lower false positives and higher true negatives.
* Larger values on the y-axis of the plot indicate higher true positives and lower false negatives.
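As a quick illustration of those two rates (a sketch added here, not part of the original notebook), they can be read off a confusion matrix for the fitted Random Forest at the default 0.5 threshold, reusing `y_test`, `x_test` and `clf` from above:

```python
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_test, clf.predict(x_test)).ravel()
tpr = tp / (tp + fn)   # hit rate:         TP / (TP + FN)
fpr = fp / (fp + tn)   # false-alarm rate: FP / (FP + TN)
print("TPR = %.3f, FPR = %.3f" % (tpr, fpr))
```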
from sklearn import metrics
from sklearn.metrics import roc_curve, auc

fprKNN, tprKNN, thresholdsKNN = metrics.roc_curve(y_test, knn.predict_proba(x_test)[:, 1])
fprLR, tprLR, thresholdsLR = metrics.roc_curve(y_test, logit.predict_proba(x_test)[:, 1])
fprCLF, tprCLF, thresholdCLF = metrics.roc_curve(y_test, clf.predict_proba(x_test)[:, 1])
fprSVM, trpSVM, thresholdSVM = metrics.roc_curve(y_test, SVM.predict_proba(x_test)[:, 1])

linewidth = 2

plt.figure(figsize=(8, 5))
plt.plot(fprKNN, tprKNN, color='#db6114', lw=linewidth, label='ROC Curve KNN (AUC = %0.3f)' % auc(fprKNN, tprKNN))
plt.plot(fprLR, tprLR, color='#1565c0', lw=linewidth, label='ROC Curve Logistic Regression (AUC = %0.3f)' % auc(fprLR, tprLR))
plt.plot(fprCLF, tprCLF, color='#2e7d32', lw=linewidth, label='ROC Curve Random Forest (AUC = %0.3f)' % auc(fprCLF, tprCLF))
plt.plot(fprSVM, trpSVM, color='#6557d2', lw=linewidth, label='ROC Curve SVM (AUC = %0.3f)' % auc(fprSVM, trpSVM))
plt.plot([0, 1], [0, 1], color='#616161', lw=linewidth, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve Plots')
plt.legend(loc="lower right")
plt.show()
_____no_output_____
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
Based on these data, we can conclude that the best model so far is the **Random Forest Model**, with `AUC = 83.5%` and `Accuracy Score = 75.56%`. Let's save this model:
import pickle

with open('RFModel.pickle', 'wb') as m:
    pickle.dump(clf, m)  # clf is the fitted Random Forest classifier
_____no_output_____
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
Let's check if everything is loaded correctly:
with open('RFModel.pickle', 'rb') as m:
    rf = pickle.load(m)

print("Random Forest Accuracy Score:", str(round(rf.score(x_test, y_test), 4) * 100) + '%')
Random Forest Accuracy Score: 75.56%
MIT
Part II - Sentiment Analysis Classifications - Review and Comparison.ipynb
JackShen1/sentimento
Source tutorial: https://www.geeksforgeeks.org/machine-learning-for-anomaly-detection/
# Assumed imports (NumPy, SciPy stats and Matplotlib may already be loaded earlier in the notebook)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.font_manager
from scipy import stats

from pyod.models.knn import KNN
from pyod.utils.data import generate_data, get_outliers_inliers

# [1] CREATE SYNTHETIC DATA
npoints = 300

# Generating a random dataset with two features
X_train, y_train = generate_data(n_train = npoints, train_only = True, n_features = 2)

# Storing the outliers and inliers in different numpy arrays
X_outliers, X_inliers = get_outliers_inliers(X_train, y_train)
n_inliers = len(X_inliers)
n_outliers = len(X_outliers)
print("There are", n_inliers, "inliers and", n_outliers, "outliers")

# Separating the two features
f1 = X_train[:, [0]]  # .reshape(-1, 1)  # This destructures the array f1[:,0]
f2 = X_train[:, [1]]  # .reshape(-1, 1)

# [2] VISUALIZE THE DATA
# create a meshgrid
xx, yy = np.meshgrid(np.linspace(-10, 10, 200), np.linspace(-10, 10, 200))

# scatter plot
plt.scatter(f1, f2)
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')

# [3] TRAIN THE MODEL AND EVALUATE
# Setting the percentage of outliers
outlier_fraction = 0.1

# Training the classifier
clf = KNN(contamination = outlier_fraction)
clf.fit(X_train, y_train)

# You can print this to see all the prediction scores
scores_pred = clf.decision_function(X_train)*-1

y_pred = clf.predict(X_train)
n_errors = (y_pred != y_train).sum()
# Counting the number of errors
print('The number of prediction errors are', n_errors, ', equal to',
      "{:.2f}".format(100 * n_errors / npoints), '% out of', npoints, 'data points')

# [4] VISUALIZING THE PREDICTIONS
# threshold value to consider a
# datapoint inlier or outlier
threshold = stats.scoreatpercentile(scores_pred, 100 * outlier_fraction)

# decision function calculates the raw
# anomaly score for every point
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) * -1
Z = Z.reshape(xx.shape)

# fill blue colormap from minimum anomaly
# score to threshold value
subplot = plt.subplot(1, 1, 1)
subplot.contourf(xx, yy, Z,
                 levels = np.linspace(Z.min(), threshold, 10),
                 cmap = plt.cm.Blues_r)

# draw red contour line where anomaly
# score is equal to threshold
a = subplot.contour(xx, yy, Z,
                    levels =[threshold],
                    linewidths = 2, colors ='red')

# fill orange contour lines where range of anomaly
# score is from threshold to maximum anomaly score
subplot.contourf(xx, yy, Z,
                 levels =[threshold, Z.max()],
                 colors ='orange')

# scatter plot of inliers with white dots
b = subplot.scatter(X_train[:-n_outliers, 0], X_train[:-n_outliers, 1],
                    c ='white', s = 20, edgecolor ='k')

# scatter plot of outliers with black dots
c = subplot.scatter(X_train[-n_outliers:, 0], X_train[-n_outliers:, 1],
                    c ='black', s = 20, edgecolor ='k')
subplot.axis('tight')

subplot.legend(
    [a.collections[0], b, c],
    ['learned decision function', 'true inliers', 'true outliers'],
    prop = matplotlib.font_manager.FontProperties(size = 10),
    loc ='lower right')

subplot.set_title('K-Nearest Neighbours')
#subplot.set_xlim((-3.5, 4.5))
#subplot.set_ylim((-3.5, 4.5))
subplot.set_xlim((-10, 10))
subplot.set_ylim((-10, 10))
plt.show()
_____no_output_____
MIT
data/pyodKNN.DummyDataset-anomaly_detection.ipynb
therobotacademy/kaggle-anomaly-detection
Lesson 7: Pattern Challenges

In this set of exercises, we'll play with number patterns, shape patterns, and other types of repetition and variation. This is to get you thinking more flexibly about what you can do with code, so that you can better apply it to practical situations later.

Fibonacci

The fibonacci sequence of numbers starts from 1 and 1, and then the rest come from adding the two prior numbers and putting the answer after them, making a list. Here are the first numbers:

1 1 2 3 5 8 13 21 34 55 89

So 1 + 1 = 2. 1 + 2 = 3. 2 + 3 = 5. 3 + 5 = 8. And so on. Write a program to calculate and print the first 30 fibonacci numbers. The two 1's that start the sequence are given automatically; have the program calculate the rest.

Headlines

Write a program that asks for a headline text, and then prints a bar, a centered headline, and another bar. The bar should be 80 characters wide, using the = sign as the bar. The headline text should be centered within the width of the bars.

===================== center =====================

You can find out the length of the headline text by calling the len() function on the variable, like this: size = len(headline)

Arrow

Write a program that prints a text-art arrow like below (it does not need to have spaces ahead of it; that's just the way it shows up here). The program should ask the user for the width of the widest row, and then print the right size arrow to match. The widest row can be as much as 60 characters wide, so you don't want to have to make the rows by hand. This is an ideal place for a loop that makes the row the appropriate width. You might even like to create a drawRow() function with a loop inside it. You also need to figure out how many rows you need based on the width of the widest row. There's some arithmetic involved here. Take your time and work it out on paper, possibly gridded graph paper so you can count the boxes and think about it carefully.

Mountain Range

Now we're going to flip it sideways. (Yes, this will be completely new logic.) Ask the user for two numbers. One is the height of the mountains. The other is the number of mountains. Neither number will be bigger than 8. This is an example of mountains of height 3 with a total of 8 mountains. A single mountain looks like this. Write a program that asks for the number of rows high for the mountain, and asks how many mountains to print. Don't worry if it ends up too wide for your web browser when there are a lot of mountains; just focus on getting the overall logic correct.

Hundreds Chart

When children are learning to count to 100, there's a board of numbers often used to help them understand number relations. It's called a hundred chart. It looks like this:

 0  1  2  3  4  5  6  7  8  9
10 11 12 13 14 15 16 17 18 19
20 21 22 23 24 25 26 27 28 29
30 31 32 33 34 35 36 37 38 39
40 41 42 43 44 45 46 47 48 49

... and so on. Write a program that uses looping and if statements to print a well-formatted hundred chart from 0 to 99. You can print the 100 in the last row without worrying about its formatting. Note that you can use \n to indicate a newline: print "Hello\nJenny"
print "Hello\nJenny"
Hello
Jenny
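One possible sketch for the Fibonacci exercise above (not part of the original lesson, and written with Python 3's `print()` rather than the Python 2 style used in this notebook):

```python
# Start from the two given 1's, then keep appending the sum of the
# last two numbers until the list holds 30 fibonacci numbers.
fib = [1, 1]
while len(fib) < 30:
    fib.append(fib[-1] + fib[-2])
print(fib)
```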
Apache-2.0
lesson7exercises.ipynb
jennybrown8/python-notebook-coding-intro
Point cloud

This tutorial demonstrates basic usage of a point cloud.

Visualize point cloud

The first part of the tutorial reads a point cloud and visualizes it.
print("Load a ply point cloud, print it, and render it") ply_point_cloud = o3d.data.PLYPointCloud() pcd = o3d.io.read_point_cloud(ply_point_cloud.path) print(pcd) print(np.asarray(pcd.points)) o3d.visualization.draw_geometries([pcd], zoom=0.3412, front=[0.4257, -0.2125, -0.8795], lookat=[2.6172, 2.0475, 1.532], up=[-0.0694, -0.9768, 0.2024])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
`read_point_cloud` reads a point cloud from a file. It tries to decode the file based on the extension name. For a list of supported file types, refer to [File IO](file_io.ipynb).

`draw_geometries` visualizes the point cloud. Use a mouse/trackpad to see the geometry from different view points. It looks like a dense surface, but it is actually a point cloud rendered as surfels. The GUI supports various keyboard functions. For instance, the `-` key reduces the size of the points (surfels).

**Note:** Press the `H` key to print out a complete list of keyboard instructions for the GUI. For more information of the visualization GUI, refer to [Visualization](visualization.ipynb) and [Customized visualization](../visualization/customized_visualization.rst).

**Note:** On macOS, the GUI window may not receive keyboard events. In this case, try to launch Python with `pythonw` instead of `python`.

Voxel downsampling

Voxel downsampling uses a regular voxel grid to create a uniformly downsampled point cloud from an input point cloud. It is often used as a pre-processing step for many point cloud processing tasks. The algorithm operates in two steps:
1. Points are bucketed into voxels.
2. Each occupied voxel generates exactly one point by averaging all points inside.
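Before using Open3D's built-in `voxel_down_sample()` below, here is a rough NumPy sketch of those two steps, added purely for illustration (the helper name and implementation are not part of the tutorial):

```python
import numpy as np

def voxel_downsample_sketch(points, voxel_size):
    # Step 1: bucket points into voxels via their integer voxel index.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    buckets = {}
    for key, p in zip(map(tuple, voxel_idx), points):
        buckets.setdefault(key, []).append(p)
    # Step 2: each occupied voxel contributes the average of its points.
    return np.array([np.mean(pts, axis=0) for pts in buckets.values()])

# e.g.: voxel_downsample_sketch(np.asarray(pcd.points), voxel_size=0.05)
```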
print("Downsample the point cloud with a voxel of 0.05") downpcd = pcd.voxel_down_sample(voxel_size=0.05) o3d.visualization.draw_geometries([downpcd], zoom=0.3412, front=[0.4257, -0.2125, -0.8795], lookat=[2.6172, 2.0475, 1.532], up=[-0.0694, -0.9768, 0.2024])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Vertex normal estimation

Another basic operation for point cloud is point normal estimation. Press `N` to see point normals. The keys `-` and `+` can be used to control the length of the normal.
print("Recompute the normal of the downsampled point cloud") downpcd.estimate_normals( search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30)) o3d.visualization.draw_geometries([downpcd], zoom=0.3412, front=[0.4257, -0.2125, -0.8795], lookat=[2.6172, 2.0475, 1.532], up=[-0.0694, -0.9768, 0.2024], point_show_normal=True)
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
`estimate_normals` computes the normal for every point. The function finds adjacent points and calculates the principal axis of the adjacent points using covariance analysis.

The function takes an instance of the `KDTreeSearchParamHybrid` class as an argument. The two key arguments `radius = 0.1` and `max_nn = 30` specify the search radius and the maximum number of nearest neighbors: it uses a 10 cm search radius, and only considers up to 30 neighbors to save computation time.

**Note:** The covariance analysis algorithm produces two opposite directions as normal candidates. Without knowing the global structure of the geometry, both can be correct. This is known as the normal orientation problem. Open3D tries to orient the normal to align with the original normal if it exists. Otherwise, Open3D does a random guess. Further orientation functions such as `orient_normals_to_align_with_direction` and `orient_normals_towards_camera_location` need to be called if the orientation is a concern.

Access estimated vertex normal

Estimated normal vectors can be retrieved from the `normals` variable of `downpcd`.
print("Print a normal vector of the 0th point") print(downpcd.normals[0])
_____no_output_____
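If a consistent orientation matters, one of the orientation functions mentioned above can be applied after `estimate_normals()`. A minimal sketch (not part of the original tutorial; the camera location is an arbitrary choice):

```python
import numpy as np

# Flip all normals so they point toward a chosen camera location...
downpcd.orient_normals_towards_camera_location(camera_location=np.array([0., 0., 0.]))
# ...or, alternatively, align them with a fixed direction:
# downpcd.orient_normals_to_align_with_direction(orientation_reference=np.array([0., 0., 1.]))
print(downpcd.normals[0])
```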
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
To check out other variables, please use `help(downpcd)`. Normal vectors can be converted to a numpy array using `np.asarray`.
print("Print the normal vectors of the first 10 points") print(np.asarray(downpcd.normals)[:10, :])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Check [Working with NumPy](working_with_numpy.ipynb) for more examples regarding numpy arrays.

Crop point cloud
print("Load a polygon volume and use it to crop the original point cloud") demo_crop_data = o3d.data.DemoCropPointCloud() pcd = o3d.io.read_point_cloud(demo_crop_data.point_cloud_path) vol = o3d.visualization.read_selection_polygon_volume(demo_crop_data.cropped_json_path) chair = vol.crop_point_cloud(pcd) o3d.visualization.draw_geometries([chair], zoom=0.7, front=[0.5439, -0.2333, -0.8060], lookat=[2.4615, 2.1331, 1.338], up=[-0.1781, -0.9708, 0.1608])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
`read_selection_polygon_volume` reads a json file that specifies a polygon selection area. `vol.crop_point_cloud(pcd)` filters out points. Only the chair remains.

Paint point cloud
print("Paint chair") chair.paint_uniform_color([1, 0.706, 0]) o3d.visualization.draw_geometries([chair], zoom=0.7, front=[0.5439, -0.2333, -0.8060], lookat=[2.4615, 2.1331, 1.338], up=[-0.1781, -0.9708, 0.1608])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
`paint_uniform_color` paints all the points to a uniform color. The color is in RGB space, [0, 1] range.

Point cloud distance

Open3D provides the method `compute_point_cloud_distance` to compute the distance from a source point cloud to a target point cloud. I.e., it computes for each point in the source point cloud the distance to the closest point in the target point cloud.

In the example below we use the function to compute the difference between two point clouds. Note that this method could also be used to compute the Chamfer distance between two point clouds, as sketched after the example.
# Load data
demo_crop_data = o3d.data.DemoCropPointCloud()
pcd = o3d.io.read_point_cloud(demo_crop_data.point_cloud_path)
vol = o3d.visualization.read_selection_polygon_volume(demo_crop_data.cropped_json_path)
chair = vol.crop_point_cloud(pcd)

dists = pcd.compute_point_cloud_distance(chair)
dists = np.asarray(dists)
ind = np.where(dists > 0.01)[0]
pcd_without_chair = pcd.select_by_index(ind)
o3d.visualization.draw_geometries([pcd_without_chair],
                                  zoom=0.3412,
                                  front=[0.4257, -0.2125, -0.8795],
                                  lookat=[2.6172, 2.0475, 1.532],
                                  up=[-0.0694, -0.9768, 0.2024])
_____no_output_____
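As a follow-up to the Chamfer-distance remark above, a minimal sketch (not part of the original tutorial; it assumes the common convention of summing the two mean nearest-neighbour distances), reusing the `pcd` and `chair` clouds from the cell above:

```python
d_pcd_to_chair = np.asarray(pcd.compute_point_cloud_distance(chair))
d_chair_to_pcd = np.asarray(chair.compute_point_cloud_distance(pcd))
chamfer = d_pcd_to_chair.mean() + d_chair_to_pcd.mean()
print("Chamfer distance (pcd <-> chair):", chamfer)
```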
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Bounding volumes

The `PointCloud` geometry type has bounding volumes as all other geometry types in Open3D. Currently, Open3D implements an `AxisAlignedBoundingBox` and an `OrientedBoundingBox` that can also be used to crop the geometry.
aabb = chair.get_axis_aligned_bounding_box()
aabb.color = (1, 0, 0)
obb = chair.get_oriented_bounding_box()
obb.color = (0, 1, 0)
o3d.visualization.draw_geometries([chair, aabb, obb],
                                  zoom=0.7,
                                  front=[0.5439, -0.2333, -0.8060],
                                  lookat=[2.4615, 2.1331, 1.338],
                                  up=[-0.1781, -0.9708, 0.1608])
_____no_output_____
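As noted above, the bounding boxes can also be used to crop the geometry. A minimal sketch (not part of the original tutorial), reusing `chair`, `aabb` and `obb` from the cell above:

```python
chair_in_aabb = chair.crop(aabb)   # crop with the axis-aligned box
# chair_in_obb = chair.crop(obb)   # or with the oriented box
o3d.visualization.draw_geometries([chair_in_aabb])
```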
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Convex hull

The convex hull of a point cloud is the smallest convex set that contains all points. Open3D contains the method `compute_convex_hull` that computes the convex hull of a point cloud. The implementation is based on [Qhull](http://www.qhull.org/).

In the example code below we first sample a point cloud from a mesh and compute the convex hull that is returned as a triangle mesh. Then, we visualize the convex hull as a red `LineSet`.
bunny = o3d.data.BunnyMesh()
mesh = o3d.io.read_triangle_mesh(bunny.path)
mesh.compute_vertex_normals()

pcl = mesh.sample_points_poisson_disk(number_of_points=2000)
hull, _ = pcl.compute_convex_hull()
hull_ls = o3d.geometry.LineSet.create_from_triangle_mesh(hull)
hull_ls.paint_uniform_color((1, 0, 0))
o3d.visualization.draw_geometries([pcl, hull_ls])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
DBSCAN clustering

Given a point cloud from e.g. a depth sensor we want to group local point cloud clusters together. For this purpose, we can use clustering algorithms. Open3D implements DBSCAN [\[Ester1996\]](../reference.html#Ester1996), which is a density-based clustering algorithm. The algorithm is implemented in `cluster_dbscan` and requires two parameters: `eps` defines the distance to neighbors in a cluster and `min_points` defines the minimum number of points required to form a cluster. The function returns `labels`, where the label `-1` indicates noise.
ply_point_cloud = o3d.data.PLYPointCloud()
pcd = o3d.io.read_point_cloud(ply_point_cloud.path)

with o3d.utility.VerbosityContextManager(
        o3d.utility.VerbosityLevel.Debug) as cm:
    labels = np.array(
        pcd.cluster_dbscan(eps=0.02, min_points=10, print_progress=True))

max_label = labels.max()
print(f"point cloud has {max_label + 1} clusters")
colors = plt.get_cmap("tab20")(labels / (max_label if max_label > 0 else 1))
colors[labels < 0] = 0
pcd.colors = o3d.utility.Vector3dVector(colors[:, :3])
o3d.visualization.draw_geometries([pcd],
                                  zoom=0.455,
                                  front=[-0.4999, -0.1659, -0.8499],
                                  lookat=[2.1813, 2.0619, 2.0999],
                                  up=[0.1204, -0.9852, 0.1215])
_____no_output_____
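A small follow-up sketch (not part of the original tutorial) that summarizes the `labels` array returned above, counting the points per cluster and the noise points (label `-1`):

```python
import numpy as np

unique_labels, counts = np.unique(labels, return_counts=True)
for lab, cnt in zip(unique_labels, counts):
    print("noise" if lab == -1 else "cluster %d" % lab, ":", cnt, "points")
```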
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
**Note:** This algorithm precomputes all neighbors in the epsilon radius for all points. This can require a lot of memory if the chosen epsilon is too large.

Plane segmentation

Open3D also supports segmentation of geometric primitives from point clouds using RANSAC. To find the plane with the largest support in the point cloud, we can use `segment_plane`. The method has three arguments: `distance_threshold` defines the maximum distance a point can have to an estimated plane to be considered an inlier, `ransac_n` defines the number of points that are randomly sampled to estimate a plane, and `num_iterations` defines how often a random plane is sampled and verified. The function then returns the plane as $(a,b,c,d)$ such that for each point $(x,y,z)$ on the plane we have $ax + by + cz + d = 0$. The function further returns a list of indices of the inlier points.
pcd_point_cloud = o3d.data.PCDPointCloud()
pcd = o3d.io.read_point_cloud(pcd_point_cloud.path)

plane_model, inliers = pcd.segment_plane(distance_threshold=0.01,
                                         ransac_n=3,
                                         num_iterations=1000)
[a, b, c, d] = plane_model
print(f"Plane equation: {a:.2f}x + {b:.2f}y + {c:.2f}z + {d:.2f} = 0")

inlier_cloud = pcd.select_by_index(inliers)
inlier_cloud.paint_uniform_color([1.0, 0, 0])
outlier_cloud = pcd.select_by_index(inliers, invert=True)
o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
                                  zoom=0.8,
                                  front=[-0.4999, -0.1659, -0.8499],
                                  lookat=[2.1813, 2.0619, 2.0999],
                                  up=[0.1204, -0.9852, 0.1215])
_____no_output_____
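As a quick sanity check on the plane returned above (a sketch, not part of the original tutorial): every inlier's point-to-plane distance $|ax + by + cz + d| / \sqrt{a^2 + b^2 + c^2}$ should be at most the `distance_threshold` of 0.01 used in the call.

```python
import numpy as np

pts = np.asarray(inlier_cloud.points)
dists = np.abs(pts @ np.array([a, b, c]) + d) / np.linalg.norm([a, b, c])
print("max inlier point-to-plane distance:", dists.max())
```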
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Hidden point removal

Imagine you want to render a point cloud from a given view point, but points from the background leak into the foreground because they are not occluded by other points. For this purpose we can apply a hidden point removal algorithm. In Open3D the method by [\[Katz2007\]](../reference.html#Katz2007) is implemented that approximates the visibility of a point cloud from a given view without surface reconstruction or normal estimation.
print("Convert mesh to a point cloud and estimate dimensions") armadillo = o3d.data.ArmadilloMesh() mesh = o3d.io.read_triangle_mesh(armadillo.path) mesh.compute_vertex_normals() pcd = mesh.sample_points_poisson_disk(5000) diameter = np.linalg.norm( np.asarray(pcd.get_max_bound()) - np.asarray(pcd.get_min_bound())) o3d.visualization.draw_geometries([pcd]) print("Define parameters used for hidden_point_removal") camera = [0, 0, diameter] radius = diameter * 100 print("Get all points that are visible from given view point") _, pt_map = pcd.hidden_point_removal(camera, radius) print("Visualize result") pcd = pcd.select_by_index(pt_map) o3d.visualization.draw_geometries([pcd])
_____no_output_____
MIT
docs/jupyter/geometry/pointcloud.ipynb
pmokeev/Open3D
Note

* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# Dependencies and Setup
import pandas as pd

# File to Load (Remember to Change These)
school_data_to_load = "Resources/schools_complete.csv"
student_data_to_load = "Resources/students_complete.csv"

# Read School and Student Data File and store into Pandas DataFrames
school_data = pd.read_csv(school_data_to_load)
student_data = pd.read_csv(student_data_to_load)

# Combine the data into a single dataset.
school_data_complete = pd.merge(student_data, school_data, how="left", on=["school_name", "school_name"])
school_data_complete.head()
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
District Summary

* Calculate the total number of schools
* Calculate the total number of students
* Calculate the total budget
* Calculate the average math score
* Calculate the average reading score
* Calculate the percentage of students with a passing math score (70 or greater)
* Calculate the percentage of students with a passing reading score (70 or greater)
* Calculate the percentage of students who passed math **and** reading (% Overall Passing)
* Create a dataframe to hold the above results
* Optional: give the displayed data cleaner formatting
number_of_schools = len(school_data["School ID"].unique())
number_of_schools

number_of_students = len(student_data["Student ID"].unique())
number_of_students

total_budget = school_data["budget"].sum()
total_budget

avg_math_score = student_data["math_score"].mean()
avg_math_score

avg_read_score = student_data["reading_score"].mean()
avg_read_score

passing_math = (student_data["math_score"] >= 70)
math_passers = student_data.loc[passing_math]
number_math_passers = len(math_passers)
pct_pass_math = number_math_passers * 100 / number_of_students
pct_pass_math

passing_read = (student_data["reading_score"] >= 70)
read_passers = student_data.loc[passing_read]
number_read_passers = len(read_passers)
pct_pass_read = number_read_passers * 100 / number_of_students
pct_pass_read

pass_math_read = passing_math & passing_read
math_read_passers = student_data.loc[pass_math_read]
number_math_read_passers = len(math_read_passers)
pct_pass_read_math = number_math_read_passers * 100 / number_of_students
pct_pass_read_math

district_summary_df = pd.DataFrame(
    [
        {"number of schools": number_of_schools,
         "number of students": number_of_students,
         "total budget": total_budget,
         "average math score": avg_math_score,
         "average reading score": avg_read_score,
         "% passing math score": pct_pass_math,
         "% passing reading score": pct_pass_read,
         "% passing math and reading score": pct_pass_read_math
         }
    ]
)
district_summary_df

# Format final district summary
district_summary_df["number of students"] = district_summary_df["number of students"].map("{:,}".format)
district_summary_df["total budget"] = district_summary_df["total budget"].map("${:,}".format)
district_summary_df["average math score"] = district_summary_df["average math score"].map("{:.1f}".format)
district_summary_df["average reading score"] = district_summary_df["average reading score"].map("{:.1f}".format)
district_summary_df["% passing math score"] = district_summary_df["% passing math score"].map("{:.1f}%".format)
district_summary_df["% passing reading score"] = district_summary_df["% passing reading score"].map("{:.1f}%".format)
district_summary_df["% passing math and reading score"] = district_summary_df["% passing math and reading score"].map("{:.1f}%".format)
district_summary_df
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
School Summary

* Create an overview table that summarizes key metrics about each school, including:
  * School Name
  * School Type
  * Total Students
  * Total School Budget
  * Per Student Budget
  * Average Math Score
  * Average Reading Score
  * % Passing Math
  * % Passing Reading
  * % Overall Passing (The percentage of students that passed math **and** reading.)
* Create a dataframe to hold the above results
# Strategy: school_data already has the first few columns. Format school_data, then calculate
# additional series columns separately, then add each series column to the formatted dataframe

# start with formatting school_data. Important to set index for future merges
school_summary = (school_data.set_index("school_name")
                             .sort_values("school_name")
                             .rename(columns = {
                                 "type": "School Type",
                                 "size": "Total Students",
                                 "budget": "Total School Budget"
                                 }
                             )
                 )

# Calculate Per Student Budget series, append to school_summary
school_summary["Per Student Budget"] = school_summary["Total School Budget"] / school_summary["Total Students"]
school_summary.head(5)

# Group and compute average math and reading scores from student_data
school_score_mean = (student_data.groupby(by="school_name")
                                 .mean()
                    )
school_score_mean.head(5)

# Append average math score and average reading score to school_summary
school_summary["Average Math Score"] = school_score_mean["math_score"]
school_summary["Average Reading Score"] = school_score_mean["reading_score"]
school_summary.head(5)

# Get number of students passing math by school. Set index.
math_pass_by_school = (math_passers.set_index("school_name")
                                   .rename(columns={"Student ID": "Number Students Pass Math"})
                                   .groupby(by="school_name")
                                   .count()
                      )
math_pass_by_school.head(5)

# Get number of students passing reading by school. Set index.
read_pass_by_school = (read_passers.set_index("school_name")
                                   .rename(columns={"Student ID": "Number Students Pass Read"})
                                   .groupby(by="school_name")
                                   .count()
                      )
read_pass_by_school.head(5)

# Get number of students passing math and reading by school. Set index.
math_read_pass_by_school = (math_read_passers.set_index("school_name")
                                             .rename(columns={"Student ID": "Number Students Pass Math and Read"})
                                             .groupby(by="school_name")
                                             .count()
                           )
math_read_pass_by_school.head(5)

# Divide number of students passing by number of students per school, then append columns
# to school_summary dataframe
school_summary["% Passing Math"] = math_pass_by_school["Number Students Pass Math"] / school_summary["Total Students"] * 100
school_summary["% Passing Reading"] = read_pass_by_school["Number Students Pass Read"] / school_summary["Total Students"] * 100
school_summary["% Overall Passing"] = math_read_pass_by_school["Number Students Pass Math and Read"] / school_summary["Total Students"] * 100
school_summary.head()

# Make an unformatted copy to use in 'Scores by School Spending' later on
school_summary_unformatted = school_summary.copy()

# Add formatting to school_summary. This turns some float columns into strings
school_summary["Total School Budget"] = school_summary["Total School Budget"].map("${:,.2f}".format)
school_summary["Per Student Budget"] = school_summary["Per Student Budget"].map("${:,.2f}".format)
school_summary["Average Math Score"] = school_summary["Average Math Score"].map("{:.2f}".format)
school_summary["Average Reading Score"] = school_summary["Average Reading Score"].map("{:.2f}".format)
school_summary["% Passing Math"] = school_summary["% Passing Math"].map("{:.2f}%".format)
school_summary["% Passing Reading"] = school_summary["% Passing Reading"].map("{:.2f}%".format)
school_summary["% Overall Passing"] = school_summary["% Overall Passing"].map("{:.2f}%".format)
school_summary
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Top Performing Schools (By % Overall Passing) * Sort and display the top five performing schools by % overall passing.
(school_summary.sort_values("% Overall Passing", ascending=False) .head(5) )
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Bottom Performing Schools (By % Overall Passing) * Sort and display the five worst-performing schools by % overall passing.
(school_summary.sort_values("% Overall Passing", ascending=True) .head(5) )
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Math Scores by Grade

* Create a table that lists the average Math Score for students of each grade level (9th, 10th, 11th, 12th) at each school.
  * Create a pandas series for each grade. Hint: use a conditional statement.
  * Group each series by school
  * Combine the series into a dataframe
  * Optional: give the displayed data cleaner formatting
# Index student_data and get only relevant columns
score_by_grade = student_data[["school_name", "grade", "math_score"]].set_index("school_name")

# Create initial math_by_school dataframe, then create additional series and append them to
# the dataframe
math_by_school = (score_by_grade.loc[score_by_grade["grade"] == "9th"]
                                .groupby(by="school_name")
                                .mean()
                                .rename(columns={"math_score": "9th"})
                 )
math_by_school["10th"] = (score_by_grade.loc[score_by_grade["grade"] == "10th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
math_by_school["11th"] = (score_by_grade.loc[score_by_grade["grade"] == "11th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
math_by_school["12th"] = (score_by_grade.loc[score_by_grade["grade"] == "12th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
math_by_school
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Reading Score by Grade * Perform the same operations as above for reading scores
score_by_grade = student_data[["school_name", "grade", "reading_score"]].set_index("school_name")

# Create initial read_by_school dataframe, then create additional series and append them to
# the dataframe
read_by_school = (score_by_grade.loc[score_by_grade["grade"] == "9th"]
                                .groupby(by="school_name")
                                .mean()
                                .rename(columns={"reading_score": "9th"})
                 )
read_by_school["10th"] = (score_by_grade.loc[score_by_grade["grade"] == "10th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
read_by_school["11th"] = (score_by_grade.loc[score_by_grade["grade"] == "11th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
read_by_school["12th"] = (score_by_grade.loc[score_by_grade["grade"] == "12th"]
                                        .groupby(by="school_name")
                                        .mean()
                         )
read_by_school
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Scores by School Spending

* Create a table that breaks down school performances based on average Spending Ranges (Per Student). Use 4 reasonable bins to group school spending. Include in the table each of the following:
  * Average Math Score
  * Average Reading Score
  * % Passing Math
  * % Passing Reading
  * Overall Passing Rate (Average of the above two)
# Use school_summary_unformatted dataframe that still has numeric columns as float
# Define the cut parameters
series_to_cut = school_summary_unformatted["Per Student Budget"]
bins_to_fill = [0, 584.9, 629.9, 644.9, 675.9]
bin_labels = ["<$584", "$585-629", "$630-644", "$645-675"]

# New column with the bin definition into school_summary_unformatted
school_summary_unformatted["Spending Ranges (per student)"] = pd.cut(x=series_to_cut,
                                                                     bins=bins_to_fill,
                                                                     labels=bin_labels)

# Exclude unneeded columns, group by the bin series and take the average of the scores
scores_by_spending = (school_summary_unformatted.groupby(by="Spending Ranges (per student)")
                                                .mean()
                     )
scores_by_spending_final = scores_by_spending[["Average Math Score",
                                               "Average Reading Score",
                                               "% Passing Math",
                                               "% Passing Reading",
                                               "% Overall Passing"]]
scores_by_spending_final
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Scores by School Size * Perform the same operations as above, based on school size.
# Use school_summary_unformatted dataframe that still has numeric columns as float
# Define the cut parameters
series_to_cut = school_summary_unformatted["Total Students"]
bins_to_fill = [0, 1799.9, 2999.9, 4999.9]
bin_labels = ["Small (< 1800)", "Medium (1800-2999)", "Large (3000-5000)"]

# New column with the bin definition into school_summary_unformatted
school_summary_unformatted["School Size"] = pd.cut(x=series_to_cut,
                                                   bins=bins_to_fill,
                                                   labels=bin_labels)

# Exclude unneeded columns, group by the bin series and take the average of the scores
scores_by_school_size = (school_summary_unformatted.groupby(by="School Size")
                                                   .mean()
                        )
scores_by_school_size_final = scores_by_school_size[["Average Math Score",
                                                     "Average Reading Score",
                                                     "% Passing Math",
                                                     "% Passing Reading",
                                                     "% Overall Passing"]]
scores_by_school_size_final
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
Scores by School Type * Perform the same operations as above, based on school type
# No cut action needed since 'School Type' is not numeric. Can be grouped as is.
# Exclude unneeded columns, group by School Type and take the average of the scores
scores_by_school_type = (school_summary_unformatted.groupby(by="School Type")
                                                   .mean()
                        )
scores_by_school_type_final = scores_by_school_type[["Average Math Score",
                                                     "Average Reading Score",
                                                     "% Passing Math",
                                                     "% Passing Reading",
                                                     "% Overall Passing"]]
scores_by_school_type_final
_____no_output_____
Apache-2.0
PyCitySchools/PyCitySchools_1.ipynb
githotirado/pandas-challenge
BSSN Time-Evolution C Code Generation Library

Author: Zach Etienne

This module implements a number of helper functions for generating C-code kernels that solve Einstein's equations in the covariant BSSN formalism [described in this NRPy+ tutorial notebook](Tutorial-BSSN_formulation.ipynb)

**Notebook Status:** Not yet validated

**Validation Notes:** This module has NOT been validated to exhibit convergence to zero of the Hamiltonian constraint violation at the expected order to the exact solution *after a short numerical evolution of the initial data* (see [plots at bottom](convergence)), and all quantities have been validated against the [original SENR code](https://bitbucket.org/zach_etienne/nrpy).

NRPy+ modules that generate needed symbolic expressions:
* [BSSN/BSSN_constraints.py](../edit/BSSN/BSSN_constraints.py); [\[**tutorial**\]](Tutorial-BSSN_constraints.ipynb): Hamiltonian constraint in BSSN curvilinear basis/coordinates
* [BSSN/BSSN_RHSs.py](../edit/BSSN/BSSN_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_RHSs.ipynb): Generates the right-hand sides for the BSSN evolution equations in singular, curvilinear coordinates
* [BSSN/BSSN_gauge_RHSs.py](../edit/BSSN/BSSN_gauge_RHSs.py); [\[**tutorial**\]](Tutorial-BSSN_time_evolution-BSSN_gauge_RHSs.ipynb): Generates the right-hand sides for the BSSN gauge evolution equations in singular, curvilinear coordinates
* [BSSN/Enforce_Detgammahat_Constraint.py](../edit/BSSN/Enforce_Detgammahat_Constraint.py); [**tutorial**](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb): Generates symbolic expressions for enforcing the $\det{\bar{\gamma}}=\det{\hat{\gamma}}$ constraint

Introduction:

Here we use NRPy+ to generate the C source code kernels necessary to generate C functions needed/useful for evolving forward in time the BSSN equations, including:
1. the BSSN RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration, with arbitrary gauge choice.
1. the BSSN constraints as a check of numerical error

Table of Contents
$$\label{toc}$$

This notebook is organized as follows

1. [Step 1](importmodules): Import needed Python modules
1. [Step 2](helperfuncs): Helper Python functions for C code generation
1. [Step 3.a](bssnrhs): Generate symbolic BSSN RHS expressions
1. [Step 3.b](bssnrhs_c_code): `rhs_eval()`: Register C function for evaluating BSSN RHS expressions
1. [Step 3.c](ricci): Generate symbolic expressions for 3-Ricci tensor $\bar{R}_{ij}$
1. [Step 3.d](ricci_c_code): `Ricci_eval()`: Register C function for evaluating 3-Ricci tensor $\bar{R}_{ij}$
1. [Step 4.a](bssnconstraints): Generate symbolic expressions for BSSN Hamiltonian & momentum constraints
1. [Step 4.b](bssnconstraints_c_code): `BSSN_constraints()`: Register C function for evaluating BSSN Hamiltonian & momentum constraints
1. [Step 5](enforce3metric): `enforce_detgammahat_constraint()`: Register C function for enforcing the conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint
1. [Step 6.a](psi4): `psi4_part_{0,1,2}()`: Register C function for evaluating Weyl scalar $\psi_4$, in 3 parts (3 functions)
1. [Step 6.b](psi4_tetrad): `psi4_tetrad()`: Register C function for evaluating Weyl scalar $\psi_4$ tetrad
1. [Step 6.c](swm2): `SpinWeight_minus2_SphHarmonics()`: Register C function for evaluating spin-weight $s=-2$ spherical harmonics
1. [Step 7](validation): Confirm above functions are bytecode-identical to those in `BSSN/BSSN_Ccodegen_library.py`
1. [Step 8](latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file

Step 1: Import needed Python modules \[Back to [top](toc)\]
$$\label{importmodules}$$
# RULES FOR ADDING FUNCTIONS TO THIS ROUTINE:
# 1. The function must be runnable from a multiprocessing environment,
#    which means that the function
#    1.a: cannot depend on previous function calls.
#    1.b: cannot create directories (this is not multiproc friendly)

# Step P1: Import needed NRPy+ core modules:
from outputC import lhrh, add_to_Cfunction_dict  # NRPy+: Core C code output module
import finite_difference as fin       # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par        # NRPy+: Parameter interface
import grid as gri                    # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp              # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm        # NRPy+: Reference metric support
from pickling import pickle_NRPy_env  # NRPy+: Pickle/unpickle NRPy+ environment, for parallel codegen
import os, time                       # Standard Python modules for multiplatform OS-level functions, benchmarking
import BSSN.BSSN_RHSs as rhs
import BSSN.BSSN_gauge_RHSs as gaugerhs
import loop as lp
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 2: Helper Python functions for C code generation \[Back to [top](toc)\]
$$\label{helperfuncs}$$

* `print_msg_with_timing()` gives the user an idea of what's going on/taking so long. Also outputs timing info.
* `get_loopopts()` sets up options for NRPy+'s `loop` module
* `register_stress_energy_source_terms_return_T4UU()` registers gridfunctions for $T^{\mu\nu}$ if needed and not yet registered.
###############################################
# Helper Python functions for C code generation

# print_msg_with_timing() gives the user an idea of what's going on/taking so long. Also outputs timing info.
def print_msg_with_timing(desc, msg="Symbolic", startstop="start", starttime=0.0):
    CoordSystem = par.parval_from_str("reference_metric::CoordSystem")
    elapsed = time.time()-starttime
    if msg == "Symbolic":
        if startstop == "start":
            print("Generating symbolic expressions for " + desc + " (%s coords)..." % CoordSystem)
            return time.time()
        else:
            print("Finished generating symbolic expressions for "+desc+
                  " (%s coords) in %.1f seconds. Next up: C codegen..." % (CoordSystem, elapsed))
    elif msg == "Ccodegen":
        if startstop == "start":
            print("Generating C code for "+desc+" (%s coords)..." % CoordSystem)
            return time.time()
        else:
            print("Finished generating C code for "+desc+" (%s coords) in %.1f seconds." % (CoordSystem, elapsed))


# get_loopopts() sets up options for NRPy+'s loop module
def get_loopopts(points_to_update, enable_SIMD, enable_rfm_precompute, OMP_pragma_on, enable_xxs=True):
    loopopts = points_to_update + ",includebraces=False"
    if enable_SIMD:
        loopopts += ",enable_SIMD"
    if enable_rfm_precompute:
        loopopts += ",enable_rfm_precompute"
    elif not enable_xxs:
        pass
    else:
        loopopts += ",Read_xxs"
    if OMP_pragma_on != "i2":
        loopopts += ",pragma_on_"+OMP_pragma_on
    return loopopts


# register_stress_energy_source_terms_return_T4UU() registers gridfunctions
#   for T4UU if needed and not yet registered.
def register_stress_energy_source_terms_return_T4UU(enable_stress_energy_source_terms):
    if enable_stress_energy_source_terms:
        registered_already = False
        for i in range(len(gri.glb_gridfcs_list)):
            if gri.glb_gridfcs_list[i].name == "T4UU00":
                registered_already = True
        if not registered_already:
            return ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "T4UU", "sym01", DIM=4)
        else:
            return ixp.declarerank2("T4UU", "sym01", DIM=4)
    return None
_____no_output_____
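A quick illustration of `get_loopopts()` defined above (a sketch added here, not part of the original notebook; the argument values are arbitrary):

```python
# Loop options for an interior-point loop with SIMD and rfm-precompute enabled,
# keeping the OpenMP pragma on the default i2 loop.
print(get_loopopts("InteriorPoints", enable_SIMD=True, enable_rfm_precompute=True, OMP_pragma_on="i2"))
# expected output: InteriorPoints,includebraces=False,enable_SIMD,enable_rfm_precompute
```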
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 3.a: Generate symbolic BSSN RHS expressions \[Back to [top](toc)\]
$$\label{bssnrhs}$$

First we generate the symbolic expressions. Be sure to call this function from within a `reference_metric::enable_rfm_precompute="True"` environment if reference metric precomputation is desired.

`BSSN_RHSs__generate_symbolic_expressions()` supports the following features
* (`"OnePlusLog"` by default) Lapse gauge choice
* (`"GammaDriving2ndOrder_Covariant"` by default) Shift gauge choice
* (enabled by default) Kreiss-Oliger dissipation
* (disabled by default) Stress-energy ($T^{\mu\nu}$) source terms
* (enabled by default) "Leave Ricci symbolic": do not compute the 3-Ricci tensor $\bar{R}_{ij}$ within the BSSN RHSs, which only adds to the extreme complexity of the BSSN RHS expressions. Instead leave computation of $\bar{R}_{ij}$=`RbarDD` to a separate function. Doing this generally increases C-code performance by about 10%.

Two lists are returned by this function:
1. `betaU`: the un-rescaled shift vector $\beta^i$, which is used to perform upwinding.
1. `BSSN_RHSs_SymbExpressions`: the BSSN RHS symbolic expressions, using the `lhrh` named-tuple to store a list of LHSs and RHSs, where each LHS and RHS is defined as follows
    1. LHS = BSSN gridfunction whose time derivative is being computed at grid point `i0,i1,i2`, and
    1. RHS = time derivative expression for given variable at the given point.
def BSSN_RHSs__generate_symbolic_expressions(LapseCondition="OnePlusLog",
                                             ShiftCondition="GammaDriving2ndOrder_Covariant",
                                             enable_KreissOliger_dissipation=True,
                                             enable_stress_energy_source_terms=False,
                                             leave_Ricci_symbolic=True):
    ######################################
    # START: GENERATE SYMBOLIC EXPRESSIONS
    starttime = print_msg_with_timing("BSSN_RHSs", msg="Symbolic", startstop="start")

    # Returns None if enable_stress_energy_source_terms==False; otherwise returns symb expressions for T4UU
    T4UU = register_stress_energy_source_terms_return_T4UU(enable_stress_energy_source_terms)

    # Evaluate BSSN RHSs:
    import BSSN.BSSN_quantities as Bq
    par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic", str(leave_Ricci_symbolic))
    rhs.BSSN_RHSs()

    if enable_stress_energy_source_terms:
        import BSSN.BSSN_stress_energy_source_terms as Bsest
        Bsest.BSSN_source_terms_for_BSSN_RHSs(T4UU)
        rhs.trK_rhs += Bsest.sourceterm_trK_rhs
        for i in range(3):
            # Needed for Gamma-driving shift RHSs:
            rhs.Lambdabar_rhsU[i] += Bsest.sourceterm_Lambdabar_rhsU[i]
            # Needed for BSSN RHSs:
            rhs.lambda_rhsU[i] += Bsest.sourceterm_lambda_rhsU[i]
            for j in range(3):
                rhs.a_rhsDD[i][j] += Bsest.sourceterm_a_rhsDD[i][j]

    par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::LapseEvolutionOption", LapseCondition)
    par.set_parval_from_str("BSSN.BSSN_gauge_RHSs::ShiftEvolutionOption", ShiftCondition)
    gaugerhs.BSSN_gauge_RHSs()  # Can depend on above RHSs

    # Restore BSSN.BSSN_quantities::LeaveRicciSymbolic to False
    par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic", "False")

    # Add Kreiss-Oliger dissipation to the BSSN RHSs:
    if enable_KreissOliger_dissipation:
        thismodule = "KO_Dissipation"
        diss_strength = par.Cparameters("REAL", thismodule, "diss_strength", 0.1)  # *Bq.cf # *Bq.cf*Bq.cf*Bq.cf # cf**1 is found better than cf**4 over the long term.

        alpha_dKOD = ixp.declarerank1("alpha_dKOD")
        cf_dKOD = ixp.declarerank1("cf_dKOD")
        trK_dKOD = ixp.declarerank1("trK_dKOD")
        betU_dKOD = ixp.declarerank2("betU_dKOD", "nosym")
        vetU_dKOD = ixp.declarerank2("vetU_dKOD", "nosym")
        lambdaU_dKOD = ixp.declarerank2("lambdaU_dKOD", "nosym")
        aDD_dKOD = ixp.declarerank3("aDD_dKOD", "sym01")
        hDD_dKOD = ixp.declarerank3("hDD_dKOD", "sym01")
        for k in range(3):
            gaugerhs.alpha_rhs += diss_strength * alpha_dKOD[k] * rfm.ReU[k]  # ReU[k] = 1/scalefactor_orthog_funcform[k]
            rhs.cf_rhs         += diss_strength * cf_dKOD[k]    * rfm.ReU[k]  # ReU[k] = 1/scalefactor_orthog_funcform[k]
            rhs.trK_rhs        += diss_strength * trK_dKOD[k]   * rfm.ReU[k]  # ReU[k] = 1/scalefactor_orthog_funcform[k]
            for i in range(3):
                if "2ndOrder" in ShiftCondition:
                    gaugerhs.bet_rhsU[i] += diss_strength * betU_dKOD[i][k] * rfm.ReU[k]  # ReU[k] = 1/scalefactor_orthog_funcform[k]
                gaugerhs.vet_rhsU[i] += diss_strength * vetU_dKOD[i][k]    * rfm.ReU[k]   # ReU[k] = 1/scalefactor_orthog_funcform[k]
                rhs.lambda_rhsU[i]   += diss_strength * lambdaU_dKOD[i][k] * rfm.ReU[k]   # ReU[k] = 1/scalefactor_orthog_funcform[k]
                for j in range(3):
                    rhs.a_rhsDD[i][j] += diss_strength * aDD_dKOD[i][j][k] * rfm.ReU[k]   # ReU[k] = 1/scalefactor_orthog_funcform[k]
                    rhs.h_rhsDD[i][j] += diss_strength * hDD_dKOD[i][j][k] * rfm.ReU[k]   # ReU[k] = 1/scalefactor_orthog_funcform[k]

    # We use betaU as our upwinding control vector:
    Bq.BSSN_basic_tensors()
    betaU = Bq.betaU
    # END: GENERATE SYMBOLIC EXPRESSIONS
    ######################################

    lhs_names = ["alpha", "cf", "trK"]
    rhs_exprs = [gaugerhs.alpha_rhs, rhs.cf_rhs, rhs.trK_rhs]
    for i in range(3):
        lhs_names.append("betU" + str(i))
        rhs_exprs.append(gaugerhs.bet_rhsU[i])
        lhs_names.append("lambdaU" + str(i))
        rhs_exprs.append(rhs.lambda_rhsU[i])
        lhs_names.append("vetU" + str(i))
        rhs_exprs.append(gaugerhs.vet_rhsU[i])
        for j in range(i, 3):
            lhs_names.append("aDD" + str(i) + str(j))
            rhs_exprs.append(rhs.a_rhsDD[i][j])
            lhs_names.append("hDD" + str(i) + str(j))
            rhs_exprs.append(rhs.h_rhsDD[i][j])

    # Sort the lhss list alphabetically, and rhss to match.
    #   This ensures the RHSs are evaluated in the same order
    #   they're allocated in memory:
    lhs_names, rhs_exprs = [list(x) for x in zip(*sorted(zip(lhs_names, rhs_exprs), key=lambda pair: pair[0]))]

    # Declare the list of lhrh's
    BSSN_RHSs_SymbExpressions = []
    for var in range(len(lhs_names)):
        BSSN_RHSs_SymbExpressions.append(lhrh(lhs=gri.gfaccess("rhs_gfs", lhs_names[var]),
                                              rhs=rhs_exprs[var]))

    print_msg_with_timing("BSSN_RHSs", msg="Symbolic", startstop="stop", starttime=starttime)
    return [betaU, BSSN_RHSs_SymbExpressions]
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 3.b: `rhs_eval()`: Register C code for BSSN RHS expressions \[Back to [top](toc)\]
$$\label{bssnrhs_c_code}$$

`add_rhs_eval_to_Cfunction_dict()` supports the following features
* (enabled by default) reference-metric precomputation
* (disabled by default) "golden kernels", which greatly increases the C-code generation time in an attempt to reduce computational cost. Most often this results in no speed-up.
* (enabled by default) SIMD output
* (disabled by default) splitting of RHSs into smaller pieces (multiple loops) to improve performance. Doesn't help much.
* (`"OnePlusLog"` by default) Lapse gauge choice
* (`"GammaDriving2ndOrder_Covariant"` by default) Shift gauge choice
* (disabled by default) enable Kreiss-Oliger dissipation
* (disabled by default) add stress-energy ($T^{\mu\nu}$) source terms
* (enabled by default) "Leave Ricci symbolic": do not compute the 3-Ricci tensor $\bar{R}_{ij}$ within the BSSN RHSs, which only adds to the extreme complexity of the BSSN RHS expressions. Instead leave computation of $\bar{R}_{ij}$=`RbarDD` to a separate function. Doing this generally increases C-code performance by about 10%.
* (`"i2"` by default) OpenMP pragma acts on which loop (assumes `i2` is outermost and `i0` is innermost loop). For axisymmetric or near-axisymmetric calculations, `"i1"` may be *significantly* faster.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
def add_rhs_eval_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, enable_golden_kernels=False, enable_SIMD=True, enable_split_for_optimizations_doesnt_help=False, LapseCondition="OnePlusLog", ShiftCondition="GammaDriving2ndOrder_Covariant", enable_KreissOliger_dissipation=False, enable_stress_energy_source_terms=False, leave_Ricci_symbolic=True, OMP_pragma_on="i2", func_name_suffix=""): if includes is None: includes = [] if enable_SIMD: includes += [os.path.join("SIMD", "SIMD_intrinsics.h")] enable_FD_functions = bool(par.parval_from_str("finite_difference::enable_FD_functions")) if enable_FD_functions: includes += ["finite_difference_functions.h"] # Set up the C function for the BSSN RHSs desc = "Evaluate the BSSN RHSs" name = "rhs_eval" + func_name_suffix params = "const paramstruct *restrict params, " if enable_rfm_precompute: params += "const rfm_struct *restrict rfmstruct, " else: params += "REAL *xx[3], " params += """ const REAL *restrict auxevol_gfs,const REAL *restrict in_gfs,REAL *restrict rhs_gfs""" betaU, BSSN_RHSs_SymbExpressions = \ BSSN_RHSs__generate_symbolic_expressions(LapseCondition=LapseCondition, ShiftCondition=ShiftCondition, enable_KreissOliger_dissipation=enable_KreissOliger_dissipation, enable_stress_energy_source_terms=enable_stress_energy_source_terms, leave_Ricci_symbolic=leave_Ricci_symbolic) # Construct body: preloop="" enableCparameters=True # Set up preloop in case we're outputting code for the Einstein Toolkit (ETK) if par.parval_from_str("grid::GridFuncMemAccess") == "ETK": params, preloop = set_ETK_func_params_preloop(func_name_suffix) enableCparameters=False FD_outCparams = "outCverbose=False,enable_SIMD=" + str(enable_SIMD) FD_outCparams += ",GoldenKernelsEnable=" + str(enable_golden_kernels) loopopts = get_loopopts("InteriorPoints", enable_SIMD, enable_rfm_precompute, OMP_pragma_on) FDorder = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER") starttime = print_msg_with_timing("BSSN_RHSs (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="start") if enable_split_for_optimizations_doesnt_help and FDorder == 6: loopopts += ",DisableOpenMP" BSSN_RHSs_SymbExpressions_pt1 = [] BSSN_RHSs_SymbExpressions_pt2 = [] for lhsrhs in BSSN_RHSs_SymbExpressions: if "BETU" in lhsrhs.lhs or "LAMBDAU" in lhsrhs.lhs: BSSN_RHSs_SymbExpressions_pt1.append(lhrh(lhs=lhsrhs.lhs, rhs=lhsrhs.rhs)) else: BSSN_RHSs_SymbExpressions_pt2.append(lhrh(lhs=lhsrhs.lhs, rhs=lhsrhs.rhs)) preloop += """#pragma omp parallel { """ preloopbody = fin.FD_outputC("returnstring", BSSN_RHSs_SymbExpressions_pt1, params=FD_outCparams, upwindcontrolvec=betaU) preloop += "\n#pragma omp for\n" + lp.simple_loop(loopopts, preloopbody) preloop += "\n#pragma omp for\n" body = fin.FD_outputC("returnstring", BSSN_RHSs_SymbExpressions_pt2, params=FD_outCparams, upwindcontrolvec=betaU) postloop = "\n } // END #pragma omp parallel\n" else: preloop += "" body = fin.FD_outputC("returnstring", BSSN_RHSs_SymbExpressions, params=FD_outCparams, upwindcontrolvec=betaU) postloop = "" print_msg_with_timing("BSSN_RHSs (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, preloop=preloop, body=body, loopopts=loopopts, postloop=postloop, rel_path_to_Cparams=rel_path_to_Cparams, enableCparameters=enableCparameters) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 3.c: Generate symbolic expressions for 3-Ricci tensor $\bar{R}_{ij}$ \[Back to [top](toc)\]
$$\label{ricci}$$

As described above, we find a roughly 10% speedup by computing the 3-Ricci tensor $\bar{R}_{ij}$ separately from the BSSN RHS equations and storing the 6 independent components in memory. Here we construct the symbolic expressions for all 6 independent components of $\bar{R}_{ij}$ (which is symmetric under interchange of indices).

`Ricci__generate_symbolic_expressions()` does not support any input parameters.

One list is returned by `Ricci__generate_symbolic_expressions()`: `Ricci_SymbExpressions`, which contains a list of expressions for the six independent components of $\bar{R}_{ij}$, using the `lhrh` named-tuple to store a list of LHSs and RHSs, where each LHS and RHS is defined as follows
1. LHS = gridfunction representation of the component of $\bar{R}_{ij}$, computed at grid point i0,i1,i2, and
1. RHS = expression for given component of $\bar{R}_{ij}$.
def Ricci__generate_symbolic_expressions(): ###################################### # START: GENERATE SYMBOLIC EXPRESSIONS starttime = print_msg_with_timing("3-Ricci tensor", msg="Symbolic", startstop="start") # Evaluate 3-Ricci tensor: import BSSN.BSSN_quantities as Bq par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic", "False") # Register all BSSN gridfunctions if not registered already Bq.BSSN_basic_tensors() # Next compute Ricci tensor Bq.RicciBar__gammabarDD_dHatD__DGammaUDD__DGammaU() # END: GENERATE SYMBOLIC EXPRESSIONS ###################################### # Must register RbarDD as gridfunctions, as we're outputting them to gridfunctions here: foundit = False for i in range(len(gri.glb_gridfcs_list)): if "RbarDD00" in gri.glb_gridfcs_list[i].name: foundit = True if not foundit: ixp.register_gridfunctions_for_single_rank2("AUXEVOL", "RbarDD", "sym01") Ricci_SymbExpressions = [lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD00"), rhs=Bq.RbarDD[0][0]), lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD01"), rhs=Bq.RbarDD[0][1]), lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD02"), rhs=Bq.RbarDD[0][2]), lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD11"), rhs=Bq.RbarDD[1][1]), lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD12"), rhs=Bq.RbarDD[1][2]), lhrh(lhs=gri.gfaccess("auxevol_gfs", "RbarDD22"), rhs=Bq.RbarDD[2][2])] print_msg_with_timing("3-Ricci tensor", msg="Symbolic", startstop="stop", starttime=starttime) return Ricci_SymbExpressions
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 3.d: `Ricci_eval()`: Register C function for evaluating 3-Ricci tensor $\bar{R}_{ij}$ \[Back to [top](toc)\]$$\label{ricci_c_code}$$`add_Ricci_eval_to_Cfunction_dict()` supports the following features:
* (enabled by default) reference-metric precomputation
* (disabled by default) "golden kernels", which greatly increases the C-code generation time in an attempt to reduce computational cost. Most often this results in no speed-up.
* (enabled by default) SIMD output
* (disabled by default) splitting of RHSs into smaller pieces (multiple loops) to improve performance. Doesn't help much.
* (`"i2"` by default) OpenMP pragma acts on which loop (assumes `i2` is outermost and `i0` is innermost loop). For axisymmetric or near-axisymmetric calculations, `"i1"` may be *significantly* faster.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
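For illustration, a typical registration call might look like the following (a sketch with assumed keyword choices, not a prescription; the full definition appears in the next cell):

```
import os

# Registers "Ricci_eval" in the C-function dictionary and returns the pickled NRPy+ environment.
env = add_Ricci_eval_to_Cfunction_dict(rel_path_to_Cparams=os.path.join("."),
                                       enable_rfm_precompute=True,
                                       enable_SIMD=True,
                                       OMP_pragma_on="i2")
```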
def add_Ricci_eval_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, enable_golden_kernels=False, enable_SIMD=True, enable_split_for_optimizations_doesnt_help=False, OMP_pragma_on="i2", func_name_suffix=""): if includes is None: includes = [] if enable_SIMD: includes += [os.path.join("SIMD", "SIMD_intrinsics.h")] enable_FD_functions = bool(par.parval_from_str("finite_difference::enable_FD_functions")) if enable_FD_functions: includes += ["finite_difference_functions.h"] # Set up the C function for the 3-Ricci tensor desc = "Evaluate the 3-Ricci tensor" name = "Ricci_eval" + func_name_suffix params = "const paramstruct *restrict params, " if enable_rfm_precompute: params += "const rfm_struct *restrict rfmstruct, " else: params += "REAL *xx[3], " params += "const REAL *restrict in_gfs, REAL *restrict auxevol_gfs" # Construct body: Ricci_SymbExpressions = Ricci__generate_symbolic_expressions() FD_outCparams = "outCverbose=False,enable_SIMD=" + str(enable_SIMD) FD_outCparams += ",GoldenKernelsEnable=" + str(enable_golden_kernels) loopopts = get_loopopts("InteriorPoints", enable_SIMD, enable_rfm_precompute, OMP_pragma_on) FDorder = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER") starttime = print_msg_with_timing("3-Ricci tensor (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="start") # Construct body: preloop="" enableCparameters=True # Set up preloop in case we're outputting code for the Einstein Toolkit (ETK) if par.parval_from_str("grid::GridFuncMemAccess") == "ETK": params, preloop = set_ETK_func_params_preloop(func_name_suffix) enableCparameters=False if enable_split_for_optimizations_doesnt_help and FDorder >= 8: loopopts += ",DisableOpenMP" Ricci_SymbExpressions_pt1 = [] Ricci_SymbExpressions_pt2 = [] for lhsrhs in Ricci_SymbExpressions: if "RBARDD00" in lhsrhs.lhs or "RBARDD11" in lhsrhs.lhs or "RBARDD22" in lhsrhs.lhs: Ricci_SymbExpressions_pt1.append(lhrh(lhs=lhsrhs.lhs, rhs=lhsrhs.rhs)) else: Ricci_SymbExpressions_pt2.append(lhrh(lhs=lhsrhs.lhs, rhs=lhsrhs.rhs)) preloop = """#pragma omp parallel { #pragma omp for """ preloopbody = fin.FD_outputC("returnstring", Ricci_SymbExpressions_pt1, params=FD_outCparams) preloop += lp.simple_loop(loopopts, preloopbody) preloop += "#pragma omp for\n" body = fin.FD_outputC("returnstring", Ricci_SymbExpressions_pt2, params=FD_outCparams) postloop = "\n } // END #pragma omp parallel\n" else: body = fin.FD_outputC("returnstring", Ricci_SymbExpressions, params=FD_outCparams) postloop = "" print_msg_with_timing("3-Ricci tensor (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, preloop=preloop, body=body, loopopts=loopopts, postloop=postloop, rel_path_to_Cparams=rel_path_to_Cparams, enableCparameters=enableCparameters) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 4.a: Generate symbolic expressions for BSSN Hamiltonian & momentum constraints \[Back to [top](toc)\]$$\label{bssnconstraints}$$Next we output the C code for evaluating the BSSN Hamiltonian and momentum constraints [(**Tutorial**)](Tutorial-BSSN_constraints.ipynb). In the absence of numerical error, these constraints should evaluate to zero. However, they do not, due to numerical (typically truncation) error. We will therefore measure the constraint violations to gauge the accuracy of our simulation and, ultimately, determine whether errors are dominated by numerical finite-differencing (truncation) error, as expected.

`BSSN_constraints__generate_symbolic_expressions()` supports the following features:
* (disabled by default) add stress-energy ($T^{\mu\nu}$) source terms
* (disabled by default) output the Hamiltonian constraint only

One list is returned by `BSSN_constraints__generate_symbolic_expressions()`: `BSSN_constraints_SymbExpressions`, which contains a list of expressions for the Hamiltonian and momentum constraints (4 elements total), using the `lhrh` named-tuple to store a list of LHSs and RHSs, where each LHS and RHS is defined as follows:
1. LHS = gridfunction representation of the BSSN constraint, computed at grid point i0,i1,i2, and
1. RHS = expression for the given BSSN constraint
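As a quick illustration of the interface (a sketch; it relies on the definition in the next cell and on the NRPy+ setup earlier in this notebook):

```
# Generate the constraint expressions without stress-energy source terms (an illustrative choice),
# then list the gridfunctions they will be written to: H, MU0, MU1, MU2.
constraints = BSSN_constraints__generate_symbolic_expressions(enable_stress_energy_source_terms=False,
                                                              leave_Ricci_symbolic=True,
                                                              output_H_only=False)
print([c.lhs for c in constraints])
```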
def BSSN_constraints__generate_symbolic_expressions(enable_stress_energy_source_terms=False, leave_Ricci_symbolic=True, output_H_only=False): ###################################### # START: GENERATE SYMBOLIC EXPRESSIONS starttime = print_msg_with_timing("BSSN constraints", msg="Symbolic", startstop="start") # Define the Hamiltonian constraint and output the optimized C code. par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic", str(leave_Ricci_symbolic)) import BSSN.BSSN_constraints as bssncon # Returns None if enable_stress_energy_source_terms==False; otherwise returns symb expressions for T4UU T4UU = register_stress_energy_source_terms_return_T4UU(enable_stress_energy_source_terms) bssncon.BSSN_constraints(add_T4UUmunu_source_terms=False, output_H_only=output_H_only) # We'll add them below if desired. if enable_stress_energy_source_terms: import BSSN.BSSN_stress_energy_source_terms as Bsest Bsest.BSSN_source_terms_for_BSSN_constraints(T4UU) bssncon.H += Bsest.sourceterm_H for i in range(3): bssncon.MU[i] += Bsest.sourceterm_MU[i] BSSN_constraints_SymbExpressions = [lhrh(lhs=gri.gfaccess("aux_gfs", "H"), rhs=bssncon.H)] if not output_H_only: BSSN_constraints_SymbExpressions += [lhrh(lhs=gri.gfaccess("aux_gfs", "MU0"), rhs=bssncon.MU[0]), lhrh(lhs=gri.gfaccess("aux_gfs", "MU1"), rhs=bssncon.MU[1]), lhrh(lhs=gri.gfaccess("aux_gfs", "MU2"), rhs=bssncon.MU[2])] par.set_parval_from_str("BSSN.BSSN_quantities::LeaveRicciSymbolic", "False") print_msg_with_timing("BSSN constraints", msg="Symbolic", startstop="stop", starttime=starttime) # END: GENERATE SYMBOLIC EXPRESSIONS ###################################### return BSSN_constraints_SymbExpressions
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 4.b: `BSSN_constraints()`: Register C function for evaluating BSSN Hamiltonian & momentum constraints \[Back to [top](toc)\]$$\label{bssnconstraints_c_code}$$`add_BSSN_constraints_to_Cfunction_dict()` supports the following features:
* (enabled by default) reference-metric precomputation
* (disabled by default) "golden kernels", which greatly increases the C-code generation time in an attempt to reduce computational cost. Most often this results in no speed-up.
* (enabled by default) SIMD output
* (disabled by default) splitting of RHSs into smaller pieces (multiple loops) to improve performance. Doesn't help much.
* (disabled by default) add stress-energy ($T^{\mu\nu}$) source terms
* (disabled by default) output the Hamiltonian constraint only
* (`"i2"` by default) OpenMP pragma acts on which loop (assumes `i2` is outermost and `i0` is innermost loop). For axisymmetric or near-axisymmetric calculations, `"i1"` may be *significantly* faster.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
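A sketch of one possible call follows (keyword values here are illustrative assumptions, not recommendations; the definition is in the next cell):

```
# Register a kernel that evaluates only the Hamiltonian constraint, leaving the Ricci tensor symbolic.
env = add_BSSN_constraints_to_Cfunction_dict(enable_rfm_precompute=True,
                                             leave_Ricci_symbolic=True,
                                             output_H_only=True)
```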
def add_BSSN_constraints_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, enable_golden_kernels=False, enable_SIMD=True, enable_stress_energy_source_terms=False, leave_Ricci_symbolic=True, output_H_only=False, OMP_pragma_on="i2", func_name_suffix=""): if includes is None: includes = [] if enable_SIMD: includes += [os.path.join("SIMD", "SIMD_intrinsics.h")] enable_FD_functions = bool(par.parval_from_str("finite_difference::enable_FD_functions")) if enable_FD_functions: includes += ["finite_difference_functions.h"] # Set up the C function for the BSSN constraints desc = "Evaluate the BSSN constraints" name = "BSSN_constraints" + func_name_suffix params = "const paramstruct *restrict params, " if enable_rfm_precompute: params += "const rfm_struct *restrict rfmstruct, " else: params += "REAL *xx[3], " params += """ const REAL *restrict in_gfs, const REAL *restrict auxevol_gfs, REAL *restrict aux_gfs""" # Construct body: BSSN_constraints_SymbExpressions = BSSN_constraints__generate_symbolic_expressions(enable_stress_energy_source_terms, leave_Ricci_symbolic=leave_Ricci_symbolic, output_H_only=output_H_only) preloop="" enableCparameters=True # Set up preloop in case we're outputting code for the Einstein Toolkit (ETK) if par.parval_from_str("grid::GridFuncMemAccess") == "ETK": params, preloop = set_ETK_func_params_preloop(func_name_suffix) enableCparameters=False FD_outCparams = "outCverbose=False,enable_SIMD=" + str(enable_SIMD) FD_outCparams += ",GoldenKernelsEnable=" + str(enable_golden_kernels) FDorder = par.parval_from_str("finite_difference::FD_CENTDERIVS_ORDER") starttime = print_msg_with_timing("BSSN constraints (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="start") body = fin.FD_outputC("returnstring", BSSN_constraints_SymbExpressions, params=FD_outCparams) print_msg_with_timing("BSSN constraints (FD order="+str(FDorder)+")", msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, preloop=preloop, body=body, loopopts=get_loopopts("InteriorPoints", enable_SIMD, enable_rfm_precompute, OMP_pragma_on), rel_path_to_Cparams=rel_path_to_Cparams, enableCparameters=enableCparameters) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 5: `enforce_detgammahat_constraint()`: Register C function for enforcing the conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint \[Back to [top](toc)\]$$\label{enforce3metric}$$To ensure stability when solving the BSSN equations, we must enforce the conformal 3-metric $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint (Eq. 53 of [Ruchlin, Etienne, and Baumgarte (2018)](https://arxiv.org/abs/1712.07658)), as [documented in the corresponding NRPy+ tutorial notebook](Tutorial-BSSN_enforcing_determinant_gammabar_equals_gammahat_constraint.ipynb). This function imposes the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint.

Applying curvilinear boundary conditions should affect the initial data at the outer boundary, and will in general cause the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint to be violated there. Thus, after we apply these boundary conditions, we must always call the routine for enforcing the $\det{\bar{\gamma}_{ij}}=\det{\hat{\gamma}_{ij}}$ constraint.

`add_enforce_detgammahat_constraint_to_Cfunction_dict()` supports the following features:
* (enabled by default) reference-metric precomputation
* (disabled by default) "golden kernels", which greatly increases the C-code generation time in an attempt to reduce computational cost. Most often this results in no speed-up.
* (`"i2"` by default) OpenMP pragma acts on which loop (assumes `i2` is outermost and `i0` is innermost loop). For axisymmetric or near-axisymmetric calculations, `"i1"` may be *significantly* faster.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
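To make the calling pattern concrete, here is a minimal sketch (assumptions only; the definition follows in the next cell):

```
# Register the algebraic-constraint kernel. In a time-evolution driver, the generated C function
# enforce_detgammahat_constraint() would then be called after every application of the
# curvilinear boundary conditions, as discussed above.
env = add_enforce_detgammahat_constraint_to_Cfunction_dict(enable_rfm_precompute=True)
```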
def add_enforce_detgammahat_constraint_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), enable_rfm_precompute=True, enable_golden_kernels=False, OMP_pragma_on="i2", func_name_suffix=""): # This function disables SIMD, as it includes cbrt() and abs() functions. if includes is None: includes = [] # This function does not use finite differences! # enable_FD_functions = bool(par.parval_from_str("finite_difference::enable_FD_functions")) # if enable_FD_functions: # includes += ["finite_difference_functions.h"] # Set up the C function for enforcing the det(gammabar) = det(gammahat) BSSN algebraic constraint desc = "Enforce the det(gammabar) = det(gammahat) (algebraic) constraint" name = "enforce_detgammahat_constraint" + func_name_suffix params = "const paramstruct *restrict params, " if enable_rfm_precompute: params += "const rfm_struct *restrict rfmstruct, " else: params += "REAL *xx[3], " params += "REAL *restrict in_gfs" # Construct body: enforce_detg_constraint_symb_expressions = EGC.Enforce_Detgammahat_Constraint_symb_expressions() preloop="" enableCparameters=True # Set up preloop in case we're outputting code for the Einstein Toolkit (ETK) if par.parval_from_str("grid::GridFuncMemAccess") == "ETK": params, preloop = set_ETK_func_params_preloop(func_name_suffix, enable_SIMD=False) enableCparameters=False FD_outCparams = "outCverbose=False,enable_SIMD=False" FD_outCparams += ",GoldenKernelsEnable=" + str(enable_golden_kernels) starttime = print_msg_with_timing("Enforcing det(gammabar)=det(gammahat) constraint", msg="Ccodegen", startstop="start") body = fin.FD_outputC("returnstring", enforce_detg_constraint_symb_expressions, params=FD_outCparams) print_msg_with_timing("Enforcing det(gammabar)=det(gammahat) constraint", msg="Ccodegen", startstop="stop", starttime=starttime) enable_SIMD = False add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, preloop=preloop, body=body, loopopts=get_loopopts("AllPoints", enable_SIMD, enable_rfm_precompute, OMP_pragma_on), rel_path_to_Cparams=rel_path_to_Cparams, enableCparameters=enableCparameters) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 6.a: `psi4_part_{0,1,2}()`: Register C function for evaluating Weyl scalar $\psi_4$, in 3 parts (3 functions) \[Back to [top](toc)\]$$\label{psi4}$$$\psi_4$ is a complex scalar related to the gravitational wave strain via$$\psi_4 = \ddot{h}_+ - i \ddot{h}_\times.$$We construct the symbolic expression for $\psi_4$ as described in the [corresponding NRPy+ Jupyter notebook](Tutorial-Psi4.ipynb), in three parts. The below `add_psi4_part_to_Cfunction_dict()` function will construct any of these three parts `0`, `1`, or `2`, and output the part to a function `psi4_part0()`, `psi4_part1()`, or `psi4_part2()`, respectively.

`add_psi4_part_to_Cfunction_dict()` supports the following features:
* (`0` by default) which part to construct (`0`, `1`, or `2`), as described above
* (disabled by default) "setPsi4tozero", which effectively turns this into a dummy function -- for when $\psi_4$ is not needed, and it's easier to just set `psi_4=0` instead of calculating it.
* (`"i2"` by default) OpenMP pragma acts on which loop (assumes `i2` is outermost and `i0` is innermost loop). For axisymmetric or near-axisymmetric calculations, `"i1"` may be *significantly* faster.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
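A brief usage sketch (illustrative only; the definition appears in the next cell):

```
# Register all three pieces of psi_4. Setting setPsi4tozero=True instead would register
# dummy kernels that simply write zero, for runs where psi_4 is not needed.
for part in range(3):
    env = add_psi4_part_to_Cfunction_dict(whichpart=part, setPsi4tozero=False)
```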
def add_psi4_part_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), whichpart=0, setPsi4tozero=False, OMP_pragma_on="i2"): starttime = print_msg_with_timing("psi4, part " + str(whichpart), msg="Ccodegen", startstop="start") # Set up the C function for psi4 if includes is None: includes = [] includes += ["NRPy_function_prototypes.h"] desc = "Compute psi4 at all interior gridpoints, part " + str(whichpart) name = "psi4_part" + str(whichpart) params = """const paramstruct *restrict params, const REAL *restrict in_gfs, REAL *restrict xx[3], REAL *restrict aux_gfs""" body = "" gri.register_gridfunctions("AUX", ["psi4_part" + str(whichpart) + "re", "psi4_part" + str(whichpart) + "im"]) FD_outCparams = "outCverbose=False,enable_SIMD=False,CSE_sorting=none" if not setPsi4tozero: # Set the body of the function # First compute the symbolic expressions psi4.Psi4(specify_tetrad=False) # We really don't want to store these "Cparameters" permanently; they'll be set via function call... # so we make a copy of the original par.glb_Cparams_list (sans tetrad vectors) and restore it below Cparams_list_orig = par.glb_Cparams_list.copy() par.Cparameters("REAL", __name__, ["mre4U0", "mre4U1", "mre4U2", "mre4U3"], [0, 0, 0, 0]) par.Cparameters("REAL", __name__, ["mim4U0", "mim4U1", "mim4U2", "mim4U3"], [0, 0, 0, 0]) par.Cparameters("REAL", __name__, ["n4U0", "n4U1", "n4U2", "n4U3"], [0, 0, 0, 0]) body += """ REAL mre4U0,mre4U1,mre4U2,mre4U3,mim4U0,mim4U1,mim4U2,mim4U3,n4U0,n4U1,n4U2,n4U3; psi4_tetrad(params, in_gfs[IDX4S(CFGF, i0,i1,i2)], in_gfs[IDX4S(HDD00GF, i0,i1,i2)], in_gfs[IDX4S(HDD01GF, i0,i1,i2)], in_gfs[IDX4S(HDD02GF, i0,i1,i2)], in_gfs[IDX4S(HDD11GF, i0,i1,i2)], in_gfs[IDX4S(HDD12GF, i0,i1,i2)], in_gfs[IDX4S(HDD22GF, i0,i1,i2)], &mre4U0,&mre4U1,&mre4U2,&mre4U3,&mim4U0,&mim4U1,&mim4U2,&mim4U3,&n4U0,&n4U1,&n4U2,&n4U3, xx, i0,i1,i2); """ body += "REAL xCart_rel_to_globalgrid_center[3];\n" body += "xx_to_Cart(params, xx, i0, i1, i2, xCart_rel_to_globalgrid_center);\n" body += "int ignore_Cart_to_i0i1i2[3]; REAL xx_rel_to_globalgridorigin[3];\n" body += "Cart_to_xx_and_nearest_i0i1i2_global_grid_center(params, xCart_rel_to_globalgrid_center,xx_rel_to_globalgridorigin,ignore_Cart_to_i0i1i2);\n" for i in range(3): body += "const REAL xx" + str(i) + "=xx_rel_to_globalgridorigin[" + str(i) + "];\n" body += fin.FD_outputC("returnstring", [lhrh(lhs=gri.gfaccess("in_gfs", "psi4_part" + str(whichpart) + "re"), rhs=psi4.psi4_re_pt[whichpart]), lhrh(lhs=gri.gfaccess("in_gfs", "psi4_part" + str(whichpart) + "im"), rhs=psi4.psi4_im_pt[whichpart])], params=FD_outCparams) par.glb_Cparams_list = Cparams_list_orig.copy() elif setPsi4tozero: body += fin.FD_outputC("returnstring", [lhrh(lhs=gri.gfaccess("in_gfs", "psi4_part" + str(whichpart) + "re"), rhs=sp.sympify(0)), lhrh(lhs=gri.gfaccess("in_gfs", "psi4_part" + str(whichpart) + "im"), rhs=sp.sympify(0))], params=FD_outCparams) enable_SIMD = False enable_rfm_precompute = False print_msg_with_timing("psi4, part " + str(whichpart), msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, body=body, loopopts=get_loopopts("InteriorPoints", enable_SIMD, enable_rfm_precompute, OMP_pragma_on, enable_xxs=False), rel_path_to_Cparams=rel_path_to_Cparams) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 6.b: `psi4_tetrad()`: Register C function for evaluating Weyl scalar $\psi_4$ tetrad \[Back to [top](toc)\]$$\label{psi4_tetrad}$$Computing $\psi_4$ requires that an observer tetrad be specified. We adopt a "quasi-Kinnersley tetrad" as described in [the corresponding NRPy+ tutorial notebook](Tutorial-Psi4_tetrads.ipynb).

`add_psi4_tetrad_to_Cfunction_dict()` supports the following features:
* (disabled by default) "setPsi4tozero", which effectively turns this into a dummy function -- for when $\psi_4$ is not needed, and it's easier to just set `psi_4=0` instead of calculating it.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
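For reference, registering the tetrad routine is a one-line call (a sketch; the definition is in the next cell):

```
# The generated C function psi4_tetrad() is called internally by psi4_part0/1/2() at each grid point.
env = add_psi4_tetrad_to_Cfunction_dict(setPsi4tozero=False)
```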
def add_psi4_tetrad_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), setPsi4tozero=False): starttime = print_msg_with_timing("psi4 tetrads", msg="Ccodegen", startstop="start") # Set up the C function for BSSN basis transformations desc = "Compute tetrad for psi4" name = "psi4_tetrad" # First set up the symbolic expressions (RHSs) and their names (LHSs) psi4tet.Psi4_tetrads() list_of_varnames = [] list_of_symbvars = [] for i in range(4): list_of_varnames.append("*mre4U" + str(i)) list_of_symbvars.append(psi4tet.mre4U[i]) for i in range(4): list_of_varnames.append("*mim4U" + str(i)) list_of_symbvars.append(psi4tet.mim4U[i]) for i in range(4): list_of_varnames.append("*n4U" + str(i)) list_of_symbvars.append(psi4tet.n4U[i]) paramsindent = " " params = """const paramstruct *restrict params,\n""" + paramsindent list_of_metricvarnames = ["cf"] for i in range(3): for j in range(i, 3): list_of_metricvarnames.append("hDD" + str(i) + str(j)) for var in list_of_metricvarnames: params += "const REAL " + var + "," params += "\n" + paramsindent for var in list_of_varnames: params += "REAL " + var + "," params += "\n" + paramsindent + "REAL *restrict xx[3], const int i0,const int i1,const int i2" # Set the body of the function body = "" outCparams = "includebraces=False,outCverbose=False,enable_SIMD=False,preindent=1" if not setPsi4tozero: for i in range(3): body += " const REAL xx" + str(i) + " = xx[" + str(i) + "][i" + str(i) + "];\n" body += " // Compute tetrads:\n" body += " {\n" # Sort the lhss list alphabetically, and rhss to match: lhss, rhss = [list(x) for x in zip(*sorted(zip(list_of_varnames, list_of_symbvars), key=lambda pair: pair[0]))] body += outputC(rhss, lhss, filename="returnstring", params=outCparams) body += " }\n" elif setPsi4tozero: body += "return;\n" loopopts = "" print_msg_with_timing("psi4 tetrads", msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, desc=desc, name=name, params=params, body=body, loopopts=loopopts, rel_path_to_Cparams=rel_path_to_Cparams) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 6.c: `SpinWeight_minus2_SphHarmonics()`: Register C function for evaluating spin-weight $s=-2$ spherical harmonics \[Back to [top](toc)\]$$\label{swm2}$$After evaluating $\psi_4$ at all interior gridpoints on a numerical grid, we next decompose $\psi_4$ into spin-weight $s=-2$ spherical harmonics, which are documented in [this NRPy+ tutorial notebook](Tutorial-SpinWeighted_Spherical_Harmonics.ipynb).

`SpinWeight_minus2_SphHarmonics()` supports the following feature:
* (`8` by default) `maximum_l`, the maximum $\ell$ mode to output. Symbolic expressions for $(\ell,m)$ modes up to and including `maximum_l` will be output.

Also, to enable parallel C-code kernel generation, the NRPy+ environment is pickled and returned.
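A short sketch of the registration call (the value of `maximum_l` below is simply the default; the definition follows in the next cell):

```
# Generates SpinWeight_minus2_SphHarmonics() and the psi_4 decomposition driver, keeping
# (l, m) modes up to and including l = 8.
env = add_SpinWeight_minus2_SphHarmonics_to_Cfunction_dict(maximum_l=8)
```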
def add_SpinWeight_minus2_SphHarmonics_to_Cfunction_dict(includes=None, rel_path_to_Cparams=os.path.join("."), maximum_l=8): starttime = print_msg_with_timing("Spin-weight s=-2 Spherical Harmonics", msg="Ccodegen", startstop="start") # Set up the C function for computing the spin-weight -2 spherical harmonic at theta,phi: Y_{s=-2, l,m}(theta,phi) prefunc = r"""// Compute at a single point (th,ph) the spin-weight -2 spherical harmonic Y_{s=-2, l,m}(th,ph) // Manual "inline void" of this function results in compilation error with clang. void SpinWeight_minus2_SphHarmonics(const int l, const int m, const REAL th, const REAL ph, REAL *reYlmswm2_l_m, REAL *imYlmswm2_l_m) { """ # Construct prefunc: outCparams = "preindent=1,outCfileaccess=a,outCverbose=False,includebraces=False" prefunc += """ switch(l) { """ for l in range(maximum_l + 1): # Output values up to and including l=8. prefunc += " case " + str(l) + ":\n" prefunc += " switch(m) {\n" for m in range(-l, l + 1): prefunc += " case " + str(m) + ":\n" prefunc += " {\n" Y_m2_lm = SWm2SH.Y(-2, l, m, SWm2SH.th, SWm2SH.ph) prefunc += outputC([sp.re(Y_m2_lm), sp.im(Y_m2_lm)], ["*reYlmswm2_l_m", "*imYlmswm2_l_m"], "returnstring", outCparams) prefunc += " }\n" prefunc += " return;\n" prefunc += " } // END switch(m)\n" prefunc += " } // END switch(l)\n" prefunc += r""" fprintf(stderr, "ERROR: SpinWeight_minus2_SphHarmonics handles only l=[0,"""+str(maximum_l)+r"""] and only m=[-l,+l] is defined.\n"); fprintf(stderr, " You chose l=%d and m=%d, which is out of these bounds.\n",l,m); exit(1); } void lowlevel_decompose_psi4_into_swm2_modes(const int Nxx_plus_2NGHOSTS1,const int Nxx_plus_2NGHOSTS2, const REAL dxx1, const REAL dxx2, const REAL curr_time, const REAL R_ext, const REAL *restrict th_array, const REAL *restrict sinth_array, const REAL *restrict ph_array, const REAL *restrict psi4r_at_R_ext, const REAL *restrict psi4i_at_R_ext) { for(int l=2;l<="""+str(maximum_l)+r""";l++) { // The maximum l here is set in Python. for(int m=-l;m<=l;m++) { // Parallelize the integration loop: REAL psi4r_l_m = 0.0; REAL psi4i_l_m = 0.0; #pragma omp parallel for reduction(+:psi4r_l_m,psi4i_l_m) for(int i1=0;i1<Nxx_plus_2NGHOSTS1-2*NGHOSTS;i1++) { const REAL th = th_array[i1]; const REAL sinth = sinth_array[i1]; for(int i2=0;i2<Nxx_plus_2NGHOSTS2-2*NGHOSTS;i2++) { const REAL ph = ph_array[i2]; // Construct integrand for psi4 spin-weight s=-2,l=2,m=0 spherical harmonic REAL ReY_sm2_l_m,ImY_sm2_l_m; SpinWeight_minus2_SphHarmonics(l,m, th,ph, &ReY_sm2_l_m,&ImY_sm2_l_m); const int idx2d = i1*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+i2; const REAL a = psi4r_at_R_ext[idx2d]; const REAL b = psi4i_at_R_ext[idx2d]; const REAL c = ReY_sm2_l_m; const REAL d = ImY_sm2_l_m; psi4r_l_m += (a*c + b*d) * dxx2 * sinth*dxx1; psi4i_l_m += (b*c - a*d) * dxx2 * sinth*dxx1; } } // Step 4: Output the result of the integration to file. 
char filename[100]; sprintf(filename,"outpsi4_l%d_m%d-r%.2f.txt",l,m, (double)R_ext); // If you love "+"'s in filenames by all means enable this (ugh): //if(m>=0) sprintf(filename,"outpsi4_l%d_m+%d-r%.2f.txt",l,m, (double)R_ext); FILE *outpsi4_l_m; // 0 = n*dt when n=0 is exactly represented in double/long double precision, // so no worries about the result being ~1e-16 in double/ld precision if(curr_time==0) outpsi4_l_m = fopen(filename, "w"); else outpsi4_l_m = fopen(filename, "a"); fprintf(outpsi4_l_m,"%e %.15e %.15e\n", (double)(curr_time), (double)psi4r_l_m,(double)psi4i_l_m); fclose(outpsi4_l_m); } } } """ desc = "" name = "driver__spherlikegrids__psi4_spinweightm2_decomposition" params = r"""const paramstruct *restrict params, REAL *restrict diagnostic_output_gfs, const int *restrict list_of_R_ext_idxs, const int num_of_R_ext_idxs, const REAL time, REAL *restrict xx[3],void xx_to_Cart(const paramstruct *restrict params, REAL *restrict xx[3],const int i0,const int i1,const int i2, REAL xCart[3])""" body = r""" // Step 1: Allocate memory for 2D arrays used to store psi4, theta, sin(theta), and phi. const int sizeof_2Darray = sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS); REAL *restrict psi4r_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray); REAL *restrict psi4i_at_R_ext = (REAL *restrict)malloc(sizeof_2Darray); // ... also store theta, sin(theta), and phi to corresponding 1D arrays. REAL *restrict sinth_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)); REAL *restrict th_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS1-2*NGHOSTS)); REAL *restrict ph_array = (REAL *restrict)malloc(sizeof(REAL)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)); // Step 2: Loop over all extraction indices: for(int ii0=0;ii0<num_of_R_ext_idxs;ii0++) { // Step 2.a: Set the extraction radius R_ext based on the radial index R_ext_idx REAL R_ext; { REAL xCart[3]; xx_to_Cart(params,xx,list_of_R_ext_idxs[ii0],1,1,xCart); // values for itheta and iphi don't matter. R_ext = sqrt(xCart[0]*xCart[0] + xCart[1]*xCart[1] + xCart[2]*xCart[2]); } // Step 2.b: Compute psi_4 at this extraction radius and store to a local 2D array. 
const int i0=list_of_R_ext_idxs[ii0]; #pragma omp parallel for for(int i1=NGHOSTS;i1<Nxx_plus_2NGHOSTS1-NGHOSTS;i1++) { th_array[i1-NGHOSTS] = xx[1][i1]; sinth_array[i1-NGHOSTS] = sin(xx[1][i1]); for(int i2=NGHOSTS;i2<Nxx_plus_2NGHOSTS2-NGHOSTS;i2++) { ph_array[i2-NGHOSTS] = xx[2][i2]; // Compute real & imaginary parts of psi_4, output to diagnostic_output_gfs const REAL psi4r = (diagnostic_output_gfs[IDX4S(PSI4_PART0REGF, i0,i1,i2)] + diagnostic_output_gfs[IDX4S(PSI4_PART1REGF, i0,i1,i2)] + diagnostic_output_gfs[IDX4S(PSI4_PART2REGF, i0,i1,i2)]); const REAL psi4i = (diagnostic_output_gfs[IDX4S(PSI4_PART0IMGF, i0,i1,i2)] + diagnostic_output_gfs[IDX4S(PSI4_PART1IMGF, i0,i1,i2)] + diagnostic_output_gfs[IDX4S(PSI4_PART2IMGF, i0,i1,i2)]); // Store result to "2D" array (actually 1D array with 2D storage): const int idx2d = (i1-NGHOSTS)*(Nxx_plus_2NGHOSTS2-2*NGHOSTS)+(i2-NGHOSTS); psi4r_at_R_ext[idx2d] = psi4r; psi4i_at_R_ext[idx2d] = psi4i; } } // Step 3: Perform integrations across all l,m modes from l=2 up to and including L_MAX (global variable): lowlevel_decompose_psi4_into_swm2_modes(Nxx_plus_2NGHOSTS1,Nxx_plus_2NGHOSTS2, dxx1,dxx2, time, R_ext, th_array, sinth_array, ph_array, psi4r_at_R_ext,psi4i_at_R_ext); } // Step 4: Free all allocated memory: free(psi4r_at_R_ext); free(psi4i_at_R_ext); free(sinth_array); free(th_array); free(ph_array); """ print_msg_with_timing("Spin-weight s=-2 Spherical Harmonics", msg="Ccodegen", startstop="stop", starttime=starttime) add_to_Cfunction_dict( includes=includes, prefunc=prefunc, desc=desc, name=name, params=params, body=body, rel_path_to_Cparams=rel_path_to_Cparams) return pickle_NRPy_env()
_____no_output_____
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 7: Confirm above functions are bytecode-identical to those in `BSSN/BSSN_Ccodegen_library.py` \[Back to [top](toc)\]$$\label{validation}$$
import BSSN.BSSN_Ccodegen_library as BCL import sys funclist = [("print_msg_with_timing", print_msg_with_timing, BCL.print_msg_with_timing), ("get_loopopts", get_loopopts, BCL.get_loopopts), ("register_stress_energy_source_terms_return_T4UU", register_stress_energy_source_terms_return_T4UU, BCL.register_stress_energy_source_terms_return_T4UU), ("BSSN_RHSs__generate_symbolic_expressions", BSSN_RHSs__generate_symbolic_expressions, BCL.BSSN_RHSs__generate_symbolic_expressions), ("add_rhs_eval_to_Cfunction_dict", add_rhs_eval_to_Cfunction_dict, BCL.add_rhs_eval_to_Cfunction_dict), ("Ricci__generate_symbolic_expressions", Ricci__generate_symbolic_expressions, BCL.Ricci__generate_symbolic_expressions), ("add_Ricci_eval_to_Cfunction_dict", add_Ricci_eval_to_Cfunction_dict, BCL.add_Ricci_eval_to_Cfunction_dict), ("BSSN_constraints__generate_symbolic_expressions", BSSN_constraints__generate_symbolic_expressions, BCL.BSSN_constraints__generate_symbolic_expressions), ("add_BSSN_constraints_to_Cfunction_dict", add_BSSN_constraints_to_Cfunction_dict, BCL.add_BSSN_constraints_to_Cfunction_dict), ("add_enforce_detgammahat_constraint_to_Cfunction_dict", add_enforce_detgammahat_constraint_to_Cfunction_dict, BCL.add_enforce_detgammahat_constraint_to_Cfunction_dict), ("add_psi4_part_to_Cfunction_dict", add_psi4_part_to_Cfunction_dict, BCL.add_psi4_part_to_Cfunction_dict), ("add_psi4_tetrad_to_Cfunction_dict", add_psi4_tetrad_to_Cfunction_dict, BCL.add_psi4_tetrad_to_Cfunction_dict), ("add_SpinWeight_minus2_SphHarmonics_to_Cfunction_dict", add_SpinWeight_minus2_SphHarmonics_to_Cfunction_dict, BCL.add_SpinWeight_minus2_SphHarmonics_to_Cfunction_dict) ] if sys.version_info.major >= 3: import inspect for func in funclist: # https://stackoverflow.com/questions/20059011/check-if-two-python-functions-are-equal if inspect.getsource(func[1]) != inspect.getsource(func[2]): print("inspect.getsource(func[1]):") print(inspect.getsource(func[1])) print("inspect.getsource(func[2]):") print(inspect.getsource(func[2])) print("ERROR: function " + func[0] + " is not the same as the Ccodegen_library version!") sys.exit(1) print("PASS! ALL FUNCTIONS ARE IDENTICAL") else: print("SORRY CANNOT CHECK FUNCTION IDENTITY WITH PYTHON 2. PLEASE update your Python installation.")
PASS! ALL FUNCTIONS ARE IDENTICAL
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Step 8: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](toc)\]$$\label{latex_pdf_output}$$The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename [Tutorial-BSSN_time_evolution-C_codegen_library.pdf](Tutorial-BSSN_time_evolution-C_codegen_library.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BSSN_time_evolution-C_codegen_library")
Created Tutorial-BSSN_time_evolution-C_codegen_library.tex, and compiled LaTeX file to PDF file Tutorial-BSSN_time_evolution- C_codegen_library.pdf
BSD-2-Clause
Tutorial-BSSN_time_evolution-C_codegen_library.ipynb
stevenrbrandt/nrpytutorial
Here we shall import some data taken from HiROS and load it into a pandas dataframe for analysis.
# Import required data broomhall = '../data/broomhall2009.txt' davies = '../data/davies2014.txt' file = input("Please select file: 'broomhall' or 'davies': ") if file == str('broomhall'): file = broomhall elif file == str('davies'): file = davies else: print('Please try again') df = pd.read_csv(file, header=None, delim_whitespace=True, names=['n', 'l', 'nu', 'sg_nu']) df.head()
Please select file: 'broomhall' or 'davies': broomhall
MIT
Dan_notebooks/bisondata.ipynb
daw538/y4project
We can see from the preview above that the file contains a mix of radial modes at increasing orders. To perform any useful analysis, the individual modes $l$ must be considered separately. A neat way of doing this is to use a list comprehension, which avoids the need for multiple for loops and appending to arrays each time. This produces a separate array for each value of $l$, contained within an overall list that can be indexed.
l = [df[(df.l == i)] for i in (range(max(df.l)-min(df.l)+1))] plt.figure(1) plt.errorbar(l[0].n, l[0].nu, yerr=l[0].sg_nu, fmt='x') plt.errorbar(l[1].n, l[1].nu, yerr=l[1].sg_nu, fmt='x') plt.errorbar(l[2].n, l[2].nu, yerr=l[2].sg_nu, fmt='x') plt.errorbar(l[3].n, l[3].nu, yerr=l[3].sg_nu, fmt='x') plt.xlabel('Value of $n$') plt.ylabel('Frequency ($\mu$Hz)') plt.show() print(u"%.5f" % np.median(np.diff(l[0].nu))) print(u"%.5f" % np.median(np.diff(l[1].nu))) print(u"%.5f" % np.median(np.diff(l[2].nu))) print(u"%.5f" % np.median(np.diff(l[3].nu))) # Échelle Plot for the data dnu = 135.2 plt.figure(2) # New plotting method import itertools markers = itertools.cycle(('+', '1', 'x', '*')) for i in range(max(df.l)-min(df.l)+1): plt.scatter(df.loc[(df.l == i) & (df.n > 11)].nu % dnu, df.loc[(df.l == i) & (df.n > 11)].nu, label=r'$l=$'+str(i), marker=next(markers)) plt.scatter(df.loc[(df.l == i) & (df.n < 12)].nu % dnu, df.loc[(df.l == i) & (df.n < 12)].nu, facecolors='none', edgecolors=['lightgrey'], label='') plt.title('Échelle Diagram for Sun') plt.xlabel('Modulo Frequency Spacing ('+ str(dnu) +') $\mu$Hz') plt.ylabel('Frequency ($\mu$Hz)') plt.legend() plt.savefig('seminar/solarechelle.pdf', bbox='tight_layout') plt.show()
_____no_output_____
MIT
Dan_notebooks/bisondata.ipynb
daw538/y4project
The above Échelle diagrams show how the four lowest modes form broadly straight lines in modulo frequency space, though there are significant deviations that form a tail at the lower values of $n$ (visible as faint circles). We shall select only the $l=0$ modes for analysis.

Using Vrard Paper

To compute the local frequency separation for a mode $\nu_{n,0}$ we use the average difference over the adjacent modes$$ \Delta\nu(n) = \frac{\nu_{n+1,0} - \nu_{n-1,0}}{2}$$which cannot be appropriately calculated for modes at the limits of $n$.

The asymptotic dependence of the large frequency separation with respect to $n$ is given in the paper as$$ \Delta\nu_{\textrm{up}}(n) = \left( 1 + \alpha\left(n-n_\textrm{max}\right)\right) \left\langle \Delta\nu(n) \right\rangle$$where $\alpha$ is defined by the power law $\alpha = A\left\langle \Delta\nu(n) \right\rangle^{B}$. In the paper, the constants are set as $A=0.015$ and $B=-0.32$.

Having calculated these extra frequencies $\Delta\nu_\textrm{up}$, the difference between the theoretical and observed large frequency separation is calculated with $\delta_\textrm{g,obs} = \Delta\nu(n) - \Delta\nu_{\textrm{up}}(n)$
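As a quick numerical check of that power law (using the solar values adopted later in this notebook, so the numbers here are illustrative rather than fitted):

```
# alpha = A * <dnu>**B with A = 0.015, B = -0.32 and a mean large separation of ~135.2 muHz
A, B = 0.015, -0.32
alpha = A * 135.2**B
print(round(alpha, 5))   # ~0.00312, i.e. dnu_up changes only very slowly with n
```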
nmax = 22 # Modelling from Vrard Paper l0 = df.loc[(df.l == 0) & (df.n > 14)] l0['dnu_n'] = (l0['nu'].diff(2).shift(-1))/2 # Differences between neighbouring frequencies alpha = 0.015*np.mean(l0['dnu_n'])**(-0.32) # Equation provided in paper l0['dnu_up'] = (1 + alpha*(l0['n']-nmax)) * (np.mean(l0['dnu_n'])) # Calculating Δν_up (see equation above) l0['dg'] = l0['dnu_n']-l0['dnu_up'] # Difference between theoretical and observed large freq spacings # Plots to replicate results of Figure 2 in the Vrard paper plt.figure(10, figsize=(12,5)) plt.subplot(1,2,1) plt.scatter(l0.nu, l0.dnu_n) plt.xlabel(r'Frequency ($\mu m$)') plt.ylabel(r'$\Delta\nu(n)$') plt.subplot(1,2,2) plt.scatter(l0.nu, l0.dg) plt.xlabel(r'Frequency ($\mu m$)') plt.ylabel(r'$\delta_g$') plt.show() l0
/usr/lib/python3.7/site-packages/ipykernel_launcher.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy """ /usr/lib/python3.7/site-packages/ipykernel_launcher.py:9: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy if __name__ == '__main__': /usr/lib/python3.7/site-packages/ipykernel_launcher.py:11: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy # This is added back by InteractiveShellApp.init_path()
MIT
Dan_notebooks/bisondata.ipynb
daw538/y4project
In order to provide Stan with suitable starting parameters (to prevent a complete lack of convergence), we shall first attempt to manually fit a rather basic model to the data:$$ \Delta\nu(n+\epsilon) + k\left(\frac{\nu_\textrm{max}}{\Delta\nu}-n\right)^2 + A\sin(\omega n+\phi)e^{-(n/\tau)}$$where the latter terms represent the curvature and a decaying oscillatory component. $k$ is the curvature parameter, whilst $\tau$ is a decay parameter for the glitch. We can then attempt to replicate this using Stan.
# Look at l=0 data initially only plt.figure(3, figsize=(7,4)) plt.scatter(l0.nu % dnu, l0.nu, c=l0.n,cmap='viridis', label=r'$l=$'+str(0)) plt.colorbar(label=r'Value of $n$') def model(n, dnu, nmax, epsilon, k, A, omega, phi, tau): freqs = (n + epsilon) * dnu freqs += (nmax-n)**2 * k freqs += A*np.sin(omega*n + phi)*np.exp(-n/tau) return freqs n = np.arange(12,30,1) dnu = 135.2 nmax = 22 epsilon = 1.435 k = 0.14 A = 2.7 omega = 5 phi = 2.5 tau = 10 f = model(n, dnu, nmax, epsilon, k, A, omega, phi, tau) plt.plot(f % dnu, f, label='Model') plt.ylabel('Frequency ($\mu$Hz)') plt.xlabel(r'Mod. Freq. Spacing ('+ str(dnu) +') $\mu$Hz') plt.legend() plt.savefig('seminar/solarmodes1.pdf', bbox='tight_layout') plt.show() code = ''' functions { real model(real n, real dnu, real nmax, real epsilon, real k, real A, real omega, real phi, real tau){ return (dnu*(n+epsilon) + k*(nmax - n)^2 + A*sin(omega*n + phi)*exp(-n/tau)); } } data { int N; real n[N]; real freq[N]; real freq_err[N]; real dnu_guess; } parameters { real<lower = 0> dnu; real<lower = 0> nmax; real epsilon; real k; real<lower = 0> A; real<lower = 0> omega; real<lower = -2.0*pi(), upper = 2.0*pi()> phi; real<lower = 0> tau; } model { real mod[N]; for (i in 1:N){ mod[i] = model(n[i], dnu, nmax, epsilon, k, A, omega, phi, tau); } mod ~ normal(freq, freq_err); dnu ~ normal(dnu_guess, dnu_guess*0.001); nmax ~ normal(20,4); epsilon ~ normal(1.4, 0.1); k ~ lognormal(log(0.14), 0.3); A ~ lognormal(log(0.1), 0.3); omega ~ normal(0.8, 0.1); tau ~ normal(10,5); // phi ~ normal(0, 1.5); } ''' import pystan sm = pystan.StanModel(model_code=code) stan_data = {'N': len(l0['n'].values), 'n': l0['n'].values, 'freq': (l0['nu'].values), 'freq_err': l0['sg_nu'].values, 'dnu_guess': dnu } start = {'dnu': dnu, 'nmax': 22, 'epsilon': 1.435, 'k': 0.14, 'A': 0.5, 'omega': 5, 'phi': 2.5, 'tau': 50 } nchains = 4 fit = sm.sampling(data=stan_data, iter=5000, chains=nchains, init=[start for n in range(nchains)],) #control=dict(max_treedepth=15)) print(fit) fit.plot() plt.savefig('seminar/solarstan.pdf', bbox='tight_layout') plt.show() import corner data = np.vstack([fit['epsilon'], fit['k'], fit['dnu'], fit['nmax'], fit['A'], fit['omega'], fit['phi'], fit['tau']]).T corner.corner(data, labels=[r'$\epsilon$', r'$k$',r'$\Delta\nu$',r'$n_{max}$', r'$A$', r'$\omega$', r'$\phi$', r'$\tau$']) #, truths=[1.436, 0.07, 0.3, 2, 0]) plt.savefig('seminar/solarcorner.pdf', bbox='tight_layout') plt.show() n = np.arange(12,30,1) plt.figure(4) plt.scatter(df.loc[(df.l == 0) & (df.n > 11)].nu % 135.2, df.loc[(df.l == 0) & (df.n > 11)].nu, c='k', marker='x', label=r'$l=$'+str(0)) #c=df.loc[(df.l == 0) & (df.n > 11)].n,cmap='viridis') mod = 135.2 #plt.colorbar(label=r'Value of $n$') #f = model(n, dnu, 3050.0, 1.436, 0.07, 0.3, 2, 0) g = model(n, fit['dnu'].mean(), fit['nmax'].mean(), fit['epsilon'].mean(), fit['k'].mean(), fit['A'].mean(), fit['omega'].mean(), fit['phi'].mean(), fit['tau'].mean()) plt.plot(f % dnu, f, ':', label='Guess') plt.plot(g % fit['dnu'].mean(), g, label='Fit') #plt.plot(g % dnu, g, label='Fit') plt.errorbar(df.loc[(df.l == 0) & (df.n > 11)].nu % 135.2, df.loc[(df.l == 0) & (df.n > 11)].nu, xerr=df.loc[(df.l == 0) & (df.n > 11)].sg_nu, zorder=0, fmt="none", label="none", c='k', capsize=2, markersize=4, elinewidth=1) plt.ylabel('Frequency') plt.xlabel(r'Mod. Freq. Spacing ('+ str(mod) +') $\mu$Hz') plt.xlim(58,68) plt.legend() plt.savefig('seminar/solarmodes2.pdf', bbox='tight_layout') plt.show()
_____no_output_____
MIT
Dan_notebooks/bisondata.ipynb
daw538/y4project
Models with Multiple Source Populations *ARES* can handle an arbitrary number of source populations. To access this functionality, create a dictionary representing each source population of interest. Below, we'll create a population representative of PopII stars and another representative of PopIII stars. Before we start, it is important to note that in *ARES*, source populations are identified by their spectra over some contiguous interval in photon energy. This can be somewhat counterintuitive. For example, though UV emission from stars and X-ray emission from their compact remnants, e.g., X-ray binary systems, are both natural byproducts of star formation, we treat them as separate source populations in *ARES* even though the emission from each type of source is related to the same rate of star formation. However, because stars and XRBs have very different spectra, whose normalizations are parameterized differently, it is more convenient in the code to keep them separate. Because of this, what you might think of as a single source population (stars and their remnants) actually constitutes *two* source populations in *ARES*. Let's start with a PopII source population, and a few standard imports:
%pylab inline import ares import numpy as np import matplotlib.pyplot as pl pars = \ { 'problem_type': 100, # Blank slate global 21-cm signal # Setup star formation 'pop_Tmin{0}': 1e4, # atomic cooling halos 'pop_fstar{0}': 1e-1, # 10% star formation efficiency # Setup UV emission 'pop_sed_model{0}': True, 'pop_sed{0}': 'bb', # PopII stars -> 10^4 K blackbodies 'pop_temperature{0}': 1e4, 'pop_rad_yield{0}': 1e42, 'pop_fesc{0}': 0.2, 'pop_Emin{0}': 10.19, 'pop_Emax{0}': 24.6, 'pop_EminNorm{0}': 13.6, 'pop_EmaxNorm{0}': 24.6, 'pop_lya_src{0}': True, 'pop_ion_src_cgm{0}': True, 'pop_heat_src_igm{0}': False, # Setup X-ray emission 'pop_sed{1}': 'pl', 'pop_alpha{1}': -1.5, 'pop_rad_yield{1}': 2.6e38, 'pop_Emin{1}': 2e2, 'pop_Emax{1}': 3e4, 'pop_EminNorm{1}': 5e2, 'pop_EmaxNorm{1}': 8e3, 'pop_lya_src{1}': False, 'pop_ion_src_cgm{1}': False, 'pop_heat_src_igm{1}': True, 'pop_sfr_model{1}': 'link:sfrd:0', }
_____no_output_____
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
**NOTE:** See [problem_types](../problem_types.html) for more information about why we chose ``problem_type=100`` here. We might as well go ahead and run this to establish a baseline:
sim = ares.simulations.Global21cm(**pars) sim.run() ax, zax = sim.GlobalSignature(color='k')
# Loaded $ARES/input/inits/inits_planck_TTTEEE_lowl_lowE_best.txt. ############################################################################################################## #### ARES Simulation: Overview #### ############################################################################################################## #### ---------------------------------------------------------------------------------------------------- #### #### Source Populations #### #### ---------------------------------------------------------------------------------------------------- #### #### sfrd sed radio O/IR Ly-a LW Ly-C X-ray RTE #### #### pop #0 : fcoll yes x x x #### #### pop #1 : link:sfrd:0 yes x #### #### ---------------------------------------------------------------------------------------------------- #### #### Physics #### #### ---------------------------------------------------------------------------------------------------- #### #### cgm_initial_temperature : [10000.0] #### #### clumping_factor : 1 #### #### secondary_ionization : 1 #### #### approx_Salpha : 1 #### #### include_He : False #### #### feedback_LW : False #### ############################################################################################################## # Loaded $ARES/input/hmf/hmf_ST_planck_TTTEEE_lowl_lowE_best_logM_1400_4-18_z_1201_0-60.hdf5.
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
Now, let's add a PopIII-like source population. We'll assume that PopIII sources are brighter on average (in both the UV and X-ray) but live in lower mass halos. We could just copy-paste the dictionary above, change the population ID numbers and, for example, the UV and X-ray ``pop_rad_yield`` parameters. Or, we could use some built-in tricks to speed this up. First, let's take the PopII parameter set and make a ``ParameterBundle`` object:
popII = ares.util.ParameterBundle(**pars)
_____no_output_____
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
This lets us easily extract parameters according to their ID number, and assign new ones:
popIII_uv = popII.pars_by_pop(0, True) popIII_uv.num = 2 popIII_xr = popII.pars_by_pop(1, True) popIII_xr.num = 3
_____no_output_____
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
The second argument tells *ARES* to remove the parameter ID numbers. Now, we can simply reset the ID numbers and update a few important parameters:
popIII_uv['pop_Tmin{2}'] = 300 popIII_uv['pop_Tmax{2}'] = 1e4 popIII_uv['pop_temperature{2}'] = 1e5 popIII_uv['pop_fstar{2}'] = 1e-4 popIII_xr['pop_sfr_model{3}'] = 'link:sfrd:2' popIII_xr['pop_rad_yield{3}'] = 2.6e39
_____no_output_____
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
Now, let's make the final parameter dictionary and run it:
pars.update(popIII_uv) pars.update(popIII_xr) sim2 = ares.simulations.Global21cm(**pars) sim2.run() ax, zax = sim.GlobalSignature(color='k') ax, zax = sim2.GlobalSignature(color='b', ax=ax)
# Loaded $ARES/input/inits/inits_planck_TTTEEE_lowl_lowE_best.txt. ############################################################################################################## #### ARES Simulation: Overview #### ############################################################################################################## #### ---------------------------------------------------------------------------------------------------- #### #### Source Populations #### #### ---------------------------------------------------------------------------------------------------- #### #### sfrd sed radio O/IR Ly-a LW Ly-C X-ray RTE #### #### pop #0 : fcoll yes x x x #### #### pop #1 : link:sfrd:0 yes x #### #### pop #2 : fcoll yes x x x #### #### pop #3 : link:sfrd:2 yes x #### #### ---------------------------------------------------------------------------------------------------- #### #### Physics #### #### ---------------------------------------------------------------------------------------------------- #### #### cgm_initial_temperature : [10000.0] #### #### clumping_factor : 1 #### #### secondary_ionization : 1 #### #### approx_Salpha : 1 #### #### include_He : False #### #### feedback_LW : False #### ############################################################################################################## # Loaded $ARES/input/hmf/hmf_ST_planck_TTTEEE_lowl_lowE_best_logM_1400_4-18_z_1201_0-60.hdf5. # Loaded $ARES/input/hmf/hmf_ST_planck_TTTEEE_lowl_lowE_best_logM_1400_4-18_z_1201_0-60.hdf5.
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
Note that the parameter file hangs onto the parameters of each population separately. To verify a few key changes, you could do:
len(sim2.pf.pfs) for key in ['pop_Tmin', 'pop_fstar', 'pop_rad_yield']: print(key, sim2.pf.pfs[0][key], sim2.pf.pfs[2][key])
pop_Tmin 10000.0 300 pop_fstar 0.1 0.0001 pop_rad_yield 1e+42 1e+42
MIT
docs/examples/example_gs_multipop.ipynb
mirochaj/ares
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks); content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*

Refinement Protocol

The entire standard Rosetta refinement protocol, similar to that presented in Bradley, Misura, & Baker 2005, is available as a `Mover`. Note that the protocol can require ~40 minutes for a 100-residue protein.

```
sfxn = get_fa_scorefxn()
pose = pose_from_pdb("1YY8.clean.pdb")
relax = pyrosetta.rosetta.protocols.relax.ClassicRelax()
relax.set_scorefxn(sfxn)
relax.apply(pose)
```

Note that this protocol is DEPRECATED and has been for quite some time. You will want to use FastRelax() instead. It still takes quite a while. Replace the ClassicRelax() with FastRelax() and run it now. You will see the FastRelax mover used in many tutorials from here on out. FastRelax with constraints on each atom is useful to get a crystal structure into the Rosetta energy function. FastRelax can also be used for flexible-backbone design. These will all be covered in due time.
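A minimal sketch of that swap is shown below (it assumes `init()` has been called and that the cleaned PDB used in the snippet above is present; treat it as a starting point rather than the canonical solution):

```
sfxn = get_fa_scorefxn()
pose = pose_from_pdb("1YY8.clean.pdb")
relax = pyrosetta.rosetta.protocols.relax.FastRelax()
relax.set_scorefxn(sfxn)
relax.apply(pose)   # still takes a while, but considerably less than ClassicRelax
```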
# Notebook setup import sys if 'google.colab' in sys.modules: !pip install pyrosettacolabsetup import pyrosettacolabsetup pyrosettacolabsetup.setup() print ("Notebook is set for PyRosetta use in Colab. Have fun!") from pyrosetta import * from pyrosetta.teaching import * init()
core.init: Checking for fconfig files in pwd and ./rosetta/flags core.init: Rosetta version: PyRosetta4.Release.python36.mac r208 2019.04+release.fd666910a5e fd666910a5edac957383b32b3b4c9d10020f34c1 http://www.pyrosetta.org 2019-01-22T15:55:37 core.init: command: PyRosetta -ex1 -ex2aro -database /Users/kathyle/Computational Protein Prediction and Design/PyRosetta4.Release.python36.mac.release-208/pyrosetta/database core.init: 'RNG device' seed mode, using '/dev/urandom', seed=-1509889871 seed_offset=0 real_seed=-1509889871 core.init.random: RandomGenerator:init: Normal mode, seed=-1509889871 RG_type=mt19937
MIT
notebooks/05.02-Refinement-Protocol.ipynb
So-AI-love/PyRosetta.notebooks
**Make sure you are in the directory with the pdb files:**`cd google_drive/My\ Drive/student-notebooks/`
### BEGIN SOLUTION sfxn = get_score_function() pose = pose_from_pdb("inputs/1YY8.clean.pdb") relax = pyrosetta.rosetta.protocols.relax.FastRelax() relax.set_scorefxn(sfxn) #Skip for tests if not os.getenv("DEBUG"): relax.apply(pose) ### END SOLUTION
core.scoring.ScoreFunctionFactory: SCOREFUNCTION: ref2015 core.scoring.etable: Starting energy table calculation core.scoring.etable: smooth_etable: changing atr/rep split to bottom of energy well core.scoring.etable: smooth_etable: spline smoothing lj etables (maxdis = 6) core.scoring.etable: smooth_etable: spline smoothing solvation etables (max_dis = 6) core.scoring.etable: Finished calculating energy tables. basic.io.database: Database file opened: scoring/score_functions/hbonds/ref2015_params/HBPoly1D.csv basic.io.database: Database file opened: scoring/score_functions/hbonds/ref2015_params/HBFadeIntervals.csv basic.io.database: Database file opened: scoring/score_functions/hbonds/ref2015_params/HBEval.csv basic.io.database: Database file opened: scoring/score_functions/hbonds/ref2015_params/DonStrength.csv basic.io.database: Database file opened: scoring/score_functions/hbonds/ref2015_params/AccStrength.csv core.chemical.GlobalResidueTypeSet: Finished initializing fa_standard residue type set. Created 696 residue types core.chemical.GlobalResidueTypeSet: Total time to initialize 1.07793 seconds. basic.io.database: Database file opened: scoring/score_functions/rama/fd/all.ramaProb basic.io.database: Database file opened: scoring/score_functions/rama/fd/prepro.ramaProb basic.io.database: Database file opened: scoring/score_functions/omega/omega_ppdep.all.txt basic.io.database: Database file opened: scoring/score_functions/omega/omega_ppdep.gly.txt basic.io.database: Database file opened: scoring/score_functions/omega/omega_ppdep.pro.txt basic.io.database: Database file opened: scoring/score_functions/omega/omega_ppdep.valile.txt basic.io.database: Database file opened: scoring/score_functions/P_AA_pp/P_AA basic.io.database: Database file opened: scoring/score_functions/P_AA_pp/P_AA_n core.scoring.P_AA: shapovalov_lib::shap_p_aa_pp_smooth_level of 1( aka low_smooth ) got activated. 
basic.io.database: Database file opened: scoring/score_functions/P_AA_pp/shapovalov/10deg/kappa131/a20.prop core.import_pose.import_pose: File 'inputs/1YY8.clean.pdb' automatically determined to be of type PDB core.conformation.Conformation: [ WARNING ] missing heavyatom: CG on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: CD on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: NE on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: CZ on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: NH1 on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: NH2 on residue ARG 18 core.conformation.Conformation: [ WARNING ] missing heavyatom: CG on residue GLN:NtermProteinFull 214 core.conformation.Conformation: [ WARNING ] missing heavyatom: CD on residue GLN:NtermProteinFull 214 core.conformation.Conformation: [ WARNING ] missing heavyatom: OE1 on residue GLN:NtermProteinFull 214 core.conformation.Conformation: [ WARNING ] missing heavyatom: NE2 on residue GLN:NtermProteinFull 214 core.conformation.Conformation: [ WARNING ] missing heavyatom: CG on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: CD on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: NE on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: CZ on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: NH1 on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: NH2 on residue ARG 452 core.conformation.Conformation: [ WARNING ] missing heavyatom: CG on residue GLN:NtermProteinFull 648 core.conformation.Conformation: [ WARNING ] missing heavyatom: CD on residue GLN:NtermProteinFull 648 core.conformation.Conformation: [ WARNING ] missing heavyatom: OE1 on residue GLN:NtermProteinFull 648 core.conformation.Conformation: [ WARNING ] missing heavyatom: NE2 on residue GLN:NtermProteinFull 648 core.conformation.Conformation: Found disulfide between residues 23 88 core.conformation.Conformation: current variant for 23 CYS core.conformation.Conformation: current variant for 88 CYS core.conformation.Conformation: current variant for 23 CYD core.conformation.Conformation: current variant for 88 CYD core.conformation.Conformation: Found disulfide between residues 134 194 core.conformation.Conformation: current variant for 134 CYS core.conformation.Conformation: current variant for 194 CYS core.conformation.Conformation: current variant for 134 CYD core.conformation.Conformation: current variant for 194 CYD core.conformation.Conformation: Found disulfide between residues 235 308 core.conformation.Conformation: current variant for 235 CYS core.conformation.Conformation: current variant for 308 CYS core.conformation.Conformation: current variant for 235 CYD core.conformation.Conformation: current variant for 308 CYD core.conformation.Conformation: Found disulfide between residues 359 415 core.conformation.Conformation: current variant for 359 CYS core.conformation.Conformation: current variant for 415 CYS core.conformation.Conformation: current variant for 359 CYD core.conformation.Conformation: current variant for 415 CYD core.conformation.Conformation: Found disulfide between residues 457 522 core.conformation.Conformation: current variant for 457 CYS core.conformation.Conformation: current variant for 522 CYS core.conformation.Conformation: current variant for 457 CYD 
core.conformation.Conformation: current variant for 522 CYD core.conformation.Conformation: Found disulfide between residues 568 628 core.conformation.Conformation: current variant for 568 CYS core.conformation.Conformation: current variant for 628 CYS core.conformation.Conformation: current variant for 568 CYD core.conformation.Conformation: current variant for 628 CYD core.conformation.Conformation: Found disulfide between residues 669 742 core.conformation.Conformation: current variant for 669 CYS core.conformation.Conformation: current variant for 742 CYS core.conformation.Conformation: current variant for 669 CYD core.conformation.Conformation: current variant for 742 CYD core.conformation.Conformation: Found disulfide between residues 793 849 core.conformation.Conformation: current variant for 793 CYS core.conformation.Conformation: current variant for 849 CYS core.conformation.Conformation: current variant for 793 CYD core.conformation.Conformation: current variant for 849 CYD core.pack.pack_missing_sidechains: packing residue number 18 because of missing atom number 6 atom name CG core.pack.pack_missing_sidechains: packing residue number 214 because of missing atom number 6 atom name CG core.pack.pack_missing_sidechains: packing residue number 452 because of missing atom number 6 atom name CG core.pack.pack_missing_sidechains: packing residue number 648 because of missing atom number 6 atom name CG core.pack.task: Packer task: initialize from command line() core.scoring.ScoreFunctionFactory: SCOREFUNCTION: ref2015 basic.io.database: Database file opened: scoring/score_functions/elec_cp_reps.dat core.scoring.elec.util: Read 40 countpair representative atoms core.pack.dunbrack.RotamerLibrary: shapovalov_lib_fixes_enable option is true.
MIT
notebooks/05.02-Refinement-Protocol.ipynb
So-AI-love/PyRosetta.notebooks
Tutorial - Translation> Using the Translation API in AdaptNLP TranslationTranslation is the task of producing the input text in another language.Below, we'll walk through how we can use AdaptNLP's `EasyTranslator` module to translate text with state-of-the-art models. Getting StartedWe'll first import the `EasyTranslator` class from AdaptNLP:
from adaptnlp import EasyTranslator
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
Then we'll write some example text to use:
text = ["Machine learning will take over the world very soon.", "Machines can speak in many languages.",]
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
Followed by instantiating the `EasyTranslator` class:
translator = EasyTranslator()
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
Next we can translate our text. We pass in the text we wish to translate, optionally a prefix for the t5 model (only used with t5 models), a model name, and any keyword arguments from `Transformers.PreTrainedModel.generate()`.Here we'll pass in `text`, have our model translate from English to German, and use the `t5-small` model.
translations = translator.translate(text = text, t5_prefix="translate English to German", model_name_or_path="t5-small", mini_batch_size=1, min_length=0, max_length=100, early_stopping=True)
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
And we can look at the outputs:
print("Translations:\n") for t in translations: print(t, "\n")
Translations: Das Maschinenlernen wird die Welt in Kürze übernehmen. Maschinen können in vielen Sprachen sprechen.
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
Finding a Model with the Model Hub Using the `HFModelHub` we can search for any translation models in HuggingFace like so:
from adaptnlp import HFModelHub

hub = HFModelHub()
models = hub.search_model_by_task('translation'); models
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
From there we can pass in any `HFModelResult` from it. Here we'll use the `t5-small` again:
model = models[-1]
translations = translator.translate(text = text, t5_prefix="translate English to German", model_name_or_path=model, mini_batch_size=1, min_length=0, max_length=100, early_stopping=True)
_____no_output_____
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
And see that we get similar results:
print("Translations:\n") for t in translations: print(t, "\n")
Translations: Das Maschinenlernen wird die Welt in Kürze übernehmen. Maschinen können in vielen Sprachen sprechen.
Apache-2.0
nbs/08a_tutorial.translation.ipynb
chsafouane/adaptnlp
NO2 Prediction by using Machine Learning Regression Analyses in Google Earth Engine **Machine Learning can create a Model to Predict a specific value based on an existing data set (dependent and independent values).** **Introduction** **Nitrogen Dioxide (NO2) air pollution**. The World Health Organization estimates that air pollution kills 4.2 million people every year. The main effect of breathing in raised levels of NO2 is the increased likelihood of respiratory problems. NO2 inflames the lining of the lungs, and it can reduce immunity to lung infections. There are connections between respiratory diseases, as well as exposure to viruses, and more deadly cases. ***Sources of NO2***: rapid population growth and fast urbanization: * Industrial facilities * Fossil fuels (coal, oil and gas) * Increase of transportation – 80 %. Air pollution (NO2) affects population health and contributes to global warming. **Objective** The aim of this project is to create a Model to Predict a specific value (NO2) for past years based on an existing data set (Landsat and Sentinel-5P (TROPOMI) images) for 2019. These Predictions can be used for Monitoring and Statistical Analyses of how NO2 develops over Time.
_____no_output_____
BSD-3-Clause
scripts_modules/CoLab_Random_Forest_Regression.ipynb
annapav7/NO2-tropomi_prediction_analysis
**DataSet:** The Sentinel-5P satellite with the TROPOspheric Monitoring Instrument (TROPOMI) provides high spatial resolution (7 x 3.5 km2) across all spectral bands used to register the level of NO2. TROPOMI data are available from October 13, 2017. The Landsat satellite was first launched in 1972, and images are available for more than 40 years. **Concept:** Regression: The model can make generalizations about new data. The model has been learned from the training data and can be used to predict the result of test data: here, we might be given an x-value, and the model would allow us to predict the y-value. By fitting this line, we have learned a model which can generalize to new data. 1._ Install libraries
!pip install earthengine-api
_____no_output_____
BSD-3-Clause
scripts_modules/CoLab_Random_Forest_Regression.ipynb
annapav7/NO2-tropomi_prediction_analysis
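To make the regression concept above concrete, here is a minimal sketch on made-up data (hypothetical NDVI-like and NO2-like values, not the TROPOMI/Landsat data used later): a model is fitted on known (x, y) pairs and then asked to predict y for a new x.

# Minimal regression sketch on made-up data: learn y from x, then predict y for an unseen x.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

X_toy = np.array([[0.1], [0.2], [0.4], [0.6], [0.8]])   # hypothetical NDVI-like values
y_toy = np.array([30.0, 27.0, 22.0, 18.0, 15.0])        # hypothetical NO2 levels

toy_model = RandomForestRegressor(n_estimators=50, random_state=0)
toy_model.fit(X_toy, y_toy)

print(toy_model.predict([[0.5]]))  # predicted NO2 for an x-value the model has not seen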
2._ Establish connection
!earthengine authenticate
_____no_output_____
BSD-3-Clause
scripts_modules/CoLab_Random_Forest_Regression.ipynb
annapav7/NO2-tropomi_prediction_analysis
**`Complete End to End Python code for Random Forest Regression:`**
# Import necessary Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import rasterio as rio
from rasterio.plot import show

# Import the data (CSV format)
data = pd.read_csv('name_of_file.csv')
data.head()

# Store the Data in the form of dependent and independent variables separately
X = data.iloc[:, 0:1].values
y = data.iloc[:, 1].values

# Split the data into training and test sets (an 80/20 split; adjust as needed)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Import the Random Forest Regressor
from sklearn.ensemble import RandomForestRegressor

# Create a Random Forest Regressor object from the RandomForestRegressor class
RFReg = RandomForestRegressor(n_estimators=100, random_state=0)

# Fit the random forest regressor with the training data represented by X_train and y_train
RFReg.fit(X_train, y_train)

# Predicted NO2 from the test dataset w.r.t. Random Forest Regression
y_predict_rfr = RFReg.predict(X_test)

# Model Evaluation using R-Square for Random Forest Regression
from sklearn import metrics
r_square = metrics.r2_score(y_test, y_predict_rfr)
print('R-Square Error associated with Random Forest Regression is:', r_square)

'''Visualise the Random Forest Regression by creating a range of values from the min value of
X_train to the max value of X_train with a difference of 0.01 between two consecutive values'''
X_val = np.arange(min(X_train), max(X_train), 0.01)

# Reshape the data into a len(X_val)*1 array in order to make a column out of the X_val values
X_val = X_val.reshape((len(X_val), 1))

# Set the size of the plot
plt.figure(figsize=(1, 1))

# Define a scatter plot for the training data
plt.scatter(X_train, y_train, color='blue')

# Plot the predicted data
plt.plot(X_val, RFReg.predict(X_val), color='red')

# Define the title and axis labels
plt.title('NO2 prediction using Random Forest Regression')
plt.xlabel('NDVI')
plt.ylabel('Level of NO2')

# Draw the plot
plt.show()

# Predicting NO2 based on NDVI using Random Forest Regression
no2_pred = RFReg.predict([[41]])
print("Predicted NO2: %d" % no2_pred)
_____no_output_____
BSD-3-Clause
scripts_modules/CoLab_Random_Forest_Regression.ipynb
annapav7/NO2-tropomi_prediction_analysis
**Model Evaluation**
# Model Evaluation using Mean Squared Error (MSE)
print('Mean Squared Error:', metrics.mean_squared_error(y_test, y_predict_rfr))

# Model Evaluation using Root Mean Squared Error (RMSE)
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(y_test, y_predict_rfr)))

# Model Evaluation using Mean Absolute Error (MAE)
print('Mean Absolute Error:', metrics.mean_absolute_error(y_test, y_predict_rfr))

# Model Evaluation using R-Square
from sklearn import metrics
r_square = metrics.r2_score(y_test, y_predict_rfr)
print('R-Square Error:', r_square)

# For illustration purposes only.
# Considering a multiple linear equation with two variables: grade = a0 + a1*time_to_study + a2*class_participation
# Model Evaluation using Adjusted R-Square.
# Here n = no. of observations and p = no. of independent variables
n = 50
p = 2
Adj_r_square = 1 - (1 - r_square) * (n - 1) / (n - p - 1)
print('Adjusted R-Square Error:', Adj_r_square)
_____no_output_____
BSD-3-Clause
scripts_modules/CoLab_Random_Forest_Regression.ipynb
annapav7/NO2-tropomi_prediction_analysis
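For reference, the metrics computed above follow the standard definitions, where $n$ is the number of observations and $p$ is the number of independent variables:

$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2,\qquad \mathrm{RMSE} = \sqrt{\mathrm{MSE}},\qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\lvert y_i - \hat{y}_i\rvert$$

$$R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2},\qquad R^2_{\mathrm{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$$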
Widgets without writing widgets: interact The `interact` function (`ipywidgets.interact`) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Basic `interact` At the most basic level, `interact` autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use `interact`, you need to define a function that you want to explore. Here is a function that triples its argument, `x`.
def f(x): return 3 * x
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
When you pass this function as the first argument to `interact` along with an integer keyword argument (`x=10`), a slider is generated and bound to the function parameter.
interact(f, x=10);
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
When you move the slider, the function is called, and the return value is printed.If you pass `True` or `False`, `interact` will generate a checkbox:
interact(f, x=True);
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
If you pass a string, `interact` will generate a `Text` field.
interact(f, x='Hi there!');
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
`interact` can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, `interact` also works with functions that have multiple arguments.
@widgets.interact(x=True, y=1.0)
def g(x, y):
    return (x, y)
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Fixing arguments using `fixed` There are times when you may want to explore a function using `interact`, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the `fixed` function.
def h(p, q): return (p, q)
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
When we call `interact`, we pass `fixed(20)` for q to hold it fixed at a value of `20`.
interact(h, p=5, q=fixed(20));
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Notice that a slider is only produced for `p` as the value of `q` is fixed. Widget abbreviations When you pass an integer-valued keyword argument of `10` (`x=10`) to `interact`, it generates an integer-valued slider control with a range of `[-10, +3*10]`. In this case, `10` is an *abbreviation* for an actual slider widget:

```python
IntSlider(min=-10, max=30, step=1, value=10)
```

In fact, we can get the same result if we pass this `IntSlider` as the keyword argument for `x`:
interact(f, x=widgets.IntSlider(min=-10, max=30, step=1, value=10));
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
This example clarifies how `interact` processes its keyword arguments:

1. If the keyword argument is a `Widget` instance with a `value` attribute, that widget is used. Any widget with a `value` attribute can be used, even custom ones.
2. Otherwise, the value is treated as a *widget abbreviation* that is converted to a widget before it is used.

The following table gives an overview of different widget abbreviations:

| Keyword argument | Widget |
|---|---|
| `True` or `False` | Checkbox |
| `'Hi there'` | Text |
| `value` or `(min,max)` or `(min,max,step)` if integers are passed | IntSlider |
| `value` or `(min,max)` or `(min,max,step)` if floats are passed | FloatSlider |
| `['orange','apple']` or `[('one', 1), ('two', 2)]` | Dropdown |

Note that a dropdown is used if a list or a list of tuples is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range). You have seen how the checkbox and text widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given. If a 2-tuple of integers is passed `(min, max)`, an integer-valued slider is produced with those minimum and maximum values (inclusive). In this case, the default step size of `1` is used.
interact(f, x=(0, 4));
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
A `FloatSlider` is generated if any of the values are floating point. The step size can be changed by passing a third element in the tuple.
interact(f, x=(0, 10, 0.01));
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Exercise: Reverse some textHere is a function that takes text as an input and returns the text backwards.
def reverse(x):
    return x[::-1]

reverse('I am printed backwards.')
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Use `interact` to make interactive controls for this function.
# %load solutions/interact-basic-list/reverse-text.py
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
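One possible way to solve the exercise (the contents of `solutions/interact-basic-list/reverse-text.py` are not shown here, so this is only a plausible sketch): pass a string abbreviation so that `interact` builds a text box and reverses whatever you type.

# Plausible solution sketch: a string abbreviation produces a Text widget whose value is reversed.
interact(reverse, x='Hello world!');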
For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to `5.5`.
@interact(x=(0.0, 20.0, 0.5))
def h(x=5.5):
    return x
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.
interact(f, x=['apples','oranges']);
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of tuples of the form `('label', value)`. The first items are the names in the dropdown menu UI and the second items are values that are the arguments passed to the underlying Python function.
interact(f, x=[('one', 10), ('two', 20)]);
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Basic interactive plotThough the examples so far in this notebook had very basic output, more interesting possibilities are straightforward. The function below plots a straight line whose slope and intercept are given by its arguments.
%matplotlib widget
import matplotlib.pyplot as plt
import numpy as np

def f(m, b):
    plt.figure(2)
    plt.clf()
    plt.grid()
    x = np.linspace(-10, 10, num=1000)
    plt.plot(x, m * x + b)
    plt.ylim(-5, 5)
    plt.show()
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
The interactive plot below displays a line whose slope and intercept are set by the sliders. Note that if a variable containing a widget, such as `interactive_plot`, is the last thing in the cell, it is displayed (see the `interactive` variant sketched after the next cell).
interact(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
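A variant using `interactive` rather than `interact` makes the note above explicit: `interactive` returns the widget instead of displaying it immediately, and the widget appears because the variable (named `interactive_plot` here purely for illustration) is the last expression in the cell.

# interactive() returns the widget; it is shown because it is the last expression in the cell.
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
interactive_plot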
Exercise: Make a plotHere is a python function that, given $k$ and $p$, plots $f(x) = \sin(k x - p)$.
def plot_f(k, p):
    plt.figure(5)
    plt.clf()
    plt.grid()
    x = np.linspace(0, 4 * np.pi)
    y = np.sin(k*x - p)
    plt.plot(x, y)
    plt.show()
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
Copy the above function definition and make it interactive using `interact`, so that there are sliders for the parameters $k$ and $p$, where $0.5\leq k \leq 2$ and $0 \leq p \leq 2\pi$ (hint: use `np.pi` for $\pi$).
# %load solutions/interact-basic-list/plot-function.py
_____no_output_____
BSD-3-Clause
notebooks/02.Interact/02.00-Using-Interact.ipynb
ibdafna/tutorial
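One possible solution to the exercise (the referenced `solutions/interact-basic-list/plot-function.py` is not shown, so this is only a sketch using the stated parameter ranges):

# Sliders for k in [0.5, 2] and p in [0, 2*pi]; the float values make interact produce FloatSliders.
interact(plot_f, k=(0.5, 2.0), p=(0, 2 * np.pi));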
We want to analyze participants and patterns of participation across IETF groups: how many people participate, in which groups, how affiliation, gender, RFC authorship, or other characteristics relate to levels of participation, and a variety of other related questions. How do groups relate to one another? Which participants provide important connections between groups? Setup and gather data Start by importing the necessary libraries.
%matplotlib inline
import bigbang.ingress.mailman as mailman
import bigbang.analysis.graph as graph
import bigbang.analysis.process as process
from bigbang.parse import get_date
from bigbang.archive import Archive
import bigbang.utils as utils
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
import csv
import re
import scipy
import scipy.cluster.hierarchy as sch
import email

#pd.options.display.mpl_style = 'default'  # pandas has a set of preferred graph formatting options
plt.rcParams['axes.facecolor'] = 'white'
import seaborn as sns
sns.set()
sns.set_style("white")
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Let's start with a single IETF mailing list. (Later, we can expand to all current groups, or all IETF lists ever.)
list_url = '6lo'  # start with a single list; 6lo is one example
ietf_archives_dir = '../archives'  # relative location of the ietf-archives directory/repo

list_archive = mailman.open_list_archives(list_url, ietf_archives_dir)
activity = Archive(list_archive).get_activity()

people = None
people = pd.DataFrame(activity.sum(0), columns=['6lo'])  # sum the message count, rather than by date
people.describe()
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Now repeat, parsing the archives and collecting the activities for all the mailing lists in the corpus. To make this faster, we try to open pre-created `-activity.csv` files which contain the activity summary for the full list archive. These files are created with `bin/mail_to_activity.py` or might be included in the mailing list archive repository.
f = open('../examples/mm.ietf.org.txt', 'r')
ietf_lists = set(f.readlines())  # remove duplicates, which is a bug in list maintenance

list_activities = []
for list_url in ietf_lists:
    try:
        activity_summary = mailman.open_activity_summary(list_url, ietf_archives_dir)
        if activity_summary is not None:
            list_activities.append((list_url, activity_summary))
    except Exception as e:
        print(str(e))

len(list_activities)
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Merge all of the activity summaries together, so that every row is a "From" field, with a column for every mailing list and a cell that includes the number of messages sent to that list. This will be a very sparse, 2-d table. **This operation is a little slow.** Don't repeat this operation without recreating `people` from the cells above.
list_columns = []
for (list_url, activity_summary) in list_activities:
    list_name = mailman.get_list_name(list_url)
    activity_summary.rename(columns={'Message Count': list_name}, inplace=True)  # name the message count column for the list
    people = pd.merge(people, activity_summary, how='outer', left_index=True, right_index=True)
    list_columns.append(list_name)  # keep a list of the columns that specifically represent mailing list message counts

# the original message column was duplicated during the merge process, so we remove it here
people = people.drop(columns=['6lo_y'])
people = people.rename(columns={'6lo_x': '6lo'})
people.describe()

# not sure how the index ended up with NaN values, but need to change them to strings here so additional steps will work
new_index = people.index.fillna('missing')
people.index = new_index
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Split out the email address and header name from the From header we started with.
froms = pd.Series(people.index)

emails = froms.apply(lambda x: email.utils.parseaddr(x)[1])
emails.index = people.index

names = froms.apply(lambda x: email.utils.parseaddr(x)[0])
names.index = people.index

people['email'] = emails
people['name'] = names
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Let's create some summary statistical columns.
people['Total Messages'] = people[list_columns].sum(axis=1)
people['Number of Groups'] = people[list_columns].count(axis=1)
people['Median Messages per Group'] = people[list_columns].median(axis=1)

people['Total Messages'].sum()
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
In this corpus, **101,510** "people" sent a combined total of **1.2 million messages**. Most people sent only 1 message. Participation patterns The vast majority of people send only a few messages, and to only a couple of lists. (These histograms use a log axis for Y, without which you couldn't even see the columns besides the first.)
people[['Total Messages']].plot(kind='hist', bins=100, logy=True, logx=False)
people[['Number of Groups']].plot(kind='hist', bins=100, logy=True, logx=False)
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
Let's limit our analysis for now to people who have sent more than 5 messages. We will also create log base 10 versions of our summary columns for easier graphing later.
working = people[people['Total Messages'] > 5]
working['Total Messages (log)'] = np.log10(working['Total Messages'])
working['Number of Groups (log)'] = np.log10(working['Number of Groups'])
/home/lem/.local/lib/python2.7/site-packages/ipykernel_launcher.py:3: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy This is separate from the ipykernel package so we can avoid doing imports until /home/lem/.local/lib/python2.7/site-packages/ipykernel_launcher.py:4: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy after removing the cwd from sys.path.
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang
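The `SettingWithCopyWarning` above comes from adding columns to a slice of `people`. A common way to avoid it (a sketch, not the notebook's original code) is to take an explicit copy of the filtered frame before adding the log columns:

# Work on an explicit copy of the filtered frame to avoid SettingWithCopyWarning.
working = people[people['Total Messages'] > 5].copy()
working['Total Messages (log)'] = np.log10(working['Total Messages'])
working['Number of Groups (log)'] = np.log10(working['Number of Groups'])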
The median number of messages that a user sends to a group is also heavily weighted towards a small number, but the curve doesn't seem to drop off in the same extreme manner. Is there a non-random tendency to send a certain number of messages to a group?
working[['Median Messages per Group']].plot(kind='hist', bins=100, logy=True)
_____no_output_____
MIT
examples/experimental_notebooks/IETF Participants.ipynb
nllz/bigbang