Tags: Spark, Distributed Systems, Machine Learning, Hadoop, Scalability

…the train and test modules are similar to the standalone solution.

Fig 10. Predicted regression values (Humidity)
Fig 11. 2020Q4 Humidity: actual and predicted (Spark Local)

The distributed Random Forest Regressor implementation in PySpark also seems to follow the trend, though its mean squared error is a bit higher. Even so, the running time of the Spark Local implementation is found to be about 3x faster on a quad-core system with 64 GB RAM.

Fig 12. Random Forest Regression training time: Standalone 2.14 s, Spark Local 0.71 s
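For context, a minimal sketch of how a distributed Random Forest regressor of the kind described above might be trained with PySpark MLlib. The file name, column names and numTrees value are placeholder assumptions, not the article's actual schema:

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import RandomForestRegressor
from pyspark.ml.evaluation import RegressionEvaluator

spark = SparkSession.builder.appName("rf-humidity").getOrCreate()

# Hypothetical weather dataset; "humidity" is the label, the rest are features.
df = spark.read.csv("weather.csv", header=True, inferSchema=True)
assembler = VectorAssembler(inputCols=["temperature", "pressure", "wind_speed"],
                            outputCol="features")
data = assembler.transform(df).select("features", "humidity")

train, test = data.randomSplit([0.8, 0.2], seed=42)
rf = RandomForestRegressor(featuresCol="features", labelCol="humidity", numTrees=50)
model = rf.fit(train)

# Evaluate with MSE, the metric discussed above.
preds = model.transform(test)
mse = RegressionEvaluator(labelCol="humidity", predictionCol="prediction",
                          metricName="mse").evaluate(preds)
print("MSE:", mse)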
C. Spark Cluster: AWS Elastic MapReduce + Docker

To get the double benefit of compute and data scale, the above solution needs to be deployed across multiple machines. However, it is time consuming to set up a Spark cluster on your own local machines. Instead, you can use Amazon EMR to create a cluster with workloads running on Amazon EC2 instances. Amazon EMR is a managed cluster platform that simplifies running big data frameworks such as Apache Hadoop and Spark. Starting from Amazon EMR 6.0.0, you can use Docker containers to handle library dependencies instead of installing them on each EC2 cluster instance, but you need to configure the Docker registry and define additional parameters during "Cluster Creation" in AWS EMR.

1. Create a Docker image (or modify an existing one)
✓ Create a directory and a requirements.txt file. The requirements.txt file should contain all the dependencies required by your Spark application.
mkdir pyspark
vi pyspark/requirements.txt
vi pyspark/Dockerfile

Sample requirements.txt:
python-dateutil==2.8.1
scikit-image==0.18.1
statsmodels==0.12.2
scikit-learn==0.23.2

✓ Create a Dockerfile inside the directory with the following contents. A specific numpy version is installed to confirm Docker execution from the EMR Notebook later.
FROM amazoncorretto:8
RUN yum -y update
RUN yum -y install yum-utils
RUN yum -y groupinstall development
RUN yum list python3*
RUN yum -y install python3 python3-dev python3-pip python3-virtualenv python-dev
RUN python -V
RUN python3 -V
ENV PYSPARK_DRIVER_PYTHON python3
ENV PYSPARK_PYTHON python3
RUN pip3 install --upgrade pip
RUN pip3 install 'numpy==1.17.5'
RUN python3 -c "import numpy as np"
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

✓ Build the Docker image using the command below.

2. Create an ECR repository; tag and upload the locally built image to ECR.
sudo docker build -t local/pyspark-example pyspark/
aws ecr create-repository --repository-name emr-docker-examples
sudo docker tag local/pyspark-example 123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example
sudo aws ecr get-login --region us-east-1 --no-include-email | sudo docker login -u AWS -p <password> https://123456789123.dkr.ecr.us-east-1.amazonaws.com
sudo docker push 123456789123.dkr.ecr.us-east-1.amazonaws.com/emr-docker-examples:pyspark-example

You can also upload the image to Docker Hub and use 'docker.io/account/docker-name:tag' instead of the AWS ECR image URI above.

3. Create a cluster with Spark in EMR
✓ Open the Amazon EMR console (Ref here)
✓ Instead of 'Quick Options', click 'Go to Advanced Options' to enable:
Jupyter Enterprise Gateway: a web server that launches kernels on behalf of remote notebooks.
JupyterHub: to host multiple instances of the Jupyter notebook server.
Apache Livy: a service that enables interaction with the Spark cluster over a REST interface.
✓ Select the number of nodes of each node type, according to the required parallelism:
Master node: manages the cluster by coordinating the distribution of data and tasks among the other nodes.
Core node: runs tasks and stores data in HDFS (at least one is required for a multi-node cluster).
Task node: only runs tasks and does not store data in HDFS (optional).
Thus, core nodes add both data and compute parallelism, while task nodes add only compute parallelism.

Fig 13. How to create a Spark cluster using AWS EMR

4. Enter the configuration below under 'Software Settings'. To avoid a user-id error, enable Livy impersonation ("livy.impersonation.enabled") in the JSON below.
[
  {
    "Classification": "container-executor",
    "Configurations": [
      {
        "Classification": "docker",
        "Properties": {
          "docker.trusted.registries": "local,centos,123456789123.dkr.ecr.us-east-1.amazonaws.com",
          "docker.privileged-containers.registries": "local,centos,123456789123.dkr.ecr.us-east-1.amazonaws.com"
        }
      }
    ]
  },
  {
    "Classification": "livy-conf",
    "Properties": {
      "livy.impersonation.enabled": "true",
      "livy.spark.master": "yarn",
      "livy.spark.deploy-mode": "cluster",
      "livy.server.session.timeout": "16h"
    }
  },
  {
    "Classification": "core-site",
    "Properties": {
      "hadoop.proxyuser.livy.groups": "*",
      "hadoop.proxyuser.livy.hosts": "*"
    }
  },
  {
    "Classification": "hive-site",
    "Properties": {
      "hive.execution.mode": "container"
    }
  },
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG": "hdfs:///user/hadoop/config.json",
      "spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE": "123456789123.dkr.ecr.us-east-1.amazonaws.com/scalableml:s3spark",
      "spark.executor.instances": "2",
      "spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG": "hdfs:///user/hadoop/config.json",
      "spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE": "123456789123.dkr.ecr.us-east-1.amazonaws.com/scalableml:s3spark"
    }
  }
]
5. Enable ECR access from YARN
To enable YARN to access images from ECR, we set the container environment variable YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG. However, we need to generate config.json and put it in HDFS so that it can be used by jobs running on the cluster. To do this, log in to one of the core nodes and execute the commands below.
ssh -i permission.pem hadoop@<core-node-public-dns>
aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin 123456789123.dkr.ecr.us-east-1.amazonaws.com
mkdir -p ~/.docker
sudo cp /root/.docker/config.json ~/.docker/config.json
sudo chmod 644 ~/.docker/config.json
hadoop fs -put ~/.docker/config.json /user/hadoop/

Fig 14. Giving ECR access to YARN
6. EMR Notebook and Configuration
You can create Jupyter notebooks and attach them to Amazon EMR clusters running Hadoop, Spark, and Livy. EMR Notebooks are saved in AWS S3 independently of the clusters.
✓ Click 'Notebooks' on the Amazon EMR console, then 'Create Notebook'
✓ Select the cluster to run the notebook on
✓ In the Jupyter notebook, put the configuration below in the first cell:
%%configure -f
{"conf": {
  "spark.pyspark.virtualenv.enabled": "false",
  "spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE": "docker",
  "spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG": "hdfs:///user/hadoop/config.json",
  "spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE": "123456789123.dkr.ecr.us-east-1.amazonaws.com/scalableml:s3spark",
  "spark.executor.instances": "2",
  "spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG": "hdfs:///user/hadoop/config.json",
  "spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE": "123456789123.dkr.ecr.us-east-1.amazonaws.com/scalableml:s3spark"
}}
Thus, you can use notebook-scoped Docker in EMR Notebooks to resolve dependencies. Now you can execute PySpark code in subsequent cells:
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("docker-numpy").getOrCreate()
sc = spark.sparkContext
import numpy as np
print(np.__version__)
Ideally, you should see the numpy version 1.17.5 if the above code is running inside the Docker image you built; if not, check the cluster's S3 logging path to debug. You can reuse the Spark standalone code from above, except that the input data should be read from S3 and converted to an RDD, as sketched below.
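A minimal sketch of that change, reusing the spark session created in the cell above; the bucket and key are placeholders, not the article's actual dataset location:

# Read the input CSV directly from S3 (EMR handles the s3:// scheme),
# then convert the DataFrame to an RDD for the existing training code.
df = spark.read.csv("s3://your-bucket/weather/2020Q4.csv", header=True, inferSchema=True)
rdd = df.rdd
print(df.count(), rdd.take(3))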
As visualized below, the time taken by the Random Forest training workload decreases steadily as the number of nodes in the EMR cluster increases. The base overhead of inter-node communication remains even with small datasets, so the curve falls more steeply as the training dataset gets bigger.

Fig 15. EMR cluster performance for various cluster sizes

Alternatively, you can execute the PySpark code on the cluster with the spark-submit command after connecting to the master node via SSH:
✓ Set the DOCKER_IMAGE_NAME and DOCKER_CLIENT_CONFIG environment variables
✓ Execute the .py file with spark-submit and deploy-mode set to 'cluster', reading the input from and writing the output to S3, as sketched below.
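A hedged sketch of what that submission could look like, built from the environment variables and YARN runtime properties named above; the script name and the exact set of --conf flags are assumptions, not the article's exact command:

# Placeholder values; reuse the image URI and config.json path from the cluster configuration.
export DOCKER_IMAGE_NAME=123456789123.dkr.ecr.us-east-1.amazonaws.com/scalableml:s3spark
export DOCKER_CLIENT_CONFIG=hdfs:///user/hadoop/config.json

spark-submit --master yarn \
  --deploy-mode cluster \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=$DOCKER_IMAGE_NAME \
  --conf spark.executorEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=$DOCKER_CLIENT_CONFIG \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_TYPE=docker \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=$DOCKER_IMAGE_NAME \
  --conf spark.yarn.appMasterEnv.YARN_CONTAINER_RUNTIME_DOCKER_CLIENT_CONFIG=$DOCKER_CLIENT_CONFIG \
  --num-executors 2 \
  train_humidity.py   # hypothetical script name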
However, it is more convenient to submit the Spark job via a notebook running on the Spark cluster, with Docker resolving dependencies on all cluster nodes.

D. GPU Cluster: AWS EMR + Spark RAPIDS

You can use the Nvidia Spark-RAPIDS plugin for Spark to accelerate ML pipelines using GPUs attached to EC2 nodes. The core and task instance groups must use EC2 GPU instance types, while the master node can be a non-ARM, non-GPU instance. Spark RAPIDS speeds up data processing and model training without any code change. To create the GPU-accelerated cluster, you can follow the steps in Section C, but with these changes:
✓ Use the JSON given at Step 5 here, instead of the configuration in Section C
✓ Save the bootstrap script given here in S3 as my-bootstap-action.sh
✓ Give the S3 file path from the previous step as the bootstrap script (to use YARN on GPU)
✓ SSH to the master node and install the dependencies. To execute the time series project, run:
pip3 install sklearn statsmodels seaborn matplotlib pandas boto3

No code change is required. Below is the timing comparison of Random Forest Regression training on a Spark-RAPIDS cluster of 1x m5.2xlarge master node and 2x g4dn.4xlarge core nodes against the other execution modes.

Fig 16. Timing comparison: Standalone vs Spark Local vs RAPIDS
However, the speed gain is modest in the case above because the dataset is small. Let's run a variation of the earlier 'alphabet count' code to compare the timings of Spark Local and Spark RAPIDS: a job that generates 100 million tuples of random strings and counts, and feeds them into a distributed count operation.
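A rough sketch of what such a generator and distributed count could look like in PySpark; the key length, value range and partition count are arbitrary assumptions, not the article's exact code:

import random
import string
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("random-count").getOrCreate()
sc = spark.sparkContext

N = 100_000_000  # 100M tuples

def random_pair(_):
    # One (random 3-letter string, random count) tuple per element.
    key = ''.join(random.choices(string.ascii_lowercase, k=3))
    return (key, random.randint(1, 10))

# Distribute the generation itself across the cluster, then reduce by key.
pairs = sc.parallelize(range(N), numSlices=1000).map(random_pair)
counts = pairs.reduceByKey(lambda a, b: a + b)
print(counts.take(10))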
Fig 17. Spark job progress report in AWS EMR Notebook
Fig 18. Timing comparison: Spark Local vs Spark RAPIDS

Closing Thoughts
Apologies for the length of this blog. If you have read this far, you should consider getting your hands dirty with some distributed coding. This article gave you a historical and logical context for Spark, with multiple sample implementations on Local, AWS EMR and Spark RAPIDS. If it has inspired you to explore further, then it has served its purpose. The complete source code of the above experiments can be found here. If you have any query or suggestion, you can reach me here.
Tags: Ordinary Least Squares, P Value, Regression, Statsmodels

In this article, you will learn:
- What is the Ordinary Least Squares (OLS) model?
- How to use it to predict values?
- How to use the Scikit-learn library to fit the model?
- How to use the StatsModels library to fit the model?
- How to interpret the p-value in the regression?
- How to interpret the other results?

Linear Regression: Ordinary Least Squares (OLS)

A linear regression model is a function that expresses a relationship between one or more independent variables xi (also called features or explanatory variables) and a target variable yi.

Simple linear regression model: a simple linear regression model is a regression with only one explanatory variable, where i indexes the i-th observation in the dataset and ɛi is the error term. The parameters of the model are w0, the intercept, and w1, the slope. We call these parameters the regression coefficients (or betas).
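For reference, written out in the same notation used later in the article (yi = w0 + w1xi), the simple model is:

y_i = w_0 + w_1 x_i + \varepsilon_i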
Multiple linear regression model: a multiple linear regression model is a regression with several explanatory variables, where k is the number of explanatory variables and i indexes the i-th observation.
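Similarly, a standard way of writing the multiple model with k explanatory variables, consistent with the definitions above, is:

y_i = w_0 + w_1 x_{i1} + w_2 x_{i2} + \dots + w_k x_{ik} + \varepsilon_i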
Representation and cost function

Let's focus on a simple linear regression model, to be able to explain and illustrate simply the concepts used to estimate the parameters of the model. Assume we have one independent variable X and we want to predict y: the function of the green line is our regression model. The parameters of this model, the intercept w0 and the slope w1, are used to predict yi, with an error term ɛi. This error term, also called the residual, is the difference between the observed value yi and the value predicted by the regression line. The goal of the regression model is to minimize this error term, to find the line that best fits our dataset. In other words, the objective is to find the estimates of w0 and w1 that minimize the error terms ɛi. To determine which line is the best fit for our dataset (f0, f1 or f2, as shown in the following figure), we first need to introduce a quality metric: the RSS (residual sum of squares).

RSS: residual sum of squares

RSS is the sum of the squared differences between the predicted values, computed with the parameters w0 and w1 (yi = w0 + w1xi), and the observed values yi. Equivalently, RSS is the sum of the squared error terms ɛi, and it can be written directly in terms of w0 and w1:
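Concretely, the two equivalent forms referred to above can be written as:

RSS(w_0, w_1) = \sum_{i=1}^{n} \varepsilon_i^2 = \sum_{i=1}^{n} \bigl( y_i - (w_0 + w_1 x_i) \bigr)^2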
The RSS is our cost function: the smaller its value, the smaller the distance between the predicted and observed values, and the better the regression line. Next, we need an algorithm that minimizes this cost over all possible values of w0 and w1. (You can download for free the sample ebook on linear regression to have more information about the algorithm used, gradient descent, to minimize the cost function.) All of this logic is implemented in the Scikit-learn and StatsModels libraries.
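As a rough illustration (not taken from the ebook mentioned above), a few lines of NumPy showing how gradient descent could minimize the RSS for the simple model; the learning rate, iteration count and synthetic data are arbitrary assumptions:

import numpy as np

def fit_simple_ols_gd(x, y, lr=0.01, n_iter=5000):
    """Minimize RSS for y ~ w0 + w1*x by gradient descent."""
    w0, w1 = 0.0, 0.0
    n = len(x)
    for _ in range(n_iter):
        err = (w0 + w1 * x) - y              # residuals for the current parameters
        w0 -= lr * 2 * err.sum() / n         # gradient of RSS w.r.t. w0 (scaled by 1/n)
        w1 -= lr * 2 * (err * x).sum() / n   # gradient of RSS w.r.t. w1 (scaled by 1/n)
    return w0, w1

x = np.random.rand(100)
y = 3.0 + 2.0 * x + 0.1 * np.random.randn(100)
print(fit_simple_ols_gd(x, y))  # close to (3.0, 2.0)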
Before practicing with Python, let's have a look at the overall process to follow when working with machine learning models.

Process of fitting models in machine learning

The steps to follow are:
- Import the libraries you need for your project.
- Load your dataset.
- Split the dataset into train and test sets: the goal is to train your model on the training set and compute its accuracy on the test set, which the model has not yet seen (to be as realistic as possible).
- Normalize the training data, then apply the same transformation to the test set.
- Fit the model.
- Predict.
- Evaluate the model.
In the "fit" and "predict" steps you can try several models, evaluate them, and keep the best-performing one. Let's now take an example and practice.
Python: practical example

Dataset overview

The dataset we will use in this chapter and the following ones is the "diabetes" data used in the "Least Angle Regression" paper. It has 442 observations (patients) and 10 variables (n = 442 patients, p = 10 predictors), one row per patient, with the response variable in the last column. Ten baseline variables (age, sex, body mass index, average blood pressure, and six blood serum measurements) were obtained for each of the n = 442 diabetes patients, as well as the response of interest, a quantitative measure of disease progression one year after baseline.

Import the libraries

Let's import the different packages we need:
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import statsmodels.api as sm
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()

Load the dataset

df=pd.read_csv("https://www4.stat.ncsu.edu/~boos/var.select/diabetes.tab.txt", sep="\t")
print(df.shape)
df.head()
#(442, 11)

Our features are AGE, SEX, BMI, BP, S1, S2, S3, S4, S5 and S6, and the target variable is Y.

Statistical description: to get an overview of the data:
pd.options.display.float_format = '{:,.1f}'.format
df.describe()

In the whole dataset we have 442 observations. The maximum value of Y is 346 and its median is 140.5. The 25th percentile of AGE is 38.2 and the 75th percentile is 59.

Split into train and test

Let's split the dataset into train and test sets (20% test, 80% train):
X_train,X_test,y_train,y_test=train_test_split(df.iloc[:,:-1],df['Y'],test_size=0.2,random_state=42)
print("Train shape",X_train.shape, "Test shape", X_test.shape)
sc_x=StandardScaler()
X_train_sc=sc_x.fit_transform(X_train)
# Apply the scaling to the test data set
X_test_sc=sc_x.transform(X_test)
#Train shape (353, 10) Test shape (89, 10)

Fit the model: OLS in Scikit-learn

Let's fit the Ordinary Least Squares model from Scikit-learn:
reg_ols=LinearRegression() #fit_intercept=True,positive=True
reg_ols.fit(X_train_sc,y_train)
Default parameters apply, such as fit_intercept=True; if you want to modify a value, you can pass it explicitly, for example fit_intercept=False. We fit the OLS model to the scaled training data. The intercept and the coefficients of the fitted model are:
intercept=reg_ols.intercept_
coef_ols=[round(coef,2) for coef in reg_ols.coef_]
print("Intercept", round(intercept,2))
print("Coefs", coef_ols)
#Intercept 153.74
#Coefs [1.75, -11.51, 25.61, 16.83, -44.45, 24.64, 7.68, 13.14, 35.16, 2.35]

Interpret the results

The intercept means that if all the (scaled) features are 0, the predicted value of y is 153.74. For the slopes: the coefficient of AGE is 1.75, which means that, holding all the other variables constant, when age increases by one (normalized) unit, the predicted diabetes progression increases by about 1.75 points.

Predict

#Predict values
y_train_predict=reg_ols.predict(X_train_sc)
y_test_predict=reg_ols.predict(X_test_sc)
Evaluate

Now let's evaluate this model using the R2 score. There are many metrics you can use to evaluate a model; you can find in this article a detailed list of these metrics, including the R2 score, their mathematical formulas, and how they are calculated either by hand in Python or with the Scikit-learn library.
# The coefficient of determination: 1 is perfect prediction
r2_train=r2_score(y_train, y_train_predict)
r2_test=r2_score(y_test, y_test_predict)
print("TRAIN-model direct score R2: %.5f"%reg_ols.score(X_train_sc,y_train))
print("TRAIN-recomputed R2: %.5f" % r2_train)
print("TEST R2: %.3f" % r2_test)
#TRAIN-model direct score R2: 0.52792
#TRAIN-recomputed R2: 0.52792
#TEST R2: 0.453

The R2 score is about 0.528 on the train set and 0.453 on the test set.

Visualization

plt.scatter(y_train_predict,y_train,label='train')
plt.scatter(y_test_predict,y_test,label='test')
plt.plot(y_train,y_train,'-r')
plt.annotate(r"R2_train={0}, R2_test={1}".format(round(r2_train,3),round(r2_test,3)), xy=(180, 20),xytext=(0, 0), textcoords='offset points', fontsize=12)
plt.xlabel("y_predict_ols")
plt.ylabel("y_true")
plt.title("Regression: Predicted vs True y")
plt.legend()
Now let's use StatsModels OLS.

Fit the model: OLS in StatsModels

X_train_sc_cst=sm.add_constant(X_train_sc)
sm_ols=sm.OLS(y_train,X_train_sc_cst).fit()
print("TRAIN: R2: %.5f"%sm_ols.rsquared)
sm_ols.summary()
#TRAIN: R2: 0.52792

Interpret the results

The summary of OLS in StatsModels gives detailed statistical values of the model. The most important are:
- R-squared
- the coefficient of the intercept (const in the table)
- the coefficients of the independent variables x1 to x10, corresponding to AGE through S6: 1.75 to 2.3514
- the p-value for the intercept and for each feature
We get the same coefficient values as with the model fitted in Scikit-learn:
#Intercept 153.74
#Coefs [1.75, -11.51, 25.61, 16.83, -44.45, 24.64, 7.68, 13.14, 35.16, 2.35]

Predict

X_test_sc_cst=sm.add_constant(X_test_sc)
y_test_pred=sm_ols.predict(X_test_sc_cst)

Evaluate

r2_test=r2_score(y_test, y_test_pred)
print("TEST R2: %.3f" % r2_test)
#TEST R2: 0.453

Both the train and the test R2 scores are the same in Scikit-learn and in StatsModels.

Visualization

y_train_predict=sm_ols.predict(X_train_sc_cst)
plt.scatter(y_train_predict,y_train,label='train')
plt.scatter(y_test_pred,y_test,label='test')
plt.plot(y_train,y_train,'-r')
plt.annotate(r"R2_train={0}, R2_test={1}".format(round(r2_train,3),round(r2_test,3)), xy=(180, 20),xytext=(0, 0), textcoords='offset points', fontsize=12)
plt.xlabel("y_predict_ols")
plt.ylabel("y_true")
plt.title("Regression: Predicted vs True y")
plt.legend()
How do we interpret the p-values in the regression?

The p-value leads to the decision to reject or not reject the null hypothesis. Here the null hypothesis states that the coefficient is not statistically different from 0. In other words, if the p-value is below a certain threshold (0.05 is a commonly used value), we reject the null hypothesis, meaning that the coefficient has an explanatory effect on the target value. In our example, AGE, S6, S3, S4 and S2 all have p-values greater than 0.05.

What's next?

What you can do is: remove the feature with the largest p-value (in our case AGE) from the train and test sets, refit the model, check the p-values and the R2 scores on the train AND test sets again, and repeat until all p-values are below 0.05, comparing the R2 scores along the way. If you do this exercise, you will see the R2 score on the test set improve, to about 0.463. A sketch of one iteration of this procedure is shown below.
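A minimal sketch of one such iteration, reusing the variables defined earlier and assuming AGE is the first column of the scaled arrays:

# Drop the AGE column (index 0) from the scaled train and test sets.
X_train_red = np.delete(X_train_sc, 0, axis=1)
X_test_red = np.delete(X_test_sc, 0, axis=1)

# Refit OLS with a constant and inspect the p-values and R2 scores again.
sm_red = sm.OLS(y_train, sm.add_constant(X_train_red)).fit()
print(sm_red.pvalues.round(3))
print("TRAIN R2: %.3f" % sm_red.rsquared)
print("TEST R2: %.3f" % r2_score(y_test, sm_red.predict(sm.add_constant(X_test_red))))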
There are other models that perform this logic better by removing features that are not useful for the prediction: regularization models (sample of Linear Regression).

Summary

In this article, you learned:
- the Ordinary Least Squares (OLS) model (linear regression)
- its cost function, and what the RSS is
- the process of using machine learning models
- how to fit OLS using Scikit-learn
- how to interpret the intercept and the coefficients
- how to fit OLS using StatsModels
- how to interpret the p-value
If you enjoyed reading my article, I would appreciate it if you leave me a comment 👇 Have a look at my…
Tags: AI, Make Money Online, Artificial Intelligence, Earn Money Online, Passive Income

Why You Shouldn't Learn AI in 2024

The Hype Surrounding AI in 2024 is Overblown
I know the hype around AI is at an all-time high, but hear me out. As of 2024, only 35% of companies worldwide have actually adopted AI. That means a whopping 65% of companies are still not using it. So is it really worth your time and effort to learn AI in 2024 when the majority of companies haven't even started using it yet? Sure, the adoption rate might increase in the next few years, but let's look at the bigger picture. What about 10 years from now? When the adoption rate is predicted to reach 100% and every technology-driven company is using AI, that's when it might be worth your while to learn it. We strongly recommend that you check out our guide on how to take advantage of AI in today's passive income economy.

Two Use Cases for Learning Generative AI
When it comes to learning generative AI, there are two main use cases to consider. The first is learning it as a consumer or end-user, which is often referred to as "prompt engineering." The second is learning it as a creator, where you actually develop AI technologies using machine learning and deep learning. In this article, I'll be focusing on the first use case, prompt engineering. I'll explain why, as a tech or non-tech employee, you should be learning how to use generative AI, and how it can benefit your career.

Generative AI as a Tech Employee
Let's take the example of two software engineers, Jessica and Jack, who both work at a company that has adopted generative AI. Jessica uses the technology to debug code and solve customer problems, while Jack relies on his experience and expertise alone. Over the course of a year, Jessica is able to solve 85 tickets, while Jack solves only 56. When it comes to performance reviews, who do you think will stand out? The answer is obvious: Jessica, who has leveraged generative AI to her advantage. If your company allows you to use generative AI, you should start using it to save time, improve efficiency, and make your work more productive. There are already many tools available for tech roles, such as data scientists, that incorporate generative AI capabilities.

Generative AI for Non-Tech Employees
But what about non-tech employees? Do they need to learn generative AI as well? The answer is yes, and here's why. Consider the example of two customer service representatives, Jeffrey and Jennifer. Both of them receive 50 calls on average, but Jeffrey uses generative AI to find solutions to common customer problems, while Jennifer relies on her experience alone. As a result, Jeffrey is able to solve customer issues in 7 hours, while Jennifer takes 8 hours. Over the course of a year, this time difference adds up, making Jeffrey more productive and efficient. The same principle can apply to other non-tech job families, such as construction, plumbing, or even medical professionals. While they may not need to develop AI technologies, they can still benefit from using generative AI to enhance their problem-solving abilities and streamline their workflows.

The Future of AI Adoption
It's important to note that the current AI adoption rate of 35% is not a big number, which means there is still a large share of companies not using AI. However, the adoption rate is expected to keep accelerating, and within the next 10 years it is predicted to reach 100%. This means that sooner or later, every company that touches technology will be using AI in some capacity. So, even if you don't feel the immediate need to learn generative AI in 2024, it's worth considering investing in this skill for the long term. By getting ahead of the curve, you can position yourself as a valuable asset in the job market and stay ahead of the competition. In conclusion, while the hype around AI in 2024 may seem overblown, the reality is that AI is on the path to becoming an essential skill for both tech and non-tech employees. By learning generative AI, you can gain a competitive edge, improve your productivity, and future-proof your career. We strongly recommend that you check out our guide on how to…
Tags: Energy Efficiency, Blockchain, Covid-19, Woz

Energy efficiency is at the heart of a paradox. Many have already heralded it as the solution to the main problems of our time, above all climate change and the post-Covid-19 economic crisis. In spite of this, over 68% of the world's energy is currently produced and consumed inefficiently. There is a lot of talk about energy efficiency, but only a few people benefit from it. And yet its value has never been as concrete as it is today.

First and foremost, energy efficiency means making industrial processes healthier: obtaining more with less. Indeed, a company that makes its energy system more efficient can improve its performance and increase the volume of its activity while consuming less energy and reducing the related costs.

Then there is the second meaning: energy efficiency is a far-sighted choice, an approach that looks to the future of one's company, with an advantageous return also in terms of the quality of life in our cities. The world's population is growing, and it is estimated that by 2030 approximately 60% of people will live in urban areas: this high density will have to be accompanied by building projects that aim for efficiency and technological development, in order to contain the consumption of resources and energy. In this way we will be able to limit the negative environmental impact, with a focus on air quality and CO2 emissions, which have increased by more than 60% over the last 30 years.

Steve Wozniak, EFFORCE Co-Founder

Finally, energy efficiency must be seen as a booster for the economy: a concrete opportunity to generate jobs and employment. This is also confirmed by the IEA, the International Energy Agency, which in a recent article gives it "a strong role to play in boosting jobs and economic growth while also supporting clean energy transitions around the world", especially in this particular moment in time, in which governments worldwide are called upon to limit the economic impact of the pandemic and to stimulate recovery once the health emergency is over.

Top ten CO2 emitting end uses in selected IEA countries, 2017

Benefits for companies, the environment, people and the economy. So why is 68% of the world's energy currently produced and consumed inefficiently? While public debate and information are helping to raise the level of knowledge and awareness of the benefits of energy efficiency, companies choosing to invest in these projects often face bureaucratic and financial difficulties in accessing the capital needed to sustain the high initial costs of energy efficiency projects, even in the presence of a virtually certain return on the initial investment.

EFFORCE was founded precisely to overcome this limit: to encourage the dissemination of energy efficiency projects worldwide, bridging the gap between the supply of and demand for investment. Through its revolutionary platform, EFFORCE enables anyone to invest in energy efficiency and benefit from the savings generated by the funded project, and it does so in an absolutely safe and certified way, thanks to the use of blockchain technology.

Jacopo Visetti, EFFORCE Co-Founder

EFFORCE is addressed to private individuals, large companies and governments, making a market that hitherto was limited to a few accessible…