Table Design Best Practices for ETL | by Kovid Rathee | Towards Data Science
Not so long ago, the approach taken to table design in source systems (application databases) used to be: we don’t care about ETL. Figure it out, we’ll concentrate on building the application. The last couple of years have been great for the development of ETL methodologies, with a lot of open-source tools coming in from some of the big tech companies like Airbnb, LinkedIn, Google, Facebook and so on. And with the cloud going mainstream, providers like Amazon, Google and Microsoft have made sure that they build upon and support all the open-source technologies in the data engineering space.

I have been a part of many ETL projects, some of which have failed miserably and the rest have succeeded. There are many ways an ETL project can go wrong. We’ll talk about one of the most important aspects today: table design in the source system.

ETL pipelines are only as good as the source systems they’re built upon.

This statement holds true irrespective of the effort one puts into the T layer of the ETL pipeline. The transform layer is usually misunderstood as the layer which fixes everything that is wrong with your application and the data the application generates. That is absolutely untrue. Without further ado, let’s look at the bare minimum that you should take into account while designing tables which are going to be ETL’d to a target system.

This should go without saying, but I have seen systems where it is not enforced (as part of the design): every table needs a unique key. It doesn’t matter if the unique key is a single column or composite in nature. Without one, a data engineer can be asked to do a full load of the table every time and infer the changes after each full load. That solution is actually worse than it sounds.

Help data engineers identify new and updated records by exposing simple fields like created_timestamp and updated_timestamp. Make sure that both these fields are populated by the database and not the application; if you want a timestamp populated from the application, keep it in a separate datetime or timestamp field. These columns should be defined something like this:

created_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
updated_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP

Append-only tables with an always-incrementing primary key work as they are; you don’t need audit timestamp columns for those.

In most cases, we’re dealing with relational databases as source systems. It is, therefore, of utmost importance to understand the data flow and lineage within the source system. In most cases, the data flow and lineage in the target system remain the same after the load, although that’s not mandatory. Two things will help here: a service architecture diagram for the application which produces the data, and an ER diagram for the source database.

All these problems can be completely or partially solved even if these points aren’t taken care of in the source system, but none of those solutions are going to be sustainable. And yes, that’s all it takes to get started with building a neat ETL pipeline. As a rule, ETL systems should be tasked just with moving data from one place to another, with a general layer of transformations (one that doesn’t take care of bugs or special one-off cases). Remember the universal principle of garbage in, garbage out: if you have buggy data in the source system, your target system will have buggy data.
A practice that I have seen at many places is that data teams try to patch the buggy data and handle it for the time being. It usually works until the sprint is over, and then invariably the same issue comes back to haunt you. Resist the temptation of getting into the habit of fixing issues like that. The right way to do this is to report the issue, fix the data in the source system and do a clean reload for the time period in question. A reload will be easier to justify than different data showing up on the application UI and the BI dashboard.
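To make that reload concrete, here is a minimal sketch, assuming SQLAlchemy engines for the source and target systems and the database-populated audit columns described above (the helper name and window logic are hypothetical, not part of the original workflow):

import pandas as pd
from sqlalchemy import text

def reload_window(source_engine, target_engine, table, start, end):
    # Hypothetical helper: wipe and re-extract one time window, relying on
    # the database-populated updated_timestamp audit column.
    with target_engine.begin() as conn:
        conn.execute(
            text(f"DELETE FROM {table} "
                 "WHERE updated_timestamp BETWEEN :start AND :end"),
            {"start": start, "end": end},
        )
    # Re-extract the same window from the (now fixed) source system.
    df = pd.read_sql(
        text(f"SELECT * FROM {table} "
             "WHERE updated_timestamp BETWEEN :start AND :end"),
        source_engine, params={"start": start, "end": end},
    )
    # Append the clean rows back into the target.
    df.to_sql(table, target_engine, if_exists="append", index=False)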
[ { "code": null, "e": 764, "s": 172, "text": "Not so far ago, the approach taken to table design in source systems (application databases) used to be — we don’t care about ETL. Figure it out, we’ll concentrate on building the application. The last couple of years have been great for the development of ETL methodologies with a lot of open-source tools coming in from some of the big tech companies like Airbnb, LinkedIn, Google, Facebook and so on. And with cloud going mainstream, providers like Azure, Google and Microsoft have made sure that they build upon and support all the open source technologies in the data engineering space." }, { "code": null, "e": 1013, "s": 764, "text": "I have been a part of many ETL projects, some of which have failed miserably and the rest have succeeded. There are many ways an ETL project can go wrong. We’ll talk about one of the most important aspects today — table design in the source system." }, { "code": null, "e": 1081, "s": 1013, "text": "ETL pipelines are as good as the source systems they’re built upon." }, { "code": null, "e": 1535, "s": 1081, "text": "This statement holds completely true irrespective of the effort one puts in the T layer of the ETL pipeline. The transform layer is usually misunderstood as the layer which fixes everything that is wrong with your application and the data generated by the application. That is absolutely untrue. Without further ago, let’s look at the bare minimum that you should take into account while designing tables which are going to be ETL’d to a target system —" }, { "code": null, "e": 1925, "s": 1535, "text": "This should have be left unsaid but I have seen systems this is also not enforced (as part of the design). It doesn’t matter if the unique key is a single column or composite in nature. Although, one can potentially be asked to do a full load for a table without a unique key and infer the changes after doing the full load every time. This solution would actually be worse than it sounds." }, { "code": null, "e": 2297, "s": 1925, "text": "By enabling data engineers to identify new & updated records by accessing simple fields like created_timestamp and updated_timestamp. Make sure that both these fields are populated by the database and not the application. You should have a separate datetime or timestamp field if you want to populate it from the application. These ones should be defined something like —" }, { "code": null, "e": 2456, "s": 2297, "text": "1. created_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP2. updated_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP" }, { "code": null, "e": 2597, "s": 2456, "text": "Append only tables with an always-incrementing primary key would work as they are. You don’t need to have audit timestamp columns for those." }, { "code": null, "e": 3037, "s": 2597, "text": "In most cases, we’re dealing with relational databases as source systems. It is, therefore, of utmost importance to understand the data flow & lineage within the source system. In most cases, the data flow & lineage in the target system for load remains the same, although that’s not mandatory. Two things will help here — service architecture diagram from the application which produces the data and an ER diagram for the source database." }, { "code": null, "e": 3290, "s": 3037, "text": "All these problems can be completely or partially solved even if these points aren’t taken care of in the source system but none of the solutions are going to be sustainable. 
And yes, that’s all it takes to get started for building a neat ETL pipeline." }, { "code": null, "e": 3622, "s": 3290, "text": "As a rule, ETL systems should be tasked just to move data from one place to another with a general layer of transformations (one that doesn’t take care of bugs, special one-off cases). Remember the universal principle of Garbage in Garbage out — if you have buggy data in the source system, your target system will have buggy data." } ]
Traffic Sign Detection using Convolutional Neural Network | by Sanket Doshi | Towards Data Science
Convolutional neural networks, also called ConvNets or CNNs, are very important to learn if you want to pursue a career in the computer vision field. CNNs run neural networks directly on images and are more efficient and accurate than many other deep neural networks. ConvNet models are also easier and faster to train on images compared to other models. If you’re not familiar with the basics of ConvNets, you can learn them from here.

We will be using the keras package to build the CNN model.

The German traffic sign detection dataset is provided here. The dataset consists of 39209 images with 43 different classes. The images are distributed unevenly between those classes, and hence the model may predict some classes more accurately than others.

We can augment the dataset with various image-modifying techniques such as rotation, colour distortion or blurring. We will first train the model on the original dataset and check its accuracy. Then we’ll add more data to even out each class and check the model’s accuracy again.

One of the limitations of CNN models is that they cannot be trained on images of different dimensions, so it is mandatory to have same-dimension images in the dataset. We’ll check the dimensions of all the images in the dataset so that we can process them into similar dimensions. In this dataset, the images have a very dynamic range of dimensions, from 16*16*3 to 128*128*3, and hence cannot be passed directly to the ConvNet model.

We need to compress or interpolate the images to a single dimension. So as not to compress too much of the data and not to stretch the images too much, we need to pick a dimension in between that keeps the image data mostly intact. I’ve decided to use the dimension 64*64*3.

We will transform the images into the given dimension using the opencv package.

import cv2

def resize_cv(img):
    return cv2.resize(img, (64, 64), interpolation=cv2.INTER_AREA)

cv2 is the package name of opencv. The resize method transforms the image into the given dimension; here, we’re transforming an image into the 64*64 dimension. The interpolation argument defines which technique to use for stretching or compressing the images. Opencv provides 5 interpolation techniques, based on the method they use to evaluate the pixel values of the resulting image: INTER_AREA, INTER_NEAREST, INTER_LINEAR, INTER_CUBIC and INTER_LANCZOS4. We’ll be using the INTER_AREA interpolation technique; it is preferred for image decimation, and for extrapolation it behaves like INTER_NEAREST. We could have used INTER_CUBIC, but it requires high computation power, so we will not be using it.

Above, we learned how we’ll pre-process the images. Now we’ll load the dataset, converting the images to the decided dimension along the way.

The dataset consists of 43 classes in total. In other words, 43 different types of traffic signs are present in the dataset, and each sign has its own folder of images in different sizes and clarity. In total, 39209 images are present in the dataset.

We can plot a histogram of the number of images present for the different traffic signs.

import seaborn as sns

fig = sns.distplot(output, kde=False, bins=43, hist=True,
                   hist_kws=dict(edgecolor="black", linewidth=2))
fig.set(title="Traffic signs frequency graph",
        xlabel="ClassId",
        ylabel="Frequency")

ClassId is the unique id given to each unique traffic sign.
As we can see from the graph, the dataset does not contain an equal number of images for each class, and hence the model may be biased towards detecting some traffic signs more accurately than others. We can make the dataset consistent by altering the images using rotation or distortion techniques, but we’ll do this some other time.

As the dataset is divided into multiple folders and the naming of the images is not consistent, we’ll load all the images, converting them to the (64*64*3) dimension, into one list, list_images, and the traffic sign each image represents into another list, output. We’ll be reading the images using imread.

list_images = []
output = []
for dir in os.listdir(data_dir):
    if dir == '.DS_Store':
        continue
    inner_dir = os.path.join(data_dir, dir)
    csv_file = pd.read_csv(os.path.join(inner_dir, "GT-" + dir + '.csv'), sep=';')
    for row in csv_file.iterrows():
        img_path = os.path.join(inner_dir, row[1].Filename)
        # imread comes from the notebook's imports (not shown in the article).
        img = imread(img_path)
        # Crop to the region of interest given in the CSV, then resize.
        img = img[row[1]['Roi.X1']:row[1]['Roi.X2'], row[1]['Roi.Y1']:row[1]['Roi.Y2'], :]
        img = resize_cv(img)
        list_images.append(img)
        output.append(row[1].ClassId)

data_dir is the path to the directory where the dataset is present.

The dataset is loaded, and now we need to divide it into training and testing sets, and also a validation set. But if we divide it directly, the model will not be trained on all the traffic signs, because the dataset is not randomized. So first we’ll randomize the dataset.

input_array = np.stack(list_images)

import keras
train_y = keras.utils.np_utils.to_categorical(output)
randomize = np.arange(len(input_array))
np.random.shuffle(randomize)
x = input_array[randomize]
y = train_y[randomize]

Note that I’ve converted the output array to a categorical (one-hot) output, because that is the form in which the model returns its predictions.

Now, splitting the dataset. We’ll split the dataset in a 60:20:20 ratio as training, validation and test datasets respectively.
split_size = int(x.shape[0] * 0.6)
train_x, val_x = x[:split_size], x[split_size:]
train1_y, val_y = y[:split_size], y[split_size:]

split_size = int(val_x.shape[0] * 0.5)
val_x, test_x = val_x[:split_size], val_x[split_size:]
val_y, test_y = val_y[:split_size], val_y[split_size:]

from keras.layers import Dense, Dropout, Flatten, Input
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import BatchNormalization
from keras.optimizers import Adam
from keras.models import Sequential

hidden_num_units = 2048
hidden_num_units1 = 1024
hidden_num_units2 = 128
output_num_units = 43

epochs = 10
batch_size = 16
pool_size = (2, 2)
#list_images /= 255.0
input_shape = Input(shape=(32, 32, 3))  # unused; the first Conv2D layer below sets the input shape

model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(64, 64, 3), padding='same'),
    BatchNormalization(),
    Conv2D(16, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    Conv2D(32, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPooling2D(pool_size=pool_size),
    Dropout(0.2),

    Flatten(),
    Dense(units=hidden_num_units, activation='relu'),
    Dropout(0.3),
    Dense(units=hidden_num_units1, activation='relu'),
    Dropout(0.3),
    Dense(units=hidden_num_units2, activation='relu'),
    Dropout(0.3),
    Dense(units=output_num_units, input_dim=hidden_num_units, activation='softmax'),
])

model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4),
              metrics=['accuracy'])

trained_model_conv = model.fit(train_x.reshape(-1, 64, 64, 3), train1_y,
                               epochs=epochs, batch_size=batch_size,
                               validation_data=(val_x, val_y))

We’ve used the keras package. To understand the significance of each layer, you can read this blog.

model.evaluate(test_x, test_y)

The model gets evaluated, and you should find an accuracy of about 99%.

pred = model.predict_classes(test_x)

You can predict the class for each image and verify how the model works. You can find the whole working code here.
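As a quick sanity check, here is a minimal sketch (assuming the pred and test_y variables from the cells above; not part of the original post) that computes the test accuracy directly from the predicted classes:

import numpy as np

# pred holds predicted class ids; test_y holds one-hot labels,
# so we compare against the argmax along the class axis.
accuracy = np.mean(pred == np.argmax(test_y, axis=1))
print("Test accuracy:", accuracy)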
[ { "code": null, "e": 530, "s": 171, "text": "Convolutional neural networks or ConvNets or CNN’s are very important to learn if you want to pursue a career in the computer vision field. CNN help in running neural networks directly on images and are more efficient and accurate than many of the deep neural networks. ConvNet models are easy and faster to train on images comparatively to the other models." }, { "code": null, "e": 610, "s": 530, "text": "If you’re not familiar with the basics of ConvNet’s you can learn it from here." }, { "code": null, "e": 661, "s": 610, "text": "We will be using keras package to build CNN model." }, { "code": null, "e": 925, "s": 661, "text": "The German traffic signs detection dataset is provided here. The dataset consists of 39209 images with 43 different classes. The images are distributed unevenly between those classes and hence the model may predict some classes more accurately than other classes." }, { "code": null, "e": 1236, "s": 925, "text": "We can populate the dataset with various image modifying techniques such as rotation, colour distortion or blurring the image. We will be training the model on the original dataset and will see the accuracy of the model. Then we’ll be adding more data and making each class even and check the model’s accuracy." }, { "code": null, "e": 1409, "s": 1236, "text": "One of the limitations of the CNN model is that they cannot be trained on a different dimension of images. So, it is mandatory to have same dimension images in the dataset." }, { "code": null, "e": 1682, "s": 1409, "text": "We’ll check the dimension of all the images of the dataset so that we can process the images into having similar dimensions. In this dataset, the images have a very dynamic range of dimensions from 16*16*3 to 128*128*3 hence cannot be passed directly to the ConvNet model." }, { "code": null, "e": 1955, "s": 1682, "text": "We need to compress or interpolate the images to a single dimension. Not, to compress much of the data and not to stretch the image too much we need to decide the dimension which is in between and keep the image data mostly accurate. I’ve decided to use dimension 64*64*3." }, { "code": null, "e": 2030, "s": 1955, "text": "We will transform the image into the given dimension using opencv package." }, { "code": null, "e": 2128, "s": 2030, "text": "import cv2def resize_cv(img): return cv2.resize(img, (64, 64), interpolation = cv2.INTER_AREA)" }, { "code": null, "e": 2863, "s": 2128, "text": "cv2 is a package of opencv . resize method transforms the image into the given dimension. Here, we’re transforming an image into the 64*64 dimension. Interpolation will define what type of technique you want to use for stretching or for compressing the images. Opencv provides 5 types of interpolation techniques based on the method they use to evaluate the pixel values of the resulting image. The techniques are INTER_AREA, INTER_NEAREST, INTER_LINEAR, INTER_CUBIC, INTER_LANCZOS4 . We’ll be using INTER_AREA interpolation technique it’s more preferred for image decimation but for extrapolation technique it’s similar as INTER_NEAREST . We could have used INTER_CUBIC but it requires high computation power so will be not using it." }, { "code": null, "e": 2995, "s": 2863, "text": "Above we learned how we’ll pre-process the images. Now, we’ll load the dataset along with converting them in the decided dimension." }, { "code": null, "e": 3260, "s": 2995, "text": "The dataset consist of 43 classes total. 
In other words, 43 different types of traffic signs are present in that dataset and each sign has it’s own folder consisting of images in different sizes and clarity. Total 39209 number of images are present in the dataset." }, { "code": null, "e": 3344, "s": 3260, "text": "We can plot the histogram for number of images present for different traffic signs." }, { "code": null, "e": 3578, "s": 3344, "text": "import seaborn as snsfig = sns.distplot(output, kde=False, bins = 43, hist = True, hist_kws=dict(edgecolor=\"black\", linewidth=2))fig.set(title = \"Traffic signs frequency graph\", xlabel = \"ClassId\", ylabel = \"Frequency\")" }, { "code": null, "e": 3640, "s": 3578, "text": "ClassId is the unique id given for each unique traffic signs." }, { "code": null, "e": 3837, "s": 3640, "text": "As, we can see from the graph that the dataset does not contain equal amount of images for each class and hence, the model may be biased in detecting some traffic signs more accurately than other." }, { "code": null, "e": 3970, "s": 3837, "text": "We can make the dataset consistent by altering the images using rotation or distortion techniques but we’ll do this some other time." }, { "code": null, "e": 4257, "s": 3970, "text": "As the dataset is divided into multiple folders and the naming of images is not consistent we’ll load all the images by converting them in (64*64*3) dimension into one list list_image and the traffic sign it resembles into another list output. We’ll be reading the images using imread ." }, { "code": null, "e": 4797, "s": 4257, "text": "list_images = []output = []for dir in os.listdir(data_dir): if dir == '.DS_Store' : continue inner_dir = os.path.join(data_dir, dir) csv_file = pd.read_csv(os.path.join(inner_dir,\"GT-\" + dir + '.csv'), sep=';') for row in csv_file.iterrows() : img_path = os.path.join(inner_dir, row[1].Filename) img = imread(img_path) img = img[row[1]['Roi.X1']:row[1]['Roi.X2'],row[1]['Roi.Y1']:row[1]['Roi.Y2'],:] img = resize_cv(img) list_images.append(img) output.append(row[1].ClassId)" }, { "code": null, "e": 4865, "s": 4797, "text": "data_dir is the path to the directory where the dataset is present." }, { "code": null, "e": 5135, "s": 4865, "text": "The dataset is loaded and now we need to divide it into training and testing set. And also in validation set. But if we divide directly then the model will not be get trained all the traffic signs as the dataset is not randomized. So, first we’ll randomize the dataset." }, { "code": null, "e": 5351, "s": 5135, "text": "input_array = np.stack(list_images)import kerastrain_y = keras.utils.np_utils.to_categorical(output)randomize = np.arange(len(input_array))np.random.shuffle(randomize)x = input_array[randomize]y = train_y[randomize]" }, { "code": null, "e": 5461, "s": 5351, "text": "We can see that I’ve converted the output array to categorical output as the model will return in such a way." }, { "code": null, "e": 5583, "s": 5461, "text": "Now, splitting the dataset. We’ll split the dataset in 60:20:20 ratio as training, validation, test dataset respectively." 
}, { "code": null, "e": 5855, "s": 5583, "text": "split_size = int(x.shape[0]*0.6)train_x, val_x = x[:split_size], x[split_size:]train1_y, val_y = y[:split_size], y[split_size:]split_size = int(val_x.shape[0]*0.5)val_x, test_x = val_x[:split_size], val_x[split_size:]val_y, test_y = val_y[:split_size], val_y[split_size:]" }, { "code": null, "e": 7444, "s": 5855, "text": "from keras.layers import Dense, Dropout, Flatten, Inputfrom keras.layers import Conv2D, MaxPooling2Dfrom keras.layers import BatchNormalizationfrom keras.optimizers import Adamfrom keras.models import Sequentialhidden_num_units = 2048hidden_num_units1 = 1024hidden_num_units2 = 128output_num_units = 43epochs = 10batch_size = 16pool_size = (2, 2)#list_images /= 255.0input_shape = Input(shape=(32, 32,3))model = Sequential([Conv2D(16, (3, 3), activation='relu', input_shape=(64,64,3), padding='same'), BatchNormalization(),Conv2D(16, (3, 3), activation='relu', padding='same'), BatchNormalization(), MaxPooling2D(pool_size=pool_size), Dropout(0.2), Conv2D(32, (3, 3), activation='relu', padding='same'), BatchNormalization(), Conv2D(32, (3, 3), activation='relu', padding='same'), BatchNormalization(), MaxPooling2D(pool_size=pool_size), Dropout(0.2), Conv2D(64, (3, 3), activation='relu', padding='same'), BatchNormalization(), Conv2D(64, (3, 3), activation='relu', padding='same'), BatchNormalization(), MaxPooling2D(pool_size=pool_size), Dropout(0.2),Flatten(),Dense(units=hidden_num_units, activation='relu'), Dropout(0.3), Dense(units=hidden_num_units1, activation='relu'), Dropout(0.3), Dense(units=hidden_num_units2, activation='relu'), Dropout(0.3), Dense(units=output_num_units, input_dim=hidden_num_units, activation='softmax'),])model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=1e-4), metrics=['accuracy'])trained_model_conv = model.fit(train_x.reshape(-1,64,64,3), train1_y, epochs=epochs, batch_size=batch_size, validation_data=(val_x, val_y))" }, { "code": null, "e": 7470, "s": 7444, "text": "We’ve used keras package." }, { "code": null, "e": 7543, "s": 7470, "text": "For understanding about each layers significance you can read this blog." }, { "code": null, "e": 7574, "s": 7543, "text": "model.evaluate(test_x, test_y)" }, { "code": null, "e": 7633, "s": 7574, "text": "The model gets evaluated and you can find accuracy of 99%." }, { "code": null, "e": 7670, "s": 7633, "text": "pred = model.predict_classes(test_x)" }, { "code": null, "e": 7747, "s": 7670, "text": "You can predict the class for each image and can verify how the model works." } ]
Randomly SELECT distinct rows in a MySQL table?
To randomly select rows, use ORDER BY RAND() with LIMIT. Use DISTINCT for distinct rows. Let us first see an example and create a table −

mysql> create table DemoTable
(
   Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,
   Name varchar(40)
);
Query OK, 0 rows affected (0.54 sec)

Insert some records in the table using the insert command −

mysql> insert into DemoTable(Name) values('John Doe');
Query OK, 1 row affected (0.13 sec)
mysql> insert into DemoTable(Name) values('Chris Brown');
Query OK, 1 row affected (0.11 sec)
mysql> insert into DemoTable(Name) values('Adam Smith');
Query OK, 1 row affected (0.09 sec)
mysql> insert into DemoTable(Name) values('John Doe');
Query OK, 1 row affected (0.24 sec)
mysql> insert into DemoTable(Name) values('John Doe');
Query OK, 1 row affected (0.23 sec)
mysql> insert into DemoTable(Name) values('Chris Brown');
Query OK, 1 row affected (0.53 sec)
mysql> insert into DemoTable(Name) values('Adam Smith');
Query OK, 1 row affected (0.12 sec)

Display all records from the table using the select statement −

mysql> select * from DemoTable;

This will produce the following output −

+----+-------------+
| Id | Name        |
+----+-------------+
|  1 | John Doe    |
|  2 | Chris Brown |
|  3 | Adam Smith  |
|  4 | John Doe    |
|  5 | John Doe    |
|  6 | Chris Brown |
|  7 | Adam Smith  |
+----+-------------+
7 rows in set (0.00 sec)

Following is the query to randomly select three distinct rows in a table −

mysql> select distinct Name from DemoTable order by rand() limit 3;

This will produce the following output −

+-------------+
| Name        |
+-------------+
| Chris Brown |
| John Doe    |
| Adam Smith  |
+-------------+
3 rows in set (0.00 sec)
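If you need the same query from application code, the following is a minimal sketch (hypothetical connection parameters, not part of the original answer) using Python’s mysql-connector driver −

import mysql.connector

# Hypothetical credentials; replace them with your own.
conn = mysql.connector.connect(host="localhost", user="root",
                               password="password", database="web")
cursor = conn.cursor()
cursor.execute("SELECT DISTINCT Name FROM DemoTable ORDER BY RAND() LIMIT 3")
for (name,) in cursor.fetchall():
    # Each fetched row is a one-element tuple.
    print(name)
conn.close()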
[ { "code": null, "e": 1200, "s": 1062, "text": "To randomly select rows, use ORDER BY RAND() with LIMIT. Use DISTINCT for distinct rows. Let us first see an example and create a table −" }, { "code": null, "e": 1339, "s": 1200, "text": "mysql> create table DemoTable\n(\n Id int NOT NULL AUTO_INCREMENT PRIMARY KEY,\n Name varchar(40)\n);\nQuery OK, 0 rows affected (0.54 sec)" }, { "code": null, "e": 1395, "s": 1339, "text": "Insert some records in the table using insert command −" }, { "code": null, "e": 2042, "s": 1395, "text": "mysql> insert into DemoTable(Name) values('John Doe');\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into DemoTable(Name) values('Chris Brown');\nQuery OK, 1 row affected (0.11 sec)\nmysql> insert into DemoTable(Name) values('Adam Smith');\nQuery OK, 1 row affected (0.09 sec)\nmysql> insert into DemoTable(Name) values('John Doe');\nQuery OK, 1 row affected (0.24 sec)\nmysql> insert into DemoTable(Name) values('John Doe');\nQuery OK, 1 row affected (0.23 sec)\nmysql> insert into DemoTable(Name) values('Chris Brown');\nQuery OK, 1 row affected (0.53 sec)\nmysql> insert into DemoTable(Name) values('Adam Smith');\nQuery OK, 1 row affected (0.12 sec)" }, { "code": null, "e": 2102, "s": 2042, "text": "Display all records from the table using select statement −" }, { "code": null, "e": 2133, "s": 2102, "text": "mysql> select *from DemoTable;" }, { "code": null, "e": 2174, "s": 2133, "text": "This will produce the following output −" }, { "code": null, "e": 2430, "s": 2174, "text": "+----+-------------+\n| Id | Name |\n+----+-------------+\n| 1 | John Doe |\n| 2 | Chris Brown |\n| 3 | Adam Smith |\n| 4 | John Doe |\n| 5 | John Doe |\n| 6 | Chris Brown |\n| 7 | Adam Smith |\n+----+-------------+\n7 rows in set (0.00 sec)" }, { "code": null, "e": 2503, "s": 2430, "text": "Following is the query to randomly select two distinct rows in a table −" }, { "code": null, "e": 2571, "s": 2503, "text": "mysql> select distinct Name from DemoTable order by rand() limit 3;" }, { "code": null, "e": 2612, "s": 2571, "text": "This will produce the following output −" }, { "code": null, "e": 2749, "s": 2612, "text": "+-------------+\n| Name |\n+-------------+\n| Chris Brown |\n| John Doe |\n| Adam Smith |\n+-------------+\n3 rows in set (0.00 sec)" } ]
Introduction to Hydra.cc: A Powerful Framework to Configure your Data Science Projects | by Khuyen Tran | Towards Data Science
It is fun to play with different feature engineering methods and machine learning models, but you will most likely need to adjust your feature engineering methods and tune your machine learning models before getting a good result.

For example, in the speed dating data below, you might want to drop iid, id, idg, wave and career, considering that they are not important features. But after doing more research about the data, you realize that career would be an important feature for predicting whether two people will have a next date. So you decide not to drop the career column.

If you are hard-coding, which means embedding data directly into the source code of a script, like below, and your file is long, it might take a while for you to find the code that specifies which columns to drop. Wouldn’t it be great if you could fix the columns from a simple text file that solely contains information about the data, without any other Python code, like this instead?

This is when you need a configuration file.

A configuration file contains plain-text parameters that define the settings for running a program. It is a good practice to avoid hard-coding in your Python scripts, keeping all information related to the data, such as which columns to drop and which variables are categorical, in your config file instead. This practice not only saves you from wasting time searching for a specific variable in your scripts, but also makes your scripts more reproducible.

For example, I could reuse this code for entirely different data because there are no column names specified in the code. To make the code work for the new data, all I need to fix is the column names in my config file!

A common language for config files is YAML. YAML is a human-friendly data serialization standard for all programming languages. The syntax is easy to read and quite similar to Python. Find out more about YAML syntax here.

I hope the short explanation above helps you understand the importance of a config file. But how do we go about accessing the parameters inside a config file?

There are some tools out there to read a config file, such as PyYAML, but my favorite one is Hydra.cc. Why? Because it allows me to:

Seamlessly change my default parameters in the terminal

Switch between different config groups

Automatically log the results

Let’s find out how to get started with Hydra.cc and explore the benefits of using this powerful tool.

Install Hydra.cc with

pip install hydra-core --upgrade

Let’s start with a concrete example. If you have a config file like the one below, with all the specific information about the data path, encoding, the kind of pipeline and the target column, all you need to do is add the decorator @hydra.main(config_path='path/to/config.yaml') to the function that will use the config file. Make sure to add config as a parameter of the function to get access to the config file.

Now you are all set to use any parameters in your config file! If you want to get the name of the target,

target: match

all you need to do is call config.target to get the string ‘match’!

Notice that you don’t need to put quotation marks around the word ‘match’. The YAML file will consider it a string if it is a word.
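For concreteness, here is a minimal sketch (with hypothetical file and key names; the article’s own files are not reproduced here) of a config and a decorated script working together, following the pre-1.0 Hydra API that this article uses:

# config.yaml (hypothetical contents)
# model: decisiontree
# target: match

# train.py
import hydra

@hydra.main(config_path='config.yaml')
def train(config):
    # Values are read from config.yaml and exposed as attributes.
    print(config.model)   # decisiontree
    print(config.target)  # match

if __name__ == "__main__":
    train()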
Hydra.cc allows you to override your default parameters inside the config file from the terminal. For example, if you want to switch the machine learning model from a decision tree to logistic regression, you don’t need to rewrite the config file. You could instead type the alternative parameter value in the terminal when running the file:

python file.py model=logisticregression

And the model will be switched to logistic regression!

Better yet, if your config file is complex, Hydra.cc also allows you to access parameters in the file more easily with tab completion! You can find the details about tab completion here.

To keep your config files short and structured, you might want to create different files for different models along with their parameters, like this. You can then specify the config file of the model you want to train on the command line:

python file.py model=logistic

Now you can switch between different models and get access to their hyperparameters effortlessly!

Logging is important if you want to keep track of the results of your runs. But many people don’t use Python logging because of the setup cost. Hydra.cc makes it easy by automatically creating and saving all of your results in the ‘outputs’ folder, organized by day. Each day’s folder is organized by hours and minutes. You will see all the logs associated with your runs, as well as the config files you used for those runs!

If you happen to change your config file and don’t remember what the config file you used to produce a certain output looked like, you can look at that day’s folder to find out!

Find out more about logging here.

Since you are in the ‘outputs’ directory when running the function that is wrapped by the hydra decorator, make sure to use utils.to_absolute_path('path/to/file') if you want to get access to other files in the parent directory.

Current working directory  : /Users/khuyentran/dev/hydra/outputs/2019-10-23/10-53-03
Original working directory : /Users/khuyentran/dev/hydra
to_absolute_path('foo')    : /Users/khuyentran/dev/hydra/foo
to_absolute_path('/foo')   : /foo

Congratulations! You have learned why a configuration file is important and how to seamlessly configure your data science projects. I find it much more organized to have all information related to the data in a separate file. I also find it easier to experiment with different parameters when all I need to do is call python file.py variable=new_value. I hope you gain the same benefits by incorporating configuration files and Hydra.cc into your data science practice.

Here is the example project that uses Hydra.cc and a config file.

I like to write about basic data science concepts and play with different algorithms and data science tools. You can connect with me on LinkedIn and Twitter.

Star this repo if you want to check out the code for all of the articles I have written. Follow me on Medium to stay informed with my latest data science articles like these.
[ { "code": null, "e": 405, "s": 172, "text": "It is fun to play with different feature engineering methods and machine learning models, but you will most likely need to adjust your feature engineering methods and tuning your machine learning models before getting a good result." }, { "code": null, "e": 746, "s": 405, "text": "For example, in the speed dating data below, you might want to drop iid, id, idg, wave, career considering that they are not important features. But after doing more research about the data, you realize that career would be an important feature to predict whether two people would have the next date. So you decide not to dropcareer column." }, { "code": null, "e": 850, "s": 746, "text": "If you are hard coding, which means to embed data directly into the source code of a script, like below" }, { "code": null, "e": 1114, "s": 850, "text": "and your file is long, it might take a while for you to find the code that specifies which columns to drop. Wouldn’t it be great if you fix the columns from a simple text that solely contains information about the data without other python code like this instead?" }, { "code": null, "e": 1158, "s": 1114, "text": "This is when you need a configuration file." }, { "code": null, "e": 1440, "s": 1158, "text": "A configuration file contains plain text parameters that define settings running a program. It is a good practice to avoid hard-coding in your python scripts while keeping all information related to the data such as which columns to drop, categorical variables in your config file." }, { "code": null, "e": 1587, "s": 1440, "text": "This practice not only saves you from wasting time searching for a specific variable in your scripts but also make your scripts more reproducible." }, { "code": null, "e": 1823, "s": 1587, "text": "For example, I could reuse this code for an entirely different data because there are no columns’ names specified in the code. To make the code work for the new data, all I need to fix is to change the columns’ names in my config file!" }, { "code": null, "e": 2046, "s": 1823, "text": "A common language of a config file is YAML. YAML is a human-friendly data serialization standard for all programming languages. The syntax is easy to read and almost similar to Python. Find out more about YAML syntax here." }, { "code": null, "e": 2214, "s": 2046, "text": "I hope the short explanation above helps you somewhat understand the importance of a config file. But how do we go about accessing the parameters inside a config file?" }, { "code": null, "e": 2345, "s": 2214, "text": "There are some tools out there to read a config file such as PyYaml but my favorite one is Hydra.cc. Why? Because it allows me to:" }, { "code": null, "e": 2401, "s": 2345, "text": "Seamlessly change my default parameters in the terminal" }, { "code": null, "e": 2440, "s": 2401, "text": "Switch between different config groups" }, { "code": null, "e": 2470, "s": 2440, "text": "Automatically log the results" }, { "code": null, "e": 2572, "s": 2470, "text": "Let’s find out how to get started with hydra.cc and explore the benefits of using this powerful tool." 
}, { "code": null, "e": 2594, "s": 2572, "text": "Install Hydra.cc with" }, { "code": null, "e": 2627, "s": 2594, "text": "pip install hydra-core --upgrade" }, { "code": null, "e": 2664, "s": 2627, "text": "Let’s start with a concrete example:" }, { "code": null, "e": 2821, "s": 2664, "text": "For example, if you have the config file like below with all the specific information about the data path, encoding, the kind of pipeline, and target column" }, { "code": null, "e": 3036, "s": 2821, "text": "All you need to do is to add the decorator @hydra.main(config_path='path/to/config.yaml') to the function that will use the config file. Make sure to add config inside the function to get access to the config file." }, { "code": null, "e": 3142, "s": 3036, "text": "Now you are all set to use any parameters in your config file! If you want to get the name of the target," }, { "code": null, "e": 3156, "s": 3142, "text": "target: match" }, { "code": null, "e": 3227, "s": 3156, "text": "all you need to do is to call config.target to get the string ‘match’!" }, { "code": null, "e": 3365, "s": 3227, "text": "Notice that you don’t need to put the quotation mark around the word ‘match’. The YAML file will consider it as a string if it is a word." }, { "code": null, "e": 3565, "s": 3365, "text": "Hydra.cc allows you to override your default parameters inside the config file in the terminal. For example, if you want to switch the machine learning model from decision tree to logistic regression" }, { "code": null, "e": 3712, "s": 3565, "text": "you don’t need to rewrite the config file. You could instead type in the terminal the alternative parameters of your variables when running a file" }, { "code": null, "e": 3752, "s": 3712, "text": "python file.py model=logisticregression" }, { "code": null, "e": 3807, "s": 3752, "text": "And the model will be switched to logistic regression!" }, { "code": null, "e": 3995, "s": 3807, "text": "Better yet, if your config file is complex, Hydra.cc also allows you to access parameters in the file easier with the tab completion! You could find the details about tab completion here." }, { "code": null, "e": 4146, "s": 3995, "text": "To keep your config files short and structured, you might want to create different files for different models along with their parameters such as this" }, { "code": null, "e": 4229, "s": 4146, "text": "You can specify the config file of the model you want to train on the command line" }, { "code": null, "e": 4259, "s": 4229, "text": "python file.py model=logistic" }, { "code": null, "e": 4357, "s": 4259, "text": "Now you can switch between different models and get access to their hyperparameters effortlessly!" }, { "code": null, "e": 4620, "s": 4357, "text": "Logging is important if you want to keep track of the results of your run. But many people don’t use Python logging because of the setup cost. Hydra.cc make it easy by automatically creating and saving all of your results in the folder ‘outputs’ based on the day" }, { "code": null, "e": 4779, "s": 4620, "text": "Each day folder is organized based on hours and minutes. You will see all the logs associated with your runs as well as the config files you use for that run!" }, { "code": null, "e": 4959, "s": 4779, "text": "If you happened to change your config file and don’t remember how the config file you used to produce a certain output looks like, you can look at the folder that day to find out!" }, { "code": null, "e": 4989, "s": 4959, "text": "Find more about logging here." 
}, { "code": null, "e": 5225, "s": 4989, "text": "Since you are in the ‘output’s directory when running the function that is wrapped around by the hydra decorator, make sure to use utils.to_absolute_path('path/to/file') if you want to get access to other files in the parent directory." }, { "code": null, "e": 5459, "s": 5225, "text": "Current working directory : /Users/khuyentran/dev/hydra/outputs/2019-10-23/10-53-03Original working directory : /Users/khuyentran/dev/hydrato_absolute_path('foo') : /Users/khuyentran/dev/hydra/footo_absolute_path('/foo') : /foo" }, { "code": null, "e": 5964, "s": 5459, "text": "Congratulations! You have learned about why the configuration file is important and how to seamlessly configure your data science projects. I find it much more organized when I have all information that is related to the data in a separate file. I also find it easier to experiment with different parameters when all I need to do is to call python file.py variable=new_value. I hope you also gain the same benefits by incorporating both the configuration files and Hydra.cc in your data science practice." }, { "code": null, "e": 6028, "s": 5964, "text": "Here is the example project that uses hydra.cc and config file." }, { "code": null, "e": 6188, "s": 6028, "text": "I like to write about basic data science concepts and play with different algorithms and data science tools. You could connect with me on LinkedIn and Twitter." } ]
Adversarial Machine Learning Mitigation: Adversarial Learning | by Ferhat Ozgur Catak | Towards Data Science
There are several attacks against deep learning models in the literature, including the fast gradient sign method (FGSM), basic iterative method (BIM) and momentum iterative method (MIM) attacks. These attacks are the purest form of the gradient-based evasion technique used by attackers to evade the classification model.

If you find these results useful, please cite this paper:

@PROCEEDINGS{catak-adv-ml-2020,
  title     = {Deep Neural Network based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks},
  booktitle = {Proc.\ 3rd International Conference on Intelligent Technologies and Applications (INTAP 2020)},
  volume    = 5805,
  series    = {LNCS},
  author    = {Ferhat Ozgur Catak},
  publisher = {Springer},
  year      = {2020}
}

In this work, I will present a new approach to protect a malicious activity detection model from several adversarial machine learning attacks. We explore the power of adversarial training for building a model that is robust against FGSM attacks. Accordingly, (1) the dataset is enhanced with adversarial examples, and (2) a deep neural network based detection model is trained on the KDDCUP99 dataset to learn the FGSM-based attack patterns. We applied this training scheme to the benchmark cybersecurity dataset.

The term adversarial machine learning describes attacks on machine learning models that try to mislead the models with malicious input instances. The figure shows a typical adversarial machine learning attack. A typical machine learning model basically consists of two stages: training time and decision time. Thus, adversarial machine learning attacks occur either at training time or at decision time. The techniques used by hackers can be divided into two, according to the time of the attack:

Data Poisoning: The attacker changes some labels of training input instances to mislead the output model.

Model Poisoning: The hacker drives the model to produce false labelling using perturbed instances after the model is created.

Our model is able to respond to model attacks by hackers who use adversarial machine learning methods. The figure illustrates the system architecture used to protect the model and to classify correctly.

We import the usual standard libraries, plus the cleverhans library to mount adversarial attacks against the deep learning model.

from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing
import tensorflow as tf
import pandas as pd
import numpy as np
from keras.utils import np_utils
from cleverhans.future.tf2.attacks import fast_gradient_method, \
    basic_iterative_method, momentum_iterative_method

np.random.seed(10)

In this work, we will use the standard KDDCUP’99 intrusion detection dataset to show the results. We need to extract the numerical features from the dataset. I created a new method to load and extract the KDDCUP’99 dataset.
COL_NAME = ['duration', 'protocol_type', 'service', 'flag', 'src_bytes',
            'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot',
            'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell',
            'su_attempted', 'num_root', 'num_file_creations', 'num_shells',
            'num_access_files', 'num_outbound_cmds', 'is_host_login',
            'is_guest_login', 'count', 'srv_count', 'serror_rate',
            'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate',
            'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate',
            'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate',
            'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate',
            'dst_host_srv_diff_host_rate', 'dst_host_serror_rate',
            'dst_host_srv_serror_rate', 'dst_host_rerror_rate',
            'dst_host_srv_rerror_rate']

NUMERIC_COLS = ['duration', 'src_bytes', 'dst_bytes', 'wrong_fragment',
                'urgent', 'hot', 'num_failed_logins', 'num_compromised',
                'root_shell', 'su_attempted', 'num_root', 'num_file_creations',
                'num_shells', 'num_access_files', 'num_outbound_cmds',
                'count', 'srv_count', 'serror_rate', 'srv_serror_rate',
                'rerror_rate', 'srv_rerror_rate', 'same_srv_rate',
                'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count',
                'dst_host_srv_count', 'dst_host_same_srv_rate',
                'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate',
                'dst_host_srv_diff_host_rate', 'dst_host_serror_rate',
                'dst_host_srv_serror_rate', 'dst_host_rerror_rate',
                'dst_host_srv_rerror_rate']

def get_ds():
    """
    get_ds: Get the numeric values of the KDDCUP'99 dataset.
    """
    x_kddcup, y_kddcup = fetch_kddcup99(return_X_y=True, shuffle=False)
    df_kddcup = pd.DataFrame(x_kddcup, columns=COL_NAME)
    df_kddcup['label'] = y_kddcup
    df_kddcup.drop_duplicates(keep='first', inplace=True)
    # Strip the b'...' byte-string artefacts and trailing dot from the labels.
    df_kddcup['label'] = df_kddcup['label'].apply(lambda d: \
        str(d).replace('.', '').replace("b'", "").replace("'", ""))
    # Map the individual attack names to the four main attack categories.
    conversion_dict = {'back': 'dos', 'buffer_overflow': 'u2r',
                       'ftp_write': 'r2l', 'guess_passwd': 'r2l',
                       'imap': 'r2l', 'ipsweep': 'probe', 'land': 'dos',
                       'loadmodule': 'u2r', 'multihop': 'r2l',
                       'neptune': 'dos', 'nmap': 'probe', 'perl': 'u2r',
                       'phf': 'r2l', 'pod': 'dos', 'portsweep': 'probe',
                       'rootkit': 'u2r', 'satan': 'probe', 'smurf': 'dos',
                       'spy': 'r2l', 'teardrop': 'dos', 'warezclient': 'r2l',
                       'warezmaster': 'r2l'}
    df_kddcup['label'] = df_kddcup['label'].replace(conversion_dict)
    df_kddcup = df_kddcup.query("label != 'u2r'")
    df_y = pd.DataFrame(df_kddcup.label, columns=["label"], dtype="category")
    df_kddcup.drop(["label"], inplace=True, axis=1)
    x_kddcup = df_kddcup[NUMERIC_COLS].values
    x_kddcup = preprocessing.scale(x_kddcup)
    y_kddcup = df_y.label.cat.codes.to_numpy()
    return x_kddcup, y_kddcup

The TensorFlow-based classification model is then defined as follows:

def create_tf_model(input_size, num_of_class):
    """
    This method creates the tensorflow classification model.
    """
    model_kddcup = tf.keras.Sequential([
        tf.keras.layers.Dense(200, input_dim=input_size, activation=tf.nn.relu),
        tf.keras.layers.Dense(500, activation=tf.nn.relu),
        tf.keras.layers.Dense(200, activation=tf.nn.relu),
        tf.keras.layers.Dense(num_of_class),
        # We separate the activation layer to be able to access
        # the logits of the previous layer later.
        tf.keras.layers.Activation(tf.nn.softmax)
    ])
    model_kddcup.compile(loss='categorical_crossentropy',
                         optimizer='adam',
                         metrics=['accuracy'])
    return model_kddcup

The next step is to create the adversarial machine learning attacks using the CleverHans library. I used the fast gradient sign method (FGSM), basic iterative method (BIM) and momentum iterative method (MIM) attacks for the TensorFlow library, and created a method for each attack.
def gen_tf2_fgsm_attack(org_model, x_test):
    """
    This method creates adversarial examples with FGSM.
    """
    # Note: `model` refers to the global model defined below, as in the original code.
    logits_model = tf.keras.Model(org_model.input, model.layers[-1].output)
    epsilon = 0.1
    adv_fgsm_x = fast_gradient_method(logits_model, x_test, epsilon,
                                      np.inf, targeted=False)
    return adv_fgsm_x

def gen_tf2_bim(org_model, x_test):
    """
    This method creates adversarial examples with BIM.
    """
    logits_model = tf.keras.Model(org_model.input, model.layers[-1].output)
    epsilon = 0.1
    adv_bim_x = basic_iterative_method(logits_model, x_test, epsilon, 0.1,
                                       nb_iter=10, norm=np.inf, targeted=True)
    return adv_bim_x

def gen_tf2_mim(org_model, x_test):
    """
    This method creates adversarial examples with MIM.
    """
    logits_model = tf.keras.Model(org_model.input, model.layers[-1].output)
    epsilon = 0.1
    adv_mim_x = momentum_iterative_method(logits_model, x_test, epsilon, 0.1,
                                          nb_iter=100, norm=np.inf, targeted=True)
    return adv_mim_x

Let’s continue with the training of the attack detection model on the normal (non-manipulated) KDDCUP’99 dataset:

EPOCH = 50
TEST_RATE = 0.2
VALIDATION_RATE = 0.2

X, y = get_ds()
num_class = len(np.unique(y))
attack_functions = [gen_tf2_bim, gen_tf2_fgsm_attack, gen_tf2_mim]

model = create_tf_model(X.shape[1], num_class)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_RATE)
y_train_cat = np_utils.to_categorical(y_train)
y_test_cat = np_utils.to_categorical(y_test)

history = model.fit(X_train, y_train_cat, epochs=EPOCH, batch_size=50000,
                    verbose=0, validation_split=VALIDATION_RATE)
y_pred = model.predict_classes(X_test)
cm_org = confusion_matrix(y_test, y_pred)
print("*"*50)
print("Original confusion matrix")
print(cm_org)

[... pandas SettingWithCopyWarning and repetitive TensorFlow AutoGraph warnings omitted ...]
[... predict_classes deprecation warning omitted: TensorFlow notes that Sequential.predict_classes will be removed after 2021-01-01 and suggests np.argmax(model.predict(x), axis=-1) for multi-class models ...]

**************************************************
Original confusion matrix
[[10873    16     0     0]
 [   16 17528     6    18]
 [    7    20   403    11]
 [    3    23     4   179]]

The original model’s confusion matrix is shown above. According to the confusion matrix, the model’s classification performance is quite good. Let’s continue with the attacks, the attacked models’ confusion matrices and the adversarially trained models’ confusion matrices:

for attack_function in attack_functions:
    print("*"*20)
    print("Attack function is ", attack_function)
    model = create_tf_model(X.shape[1], num_class)
    history = model.fit(X_train, y_train_cat, epochs=EPOCH,
                        batch_size=50000, verbose=0,
                        validation_split=VALIDATION_RATE)
    X_adv_list = []
    y_adv_list = []
    # Attack the freshly trained model and measure the damage.
    adv_x = attack_function(model, X_test)
    y_pred = model.predict_classes(adv_x)
    cm_adv = confusion_matrix(y_test, y_pred)
    print("*"*20)
    print("Attacked confusion matrix")
    print(cm_adv)

    print("Adversarial training")
    # define the checkpoint
    adv_x = attack_function(model, X_train)
    adv_x_test = attack_function(model, X_test)
    # Adversarial training: augment the training set with its adversarial copies.
    concat_adv_x = np.concatenate([X_train, adv_x])
    concat_y_train = np.concatenate([y_train_cat, y_train_cat])
    history = model.fit(concat_adv_x, concat_y_train, epochs=EPOCH,
                        batch_size=50000, verbose=0,
                        validation_data=(adv_x_test, y_test_cat))
    y_pred = model.predict_classes(adv_x_test)
    cm_adv = confusion_matrix(y_test, y_pred)
    print("*"*20)
    print("Attacked confusion matrix - adv training")
    print(cm_adv)

********************
Attack function is  <function gen_tf2_bim at 0x00000252FCF84A68>
[... repetitive TensorFlow AutoGraph warnings omitted ...]
********************
Attacked confusion matrix
[[10874    15     0     0]
 [   14 17532     7    15]
 [    6    19   404    12]
 [    3    23     5   178]]
Adversarial training
********************
Attacked confusion matrix - adv training
[[10877    12     0     0]
 [   12 17535     6    15]
 [    1    13   425     2]
 [    0    22     3   184]]
********************
Attack function is  <function gen_tf2_fgsm_attack at 0x00000252FCF84B88>
[Repeated AutoGraph warnings omitted.]
********************
Attacked confusion matrix
[[10702   180     6     1]
 [   79 17353    31   105]
 [    9    47   376     9]
 [    3    88     8   110]]
Adversarial training
********************
Attacked confusion matrix - adv training
[[10877    11     0     1]
 [    9 17543     4    12]
 [    1    15   422     3]
 [    2    25     2   180]]
********************
Attack function is  <function gen_tf2_mim at 0x00000252FCF84D38>
[Repeated AutoGraph warnings and a tf.function retracing warning omitted.]
********************
Attacked confusion matrix
[[10874    15     0     0]
 [   16 17530     5    17]
 [    6    24   400    11]
 [    3    23     5   178]]
Adversarial training
********************
Attacked confusion matrix - adv training
[[10878    11     0     0]
 [   12 17537     4    15]
 [    1    16   420     4]
 [    0    21     2   186]]

[Figures omitted: for each attack (the first group, presumably BIM, then FGSM and MIM), two heatmaps were captioned "Attacked model's confusion matrix" and "Adversarial trained model's confusion matrix".]
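To make the before/after comparison concrete, here is a minimal sketch (the helper function name is mine; the FGSM matrices are copied from the output above) that reduces a confusion matrix to overall accuracy:

import numpy as np

def accuracy_from_cm(cm):
    # correct predictions sit on the diagonal
    return np.trace(cm) / cm.sum()

# FGSM numbers copied from the printed matrices above
cm_fgsm_attacked = np.array([[10702,   180,    6,    1],
                             [   79, 17353,   31,  105],
                             [    9,    47,  376,    9],
                             [    3,    88,    8,  110]])
cm_fgsm_adv_trained = np.array([[10877,    11,    0,    1],
                                [    9, 17543,    4,   12],
                                [    1,    15,  422,    3],
                                [    2,    25,    2,  180]])

print(f"FGSM, no defence:           {accuracy_from_cm(cm_fgsm_attacked):.4f}")    # ~0.9806
print(f"FGSM, adversarial training: {accuracy_from_cm(cm_fgsm_adv_trained):.4f}") # ~0.9971

Under FGSM, the undefended model drops to roughly 98.1% accuracy, while the adversarially trained one recovers to about 99.7%; the BIM and MIM matrices barely move from the clean baseline in this run.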
[ { "code": null, "e": 498, "s": 172, "text": "There are several attacks against deep learning models in the literature, including fast-gradient sign method (FGSM), basic iterative method (BIM) or momentum iterative method (MIM) attacks. These attacks are the purest form of the gradient-based evading technique that is used by attackers to evade the classification model." }, { "code": null, "e": 556, "s": 498, "text": "If you find those results useful please cite this paper :" }, { "code": null, "e": 938, "s": 556, "text": "@PROCEEDINGS{catak-adv-ml-2020, title = {Deep Neural Network based Malicious Network Activity Detection Under Adversarial Machine Learning Attacks}, booktitle = {Proc.\\ 3rd International Conference on Intelligent Technologies and Applications (INTAP 2020)}, volume = 5805, series = {LNCS},author = {Ferhat Ozgur Catak}, publisher = {Springer}, year = {2020} }" }, { "code": null, "e": 1453, "s": 938, "text": "In this work, I will present a new approach to protect a malicious activity detection model from the several adversarial machine learning attacks. Hence, we explore the power of applying adversarial training to build a robust model against FGSM attacks. Accordingly, (1) dataset enhanced with the adversarial examples; (2) deep neural network-based detection model is trained using the KDDCUP99 dataset to learn the FGSM based attack patterns. We applied this training model to the benchmark cybersecurity dataset." }, { "code": null, "e": 1678, "s": 1453, "text": "The adversarial machine learning has been used to describe the attacks to machine learning models, which tries to mislead models by malicious input instances. The figure shows the typical adversarial machine learning attack." }, { "code": null, "e": 2001, "s": 1678, "text": "A typical machine learning model basically consists of two stages as training time and decision time. Thus, the adversarial machine learning attacks occur in either training time or decision time. The techniques used by hackers for adversarial machine learning can be divided into two, according to the time of the attack:" }, { "code": null, "e": 2107, "s": 2001, "text": "Data Poisoning: The attacker changes some labels of training input instances to mislead the output model." }, { "code": null, "e": 2235, "s": 2107, "text": "Model Poisoning: The hacker drives model to produce false labelling using some perturbated instance after the model is created." }, { "code": null, "e": 2446, "s": 2235, "text": "Our model is able to respond to the model attacks by hackers who use the adversarial machine learning methods. The figure illustrates the system architecture used to protect the model and to classify correctly." }, { "code": null, "e": 2571, "s": 2446, "text": "We import the usual standard libraries plus one cleverhans library to make an adversarial attack to the deep learning model." }, { "code": null, "e": 2972, "s": 2571, "text": "from sklearn.datasets import fetch_kddcup99from sklearn.model_selection import train_test_splitfrom sklearn.metrics import confusion_matrixfrom sklearn import preprocessingimport tensorflow as tfimport pandas as pdimport numpy as npfrom keras.utils import np_utilsfrom cleverhans.future.tf2.attacks import fast_gradient_method, \\ basic_iterative_method, momentum_iterative_methodnp.random.seed(10)" }, { "code": null, "e": 3192, "s": 2972, "text": "In this work, we will use standard KDDCUP’99 intrusion detection dataset to show the results. We need to extract the numerical features from the dataset. 
I created a new method to load and extract the KDDCUP’99 dataset." }, { "code": null, "e": 6293, "s": 3192, "text": "COL_NAME = ['duration', 'protocol_type', 'service', 'flag', 'src_bytes', 'dst_bytes', 'land', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'logged_in', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'is_host_login', 'is_guest_login', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate']NUMERIC_COLS = ['duration', 'src_bytes', 'dst_bytes', 'wrong_fragment', 'urgent', 'hot', 'num_failed_logins', 'num_compromised', 'root_shell', 'su_attempted', 'num_root', 'num_file_creations', 'num_shells', 'num_access_files', 'num_outbound_cmds', 'count', 'srv_count', 'serror_rate', 'srv_serror_rate', 'rerror_rate', 'srv_rerror_rate', 'same_srv_rate', 'diff_srv_rate', 'srv_diff_host_rate', 'dst_host_count', 'dst_host_srv_count', 'dst_host_same_srv_rate', 'dst_host_diff_srv_rate', 'dst_host_same_src_port_rate', 'dst_host_srv_diff_host_rate', 'dst_host_serror_rate', 'dst_host_srv_serror_rate', 'dst_host_rerror_rate', 'dst_host_srv_rerror_rate']def get_ds(): \"\"\" get_ds: Get the numeric values of the KDDCUP'99 dataset. \"\"\" x_kddcup, y_kddcup = fetch_kddcup99(return_X_y=True, shuffle=False) df_kddcup = pd.DataFrame(x_kddcup, columns=COL_NAME) df_kddcup['label'] = y_kddcup df_kddcup.drop_duplicates(keep='first', inplace=True) df_kddcup['label'] = df_kddcup['label'].apply(lambda d: \\ str(d).replace('.', '').replace(\"b'\", \"\").\\ replace(\"'\", \"\")) conversion_dict = {'back':'dos', 'buffer_overflow':'u2r', 'ftp_write':'r2l', 'guess_passwd':'r2l', 'imap':'r2l', 'ipsweep':'probe', 'land':'dos', 'loadmodule':'u2r', 'multihop':'r2l', 'neptune':'dos', 'nmap':'probe', 'perl':'u2r', 'phf':'r2l', 'pod':'dos', 'portsweep':'probe', 'rootkit':'u2r', 'satan':'probe', 'smurf':'dos', 'spy':'r2l', 'teardrop':'dos', 'warezclient':'r2l', 'warezmaster':'r2l'} df_kddcup['label'] = df_kddcup['label'].replace(conversion_dict) df_kddcup = df_kddcup.query(\"label != 'u2r'\") df_y = pd.DataFrame(df_kddcup.label, columns=[\"label\"], dtype=\"category\") df_kddcup.drop([\"label\"], inplace=True, axis=1) x_kddcup = df_kddcup[NUMERIC_COLS].values x_kddcup = preprocessing.scale(x_kddcup) y_kddcup = df_y.label.cat.codes.to_numpy() return x_kddcup, y_kddcup" }, { "code": null, "e": 6379, "s": 6293, "text": "The tensorflow based classification model is then given for example as exercise here:" }, { "code": null, "e": 7111, "s": 6379, "text": "def create_tf_model(input_size, num_of_class): \"\"\" This method creates the tensorflow classification model \"\"\" model_kddcup = tf.keras.Sequential([ tf.keras.layers.Dense(200, input_dim=input_size, activation=tf.nn.relu), tf.keras.layers.Dense(500, activation=tf.nn.relu), tf.keras.layers.Dense(200, activation=tf.nn.relu), tf.keras.layers.Dense(num_of_class), # We seperate the activation layer to be able to access # the logits of the previous layer later tf.keras.layers.Activation(tf.nn.softmax) ]) model_kddcup.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) return model_kddcup" }, { "code": null, "e": 
7379, "s": 7111, "text": "The next step is to create adversarial machine learning attacks using CleverHans library. I used fast-gradient sign method (FGSM), basic iterative method (BIM) or momentum iterative method (MIM) attacks for the Tensorflow library. I created 3 methods for each attack." }, { "code": null, "e": 8960, "s": 7379, "text": "def gen_tf2_fgsm_attack(org_model, x_test): \"\"\" This method creates adversarial examples with fgsm \"\"\" logits_model = tf.keras.Model(org_model.input, model.layers[-1].output) epsilon = 0.1 adv_fgsm_x = fast_gradient_method(logits_model, x_test, epsilon, np.inf, targeted=False) return adv_fgsm_xdef gen_tf2_bim(org_model, x_test): \"\"\" This method creates adversarial examples with bim \"\"\" logits_model = tf.keras.Model(org_model.input, model.layers[-1].output) epsilon = 0.1 adv_bim_x = basic_iterative_method(logits_model, x_test, epsilon, 0.1, nb_iter=10, norm=np.inf, targeted=True) return adv_bim_xdef gen_tf2_mim(org_model, x_test): \"\"\" This method creates adversarial examples with mim \"\"\" logits_model = tf.keras.Model(org_model.input, model.layers[-1].output) epsilon = 0.1 adv_mim_x = momentum_iterative_method(logits_model, x_test, epsilon, 0.1, nb_iter=100, norm=np.inf, targeted=True) return adv_mim_x" }, { "code": null, "e": 9075, "s": 8960, "text": "Let’s continue with the training of the attack detection model with the normal (non-manipulated) KDDCUP’99 dataset" }, { "code": null, "e": 13580, "s": 9075, "text": "EPOCH = 50TEST_RATE = 0.2VALIDATION_RATE = 0.2X, y = get_ds()num_class = len(np.unique(y))attack_functions = [gen_tf2_bim, gen_tf2_fgsm_attack, gen_tf2_mim]model = create_tf_model(X.shape[1], num_class)X_train, X_test, y_train, y_test = train_test_split(X, y, \\ test_size=TEST_RATE)y_train_cat = np_utils.to_categorical(y_train)y_test_cat = np_utils.to_categorical(y_test)history = model.fit(X_train, y_train_cat, epochs=EPOCH, batch_size=50000, verbose=0, validation_split=VALIDATION_RATE)y_pred = model.predict_classes(X_test)cm_org = confusion_matrix(y_test, y_pred)print(\"*\"*50)print(\"Original confusion matrix\")print(cm_org)C:\\Users\\ferhatoc\\AppData\\Roaming\\Python\\Python37\\site-packages\\pandas\\core\\frame.py:3997: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrameSee the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy errors=errors,WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x0000025288139948> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x0000025288139948> and will run it as-is.Please report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x0000025287FF4318> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x0000025287FF4318> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:From <ipython-input-7-6c756bab0648>:24: Sequential.predict_classes (from tensorflow.python.keras.engine.sequential) is deprecated and will be removed after 2021-01-01.Instructions for updating:Please use instead:* `np.argmax(model.predict(x), axis=-1)`, if your model does multi-class classification (e.g. if it uses a `softmax` last-layer activation).* `(model.predict(x) > 0.5).astype(\"int32\")`, if your model does binary classification (e.g. if it uses a `sigmoid` last-layer activation).WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025283D0E798> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025283D0E798> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert**************************************************Original confusion matrix[[10873 16 0 0] [ 16 17528 6 18] [ 7 20 403 11] [ 3 23 4 179]]" }, { "code": null, "e": 13719, "s": 13580, "text": "The original model’s confusion matrix is shown here. According to the confusion matrix, the model’s classification performance quite good." 
}, { "code": null, "e": 13839, "s": 13719, "text": "Let’s continue with the attacks, attacked model’s confusion matrices and adversarial trained model’s confusion matrices" }, { "code": null, "e": 25123, "s": 13839, "text": "for attack_function in attack_functions: print(\"*\"*20) print(\"Attack function is \", attack_function) model = create_tf_model(X.shape[1], num_class) history = model.fit(X_train, y_train_cat, epochs=EPOCH, batch_size=50000, verbose=0, validation_split=VALIDATION_RATE) X_adv_list = [] y_adv_list = [] adv_x = attack_function(model, X_test) y_pred = model.predict_classes(adv_x) cm_adv = confusion_matrix(y_test, y_pred) print(\"*\"*20) print(\"Attacked confusion matrix\") print(cm_adv) print(\"Adversarial training\") # define the checkpoint adv_x = attack_function(model, X_train) adv_x_test = attack_function(model, X_test) concat_adv_x = np.concatenate([X_train, adv_x]) concat_y_train = np.concatenate([y_train_cat, y_train_cat]) history = model.fit(concat_adv_x, concat_y_train, epochs=EPOCH, batch_size=50000, verbose=0, validation_data=(adv_x_test, y_test_cat)) y_pred = model.predict_classes(adv_x_test) cm_adv = confusion_matrix(y_test, y_pred) print(\"*\"*20) print(\"Attacked confusion matrix - adv training\") print(cm_adv)********************Attack function is <function gen_tf2_bim at 0x00000252FCF84A68>WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x00000252FD05FDC8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x00000252FD05FDC8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x00000252877B80D8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x00000252877B80D8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025281E8B168> and will run it as-is.Please report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025281E8B168> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert********************Attacked confusion matrix[[10874 15 0 0] [ 14 17532 7 15] [ 6 19 404 12] [ 3 23 5 178]]Adversarial training********************Attacked confusion matrix - adv training[[10877 12 0 0] [ 12 17535 6 15] [ 1 13 425 2] [ 0 22 3 184]]********************Attack function is <function gen_tf2_fgsm_attack at 0x00000252FCF84B88>WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x0000025281E8B438> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x0000025281E8B438> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x0000025288BB38B8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x0000025288BB38B8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025287EF2558> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025287EF2558> and will run it as-is.Please report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert********************Attacked confusion matrix[[10702 180 6 1] [ 79 17353 31 105] [ 9 47 376 9] [ 3 88 8 110]]Adversarial training********************Attacked confusion matrix - adv training[[10877 11 0 1] [ 9 17543 4 12] [ 1 15 422 3] [ 2 25 2 180]]********************Attack function is <function gen_tf2_mim at 0x00000252FCF84D38>WARNING:tensorflow:AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x00000252875459D8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_train_function.<locals>.train_function at 0x00000252875459D8> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x00000252F9990048> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_test_function.<locals>.test_function at 0x00000252F9990048> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING:tensorflow:5 out of the last 14 calls to <function compute_gradient at 0x00000252FCF6A318> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings is likely due to passing python objects instead of tensors. Also, tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. Please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.WARNING:tensorflow:AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025280355A68> and will run it as-is.Please report this to the TensorFlow team. 
When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convertWARNING: AutoGraph could not transform <function Model.make_predict_function.<locals>.predict_function at 0x0000025280355A68> and will run it as-is.Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.Cause: Bad argument number for Name: 4, expecting 3To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert********************Attacked confusion matrix[[10874 15 0 0] [ 16 17530 5 17] [ 6 24 400 11] [ 3 23 5 178]]Adversarial training********************Attacked confusion matrix - adv training[[10878 11 0 0] [ 12 17537 4 15] [ 1 16 420 4] [ 0 21 2 186]]" }, { "code": null, "e": 25157, "s": 25123, "text": "Attacked model’s confusion matrix" }, { "code": null, "e": 25202, "s": 25157, "text": "Adversarial trained model’s confusion matrix" }, { "code": null, "e": 25235, "s": 25202, "text": "Fast Gradient Sign Method (FGSM)" }, { "code": null, "e": 25269, "s": 25235, "text": "Attacked model’s confusion matrix" }, { "code": null, "e": 25314, "s": 25269, "text": "Adversarial trained model’s confusion matrix" }, { "code": null, "e": 25346, "s": 25314, "text": "Momentum Iterative Method (MIM)" }, { "code": null, "e": 25380, "s": 25346, "text": "Attacked model’s confusion matrix" } ]
Data Structure & Algorithms - Tree Traversal
Traversal is the process of visiting all the nodes of a tree, and it may print their values too. Because all nodes are connected via edges (links), we always start from the root (head) node. That is, we cannot randomly access a node in a tree. There are three ways which we use to traverse a tree −

In-order Traversal
Pre-order Traversal
Post-order Traversal

Generally, we traverse a tree to search or locate a given item or key in the tree, or to print all the values it contains.

In-order Traversal

In this traversal method, the left subtree is visited first, then the root, and later the right subtree. We should always remember that every node may represent a subtree itself.

If a binary tree is traversed in-order, the output will produce sorted key values in ascending order.

We start from A, and following in-order traversal, we move to its left subtree B. B is also traversed in-order. The process goes on until all the nodes are visited. The output of in-order traversal of this tree will be −

D → B → E → A → F → C → G

Until all nodes are traversed −
Step 1 − Recursively traverse left subtree.
Step 2 − Visit root node.
Step 3 − Recursively traverse right subtree.

Pre-order Traversal

In this traversal method, the root node is visited first, then the left subtree and finally the right subtree.

We start from A, and following pre-order traversal, we first visit A itself and then move to its left subtree B. B is also traversed pre-order. The process goes on until all the nodes are visited. The output of pre-order traversal of this tree will be −

A → B → D → E → C → F → G

Until all nodes are traversed −
Step 1 − Visit root node.
Step 2 − Recursively traverse left subtree.
Step 3 − Recursively traverse right subtree.

Post-order Traversal

In this traversal method, the root node is visited last, hence the name. First we traverse the left subtree, then the right subtree and finally the root node.

We start from A, and following post-order traversal, we first visit the left subtree B. B is also traversed post-order. The process goes on until all the nodes are visited. The output of post-order traversal of this tree will be −

D → E → B → F → G → C → A

Until all nodes are traversed −
Step 1 − Recursively traverse left subtree.
Step 2 − Recursively traverse right subtree.
Step 3 − Visit root node.

To check the C implementation of tree traversing, please click here.
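For a quick reference alongside the C version, here is a minimal sketch of all three traversals in Python (the Node class and the tree built from A to G are assumptions for illustration, matching the example tree used above):

class Node:
    """A binary tree node."""
    def __init__(self, data, left=None, right=None):
        self.data = data
        self.left = left
        self.right = right

def inorder(node):
    # left subtree -> root -> right subtree
    if node is None:
        return []
    return inorder(node.left) + [node.data] + inorder(node.right)

def preorder(node):
    # root -> left subtree -> right subtree
    if node is None:
        return []
    return [node.data] + preorder(node.left) + preorder(node.right)

def postorder(node):
    # left subtree -> right subtree -> root
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.data]

# The tree from the examples above: A is the root, B and C its children,
# D and E under B, F and G under C.
root = Node("A",
            Node("B", Node("D"), Node("E")),
            Node("C", Node("F"), Node("G")))

print(" → ".join(inorder(root)))    # D → B → E → A → F → C → G
print(" → ".join(preorder(root)))   # A → B → D → E → C → F → G
print(" → ".join(postorder(root)))  # D → E → B → F → G → C → A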
[ { "code": null, "e": 2870, "s": 2580, "text": "Traversal is a process to visit all the nodes of a tree and may print their values too. Because, all nodes are connected via edges (links) we always start from the root (head) node. That is, we cannot randomly access a node in a tree. There are three ways which we use to traverse a tree −" }, { "code": null, "e": 2889, "s": 2870, "text": "In-order Traversal" }, { "code": null, "e": 2909, "s": 2889, "text": "Pre-order Traversal" }, { "code": null, "e": 2930, "s": 2909, "text": "Post-order Traversal" }, { "code": null, "e": 3052, "s": 2930, "text": "Generally, we traverse a tree to search or locate a given item or key in the tree or to print all the values it contains." }, { "code": null, "e": 3231, "s": 3052, "text": "In this traversal method, the left subtree is visited first, then the root and later the right sub-tree. We should always remember that every node may represent a subtree itself." }, { "code": null, "e": 3336, "s": 3231, "text": "If a binary tree is traversed in-order, the output will produce sorted key values in an ascending order." }, { "code": null, "e": 3556, "s": 3336, "text": "We start from A, and following in-order traversal, we move to its left subtree B. B is also traversed in-order. The process goes on until all the nodes are visited. The output of inorder traversal of this tree will be −" }, { "code": null, "e": 3582, "s": 3556, "text": "D → B → E → A → F → C → G" }, { "code": null, "e": 3730, "s": 3582, "text": "Until all nodes are traversed −\nStep 1 − Recursively traverse left subtree.\nStep 2 − Visit root node.\nStep 3 − Recursively traverse right subtree.\n" }, { "code": null, "e": 3841, "s": 3730, "text": "In this traversal method, the root node is visited first, then the left subtree and finally the right subtree." }, { "code": null, "e": 4095, "s": 3841, "text": "We start from A, and following pre-order traversal, we first visit A itself and then move to its left subtree B. B is also traversed pre-order. The process goes on until all the nodes are visited. The output of pre-order traversal of this tree will be −" }, { "code": null, "e": 4121, "s": 4095, "text": "A → B → D → E → C → F → G" }, { "code": null, "e": 4269, "s": 4121, "text": "Until all nodes are traversed −\nStep 1 − Visit root node.\nStep 2 − Recursively traverse left subtree.\nStep 3 − Recursively traverse right subtree.\n" }, { "code": null, "e": 4428, "s": 4269, "text": "In this traversal method, the root node is visited last, hence the name. First we traverse the left subtree, then the right subtree and finally the root node." }, { "code": null, "e": 4659, "s": 4428, "text": "We start from A, and following Post-order traversal, we first visit the left subtree B. B is also traversed post-order. The process goes on until all the nodes are visited. The output of post-order traversal of this tree will be −" }, { "code": null, "e": 4685, "s": 4659, "text": "D → E → B → F → G → C → A" }, { "code": null, "e": 4833, "s": 4685, "text": "Until all nodes are traversed −\nStep 1 − Recursively traverse left subtree.\nStep 2 − Recursively traverse right subtree.\nStep 3 − Visit root node.\n" }, { "code": null, "e": 4902, "s": 4833, "text": "To check the C implementation of tree traversing, please click here." 
}, { "code": null, "e": 4937, "s": 4902, "text": "\n 42 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4949, "s": 4937, "text": " Ravi Kiran" }, { "code": null, "e": 4984, "s": 4949, "text": "\n 141 Lectures \n 13 hours \n" }, { "code": null, "e": 5003, "s": 4984, "text": " Arnab Chakraborty" }, { "code": null, "e": 5038, "s": 5003, "text": "\n 26 Lectures \n 8.5 hours \n" }, { "code": null, "e": 5053, "s": 5038, "text": " Parth Panjabi" }, { "code": null, "e": 5086, "s": 5053, "text": "\n 65 Lectures \n 6 hours \n" }, { "code": null, "e": 5105, "s": 5086, "text": " Arnab Chakraborty" }, { "code": null, "e": 5139, "s": 5105, "text": "\n 75 Lectures \n 13 hours \n" }, { "code": null, "e": 5167, "s": 5139, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 5203, "s": 5167, "text": "\n 64 Lectures \n 10.5 hours \n" }, { "code": null, "e": 5231, "s": 5203, "text": " Eduonix Learning Solutions" }, { "code": null, "e": 5238, "s": 5231, "text": " Print" }, { "code": null, "e": 5249, "s": 5238, "text": " Add Notes" } ]
Deep Dive into Querying Elasticsearch. Filter vs Query. Full-text search | by Artem | Towards Data Science
If I had to describe Elasticsearch in one phrase I would say something like:

When search meets analytics at scale (in near real time)

Elasticsearch is in the top 10 most popular open-source technologies at the moment. Fair enough: it unites many crucial features that are not unique by themselves, but combined they can make the best search engine/analytics platform.

More precisely, Elasticsearch has become so popular due to a combination of the following features:

Search with relevance scoring
Full-text search
Analytics (aggregations)
Schemaless (no limitations on data schema), NoSQL, document-oriented
Rich choice of data types
Horizontally scalable
Fault-tolerant

Working with Elasticsearch for my side project, I quickly realized that the official documentation looks more like a "squeeze" of what should be called documentation. I had to google around and dig through Stack Overflow a lot, so I decided to compile all the information in this post.

In this article, I will write mostly about querying/searching an Elasticsearch cluster. There are many different ways to accomplish more or less the same result; therefore, I will try to explain the pros and cons of each method.

More importantly, I will introduce you to two important concepts — query and filter contexts — which are not well explained in the documentation. I will give you a set of rules for when it is better to use which method.

If there was just one thing I would like you to remember after reading this article, it would be:

Do you really need to score your documents while querying?

There is always a relevance score when we talk about Elasticsearch. The relevance score is a strictly positive float that indicates how well each document satisfies the searching criteria. This score is relative to the highest score assigned; therefore, the higher the score, the better the relevance of a document to the searching criteria.

However, filter and query are two different concepts that you should understand before writing your query.

Generally speaking, filter context is a yes/no option where each document either matches the query or not. A good example is SQL WHERE followed by some conditions. SQL queries always return the rows that strictly match the criteria. There is no way for an SQL query to return an ambiguous result.

Filters are automatically cached and do not contribute to the relevance score.

Elasticsearch query context, on the other hand, shows you how well each document matches your requirements. To do so, the query uses an analyzer to find the best matches.

The rule of thumb would be to use filters for:

yes/no search
search on exact values (numeric, range and keyword)

Use queries for:

ambiguous results (some documents suit more than others)
full-text search

Unless you need a relevance score or full-text search, always try to use filters. Filters are "cheaper".

In addition, Elasticsearch will automatically cache the results of filters.

In parts 1 and 2 I will speak about queries (which can be transformed into filters). Please do not confuse structured vs full-text with query vs filters — those are two different things.

Also called term-level queries, structured queries are a group of querying methods that check whether a document should be selected or not. Therefore, there is no real need for a relevance score in many cases — the document is either going to match or not (especially with numerics).

Term-level queries are still queries, so they will return the score.
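Since the score keeps coming up, it helps to see where it actually lives. Below is an illustrative, heavily truncated response shape (the index name and values are made up); every hit carries a _score, and hits are sorted by it:

{
  "took": 3,
  "timed_out": false,
  "hits": {
    "total": { "value": 2, "relation": "eq" },
    "max_score": 1.3862944,
    "hits": [
      { "_index": "my_index", "_id": "1", "_score": 1.3862944, "_source": { "name": "Odin" } },
      { "_index": "my_index", "_id": "2", "_score": 0.2876821, "_source": { "name": "Wodan" } }
    ]
  }
}

In filter context the _score degenerates to a constant, and Elasticsearch can skip the scoring work entirely.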
Term query

Returns the documents where the value of a field exactly matches the criteria. The term query is somewhat of an alternative to SQL select * from table_name where column_name =...

The term query goes directly to the inverted index, which makes it fast. It is preferred to use term only for keyword fields when working with text data.

GET /_search
{
  "query": {
    "term": {
      "<field_name>": {
        "value": "<your_value>"
      }
    }
  }
}

The term query is run in the query context by default; therefore, it will calculate the score. Even if the score is identical for all documents returned, additional computing power will be involved.

If we want to speed up the term query and get it cached, it should be wrapped in a constant_score filter.

Remember the rule of thumb? Use this method if you do not care about the relevance score.

GET /_search
{
  "query": {
    "constant_score": {
      "filter": {
        "term": { "<field_name>": "<your_value>" }
      }
    }
  }
}

Now, the query is not calculating any relevance score; therefore, it is faster. Moreover, it is automatically cached.

Quick advice: use match instead of term for text fields.

Remember, the term query goes directly to the inverted index. The term query takes the value you provide and searches for it as it is, which is why it suits well for querying keyword fields that are stored without any transformations.

Terms query

As you could have guessed, the terms query allows you to return documents that match at least one exact term. The terms query is somewhat of an alternative to SQL select * from table_name where column_name is in...

It is important to understand that a queried field in Elasticsearch might be a list, for example { "name" : ["Odin", "Woden", "Wodan"] }. If you perform a terms query that contains one of the following names, then this record will be matched — it does not have to match all the values in the field, but only one.

GET /_search
{
  "query": {
    "terms": {
      "name": ["Frigg", "Odin", "Baldr"]
    }
  }
}

The terms set query is the same as the terms query, but this time you can specify how many exact terms have to match in the queried field. You specify how many have to match — one, two, three or all of them. However, this number comes from another numeric field; therefore, each document should contain this number (specific to this particular document).

The range query returns documents in which the queried field's value is within the defined range. It is the equivalent of SQL select * from table_name where column_name is between...

Range query has its own syntax:

gt is greater than
gte is greater than or equal to
lt is less than
lte is less than or equal to

An example where the field's value should be ≥ 4 and ≤ 17:

GET _search
{
  "query": {
    "range": {
      "<field_name>": {
        "gte": 4,
        "lte": 17
      }
    }
  }
}

The range query also works well with dates.

The regexp query returns the documents in which fields match your regular expression. If you have never used regular expressions, I highly advise you to get at least some understanding of what they are and when you can apply them.

Elasticsearch's regexp is Lucene's one. It has standard reserved characters and operators. If you have already worked with Python's re package, it should not be a problem to use it here. The only difference is that Lucene's engine does not support anchor operators such as ^ and $.

You may find the entire list of regexp operators in the official documentation.

In addition to the regexp query, Elasticsearch has wildcard and prefix queries. Logically, those two are just special cases of regexp.
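For completeness, here is a hedged sketch of both queries (the field name and patterns are placeholders; in wildcard patterns, * matches any character sequence and ? matches a single character):

GET /_search
{
  "query": {
    "wildcard": {
      "<field_name>": {
        "value": "od?n*"
      }
    }
  }
}

GET /_search
{
  "query": {
    "prefix": {
      "<field_name>": {
        "value": "od"
      }
    }
  }
}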
Unfortunately, I could not find any information regarding the performance of those 3 queries; therefore, I decided to test it myself to see if there was any significant difference. I could not find any difference in performance while comparing a wildcard search expressed as a regexp query with the wildcard query itself. If you know what the difference is, please tweet me.

Due to the fact that Elasticsearch is schemaless (or has no strict schema limitation), it is a fairly common situation that different documents have different fields. As a result, it is very useful to know whether a document has a certain field or not.

The exists query returns documents that contain an indexed value for a field:

GET /_search
{
  "query": {
    "exists": {
      "field": "<your_field_name>"
    }
  }
}

Full-text queries work well with unstructured text data. Full-text queries take advantage of the analyzer. Therefore, I will briefly outline the Elasticsearch analyzer so that we can better analyze full-text querying.

Every time text type data is inserted into the Elasticsearch index, it is analyzed and then stored in the inverted index. Depending on how you configure the analyzer, your searching capabilities will be affected, because the analyzer is also applied to full-text search.

The analyzer pipeline consists of three stages:

Character filter (0+) → Tokenizer (1) → Token filter (0+)

There is always exactly one tokenizer and zero or more character & token filters.

1) The character filter receives the text data as it is; then it might preprocess the data before it gets tokenized. Character filters are used to:

Replace characters matching a given regular expression
Replace characters matching given strings
Clean HTML text

2) The tokenizer breaks the text data received after the character filter (if any) into tokens. For example, the whitespace tokenizer simply breaks text by whitespace (it is not the standard one). Therefore, Wednesday is called after Woden. will be split into [Wednesday, is, called, after, Woden.]. There are many built-in tokenizers that can be used to create custom analyzers.

The standard tokenizer breaks text by whitespace after removing the punctuation. It is the most neutral option for the vast majority of languages.

In addition to tokenization, the tokenizer does the following:

keeps track of the token order, notes the start and end of each word
defines the type of token

3) The token filter applies some transformation to the tokens. There are many different token filters that you might choose to add to your analyzer. Some of the most popular are:

lowercase
stemmer (exists for many languages!)
remove duplicates
transformation to the ASCII equivalent
workaround with patterns
limit on token count
stop list of tokens (removes tokens that appear in the stop list)

Now that we know what the analyzer consists of, we might think about how we are going to work with our data. Then, we might compose an analyzer that fits our case the most by choosing the proper components. The analyzer can be specified on a per-field basis.

Enough theory, let's see how the default analyzer works. The standard analyzer is the default one. It has 0 character filters, the standard tokenizer, and the lowercase and stop token filters. You can compose your custom analyzer as you wish, but there are also a few built-in analyzers.

Some of the most efficient out-of-the-box analyzers are the language analyzers, which take the specifics of each language into account to make more advanced transformations. Therefore, if you know the language of your data in advance, I would recommend switching from the standard analyzer to the corresponding language analyzer.
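As a concrete illustration of the pipeline above, here is a hedged sketch of a custom analyzer definition (the index, analyzer, and field names are placeholders I chose; html_strip, standard, lowercase, and asciifolding are built-in components):

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_custom_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip"],
          "tokenizer": "standard",
          "filter": ["lowercase", "asciifolding"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "my_text_field": {
        "type": "text",
        "analyzer": "my_custom_analyzer"
      }
    }
  }
}

You can check what the analyzer produces without indexing anything:

POST /my_index/_analyze
{
  "analyzer": "my_custom_analyzer",
  "text": "Wednesday is called after <b>Wóden</b>."
}

Here html_strip drops the <b> tags, the standard tokenizer splits the text and strips the punctuation, and lowercase plus asciifolding yield [wednesday, is, called, after, woden].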
The full-text query will use the same analyzer that was used while indexing the data. More precisely, the text of your query will go through the same transformations as the text data in the searched field, so that both are at the same level.

The match query is the standard query for querying text fields. We might call the match query an equivalent of the term query, but for text type fields (while term should be used solely for keyword type fields when working with text data).

GET /_search
{
  "query": {
    "match": {
      "<text_field>": {
        "query": "<your_value>"
      }
    }
  }
}

The string that is passed into the query parameter (the required one) is, by default, processed by the same analyzer as the one that has been applied to the searched field. That is unless you specify the analyzer yourself using the analyzer parameter.

When you specify the phrase to be searched for, it is analyzed and the result is always a set of tokens. By default, Elasticsearch uses the OR operator between all of those tokens. That means that at least one should match — more matches will hit a higher score, though. You may switch this to AND via the operator parameter. In this case, all of the tokens have to be found in the document for it to be returned.

If you want something in between OR and AND, you can specify the minimum_should_match parameter, which sets the number of clauses that should match. It can be specified as either a number or a percentage.

The fuzziness parameter (optional) allows you to tolerate typos. Levenshtein distance is used for the calculation.

If you apply the match query to a keyword field, it will perform the same as the term query. More interestingly, if you pass the exact value of a token stored in the inverted index to the term query, it will return exactly the same result as the match query, but faster, as it goes straight to the inverted index.

The match_phrase query is the same as match, but the sequence order and proximity are important. The match query is not aware of sequence and proximity; therefore, a phrase match is only possible with a different type of query.

GET /_search
{
  "query": {
    "match_phrase": {
      "<text_field>": {
        "query": "<your_value>",
        "slop": "0"
      }
    }
  }
}

The match_phrase query has the slop parameter (default value 0), which is responsible for skipping terms. Therefore, if you specify a slop equal to 1, then one word out of the phrase might be omitted.

The multi-match query does the same job as match, with the only difference that it is applied to more than one field.

GET /_search
{
  "query": {
    "multi_match": {
      "query": "<your_value>",
      "fields": [
        "<text_field1>",
        "<text_field2>"
      ]
    }
  }
}

Field names can be specified using wildcards
Each field is equally weighted by default
Each field's contribution to the score can be boosted
If no fields are specified in the fields parameter, then all eligible fields will be searched

There are different types of multi_match. I am not going to describe them all in this post, but I will explain the most popular:

best_fields type (the default) prefers results where tokens from the searched value are found in one field over results where the searched tokens are split among different fields.

most_fields is somewhat the opposite of the best_fields type.

phrase type behaves like best_fields but searches for the entire phrase, similar to match_phrase.

I highly recommend going through the official documentation to check how exactly the score is calculated for each of those types.
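To illustrate the per-field boosting mentioned in the list above, here is a hedged sketch (field names are placeholders) using the caret syntax to double the weight of the first field:

GET /_search
{
  "query": {
    "multi_match": {
      "query": "<your_value>",
      "type": "best_fields",
      "fields": ["<text_field1>^2", "<text_field2>"]
    }
  }
}

Matches found in <text_field1> now contribute twice as much to the relevance score as matches found in <text_field2>.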
Compound queries wrap together other queries. Compound queries:

combine the score
change the behavior of the wrapped queries
switch the query context to filter context
any of the above combined

The boolean query combines other queries together. It is the most important compound query. The boolean query allows you to combine searches in the query context with searches in the filter context.

The boolean query has four occurrence types (clauses) that can be combined together:

must or "has to satisfy the clause"
should or "additional points to the relevance score if the clause is satisfied"
filter or "has to satisfy the clause, but the relevance score is not calculated"
must_not or "the inverse of must, does not contribute to the relevance score"

must and should → query context
filter and must_not → filter context

For those who are familiar with SQL, must is the AND operator while should is OR. Therefore, each query inside the must clause has to be satisfied. A combined example is sketched at the end of this section.

The boosting query is similar to the boost parameter available on most queries, but it is not the same. The boosting query returns documents that match the positive clause and reduces the score of the documents that match the negative clause.

As we previously saw in the term query example, the constant_score query converts any query into the filter context with a relevance score equal to the boost parameter (default 1).
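To tie the clauses together, here is a hedged sketch of a boolean query (all field names and values are placeholders) that mixes query context (must, should) with filter context (filter, must_not):

GET /_search
{
  "query": {
    "bool": {
      "must": [
        { "match": { "<text_field>": "<your_value>" } }
      ],
      "should": [
        { "match_phrase": { "<text_field>": "<your_phrase>" } }
      ],
      "filter": [
        { "term": { "<keyword_field>": "<exact_value>" } },
        { "range": { "<numeric_field>": { "gte": 4, "lte": 17 } } }
      ],
      "must_not": [
        { "exists": { "field": "<unwanted_field>" } }
      ]
    }
  }
}

Only the must and should clauses contribute to the _score; the filter and must_not clauses merely narrow the result set and can be cached.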
\circ - Tex Command
\circ - Used to draw the circle (ring) symbol.

Syntax: { \circ }

The \circ command draws the circle symbol, used for function composition and for degrees.

Examples:

(f\circ g)(x) = f(g(x))   renders as   (f∘g)(x) = f(g(x))
45^\circ   renders as   45∘
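To see the command in context, here is a minimal LaTeX document using \circ in both roles (a standard-LaTeX sketch, not specific to any one engine):

\documentclass{article}
\begin{document}

% Function composition: g is applied first
$(f \circ g)(x) = f(g(x))$

% An angle measured in degrees
A right angle measures $90^\circ$; half of that is $45^\circ$.

\end{document}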
MATLAB - continue Statement
The continue statement is used to pass control to the next iteration of a for or while loop.

The continue statement in MATLAB works somewhat like the break statement. Instead of forcing termination, however, continue forces the next iteration of the loop to take place, skipping any code in between.

Create a script file and type the following code −

a = 9;
% while loop execution
while a < 20
   a = a + 1;
   if a == 15
      % skip this iteration
      continue;
   end
   fprintf('value of a: %d\n', a);
end

When you run the file, it displays the following result −

value of a: 10
value of a: 11
value of a: 12
value of a: 13
value of a: 14
value of a: 16
value of a: 17
value of a: 18
value of a: 19
value of a: 20
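continue behaves the same way inside for loops; the following complementary sketch (not part of the original example) skips the even values of the loop counter:

% for loop: print only the odd values of k
for k = 1:10
   if mod(k, 2) == 0
      % even value: skip to the next iteration
      continue;
   end
   fprintf('odd value of k: %d\n', k);
end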
How to Create and Assign Lists in Python?
We can create and work with lists using operations such as insert, append, length (len), index, remove, and extend.

Lists are mutable; a list is an ordered collection of values enclosed in square brackets, i.e. [ ]. Working with lists in Python is easy.

# avoid naming a variable "list": that shadows the built-in list type
my_list = ["Tutorials", "Point", "Pvt", "Ltd"]
my_list

['Tutorials', 'Point', 'Pvt', 'Ltd']

Assign values in lists: to assign values in lists, use square brackets.

list1 = ['physics', 'chemistry', 1997, 2000]
list2 = [1, 2, 3, 4, 5, 6, 7]
print("list1[0]: ", list1[0])
print("list2[1:5]: ", list2[1:5])

When the above code is executed, the following is the output:

list1[0]:  physics
list2[1:5]:  [2, 3, 4, 5]

append() and extend() in Python: append() adds its argument as a single element at the end of a list, so the list length increases by one.

colors = ['Red', 'Green', 'Yellow', 'Purple', 'White', 'Black']
colors.append('Orange')
print(colors)

['Red', 'Green', 'Yellow', 'Purple', 'White', 'Black', 'Orange']

extend() iterates over its argument, adding each element to the list, so the list length increases by the number of elements in the iterable argument. Note that passing a string extends the list with its individual characters:

vehicles = ['Car', 'Bike', 'Scooty', 'Bus', 'Metro']
vehicles.extend('Car')
print(vehicles)

When the above code is executed, the following is the output:

['Car', 'Bike', 'Scooty', 'Bus', 'Metro', 'C', 'a', 'r']

Index: the index method searches for a given value and returns the position of its first occurrence.

vehicles = ['Car', 'Bike', 'Scooty', 'Bus', 'Metro']
vehicles.index('Scooty')

2
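The insert, remove, and length operations mentioned above work the same way; a short sketch (the sample values here are made up):

fruits = ['apple', 'banana', 'cherry']

fruits.insert(1, 'mango')    # insert 'mango' at index 1
print(fruits)                # ['apple', 'mango', 'banana', 'cherry']

fruits.remove('banana')      # remove the first occurrence of 'banana'
print(fruits)                # ['apple', 'mango', 'cherry']

print(len(fruits))           # 3, the number of elements in the list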
How to add styles to autocomplete in ReactJS?
02 Jul, 2021

The Autocomplete component is used for auto-completing a text value with an option value. It allows the user to type and then select an item from a list of suggestions, which improves the user experience by offering suggestions as the user types. In this article, you will see how to style an Autocomplete component in ReactJS.

Creating React Application And Installing Module:

Step 1: Create a React application using the following command:

npx create-react-app foldername

Step 2: After creating your project folder, i.e. foldername, move to it using the following command:

cd foldername

Step 3: After creating the ReactJS application, install the required module using the following command:

npm install --save react-autocomplete

Project Structure: It will look like the following.

Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code. The following code has the Autocomplete component with styling added.

App.js

import React, { useState } from 'react'
import Autocomplete from 'react-autocomplete'

function App() {

  // Defining a state named value, which we can update
  // by calling setValue; value stores the typed value
  // or the suggestion selected by the user
  const [value, setValue] = useState('');

  return (
    <div style={{
      display: 'flex',
      flexDirection: 'column',
      alignItems: 'center'
    }}>
      <div>
        {/* Inline css */}
        <h4 style={{
          padding: '15px',
          border: '13px solid #b4f0b4',
          color: 'rgb(11, 167, 11)'
        }}>
          Geeks for Geeks : React Autocomplete Component Styling
        </h4>
      </div>
      <div>
        <Autocomplete
          // items is the list of suggestions
          // displayed while the user types
          items={[
            { label: 'C++' },
            { label: 'C' },
            { label: 'Python' },
            { label: 'JavaScript' },
            { label: 'Julia' },
            { label: 'Java' },
            { label: 'Objective C' },
            { label: 'C#' },
            { label: 'Dart' },
            { label: 'Perl' }
          ]}

          // Make the suggestion matching independent
          // of upper or lower case
          shouldItemRender={(item, value) =>
            item.label.toLowerCase()
              .indexOf(value.toLowerCase()) > -1}

          getItemValue={item => item.label}

          renderItem={(item, isHighlighted) =>
            // Styling to highlight the selected item;
            // key uses label since the items carry no id
            <div style={{
              background: isHighlighted ? '#bcf5bc' : 'white'
            }} key={item.label}>
              {item.label}
            </div>
          }

          value={value}

          // The onChange event watches for
          // changes in the input field
          onChange={e => setValue(e.target.value)}

          // Set the state variable to the value
          // selected from the dropdown
          onSelect={(val) => setValue(val)}

          // Styles applied to the underlying input
          inputProps={{
            style: {
              width: '300px',
              height: '20px',
              background: '#e4f3f7',
              border: '2px outset lightgray'
            },
            placeholder: 'Search language'
          }}
        />
      </div>
    </div>
  );
}

export default App;

Explanation:

On the Autocomplete element, className doesn't apply to the text input as you might expect. You can't use className on the Autocomplete component; you have to use the inputProps property.
inputProps is commonly used for (but not limited to) placeholder, event handlers (onFocus, onBlur, etc.), autoFocus, etc. You can customize the input's styles via inputProps, as in the code above.
In the renderItem prop, you can add a set of styles to improve the look of the items in the dropdown menu.
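If you also want to style the dropdown menu itself (not just the input or its items), the react-autocomplete package additionally exposes menuStyle and wrapperStyle props. The snippet below shows only the extra props to merge into the App.js component above; treat the exact prop names and default-like values as assumptions to verify against the package README:

<Autocomplete
  // ...the same props as in App.js above...

  // Assumed prop: styles the dropdown menu container
  menuStyle={{
    borderRadius: '3px',
    boxShadow: '0 2px 12px rgba(0, 0, 0, 0.1)',
    background: '#e4f3f7',
    padding: '2px 0'
  }}

  // Assumed prop: styles the element wrapping the input and menu
  wrapperStyle={{ display: 'inline-block', margin: '10px' }}
/>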
Step to Run Application: Run the application using the following command from the root directory of the project:

npm start

Output: Now open your browser and go to http://localhost:3000/, and you will see the following output:

Autocomplete on select language

Reference: https://www.npmjs.com/package/react-autocomplete
HTML - Footer Tag
The HTML <footer> tag specifies a footer for a document or section.

<!DOCTYPE html>
<html>

   <head>
      <title>HTML Footer Tag</title>
   </head>

   <body>
      <header>
         <h1>Simply Easy Learning</h1>
         <p>You're visiting tutorialspoint.com - tutorial hub for simply easy learning.</p>
      </header>

      <footer>
         © Copyright 2014, All Rights Reserved
      </footer>

   </body>

</html>

This will produce the following result −

You're visiting tutorialspoint.com - tutorial hub for simply easy learning.
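A <footer> is not limited to the end of the page; per the tag's definition above, it can also close a sectioning element such as <article>. A small complementary sketch (the article content is made up):

<article>
   <h2>A post title</h2>
   <p>The post content goes here.</p>

   <footer>
      <p>Posted by the author in 2014.</p>
   </footer>
</article>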
Hive - Alter Table
This chapter explains how to alter the attributes of a table, such as changing its table name, changing column names, adding columns, and deleting or replacing columns.

The Alter Table statement is used to alter a table in Hive. The statement takes any of the following syntaxes, based on what attributes we wish to modify in a table.

ALTER TABLE name RENAME TO new_name
ALTER TABLE name ADD COLUMNS (col_spec[, col_spec ...])
ALTER TABLE name DROP [COLUMN] column_name
ALTER TABLE name CHANGE column_name new_name new_type
ALTER TABLE name REPLACE COLUMNS (col_spec[, col_spec ...])

The following query renames the table from employee to emp.

hive> ALTER TABLE employee RENAME TO emp;

The JDBC program to rename a table is as follows.

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveAlterRenameTo {
   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

   public static void main(String[] args) throws SQLException, ClassNotFoundException {

      // Register driver and create driver instance
      Class.forName(driverName);

      // get connection
      Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/userdb", "", "");

      // create statement
      Statement stmt = con.createStatement();

      // execute statement
      stmt.executeQuery("ALTER TABLE employee RENAME TO emp;");
      System.out.println("Table renamed successfully.");
      con.close();
   }
}

Save the program in a file named HiveAlterRenameTo.java. Use the following commands to compile and execute this program.

$ javac HiveAlterRenameTo.java
$ java HiveAlterRenameTo

Table renamed successfully.

In the employee table, the name column is to be renamed to ename, and the salary column's data type is to be changed to Double. The following queries make those changes:

hive> ALTER TABLE employee CHANGE name ename String;
hive> ALTER TABLE employee CHANGE salary salary Double;

Given below is the JDBC program to change a column.

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveAlterChangeColumn {
   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

   public static void main(String[] args) throws SQLException, ClassNotFoundException {

      // Register driver and create driver instance
      Class.forName(driverName);

      // get connection
      Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/userdb", "", "");

      // create statement
      Statement stmt = con.createStatement();

      // execute statement
      stmt.executeQuery("ALTER TABLE employee CHANGE name ename String;");
      stmt.executeQuery("ALTER TABLE employee CHANGE salary salary Double;");

      System.out.println("Change column successful.");
      con.close();
   }
}

Save the program in a file named HiveAlterChangeColumn.java. Use the following commands to compile and execute this program.

$ javac HiveAlterChangeColumn.java
$ java HiveAlterChangeColumn

Change column successful.
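After any of these statements, you can confirm the new schema from the Hive shell with DESCRIBE. Hive also accepts FIRST and AFTER clauses on CHANGE to reposition a column; treat the AFTER clause below as a sketch to verify against your Hive version:

hive> DESCRIBE employee;

hive> ALTER TABLE employee CHANGE ename ename String AFTER salary;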
The following query adds a column named dept to the employee table.

hive> ALTER TABLE employee ADD COLUMNS (
dept STRING COMMENT 'Department name');

The JDBC program to add a column to a table is given below.

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveAlterAddColumn {
   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

   public static void main(String[] args) throws SQLException, ClassNotFoundException {

      // Register driver and create driver instance
      Class.forName(driverName);

      // get connection
      Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/userdb", "", "");

      // create statement
      Statement stmt = con.createStatement();

      // execute statement
      stmt.executeQuery("ALTER TABLE employee ADD COLUMNS " + " (dept STRING COMMENT 'Department name');");
      System.out.println("Add column successful.");

      con.close();
   }
}

Save the program in a file named HiveAlterAddColumn.java. Use the following commands to compile and execute this program.

$ javac HiveAlterAddColumn.java
$ java HiveAlterAddColumn

Add column successful.

The following query deletes all the columns from the employee table and replaces them with empid and name columns (replacing eid with empid and ename with name):

hive> ALTER TABLE employee REPLACE COLUMNS (
empid Int,
name String);

Given below is the JDBC program to replace the eid column with empid and the ename column with name.

import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;

public class HiveAlterReplaceColumn {

   private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

   public static void main(String[] args) throws SQLException, ClassNotFoundException {

      // Register driver and create driver instance
      Class.forName(driverName);

      // get connection
      Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/userdb", "", "");

      // create statement
      Statement stmt = con.createStatement();

      // execute statement
      stmt.executeQuery("ALTER TABLE employee REPLACE COLUMNS "
         + " (empid Int,"
         + " name String);");

      System.out.println("Replace column successful");
      con.close();
   }
}

Save the program in a file named HiveAlterReplaceColumn.java. Use the following commands to compile and execute this program.

$ javac HiveAlterReplaceColumn.java
$ java HiveAlterReplaceColumn

Replace column successful.
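The syntax summary above also lists a single-column DROP, but many Hive versions do not support dropping one column directly; the usual workaround is REPLACE COLUMNS listing every column except the one to remove. A sketch, assuming we want to drop the dept column added earlier from a table with eid, ename, salary, and dept:

hive> ALTER TABLE employee REPLACE COLUMNS (
eid Int,
ename String,
salary Double);

Note that REPLACE COLUMNS changes only the table metadata, not the underlying data files.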
Exterior angle of a cyclic quadrilateral when the opposite interior angle is given - GeeksforGeeks
22 Mar, 2021

Given a cyclic quadrilateral inscribed in a circle, the task is to find the exterior angle of the cyclic quadrilateral when the opposite interior angle is given.

Examples:

Input: 48
Output: 48 degrees

Input: 83
Output: 83 degrees

Approach:

Let the exterior angle, angle CDE = x,
and let its opposite interior angle be angle ABC.
As ADE is a straight line,
angle ADC = (180 - x) degrees.
Since opposite angles of a cyclic quadrilateral are supplementary,
angle ABC + angle ADC = 180 degrees, so angle ABC = 180 - (180 - x) = x.

Below is the implementation of the above approach:

C++

// C++ program to find the exterior angle
// of a cyclic quadrilateral when
// the opposite interior angle is given

#include <bits/stdc++.h>
using namespace std;

void angleextcycquad(int z)
{
    cout << "The exterior angle of the"
         << " cyclic quadrilateral is "
         << z << " degrees" << endl;
}

// Driver code
int main()
{
    int z = 48;
    angleextcycquad(z);
    return 0;
}

Java

// Java program to find the exterior angle
// of a cyclic quadrilateral when
// the opposite interior angle is given

import java.io.*;

class GFG {

    static void angleextcycquad(int z)
    {
        System.out.print("The exterior angle of the"
                         + " cyclic quadrilateral is "
                         + z + " degrees");
    }

    // Driver code
    public static void main(String[] args)
    {
        int z = 48;
        angleextcycquad(z);
    }
}

Python3

# Python3 program to find the exterior angle
# of a cyclic quadrilateral when
# the opposite interior angle is given

def angleextcycquad(z):
    print("The exterior angle of the", end=" ")
    print("cyclic quadrilateral is", end=" ")
    print(z, "degrees")

# Driver code
z = 48
angleextcycquad(z)

C#

// C# program to find the exterior angle
// of a cyclic quadrilateral when
// the opposite interior angle is given
using System;

class GFG {

    static void angleextcycquad(int z)
    {
        Console.WriteLine("The exterior angle of the"
                          + " cyclic quadrilateral is "
                          + z + " degrees");
    }

    // Driver code
    public static void Main()
    {
        int z = 48;
        angleextcycquad(z);
    }
}

Javascript

<script>
// JavaScript program to find the exterior angle
// of a cyclic quadrilateral when
// the opposite interior angle is given
function angleextcycquad(z)
{
    document.write("The exterior angle of the"
                   + " cyclic quadrilateral is "
                   + z + " degrees");
}

// Driver code
var z = 48;
angleextcycquad(z);
</script>

Output:

The exterior angle of the cyclic quadrilateral is 48 degrees
Version control Big Query with Terraform (with CI/CD too) | by Jonathan Law | Towards Data Science
Working with teams using Big Query and creating views can get messy when people start changing views without informing other members of the team.

Some teams are accustomed to using comments on the top of the view, or even a spreadsheet, to record changes. Obviously, this leads to problems in the long run, as people may forget to comment or may not do it properly.

Without further ado, how can I version control my Big Query to ensure I can keep track of and roll back changes when needed, and better manage the project with my team?

To achieve this, we will be looking into 2 technologies: Terraform (infrastructure as code) and a version control system of your choice (Github, Gitlab, etc). We will be using Github today! Do not be too worried if you have never heard of Terraform; this post will sort of be a "quick introduction to Terraform on GCP" too!

Getting Started

Initialize Terraform environment

Creating our dataset and view in Terraform

Version control and setup

git commit, git push

Make changes and test if we achieved our goal

Conclusion

Getting Started

This part is for environments that have not been set up yet. Not much is needed to get started! You just need to install:
1. Terraform. They provide a great guide, so just make sure you follow along and you should be able to call terraform in your terminal/command prompt
2. git. Please install git for your OS so we can perform basic commands such as git push

TL;DR of steps

Create a service account on a GCP project

Create and download a JSON key for that service account

Set up Google Cloud Storage (GCS) to store the Terraform state

Grant the service account permission to read, write and create objects in that bucket

Set up Terraform to connect to GCS

Details for first-timers

Creating a service account and key

Terraform will be interacting with our Google Cloud Platform (GCP) project on our behalf using something called a service account.

Go ahead and create your GCP project with this tutorial provided by Google. When you reach the prompt to choose the IAM role (Step 7 in that tutorial as of June 2021), you will need to grant the Big Query Data Editor role for this tutorial's sake, as shown in the snippet above. As you progress, feel free to narrow the permissions down to only what is required. Click next and done.

After creating the service account, click into the service account and you will be greeted with a screen similar to the snippet above. Choose KEYS -> ADD KEY -> Create new key. Next, a popup will appear; choose JSON and click create. It will download a JSON file. Keep it safe!

Create a Google Cloud Storage (GCS) bucket to store the Terraform state

Terraform state basically remembers and tells Terraform how your infrastructure looks right now. I suggest reading the Terraform docs to understand it better. We will be using GCS to store the state for 2 main reasons:
- Sharing the state on Github is generally not a good practice, as it may contain sensitive information
- You do not need to worry about losing it and rebuilding the infrastructure

So let us start by creating a GCS bucket following the tutorial here. Remember the bucket name you created, as we will use it later. On that bucket, choose permissions and add the service account created earlier with the Storage Admin permission. You can follow this tutorial on how to do so.

Create a working folder, and inside that folder, we will create a file called providers.tf. I usually use Visual Studio Code to manage my Terraform configurations. tf is a Terraform configuration file, where all our Terraform stuff will live. Terraform providers are basically the platform you are going to work with, whether it is Microsoft Azure, GCP, AWS, or others. I am calling this file providers.tf, as this is where I am keeping all my provider configurations; you can name it however you want, but providers would be a better standard.
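The providers.tf snippet originally embedded at this point did not survive this copy. Based on the walkthrough that follows (a GCS backend with a state prefix, and a Google provider initialized with your project), a minimal sketch might look like this; the exact formatting and the prefix value are assumptions:

terraform {
  # Keep the Terraform state in the GCS bucket created earlier
  backend "gcs" {
    bucket = "YOUR_BUCKET_NAME"   # the bucket you created above
    prefix = "state"              # just a folder within the bucket
  }
}

# Google provider, pointed at your GCP project
provider "google" {
  project = "YOUR_PROJECT_NAME"
}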
Paste the above content into your providers.tf. In the first "block", called terraform {...}, we are telling Terraform to store our state in GCS. Change YOUR_BUCKET_NAME to the bucket name you created earlier. The prefix is basically just the folder within the bucket. The next block, called provider, indicates that we are using the Terraform Google provider configuration, initialized with YOUR_PROJECT_NAME.

Now Terraform will not be able to read your GCS storage yet, and this is where our service account comes in. There are a few methods of declaring your service account to Terraform, but the method we will be using will be useful in setting up the CI/CD on Github later. We will set an environment variable to the JSON file we downloaded earlier, so move that JSON file into your project folder too. In that same folder, open a terminal/command prompt and set the environment variable as:

Windows users:

set GOOGLE_CREDENTIALS=JSON_FILE_NAME.json

Linux/macOS:

export GOOGLE_CREDENTIALS=JSON_FILE_NAME.json

The Google provider in Terraform will automatically detect the service account key from the environment variable. Now here comes your very first Terraform command to initialize the environment!

terraform init

If all goes well, you should see lots of green text on your console. GCS should now show a folder called state and a file inside that folder that ends with .tfstate.

However, if you are met with errors, do not worry, and carefully read what the errors are. If it is something about NewClient() failed, ensure your environment variable is set correctly to the JSON file. If it is about access denied, ensure you gave proper access to your service account on the Google IAM page.

Great! Now that we have initialized our environment, we can start managing Big Query using Terraform.

Just like how we have providers.tf to manage our providers, we are going to create bigquery.tf to manage our Big Query. You can even drill down to have 3 separate files for datasets, tables, and views, but for the time being, we will just create bigquery.tf. We will create another folder to store our view SQL at bigquery/views/vw_aggregrated.sql.

# Some demo content in vw_aggregrated.sql
SELECT LoadDate, Origin, Destination, COUNT(ID) AS counts
FROM `terraform-testing-gcp.demoset.DemoTable`
GROUP BY 1, 2, 3
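The bigquery.tf contents referenced below were also embedded as a snippet that is not preserved here. Going by the explanation that follows (a dataset resource with the ID "views", and a table resource with the ID "vw_aggregated" whose view reads the SQL file above), a sketch could look like the following; the label values, the use_legacy_sql flag, and the ${path.module} file reference are my assumptions:

# Dataset created in the US with the ID "views"
resource "google_bigquery_dataset" "views" {
  dataset_id  = "views"
  location    = "US"
  description = "Dataset holding curated views"

  labels = {
    env = "demo"
  }
}

# View that reads its SQL from the file we created above
resource "google_bigquery_table" "vw_aggregated" {
  dataset_id = google_bigquery_dataset.views.dataset_id
  table_id   = "vw_aggregated"

  view {
    query          = file("${path.module}/bigquery/views/vw_aggregrated.sql")
    use_legacy_sql = false
  }
}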
Copy and paste the above code into bigquery.tf. To understand what is going on: the first resource contains 2 parameters, the first being the resource type (google_bigquery_dataset) and the second being the ID (views), which you can define yourself. You can find the available resources for the Google provider here. We are creating a dataset in the US using the "google_bigquery_dataset" block with the ID "views". I have given it a description and some labels. The labels are completely optional and you can remove them.

Next, we are creating a resource "google_bigquery_table" with the ID "vw_aggregated". The dataset_id refers to the views dataset_id we created earlier on. This time, since we are creating a view, we have to open a view {...} block as described in the tutorial here. The first parameter we pass in is the SQL we want to use. There are 2 methods of doing it, one being directly typing the SQL into bigquery.tf itself, e.g. query = "SELECT * FROM ... WHERE 1 = 1". However, we are looking for maintainability, so we have defined our SQL in a separate file at bigquery/views/vw_aggregrated.sql.

Alright! Let us run a command to standardize our Terraform code format, then dry run our configuration. terraform plan will basically "plan" and let us know what resources will be created/deleted/modified, etc.

terraform fmt
terraform plan

If you see x resources to be created, with x being a number, you are good to go! If you are getting x resources to be deleted, please check and verify whether that is intended. Once you have verified this is the intended action, run terraform apply to apply the configuration.

If done correctly, check your Big Query UI and you should see the resources you defined in your Terraform configuration created! In our case, a dataset called views with vw_aggregated in it was created.

After we have verified our Terraform configuration works as intended, it is time to version control changes to the configuration. The first step is to create a Github project, and set it to a private repo, as you would not want your infrastructure setup exposed to the public :).

Follow this guide by Github to create a secret called GOOGLE_CREDENTIALS. The value for this secret will be the contents of the JSON key file we used to set our environment variable just now. Just open that JSON file with a text editor and copy-paste the content into the value field of the secret. It should look something similar to the snippet above.

The "secret" we are keeping is the service account key that allows Github Actions to apply our Terraform configuration every time we commit a change. A Github secret is great in our scenario for 2 reasons:
- We do not need to commit our JSON key file, which other contributors/members could download and misuse
- Github Actions will be applying the configurations, and you do not need to distribute service accounts with write permissions to contributors. This forces contributors/members to always commit their work and changes for it to be applied to production

Next, we will need to tell Github Actions what needs to be done once a commit happens. Create a folder called .github/workflows and paste the following content into a file in that newly created directory called terraform.yml.
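The workflow file itself was an embedded snippet that is not preserved in this copy. A minimal sketch that matches the flow described below (checkout, then Terraform init/plan/apply using the GOOGLE_CREDENTIALS secret) might look like this; the step names, action versions, and the -auto-approve flag are assumptions:

name: Terraform

# Run the pipeline on every push to the main branch
on:
  push:
    branches:
      - main

jobs:
  terraform:
    runs-on: ubuntu-latest
    env:
      # The service account key stored as a Github secret earlier
      GOOGLE_CREDENTIALS: ${{ secrets.GOOGLE_CREDENTIALS }}
    steps:
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
      - name: Terraform Init
        run: terraform init
      - name: Terraform Plan
        run: terraform plan
      - name: Terraform Apply
        run: terraform apply -auto-approve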
Before pushing to Github, we want to exclude some files and folders. In the root folder, create a file called .gitignore and paste the following content below. This will exclude the JSON key we copied into the directory earlier on, and any state file if there is one. Feel free to add your own files too!

*.json
*.tfstate

For those who are still confused about the many files we created earlier, here is how your directory should look if you followed this tutorial exactly (ignore vw_origin_KUL.sql).

Here comes the golden moment. The following commands will initialize git and push your configuration to Github. Ideally, if it works fine, Github Actions will start running the minute your code is pushed.

terraform fmt
git add .
git remote add origin LINK_TO_YOUR_GIT_REPO
git branch -M main
git commit -m "Initial commit"
git push -u origin main

git add will add new and changed files and folders in that directory, except those defined in .gitignore.
git remote add origin https://github.com/... will define the repository that your current directory should belong to.
git branch -M main will create a branch called main.
git commit -m "YOUR_MESSAGE_HERE" will create a "changelog", and a good message will help you and others in your team identify what was changed without reading too much code.
git push -u origin main will push your code to the main branch.

Check your Github repository now and you should see your code there. Clicking on the Actions tab, you will see the workflow we created earlier at .github/workflows/terraform.yml.

If you see a green tick, or orange as it is running, you are good to go! In the event of an error, just click into the workflow and read the logs from each step, as the errors are quite easy to narrow down. As you can see, the steps are as we defined them in .github/workflows/terraform.yml.

We are nearly done with the entire pipeline! Next is to test whether what we planned to achieve has been achieved. Let us make some changes to bigquery/views/vw_aggregrated.sql. I have changed my query to

# Some demo content in vw_aggregrated.sql
SELECT CAST(LoadDate AS DATE) AS LoadDate, Origin, Destination, COUNT(ID) AS counts
FROM `terraform-testing-gcp.demoset.DemoTable`
GROUP BY 1, 2, 3

Now we need to add whatever was changed, write a changelog message, and then push it to our repository so that Github Actions can apply it for us. Always terraform fmt to ensure code standardization.

terraform fmt
git add .
git commit -m "Updated vw_aggregated to CAST LoadDate"
git push origin main

Checking the repo, we can see the changes that were made by me (jonathanlawhh), as shown in the snippet above.

And checking the Actions tab, we can see the workflow for that commit has completed successfully!

And finally, we can see the view in our Big Query has been updated as expected.

What we have done, in summary, is to first set up a Github repository, initialize a Terraform environment, and push that Terraform configuration to the Github repository. At the same time, we have also prepared a CI/CD configuration, terraform.yml, to help deploy our configuration once we push a change. With that, it is safe to say that we have achieved the goal of version controlling Big Query using Terraform, and implemented continuous integration and deployment for our project.

Hopefully, this post is sufficient to kick start your implementation and help you learn more about what can be achieved with Terraform! Of course, the implementation can be improved based on your environment.

If you have any questions, feel free to reach out or even say hi to me through:
Messenger widget: https://jonathanlawhh.com
Email: jon_law98@hotmail.com
Telco Customer Churn Prediction. Customer churn, also known as customer... | by Ifeoma Ojialor | Towards Data Science
Customer churn, also known as customer attrition, occurs when customers stop doing business with a company or stop using a company's services. By being aware of and monitoring the churn rate, companies are equipped to determine their customer retention success rates and identify strategies for improvement. We will use a machine learning model to understand the precise customer behaviors and attributes which signal the risk and timing of customer churn.

Understanding our dataset:

We will use the Telco Customer Churn dataset from Kaggle. The raw dataset contains 7043 entries. All entries have several features and a column stating if the customer has churned or not. To better understand the data, we will first load it into pandas and explore it with the help of some very basic commands.

import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.model_selection import train_test_split

# Loading the data
df = pd.read_csv(r'...Churn\telco_customer.csv')
df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7043 entries, 0 to 7042
Data columns (total 21 columns):
customerID          7043 non-null object
gender              7043 non-null object
SeniorCitizen       7043 non-null int64
Partner             7043 non-null object
Dependents          7043 non-null object
tenure              7043 non-null int64
PhoneService        7043 non-null object
MultipleLines       7043 non-null object
InternetService     7043 non-null object
OnlineSecurity      7043 non-null object
OnlineBackup        7043 non-null object
DeviceProtection    7043 non-null object
TechSupport         7043 non-null object
StreamingTV         7043 non-null object
StreamingMovies     7043 non-null object
Contract            7043 non-null object
PaperlessBilling    7043 non-null object
PaymentMethod       7043 non-null object
MonthlyCharges      7043 non-null float64
TotalCharges        7043 non-null object
Churn               7043 non-null object
dtypes: float64(1), int64(2), object(18)
memory usage: 1.1+ MB

df.info() gives us detailed information about every column. We can see that our data is divided into three types:

object: Object format means the variables are categorical. Categorical variables in our dataset are: customerID, gender, partner, dependents, phone service, multiple lines, internet service, online security, online backup, device protection, tech support, streaming tv, streaming movies, contract, paperless billing, payment method, total charges, and churn.

int64: It represents integer variables. Senior citizen and tenure are of this format.

float64: It represents variables which have some decimal values involved. They are also numerical variables. There is only one variable with this format in our dataset, which is monthly charges.

Exploratory Data Analysis

The goal of this section is to get comfortable with our data. We will do bivariate analysis. It is the simplest form of analyzing data, where we examine how each variable relates to the churn rate. For categorical features, we can use frequency tables or bar plots, which count the number of each category in a particular variable. For numerical features, probability density plots can be used to look at the distribution of the variable.

All visualizations for categorical variables will be done in Tableau Public.

The following inferences can be made from the above bar plots:

The churn percentage is almost equal for males and females.

The percentage of churn is higher for senior citizens.

The churn rate is higher for customers who have phone services.
Customers with partners and dependents have a lower churn rate compared to those who don't have partners and dependents.

Customers with an electronic payment method have a higher churn rate compared to other payment methods.

Customers with no internet service have a lower churn rate.

The churn rate is much higher in the case of fiber optic internet services. Customers who do not have services like OnlineSecurity, OnlineBackup, and TechSupport have left the platform in the past month.

Now let's look at the numerical variables.

plt.figure(1), plt.subplot(121), sns.distplot(df['tenure']);
plt.figure(1), plt.subplot(121), sns.distplot(df['MonthlyCharges']);
plt.figure(1), plt.subplot(121), sns.distplot(df['TotalCharges']);

Data Preparation & Feature Engineering:

This section is a fundamental part of machine learning. If this section is not done properly, our model will not work. In this section we will clean up our dataset by dropping irrelevant data, treating missing values, and converting our variables to the proper data types.

Treating irrelevant data & missing values

In our dataset, we can see that customer ID is not needed for our model, so we drop the variable. We do not need to treat missing values as there are none in this dataset.

df.drop(['customerID'], axis=1, inplace=True)

Converting categorical to numerical data

Machine learning works with only numerical values. Therefore, we need to convert our categorical values to numerical values. By using the pandas function "get_dummies()", we can replace the gender column with "gender_Female" and "gender_Male". We will use df.info() to show us which ones are categorical and numerical.

df.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7043 entries, 0 to 7042
Data columns (total 21 columns):
customerID          7043 non-null object
gender              7043 non-null object
SeniorCitizen       7043 non-null int64
Partner             7043 non-null object
Dependents          7043 non-null object
tenure              7043 non-null int64
PhoneService        7043 non-null object
MultipleLines       7043 non-null object
InternetService     7043 non-null object
OnlineSecurity      7043 non-null object
OnlineBackup        7043 non-null object
DeviceProtection    7043 non-null object
TechSupport         7043 non-null object
StreamingTV         7043 non-null object
StreamingMovies     7043 non-null object
Contract            7043 non-null object
PaperlessBilling    7043 non-null object
PaymentMethod       7043 non-null object
MonthlyCharges      7043 non-null float64
TotalCharges        7043 non-null object
Churn               7043 non-null object
dtypes: float64(1), int64(2), object(18)
memory usage: 1.1+ MB

From the results above, we can see that the variables with the object datatype need to be converted to numerical.

df = pd.get_dummies(df, columns = ['gender', 'Partner', 'Dependents', 'PhoneService',
                                   'MultipleLines', 'InternetService', 'OnlineSecurity',
                                   'OnlineBackup', 'DeviceProtection', 'TechSupport',
                                   'StreamingTV', 'StreamingMovies', 'Contract',
                                   'PaperlessBilling', 'PaymentMethod', 'Churn'],
                    drop_first = True)

It is always advisable to use df.info() to check that all our variables were converted to the appropriate data type. After this step, I noticed that total charges still had an object data type. Hence, I will use the pd.to_numeric() function to convert it to a float.

# Coerce TotalCharges to numeric (blank strings become NaN), then drop the column
df['TotalCharges'] = pd.to_numeric(df.TotalCharges, errors = 'coerce')
df.drop(['TotalCharges'], axis = 1, inplace = True)

Splitting the dataset

First, our model needs to be trained; second, our model needs to be tested. Therefore, it is best to have two different datasets. As for now we only have one, it is very common to split the data accordingly.
X is the data with the independent variables, Y is the data with the dependent variable. The test size variable determines the ratio in which the data will be split. It is quite common to do this in an 80 Training / 20 Test ratio.

df['Churn_Yes'] = df['Churn_Yes'].astype(int)
Y = df["Churn_Yes"].values
X = df.drop(labels = ["Churn_Yes"], axis = 1)

# Create Train & Test Data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=101)

Let us make our first model to predict the target variable. We will start with logistic regression, which is used for predicting a binary outcome.

We will use scikit-learn (sklearn) for making different models, which is an open source library for Python. It is one of the most efficient tools and contains many inbuilt functions that can be used for modeling in Python.

from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
result = model.fit(X_train, y_train)

The dataset has been divided into training and validation parts. Let us import LogisticRegression and accuracy_score from sklearn and fit the logistic regression model.

from sklearn import metrics
prediction_test = model.predict(X_test)

# Print the prediction accuracy
print(metrics.accuracy_score(y_test, prediction_test))

0.800567778566

So our predictions are about 80% accurate, i.e. the model classifies roughly 80% of the customers in the test set correctly. With the final objective to reduce churn and take the right preventive actions in time, we want to know which independent variables have the most influence on our predicted outcome. Therefore, we pull the coefficients out of our fitted model and look at the weight of each variable.

weights = pd.Series(model.coef_[0], index=X.columns.values)
weights.sort_values(ascending = False)

Here are the results:

InternetService_Fiber optic               0.800533
PaperlessBilling_Yes                      0.417392
PaymentMethod_Electronic check            0.293135
StreamingTV_Yes                           0.267554
MultipleLines_Yes                         0.253773
SeniorCitizen                             0.247112
StreamingMovies_Yes                       0.197304
MultipleLines_No phone service            0.120019
PaymentMethod_Mailed check                0.019744
gender_Male                               0.018991
MonthlyCharges                            0.004611
Partner_Yes                               0.000535
DeviceProtection_Yes                     -0.021993
tenure                                   -0.035615
StreamingTV_No internet service          -0.095143
TechSupport_No internet service          -0.095143
DeviceProtection_No internet service     -0.095143
StreamingMovies_No internet service      -0.095143
OnlineBackup_No internet service         -0.095143
OnlineSecurity_No internet service       -0.095143
InternetService_No                       -0.095143
PaymentMethod_Credit card (automatic)    -0.156123
Dependents_Yes                           -0.182597
OnlineBackup_Yes                         -0.190353
OnlineSecurity_Yes                       -0.335387
TechSupport_Yes                          -0.337772
PhoneService_Yes                         -0.551735
Contract_One year                        -0.591394
Contract_Two year                        -1.257341
dtype: float64

It can be observed that some variables have a positive relation to our predicted variable and some have a negative relation. Features with positive coefficients push a customer's prediction towards churn, while features with negative coefficients push it away from churn. These coefficients give us a concrete starting point for deciding where retention efforts should focus.

Most of what I write about keeps me busy in my own startup, analyx. Looking forward to hearing your experience :)
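As a quick follow-up sanity check (a sketch, assuming the model, y_test, and prediction_test created above), it is worth going beyond plain accuracy, since churn datasets are typically imbalanced; a confusion matrix and a per-class report show how well the churners specifically are being caught:

from sklearn.metrics import confusion_matrix, classification_report

# Rows are the actual classes (no churn / churn),
# columns are the predicted classes
print(confusion_matrix(y_test, prediction_test))

# Precision and recall per class reveal how many actual
# churners the model catches, which accuracy alone hides
print(classification_report(y_test, prediction_test))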
Koa.js - Templating
Pug is a templating engine. Templating engines are used to remove the clutter from our server code that comes from wildly concatenating strings into existing HTML templates. Pug is a very powerful templating engine, which has a variety of features such as filters, includes, inheritance, interpolation, etc. There is a lot of ground to cover on this.

To use Pug with Koa, we need to install it using the following command.

$ npm install --save pug koa-pug

Once Pug is installed, set it as the templating engine for your app. Add the following code to your app.js file.

var koa = require('koa');
var router = require('koa-router');
var app = koa();

var Pug = require('koa-pug');
var pug = new Pug({
   viewPath: './views',
   basedir: './views',
   app: app //Equivalent to app.use(pug)
});

var _ = router(); //Instantiate the router

app.use(_.routes()); //Use the routes defined using the router
app.listen(3000);

Now, create a new directory called views. Inside the directory, create a file named first_view.pug, and enter the following data in it.

doctype html
html
   head
      title = "Hello Pug"
   body
      p.greetings#people Hello Views!

To run this page, add the following route to your app.

_.get('/hello', getMessage); // Define routes

function *getMessage(){
   this.render('first_view');
};

You'll receive the output as −

What Pug does is convert this very simple-looking markup to HTML. We don’t need to keep track of closing our tags, and there is no need to use class and id keywords; we use '.' and '#' to define them instead. The above code first gets converted to

<!DOCTYPE html>
<html>
   <head>
      <title>Hello Pug</title>
   </head>

   <body>
      <p class = "greetings" id = "people">Hello Views!</p>
   </body>
</html>

Pug is capable of doing much more than simplifying HTML markup. Let’s explore some of these features of Pug.

Tags are nested according to their indentation. In the above example, <title> was indented within the <head> tag, so it was inside it. However, the <body> tag was on the same indentation, thus it was a sibling of the <head> tag.

We don’t need to close tags. As soon as Pug encounters the next tag on the same or the outer indentation level, it closes the tag for us.

There are three methods to put text inside of a tag −

Space separated −

h1 Welcome to Pug

Piped text −

div
   | To insert multiline text,
   | You can use the pipe operator.

Block of text −

div.
   But that gets tedious if you have a lot of text.
   You can use "." at the end of a tag to denote a block of text.
   To put tags inside this block, simply enter the tag on a new line and
   indent it accordingly.

Pug uses the same syntax as JavaScript (//) for creating comments. These comments are converted to HTML comments (<!--comment-->). For example,

//This is a Pug comment

This comment gets converted to −

<!--This is a Pug comment-->

To define attributes, we use a comma-separated list of attributes, in parentheses. Class and ID attributes have special representations. The following line of code covers defining attributes, classes, and an id for a given HTML tag.

div.container.column.main#division(width = "100",height = "100")

This line of code gets converted to −

<div class = "container column main" id = "division" width = "100" height = "100"></div>

When we render a Pug template, we can actually pass it a value from our route handler, which we can then use in our template. Create a new route handler with the following code.
var koa = require('koa');
var router = require('koa-router');
var app = koa();

var Pug = require('koa-pug');
var pug = new Pug({
   viewPath: './views',
   basedir: './views',
   app: app // equals to pug.use(app) and app.use(pug.middleware)
});

var _ = router(); //Instantiate the router

_.get('/dynamic', dynamicMessage); // Define routes

function *dynamicMessage(){
   this.render('dynamic', {
      name: "TutorialsPoint",
      url: "https://www.tutorialspoint.com"
   });
};

app.use(_.routes()); //Use the routes defined using the router
app.listen(3000);

Then, create a new view file in the views directory, named dynamic.pug, using the following code.

html
   head
      title = name
   body
      h1 = name
      a(href = url) URL

Open localhost:3000/dynamic in your browser; the following should be the output −

We can also use these passed variables within the text. To insert passed variables in between the text of a tag, we use the #{variableName} syntax. For example, in the above example, if we want to insert Greetings from TutorialsPoint, then we have to use the following code.

html
   head
      title = name
   body
      h1 Greetings from #{name}
      a(href = url) URL

This method of using values is called interpolation. We can use conditional statements and looping constructs as well. Consider this practical example: if a user is logged in, we would want to display "Hi, User", and if not, we would want to show a "Login/Sign Up" link. To achieve this, we can define a simple template such as −

html
   head
      title Simple template
   body
      if(user)
         h1 Hi, #{user.name}
      else
         a(href = "/sign_up") Sign Up

When we render this using our routes, and if we pass an object like −

this.render('dynamic', {user:
   {name: "Ayush", age: "20"}
});

It'll give a message displaying Hi, Ayush. However, if we don’t pass any object, or pass one with no user key, then we will get a Sign Up link.

Pug provides a very intuitive way to create components for a web page. For example, if you see a news website, the header with the logo and categories is always fixed. Instead of copying that to every view, we can use an include. The following example shows how we can use an include −

Create three views with the following code − header.pug, content.pug, and footer.pug, respectively (the names follow the include paths and the route below) −

div.header.
   I'm the header for this website.

html
   head
      title Simple template
   body
      include ./header.pug
      h3 I'm the main content
      include ./footer.pug

div.footer.
   I'm the footer for this website.

Create a route for this as follows.

var koa = require('koa');
var router = require('koa-router');
var app = koa();

var Pug = require('koa-pug');
var pug = new Pug({
   viewPath: './views',
   basedir: './views',
   app: app //Equivalent to app.use(pug)
});

var _ = router(); //Instantiate the router

_.get('/components', getComponents);

function *getComponents(){
   this.render('content.pug');
}

app.use(_.routes()); //Use the routes defined using the router
app.listen(3000);

Go to localhost:3000/components and you should get the following output.

include can also be used to include plaintext, CSS, and JavaScript.

There are many other features of Pug. However, those are out of scope for this tutorial. You can further explore Pug at Pug.
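The conditional example above covered branching; for the looping constructs mentioned alongside it, Pug provides the each keyword. A brief sketch (my addition — the list view name and the items array are illustrative, not part of the original tutorial): a route could pass an array −

this.render('list', {items: ['Koa', 'Pug', 'Node']});

and a list.pug view could iterate over it −

ul
   each item in items
      li= item

Each element of the array becomes an li, with li= item evaluating the loop variable the same way the buffered = code was used earlier.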
[ { "code": null, "e": 2451, "s": 2106, "text": "Pug is a templating engine. Templating engines are used to remove the cluttering of our server code with HTML, concatenating strings wildly to existing HTML templates. Pug is a very powerful templating engine, which has a variety of features such as filters, includes, inheritance, interpolation, etc. There is a lot of ground to cover on this." }, { "code": null, "e": 2523, "s": 2451, "text": "To use Pug with Koa, we need to install it using the following command." }, { "code": null, "e": 2557, "s": 2523, "text": "$ npm install --save pug koa-pug\n" }, { "code": null, "e": 2670, "s": 2557, "text": "Once pug is installed, set it as the templating engine for your app. Add the following code to your app.js file." }, { "code": null, "e": 3018, "s": 2670, "text": "var koa = require('koa');\nvar router = require('koa-router');\nvar app = koa();\n\nvar Pug = require('koa-pug');\nvar pug = new Pug({\n viewPath: './views',\n basedir: './views',\n app: app //Equivalent to app.use(pug)\n});\n\nvar _ = router(); //Instantiate the router\n\napp.use(_.routes()); //Use the routes defined using the router\napp.listen(3000);" }, { "code": null, "e": 3154, "s": 3018, "text": "Now, create a new directory called views. Inside the directory, create a file named first_view.pug, and enter the following data in it." }, { "code": null, "e": 3252, "s": 3154, "text": "doctype html\nhtml\n head\n title = \"Hello Pug\"\n body\n p.greetings#people Hello Views!" }, { "code": null, "e": 3307, "s": 3252, "text": "To run this page, add the following route to your app." }, { "code": null, "e": 3411, "s": 3307, "text": "_.get('/hello', getMessage); // Define routes\n\nfunction *getMessage(){\n this.render('first_view');\n};" }, { "code": null, "e": 3442, "s": 3411, "text": "You'll receive the output as −" }, { "code": null, "e": 3678, "s": 3442, "text": "What Pug does is, it converts this very simple looking markup to html. We don’t need to keep track of closing our tags, no need to use class and id keywords, rather use '.' and '#' to define them. The above code first gets converted to" }, { "code": null, "e": 3847, "s": 3678, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Hello Pug</title>\n </head>\n \n <body>\n <p class = \"greetings\" id = \"people\">Hello Views!</p>\n </body>\n</html>" }, { "code": null, "e": 3956, "s": 3847, "text": "Pug is capable of doing much more than simplifying HTML markup. Let’s explore some of these features of Pug." }, { "code": null, "e": 4186, "s": 3956, "text": "Tags are nested according to their indentation. Like in the above example, <title> was indented within the <head> tag, so it was inside it. However, the <body> tag was on the same indentation, thus it was a sibling of <head> tag." }, { "code": null, "e": 4324, "s": 4186, "text": "We don’t need to close tags. As soon as Pug encounters the next tag on the same or the outer indentation level, it closes the tag for us." 
}, { "code": null, "e": 4378, "s": 4324, "text": "There are three methods to put text inside of a tag −" }, { "code": null, "e": 4396, "s": 4378, "text": "Space seperated −" }, { "code": null, "e": 4415, "s": 4396, "text": "h1 Welcome to Pug\n" }, { "code": null, "e": 4428, "s": 4415, "text": "Piped text −" }, { "code": null, "e": 4501, "s": 4428, "text": "div\n | To insert multiline text, \n | You can use the pipe operator.\n" }, { "code": null, "e": 4517, "s": 4501, "text": "Block of text −" }, { "code": null, "e": 4735, "s": 4517, "text": "div.\n But that gets tedious if you have a lot of text. \n You can use \".\" at the end of tag to denote block of text. \n To put tags inside this block, simply enter tag in a new line and \n indent it accordingly.\n" }, { "code": null, "e": 4877, "s": 4735, "text": "Pug uses the same syntax as JavaScript(//) for creating comments. These comments are converted to html comments(<!--comment-->). For example," }, { "code": null, "e": 4902, "s": 4877, "text": "//This is a Pug comment\n" }, { "code": null, "e": 4935, "s": 4902, "text": "This comment gets converted to −" }, { "code": null, "e": 4965, "s": 4935, "text": "<!--This is a Pug comment-->\n" }, { "code": null, "e": 5195, "s": 4965, "text": "To define attributes, we use a comma separated list of attributes, in parenthesis. Class and ID attributes have special representations. The following line of code covers defining attributes, classes, and id for a given html tag." }, { "code": null, "e": 5261, "s": 5195, "text": "div.container.column.main#division(width = \"100\",height = \"100\")\n" }, { "code": null, "e": 5300, "s": 5261, "text": "This line of code, gets converted to −" }, { "code": null, "e": 5390, "s": 5300, "text": "<div class = \"container column main\" id = \"division\" width = \"100\" height = \"100\"></div>\n" }, { "code": null, "e": 5568, "s": 5390, "text": "When we render a Pug template, we can actually pass it a value from our route handler, which we can then use in our template. Create a new route handler with the following code." }, { "code": null, "e": 6141, "s": 5568, "text": "var koa = require('koa');\nvar router = require('koa-router');\nvar app = koa();\n\nvar Pug = require('koa-pug');\nvar pug = new Pug({\n viewPath: './views',\n basedir: './views',\n app: app // equals to pug.use(app) and app.use(pug.middleware)\n});\n\nvar _ = router(); //Instantiate the router\n\n_.get('//dynamic_view', dynamicMessage); // Define routes\n\nfunction *dynamicMessage(){\n this.render('dynamic', {\n name: \"TutorialsPoint\", \n url:\"https://www.tutorialspoint.com\"\n });\n};\n\napp.use(_.routes()); //Use the routes defined using the router\napp.listen(3000);" }, { "code": null, "e": 6239, "s": 6141, "text": "Then, create a new view file in the views directory, named dynamic.pug, using the following code." }, { "code": null, "e": 6319, "s": 6239, "text": "html\n head\n title = name\n body\n h1 = name\n a(href = url) URL" }, { "code": null, "e": 6401, "s": 6319, "text": "Open localhost:3000/dynamic in your browser and following should be the output. −" }, { "code": null, "e": 6668, "s": 6401, "text": "We can also use these passed variables within the text. To insert passed variables in between text of a tag, we use #{variableName} syntax. For example, in the above example, if we want to insert Greetings from TutorialsPoint, then we have to use the following code." 
}, { "code": null, "e": 6764, "s": 6668, "text": "html\n head\n title = name\n body\n h1 Greetings from #{name}\n a(href = url) URL" }, { "code": null, "e": 6817, "s": 6764, "text": "This method of using values is called interpolation." }, { "code": null, "e": 7101, "s": 6817, "text": "We can use conditional statements and looping constructs as well. Consider this practical example, if a user is logged in we would want to display \"Hi, User\" and if not, then we would want to show him a \"Login/Sign Up\" link. To achieve this, we can define a simple template such as −" }, { "code": null, "e": 7243, "s": 7101, "text": "html\n head\n title Simple template\n body\n if(user)\n h1 Hi, #{user.name}\n else\n a(href = \"/sign_up\") Sign Up" }, { "code": null, "e": 7313, "s": 7243, "text": "When we render this using our routes, and if we pass an object like −" }, { "code": null, "e": 7378, "s": 7313, "text": "this.render('/dynamic',{user: \n {name: \"Ayush\", age: \"20\"}\n});" }, { "code": null, "e": 7521, "s": 7378, "text": "It'll give a message displaying Hi, Ayush. However, if we don’t pass any object or pass one with no user key, then we will get a Sign up link." }, { "code": null, "e": 7799, "s": 7521, "text": "Pug provides a very intuitive way to create components for a web page. For example, if you see a news website, the header with logo and categories is always fixed. Instead of copying that to every view, we can use an include. Following example shows how we can use an include −" }, { "code": null, "e": 7844, "s": 7799, "text": "Create three views with the following code −" }, { "code": null, "e": 7893, "s": 7844, "text": "div.header.\n I'm the header for this website.\n" }, { "code": null, "e": 8026, "s": 7893, "text": "html\n head\n title Simple template\n body\n include ./header.pug\n h3 I'm the main content\n include ./footer.pug" }, { "code": null, "e": 8075, "s": 8026, "text": "div.footer.\n I'm the footer for this website.\n" }, { "code": null, "e": 8111, "s": 8075, "text": "Create a route for this as follows." }, { "code": null, "e": 8558, "s": 8111, "text": "var koa = require('koa');\nvar router = require('koa-router');\nvar app = koa();\n\nvar Pug = require('koa-pug');\nvar pug = new Pug({\n viewPath: './views',\n basedir: './views',\n app: app //Equivalent to app.use(pug)\n});\n\nvar _ = router(); //Instantiate the router\n\n_.get('/components', getComponents);\n\nfunction *getComponents(){\n this.render('content.pug');\n}\n\napp.use(_.routes()); //Use the routes defined using the router\napp.listen(3000);" }, { "code": null, "e": 8628, "s": 8558, "text": "Go to localhost:3000/components, you should get the following output." }, { "code": null, "e": 8695, "s": 8628, "text": "include can also be used to include plaintext, CSS and JavaScript." }, { "code": null, "e": 8824, "s": 8695, "text": "There are many other features of Pug. However, those are out of the scope for this tutorial. You can further explore Pug at Pug." }, { "code": null, "e": 8831, "s": 8824, "text": " Print" }, { "code": null, "e": 8842, "s": 8831, "text": " Add Notes" } ]
10 Examples to Master Pandas Styler | by Soner Yıldırım | Towards Data Science
Pandas is highly efficient at data analysis and manipulation tasks. It provides numerous functions and methods to operate on tabular data seamlessly. However, all we see is plain numbers in tabular form.

What if we integrate a few visual components into Pandas dataframes? I think it makes them look more appealing and informative in many cases.

We can achieve this by using the Style property of pandas dataframes. The Style property returns a styler object which provides many options for formatting and displaying dataframes. A styler object is basically a dataframe with some style. In this article, we will go through 10 examples to master how styling works.

We will use a customer churn dataset which is available on Kaggle and also create some sample dataframes.

churn = pd.read_csv("/home/soner/Downloads/datasets/BankChurners.csv", usecols=np.arange(1,10))
churn.head()

The screenshot above shows only a part of the dataframe. The dataset contains relevant information about the customers of a bank and whether they churned (i.e. left the bank).

Consider a case where we want to see the average customer age for each category in the education level column. This task can be done using the group by function.

We can add some styling to what the group by function returns. For instance, we can highlight the minimum value.

churn[['Education_Level','Months_on_book']].\
groupby(['Education_Level'], as_index=False).mean().\
style.highlight_min(color='red')

We apply the functions together with the style property of Pandas. The main syntax is as follows:

pandas.DataFrame.style

In this example, we have used one of the built-in styling functions, which is highlight_min. There are other built-in functions, as we will see in the following examples.

We can apply multiple styling functions by chaining them together. For instance, it is possible to highlight both the minimum and maximum values.

churn[['Education_Level','Customer_Age']].\
groupby(['Education_Level'], as_index=False).mean().\
style.highlight_min(color='red').highlight_max(color='blue')

The functions in the first two examples highlight the maximum and minimum values of columns. We can also use them to highlight values row-wise. That did not make sense for the previous cases because there was only one column.

Let’s create a sample dataframe with multiple columns and apply these styling functions.

df = pd.DataFrame(np.random.randint(100, size=(6,8)))
df.style.highlight_min(color='red', axis=1)\
.highlight_max(color='green', axis=1)

The highlighted values are the maximum and minimum values of rows.

Another built-in styling function is the bar function. It displays a colored bar in each cell whose length is proportional to the value in that cell.

churn[['Attrition_Flag','Gender','Customer_Age']].\
groupby(['Attrition_Flag','Gender'], as_index=False).mean().\
style.bar(color='green')

We have calculated the average customer age for each group in the attrition flag and gender columns. The bar function gives us a visual overview of the values. We can easily see the minimum and maximum values as well as the order of the values in between.

In this example, we will see an extended use of the bar function. Consider a case where we have both positive and negative values in columns. We can set 0 as the reference point and use bars with different colors for negative and positive values.

df = pd.DataFrame((np.random.randint(20, size=(6,3)) - 8) * 3.2)
df.style.bar(align='mid', color=['red','green'])

It makes it easy to visually differentiate positive and negative values.
Up to this point, we have used the built-in styling functions. It is also possible to use our own functions. Once we create our own styler, we can apply it using the apply or applymap functions of Pandas.

For instance, the function below highlights the values of a column that are higher than the column average.

def above_mean(col):
    is_above = col > col.mean()
    return ['background-color: green' if v else '' for v in is_above]

We use the apply function to do column-wise styling.

churn[['Marital_Status','Gender','Customer_Age','Dependent_count']].groupby(['Marital_Status','Gender']).mean().\
style.apply(above_mean)

We have calculated the average value for each category in the marital status and gender columns. The above_mean function is applied to the results to highlight the values that are higher than the average value of the column.

It is possible to apply the styling to only some of the columns. The subset parameter is used to select the desired columns.

For instance, the following code will only apply the above_mean function to the customer age column.

churn[['Marital_Status','Gender','Customer_Age','Dependent_count']].groupby(['Marital_Status','Gender']).mean().\
style.apply(above_mean, subset=['Customer_Age'])

We pass the list of columns that we want to style to the subset parameter of the apply function.

The apply function is used to do column-wise styling. If we want to do element-wise styling, the applymap function is used.

For instance, the above_zero function below colors positive and negative values in a dataframe differently.

def above_zero(val):
    color = 'green' if val > 0 else 'red'
    return 'color: %s' % color

We can use the applymap function to do element-wise styling with the above_zero function.

df.style.applymap(above_zero)

The set_properties function of the Styler attribute allows for combining different styling operations. For instance, we can choose specific colors for the background and the characters.

df = pd.DataFrame(np.random.randint(100, size=(6,8)))
df.style.set_properties(**{'background-color': 'yellow', 'color': 'black'})

The style functions we used here are pretty simple ones. However, we can also create more complex style functions that enhance the informative power.

We may want to use the same styling multiple times. Pandas offers a way to transfer styles between dataframes. A styler object is returned when we apply the style function. We can save this styler object in a variable and then use it to transfer the style.

df = pd.DataFrame(np.random.randint(100, size=(6,8)) - 50)
df.iloc[-1,-1] = np.nan
style1 = df.style.highlight_min(color='red')\
    .highlight_max(color='blue')\
    .highlight_null(null_color='green')

The variable style1 is a styler object which is basically a dataframe with style. Here is how it looks:

Let’s create another styler object based on a different dataframe.

df2 = pd.DataFrame(np.random.randint(50, size=(6,8)))
df2.iloc[-1,-1] = np.nan
style2 = df2.style

Style2 is a styler object that looks as below:

We can now transfer the style of the style1 object to the style2 object.

style2.use(style1.export())

Here is how the style2 object looks now:

We have seen how to use the built-in style functions as well as how to create a custom-made one. We have also used the apply and applymap functions to actually apply the custom-made styles on the dataframes. We have also seen how to transfer styles from one styler object to another.

There are other styling and formatting options available that can be accessed in the styling section of the pandas user guide.
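As a concrete pointer to those options, here is a brief sketch of two more built-in Styler methods — background_gradient for heatmap-style shading and format for number display. Both are standard pandas APIs, though exact parameters may differ slightly between pandas versions; the dataframe is just an illustrative sample of my own, not from the article.

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(6, 4))

# Shade each cell on a color scale proportional to its value,
# then display every value with two decimal places.
df.style.background_gradient(cmap='Blues').format("{:.2f}")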
Thank you for reading. Please let me know if you have any feedback.
[ { "code": null, "e": 376, "s": 172, "text": "Pandas is highly efficient at data analysis and manipulation tasks. It provides numerous functions and methods to operate on tabular data seamlessly. However, all we see is plain numbers in tabular form." }, { "code": null, "e": 518, "s": 376, "text": "What if we integrate a few visual components into Pandas dataframes? I think it makes them look more appealing and informative in many cases." }, { "code": null, "e": 828, "s": 518, "text": "We can achieve this by using Style property of pandas dataframes. Style property returns a styler object which provides many options for formatting and displaying dataframes. A styler object is basically a dataframe with some style. In this article, we will go through 10 examples to master how styling works." }, { "code": null, "e": 934, "s": 828, "text": "We will use a customer churn dataset which is available on Kaggle and also create some sample dataframes." }, { "code": null, "e": 1041, "s": 934, "text": "churn = pd.read_csv(\"/home/soner/Downloads/datasets/BankChurners.csv\",usecols=np.arange(1,10))churn.head()" }, { "code": null, "e": 1215, "s": 1041, "text": "The screenshot above shows only a part of the dataframe. The dataset contains relevant information about the customers of bank and whether they churned (i.e. left the bank)." }, { "code": null, "e": 1377, "s": 1215, "text": "Consider a case where we want to see the average customer age for each category in the education level column. This task can be done using the group by function." }, { "code": null, "e": 1486, "s": 1377, "text": "We can add some styling on what group by function returns. For instance, we can highlight the minimum value." }, { "code": null, "e": 1617, "s": 1486, "text": "churn[['Education_Level','Months_on_book']].\\groupby(['Education_Level'], as_index=False).mean().\\style.highlight_min(color='red')" }, { "code": null, "e": 1715, "s": 1617, "text": "We apply the functions together with the style property of Pandas. The main syntax is as follows:" }, { "code": null, "e": 1738, "s": 1715, "text": "pandas.DataFrame.style" }, { "code": null, "e": 1907, "s": 1738, "text": "In this example, we have used one of the built-in styling functions which is highlight_min. There are other built-in functions as we will see in the following examples." }, { "code": null, "e": 2049, "s": 1907, "text": "We can apply multiple styling functions by chaining them together. For instance, it is possible to highlight both minimum and maximum values." }, { "code": null, "e": 2206, "s": 2049, "text": "churn[['Education_Level','Customer_Age']].\\groupby(['Education_Level'], as_index=False).mean().\\style.highlight_min(color='red').highlight_max(color='blue')" }, { "code": null, "e": 2425, "s": 2206, "text": "The functions in the first two examples highlight the maximum and minimum values of columns. We can also use to highlight values row-wise. It does not make sense for the previous cases because there is only one column." }, { "code": null, "e": 2514, "s": 2425, "text": "Let’s create a sample dataframe with multiple columns and apply these styling functions." }, { "code": null, "e": 2648, "s": 2514, "text": "df = pd.DataFrame(np.random.randint(100, size=(6,8)))df.style.highlight_min(color='red',axis=1)\\.highlight_max(color='green', axis=1)" }, { "code": null, "e": 2715, "s": 2648, "text": "The highlighted values are the maximum and minimum values of rows." 
}, { "code": null, "e": 2865, "s": 2715, "text": "Another built-in styling function is the bar function. It displays a colored bar in each cell whose length is proportional to the value in that cell." }, { "code": null, "e": 3002, "s": 2865, "text": "churn[['Attrition_Flag','Gender','Customer_Age']].\\groupby(['Attrition_Flag','Gender'], as_index=False).mean().\\style.bar(color='green')" }, { "code": null, "e": 3261, "s": 3002, "text": "We have calculated the average customer age for each group in attrition flag and gender columns. The bar function provides us a visual overview of the values. We can easily realize the minimum and maximum values as well as the order of the values in between." }, { "code": null, "e": 3504, "s": 3261, "text": "In this example, we will see an extended use of the bar function. Consider a case where we have both positive and negative values in columns. We can set 0 as reference point and use bars with different colors for negative and positive values." }, { "code": null, "e": 3617, "s": 3504, "text": "df = pd.DataFrame((np.random.randint(20, size=(6,3)) - 8) * 3.2)df.style.bar(align='mid', color=['red','green'])" }, { "code": null, "e": 3690, "s": 3617, "text": "It makes it easy to visually differentiate positive and negative values." }, { "code": null, "e": 3890, "s": 3690, "text": "Up to this point, we have used the built-in styling functions. It is possible to use our own functions. Once we create our own styler, we can apply it using the apply or applymap functions of Pandas." }, { "code": null, "e": 3998, "s": 3890, "text": "For instance, the function below highlights the values of a column that are higher than the column average." }, { "code": null, "e": 4115, "s": 3998, "text": "def above_mean(col): is_above = col > col.mean() return ['background-color: green' if v else '' for v in is_above]" }, { "code": null, "e": 4168, "s": 4115, "text": "We use the apply function to do column-wise styling." }, { "code": null, "e": 4305, "s": 4168, "text": "churn[['Marital_Status','Gender','Customer_Age','Dependent_count']].groupby(['Marital_Status','Gender']).mean().\\style.apply(above_mean)" }, { "code": null, "e": 4530, "s": 4305, "text": "We have calculated the average value for each category in the marital status and gender columns. The above_mean function is applied to the results to highlight the values that are higher than the average value of the column." }, { "code": null, "e": 4656, "s": 4530, "text": "It is possible to apply the styling only for some of the columns. The subset parameter is used to select the desired columns." }, { "code": null, "e": 4757, "s": 4656, "text": "For instance, the following code will only apply the above_mean function to the customer age column." }, { "code": null, "e": 4919, "s": 4757, "text": "churn[['Marital_Status','Gender','Customer_Age','Dependent_count']].groupby(['Marital_Status','Gender']).mean().\\style.apply(above_mean, subset=['Customer_Age'])" }, { "code": null, "e": 5016, "s": 4919, "text": "We pass the list of columns that we want to style to the subset parameter of the apply function." }, { "code": null, "e": 5140, "s": 5016, "text": "The apply function is used to do column-wise styling. If we want to do element-wise styling, the applymap function is used." }, { "code": null, "e": 5248, "s": 5140, "text": "For instance, the above_zero function below colors positive and negative values in a dataframe differently." 
}, { "code": null, "e": 5336, "s": 5248, "text": "def above_zero(val): color = 'green' if val > 0 else 'red' return 'color: %s' % color" }, { "code": null, "e": 5426, "s": 5336, "text": "We can use the applymap function to do element-wise styling with the above_zero function." }, { "code": null, "e": 5456, "s": 5426, "text": "df.style.applymap(above_zero)" }, { "code": null, "e": 5559, "s": 5456, "text": "The set_properties function of the Styler attribute allows for combining different styling operations." }, { "code": null, "e": 5642, "s": 5559, "text": "For instance, we can choose specific colors for the background and the characters." }, { "code": null, "e": 5796, "s": 5642, "text": "df = pd.DataFrame(np.random.randint(100, size=(6,8)))df.style.set_properties(**{'background-color': 'yellow', 'color':'black'})" }, { "code": null, "e": 5946, "s": 5796, "text": "The style functions we used here are pretty simple ones. However, we can also create more complex style functions that enhance the informative power." }, { "code": null, "e": 6061, "s": 5946, "text": "We may want to use the same styling for multiple times. Pandas offers a way to transfer styles between dataframes." }, { "code": null, "e": 6207, "s": 6061, "text": "A styler object is returned when we apply the style function. We can save this styler object in a variable and then use it to transfer the style." }, { "code": null, "e": 6400, "s": 6207, "text": "df = pd.DataFrame(np.random.randint(100, size=(6,8)) - 50)df.iloc[-1,-1] = np.nanstyle1 = df.style.highlight_min(color='red')\\ .highlight_max(color='blue')\\ .highlight_null(null_color='green')" }, { "code": null, "e": 6504, "s": 6400, "text": "The variable style1 is a styler object which is basically a dataframe with style. Here is how it looks:" }, { "code": null, "e": 6571, "s": 6504, "text": "Let’s create another styler object based on a different dataframe." }, { "code": null, "e": 6667, "s": 6571, "text": "df2 = pd.DataFrame(np.random.randint(50, size=(6,8)))df2.iloc[-1,-1] = np.nanstyle2 = df2.style" }, { "code": null, "e": 6714, "s": 6667, "text": "Style2 is a styler object that looks as below:" }, { "code": null, "e": 6787, "s": 6714, "text": "We can now transfer the style of the style1 object to the style2 object." }, { "code": null, "e": 6815, "s": 6787, "text": "style2.use(style1.export())" }, { "code": null, "e": 6856, "s": 6815, "text": "Here is how the style2 object looks now:" }, { "code": null, "e": 6947, "s": 6856, "text": "We have seen how to use the built-in style function as well as creating a custom-made one." }, { "code": null, "e": 7134, "s": 6947, "text": "We have also used the apply and applymap functions to actually apply the custom-made styles on the dataframes. We have also seen how to transfer styles from one styler object to another." }, { "code": null, "e": 7257, "s": 7134, "text": "There are other styling and formatting options available that can be accessed on the styling section of pandas user guide." } ]
Swing Examples - Using radiobuttons
The following example showcases how to use standard radio buttons in a Java Swing application.

We are using the following APIs.

JRadioButton − To create a standard Radio Button.

JRadioButton.setEnabled(false); − To disable a Radio Button.

JRadioButton.setMnemonic(KeyEvent.VK_C) − To set a keyboard shortcut for a Radio Button.

JRadioButton.setSelected(true) − To set a Radio Button as selected.

import java.awt.BorderLayout;
import java.awt.FlowLayout;
import java.awt.LayoutManager;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.awt.event.KeyEvent;

import javax.swing.JRadioButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.JPanel;

public class SwingTester {
   public static void main(String[] args) {
      createWindow();
   }

   private static void createWindow() {
      JFrame frame = new JFrame("Swing Tester");
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

      createUI(frame);
      frame.setSize(560, 200);
      frame.setLocationRelativeTo(null);
      frame.setVisible(true);
   }

   private static void createUI(final JFrame frame){
      JPanel panel = new JPanel();
      LayoutManager layout = new FlowLayout();
      panel.setLayout(layout);

      JRadioButton radioButton1 = new JRadioButton("Radio Button 1");
      JRadioButton radioButton2 = new JRadioButton("Radio Button 2");
      radioButton2.setEnabled(false);            // this button cannot be clicked
      JRadioButton radioButton3 = new JRadioButton("Radio Button 3");
      radioButton3.setMnemonic(KeyEvent.VK_C);   // Alt+C toggles this button

      // Show the clicked button's text and selection state in a dialog
      radioButton1.addActionListener(new ActionListener() {
         @Override
         public void actionPerformed(ActionEvent e) {
            Object source = e.getSource();
            JOptionPane.showMessageDialog(frame,
               ((JRadioButton)source).getText() + ": " + ((JRadioButton)source).isSelected());
         }
      });

      panel.add(radioButton1);
      panel.add(radioButton2);
      panel.add(radioButton3);

      frame.getContentPane().add(panel, BorderLayout.CENTER);
   }
}
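One caveat worth noting (an addition of mine, not part of the original example): the three radio buttons above are not placed in a ButtonGroup, so more than one of them can be selected at the same time. For the usual mutually exclusive radio-button behavior, Swing provides ButtonGroup − a minimal sketch that could be added inside createUI:

import javax.swing.ButtonGroup;

// Selecting one button in the group automatically deselects the others
ButtonGroup group = new ButtonGroup();
group.add(radioButton1);
group.add(radioButton2);
group.add(radioButton3);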
[ { "code": null, "e": 2129, "s": 2039, "text": "Following example showcase how to use standard radio buttons in a Java Swing application." }, { "code": null, "e": 2162, "s": 2129, "text": "We are using the following APIs." }, { "code": null, "e": 2212, "s": 2162, "text": "JRadioButton − To create a standard Radio Button." }, { "code": null, "e": 2262, "s": 2212, "text": "JRadioButton − To create a standard Radio Button." }, { "code": null, "e": 2323, "s": 2262, "text": "JRadioButton.setEnabled(false); − To disable a Radio Button." }, { "code": null, "e": 2384, "s": 2323, "text": "JRadioButton.setEnabled(false); − To disable a Radio Button." }, { "code": null, "e": 2469, "s": 2384, "text": "JRadioButton.setMnemonic(KeyEvent.VK_C) − To set a keyboard shortcut a Radio Button." }, { "code": null, "e": 2554, "s": 2469, "text": "JRadioButton.setMnemonic(KeyEvent.VK_C) − To set a keyboard shortcut a Radio Button." }, { "code": null, "e": 2619, "s": 2554, "text": "JRadioButton.setSelected(true) − To set a Radio Button selected." }, { "code": null, "e": 2684, "s": 2619, "text": "JRadioButton.setSelected(true) − To set a Radio Button selected." }, { "code": null, "e": 4377, "s": 2684, "text": "import java.awt.BorderLayout;\nimport java.awt.FlowLayout;\nimport java.awt.LayoutManager;\nimport java.awt.event.ActionEvent;\nimport java.awt.event.ActionListener;\nimport java.awt.event.KeyEvent;\n\nimport javax.swing.JRadioButton;\nimport javax.swing.JFrame;\nimport javax.swing.JOptionPane;\nimport javax.swing.JPanel;\n\npublic class SwingTester {\n public static void main(String[] args) {\n createWindow();\n }\n\n private static void createWindow() { \n JFrame frame = new JFrame(\"Swing Tester\");\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n\n createUI(frame);\n frame.setSize(560, 200); \n frame.setLocationRelativeTo(null); \n frame.setVisible(true);\n }\n\n private static void createUI(final JFrame frame){ \n JPanel panel = new JPanel();\n LayoutManager layout = new FlowLayout(); \n panel.setLayout(layout); \n\n JRadioButton radioButton1 = new JRadioButton(\"Radio Button 1\");\n JRadioButton radioButton2 = new JRadioButton(\"Radio Button 2\");\n radioButton2.setEnabled(false);\n JRadioButton radioButton3 = new JRadioButton(\"Radio Button 3\");\n radioButton3.setMnemonic(KeyEvent.VK_C);\n\n radioButton1.addActionListener(new ActionListener() {\n @Override\n public void actionPerformed(ActionEvent e) {\n Object source = e.getSource();\n JOptionPane.showMessageDialog(frame, \n ((JRadioButton)source).getText() + \": \" + ((JRadioButton)source).isSelected());\n }\n }); \n\n panel.add(radioButton1);\n panel.add(radioButton2);\n panel.add(radioButton3);\n\n frame.getContentPane().add(panel, BorderLayout.CENTER); \n }\n}" }, { "code": null, "e": 4384, "s": 4377, "text": " Print" }, { "code": null, "e": 4395, "s": 4384, "text": " Add Notes" } ]
Java program to print a Fibonacci series
The Fibonacci series generates the next number by adding the two previous numbers. The Fibonacci series starts from two numbers − F0 & F1. The initial values of F0 & F1 can be taken as 0, 1 or 1, 1 respectively.

F(n) = F(n-1) + F(n-2)

1. Take integer variables A, B, C
2. Set A = 1, B = 1
3. DISPLAY A, B
4. C = A + B
5. DISPLAY C
6. Set A = B, B = C
7. REPEAT steps 4 - 6, (n-2) times (steps 2-3 already produce the first two terms)

public class FibonacciSeries2 {
   public static void main(String[] args) {
      int a, b, c, i, n;
      n = 10;
      a = b = 1;                        // first two terms
      System.out.print(a + " " + b);
      for(i = 1; i <= n-2; i++) {       // remaining n-2 terms
         c = a + b;
         System.out.print(" ");
         System.out.print(c);
         a = b;
         b = c;
      }
   }
}

1 1 2 3 5 8 13 21 34 55
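A practical caveat (my addition, not from the original article): with int, the computation overflows once the terms pass the 46th Fibonacci number. A minimal variant using long pushes that limit out to the 92nd term; beyond that, java.math.BigInteger would be needed.

public class FibonacciSeriesLong {
   public static void main(String[] args) {
      int n = 50;
      long a = 1, b = 1;                // long holds Fibonacci terms up to F(92)
      System.out.print(a + " " + b);
      for(int i = 1; i <= n-2; i++) {
         long c = a + b;
         System.out.print(" " + c);
         a = b;
         b = c;
      }
   }
}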
[ { "code": null, "e": 1269, "s": 1062, "text": "Fibonacci Series generates subsequent number by adding two previous numbers. Fibonacci series starts from two numbers − F0 &amp; F1. The initial values of F0 &amp; F1 can be taken 0, 1 or 1, 1 respectively." }, { "code": null, "e": 1286, "s": 1269, "text": "Fn = Fn-1 + Fn-2" }, { "code": null, "e": 1435, "s": 1286, "text": "1. Take integer variable A, B, C\n2. Set A = 1, B = 1\n3. DISPLAY A, B\n4. C = A + B\n5. DISPLAY C\n6. Set A = B, B = C\n7. REPEAT from 4 - 6, for n times" }, { "code": null, "e": 1445, "s": 1435, "text": "Live Demo" }, { "code": null, "e": 1772, "s": 1445, "text": "public class FibonacciSeries2{\n public static void main(String args[]) {\n int a, b, c, i, n;\n n = 10;\n a = b = 1;\n System.out.print(a+\" \"+b);\n for(i = 1; i <= n-2; i++) {\n c = a + b;\n System.out.print(\" \");\n System.out.print(c);\n a = b;\n b = c;\n }\n }\n}" }, { "code": null, "e": 1796, "s": 1772, "text": "1 1 2 3 5 8 13 21 34 55" } ]
Increment and Decrement Operators in C#
The increment operator increases an integer value by one, i.e.

int a = 10;
a++;
++a;

The decrement operator decreases an integer value by one, i.e.

int a = 20;
a--;
--a;

The following is an example demonstrating the increment operator −

using System;

class Program {
   static void Main() {
      int a, b;

      a = 10;
      Console.WriteLine(++a);   // pre-increment: a becomes 11, then 11 is printed
      Console.WriteLine(a++);   // post-increment: 11 is printed, then a becomes 12

      b = a;
      Console.WriteLine(a);
      Console.WriteLine(b);
   }
}

11
11
12
12

The following is an example demonstrating the decrement operator −

int a, b;
a = 10;

// displaying decrement operator result
Console.WriteLine(--a);   // pre-decrement: a becomes 9, then 9 is printed
Console.WriteLine(a--);   // post-decrement: 9 is printed, then a becomes 8

b = a;
Console.WriteLine(a);
Console.WriteLine(b);
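The decrement snippet above is shown without a surrounding program or output. For completeness (my addition — the scaffold mirrors the increment example, and the snippet itself is unchanged), wrapping it the same way prints 9, 9, 8, 8:

using System;

class Program {
   static void Main() {
      int a, b;
      a = 10;

      // displaying decrement operator result
      Console.WriteLine(--a);
      Console.WriteLine(a--);

      b = a;
      Console.WriteLine(a);   // 8
      Console.WriteLine(b);   // 8
   }
}

9
9
8
8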
[ { "code": null, "e": 1117, "s": 1062, "text": "Increment operator increases integer value by one i.e." }, { "code": null, "e": 1139, "s": 1117, "text": "int a = 10;\na++;\n++a;" }, { "code": null, "e": 1194, "s": 1139, "text": "Decrement operator decreases integer value by one i.e." }, { "code": null, "e": 1216, "s": 1194, "text": "int a = 20;\na--;\n--a;" }, { "code": null, "e": 1279, "s": 1216, "text": "The following is an example demonstrating increment operator −" }, { "code": null, "e": 1290, "s": 1279, "text": " Live Demo" }, { "code": null, "e": 1513, "s": 1290, "text": "using System;\n\nclass Program {\n static void Main() {\n int a, b;\n\n a = 10;\n Console.WriteLine(++a);\n Console.WriteLine(a++);\n\n b = a;\n Console.WriteLine(a);\n Console.WriteLine(b);\n }\n}" }, { "code": null, "e": 1525, "s": 1513, "text": "11\n11\n12\n12" }, { "code": null, "e": 1588, "s": 1525, "text": "The following is an example demonstrating decrement operator −" }, { "code": null, "e": 1747, "s": 1588, "text": "int a, b;\na = 10;\n\n// displaying decrement operator result\nConsole.WriteLine(--a);\nConsole.WriteLine(a--);\n\nb = a;\nConsole.WriteLine(a);\nConsole.WriteLine(b);" } ]
Create a Radar Chart using Recharts in ReactJS - GeeksforGeeks
28 Jul, 2021

Introduction: Recharts is a library that is used for creating charts for React JS. This library is used for building Line charts, Bar charts, Pie charts, etc., with the help of React and D3 (Data-Driven Documents).

Approach: To create a Radar chart using Recharts, we create a dataset with label and polar coordinate details. Then we create a polar grid and both axes, i.e. the polarAngle axis and the polarRadius axis, using the data coordinates. Finally, the Radar element draws the Radar plot.

Creating React Application And Installing Module:

Step 1: Create a React application using the following command.

npx create-react-app foldername

Step 2: After creating your project folder i.e. foldername, move to it using the following command.

cd foldername

Step 3: After creating the ReactJS application, install the required modules using the following command.

npm install --save recharts

Project Structure: It will look like the following.

Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code.
[ { "code": null, "e": 25312, "s": 25284, "text": "\n28 Jul, 2021" }, { "code": null, "e": 25528, "s": 25312, "text": "Introduction: Rechart JS is a library that is used for creating charts for React JS. This library is used for building Line charts, Bar charts, Pie charts, etc, with the help of React and D3 (Data-Driven Documents)." }, { "code": null, "e": 25799, "s": 25528, "text": "Approach: To create Radar chart using Recharts, we create a dataset with label and polar coordinate details. Then we create a polar grid and both axes i.e. polarAngle axis and polarRadius axis using data coordinates. Finally using the Radar element draws the Radar plot." }, { "code": null, "e": 25849, "s": 25799, "text": "Creating React Application And Installing Module:" }, { "code": null, "e": 25945, "s": 25849, "text": "Step 1: Create a React application using the following command.npx create-react-app foldername " }, { "code": null, "e": 26009, "s": 25945, "text": "Step 1: Create a React application using the following command." }, { "code": null, "e": 26041, "s": 26009, "text": "npx create-react-app foldername" }, { "code": null, "e": 26156, "s": 26043, "text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command.cd foldername" }, { "code": null, "e": 26256, "s": 26156, "text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command." }, { "code": null, "e": 26270, "s": 26256, "text": "cd foldername" }, { "code": null, "e": 26403, "s": 26270, "text": "Step 3: After creating the ReactJS application, Install the required modules using the following command.npm install --save recharts" }, { "code": null, "e": 26509, "s": 26403, "text": "Step 3: After creating the ReactJS application, Install the required modules using the following command." }, { "code": null, "e": 26537, "s": 26509, "text": "npm install --save recharts" }, { "code": null, "e": 26589, "s": 26537, "text": "Project Structure: It will look like the following." }, { "code": null, "e": 26719, "s": 26589, "text": "Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code." 
}, { "code": null, "e": 26726, "s": 26719, "text": "App.js" }, { "code": "import React from 'react';import { Radar, RadarChart, PolarGrid, PolarAngleAxis, PolarRadiusAxis } from 'recharts'; const App = () => { // Sample data const data = [ { name: 'A', x: 21 }, { name: 'B', x: 22 }, { name: 'C', x: -32 }, { name: 'D', x: -14 }, { name: 'E', x: -51 }, { name: 'F', x: 16 }, { name: 'G', x: 7 }, { name: 'H', x: -8 }, { name: 'I', x: 9 }, ]; return ( <RadarChart height={500} width={500} outerRadius=\"80%\" data={data}> <PolarGrid /> <PolarAngleAxis dataKey=\"name\" /> <PolarRadiusAxis /> <Radar dataKey=\"x\" stroke=\"green\" fill=\"green\" fillOpacity={0.5} /> </RadarChart> );} export default App;", "e": 27519, "s": 26726, "text": null }, { "code": null, "e": 27636, "s": 27519, "text": "Step to Run the Application: Run the application using the following command from the root directory of the project:" }, { "code": null, "e": 27646, "s": 27636, "text": "npm start" }, { "code": null, "e": 27745, "s": 27646, "text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:" }, { "code": null, "e": 27752, "s": 27745, "text": "Output" }, { "code": null, "e": 27768, "s": 27752, "text": "React-Questions" }, { "code": null, "e": 27777, "s": 27768, "text": "Recharts" }, { "code": null, "e": 27788, "s": 27777, "text": "JavaScript" }, { "code": null, "e": 27796, "s": 27788, "text": "ReactJS" }, { "code": null, "e": 27813, "s": 27796, "text": "Web Technologies" }, { "code": null, "e": 27911, "s": 27813, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27972, "s": 27911, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 28013, "s": 27972, "text": "Difference Between PUT and PATCH Request" }, { "code": null, "e": 28053, "s": 28013, "text": "Remove elements from a JavaScript Array" }, { "code": null, "e": 28107, "s": 28053, "text": "How to get character array from string in JavaScript?" }, { "code": null, "e": 28155, "s": 28107, "text": "How to filter object array based on attributes?" }, { "code": null, "e": 28198, "s": 28155, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 28243, "s": 28198, "text": "How to redirect to another page in ReactJS ?" }, { "code": null, "e": 28308, "s": 28243, "text": "How to pass data from child component to its parent in ReactJS ?" }, { "code": null, "e": 28376, "s": 28308, "text": "How to pass data from one component to other component in ReactJS ?" } ]
Angular PrimeNG Messages Component - GeeksforGeeks
03 Oct, 2021

Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling, and it makes building responsive websites much easier. In this article, we will learn how to use the Messages Component in Angular PrimeNG. We will also learn about the properties and styling, along with the syntaxes that will be used in the code.

Messages component: It is used to display a message with a particular severity.

Properties of Messages Component:

value: It is an array of messages to display. It is of array data type, the default value is null.

closable: It defines if the message box can be closed by the click icon. It is of boolean data type, the default value is true.

style: It sets the inline style of the component. It is of string data type, the default value is null.

styleClass: It sets the style class of the component. It is of string data type, the default value is null.

enableService: It specifies whether displaying service messages is enabled. It is of boolean data type, the default value is true.

escape: It specifies whether displaying messages would be escaped or not. It is of boolean data type, the default value is true.

key: It is the Id to match the key of the message to enable scoping in service-based messaging. It is of string data type, the default value is null.

showTransitionOptions: It sets the transition options of the show animation. It is of string data type, the default value is 300ms ease-out.

hideTransitionOptions: It sets the transition options of the hide animation. It is of string data type, the default value is 200ms cubic-bezier(0.86, 0, 0.07, 1).

Styling for Messages Component:

p-messages: It is a container element.

p-message: It is a message element.

p-message-info: It is a message element when displaying info messages.

p-message-warn: It is a message element when displaying warning messages.

p-message-error: It is a message element when displaying error messages.

p-message-success: It is a message element when displaying success messages.

p-message-close: It is a close button.

p-message-close-icon: It is a close icon.

p-message-icon: It is a severity icon.

p-message-summary: It is a summary of a message.

p-message-detail: It is a detail of a message.

Properties of Message Component:

severity: It is used to specify the severity level of the message. It is of string data type, the default value is null.

text: It is used to set the text content. It is of string data type, the default value is null.

escape: It specifies whether displaying messages would be escaped or not. It is of boolean data type, the default value is true.

style: It is used to set the inline style of the component. It is of string data type, the default value is null.

styleClass: It is used to set the style class of the component. It is of string data type, the default value is null.

Styling for Message Component:

p-inline-message: It is a message element.

p-inline-message-info: It is a message element when displaying info messages.

p-inline-message-warn: It is a message element when displaying warning messages.

p-inline-message-error: It is a message element when displaying error messages.

p-inline-message-success: It is a message element when displaying success messages.

p-inline-message-icon: It is used to specify the severity icon.

p-inline-message-text: It is a text message.

Creating Angular application & module installation:

Step 1: Create an Angular application using the following command.

ng new appname
Step 2: After creating your project folder i.e. appname, move to it using the following command.

cd appname

Step 3: Install PrimeNG in your project directory.

npm install primeng --save
npm install primeicons --save

Project Structure: It will look like the following:

Example 1: This is the basic example that illustrates how to use the Messages component.
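The article's code for this example is missing from this copy of the page. The sketch below is my reconstruction of a minimal PrimeNG Messages usage — the message content and file layout are assumptions following the article's conventions, not the author's original code. MessagesModule from 'primeng/messages' would also need to be imported in app.module.ts.

app.component.html

<h2>GeeksforGeeks</h2>
<h5>PrimeNG Messages Component</h5>
<!-- Bind an array of Message objects to the component -->
<p-messages [(value)]="msgs"></p-messages>

app.component.ts

import { Component, OnInit } from '@angular/core';
import { Message } from 'primeng/api';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html'
})
export class AppComponent implements OnInit {
   msgs: Message[];

   ngOnInit() {
      // severity can be 'success', 'info', 'warn' or 'error'
      this.msgs = [
         { severity: 'success', summary: 'GeeksforGeeks',
           detail: 'This is a message' }
      ];
   }
}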
[ { "code": null, "e": 25031, "s": 25003, "text": "\n03 Oct, 2021" }, { "code": null, "e": 25422, "s": 25031, "text": "Angular PrimeNG is an open-source framework with a rich set of native Angular UI components that are used for great styling and this framework is used to make responsive websites with very much ease. In this article, we will know how to use the Messages Component in Angular PrimeNG. We will also learn about the properties, styling along with their syntaxes that will be used in the code. " }, { "code": null, "e": 25500, "s": 25422, "text": "Messages component: It is used to display a message with particular severity." }, { "code": null, "e": 25534, "s": 25500, "text": "Properties of Messages Component:" }, { "code": null, "e": 25633, "s": 25534, "text": "value: It is an array of messages to display. It is of array data type, the default value is null." }, { "code": null, "e": 25765, "s": 25633, "text": "closable: It defines if the message box can be closed by the click icon. It is of the boolean data type, the default value is true." }, { "code": null, "e": 25869, "s": 25765, "text": "style: It sets the inline style of the component. It is of string data type, the default value is null." }, { "code": null, "e": 25977, "s": 25869, "text": "styleClass: It sets the style class of the component. It is of string data type, the default value is null." }, { "code": null, "e": 26114, "s": 25977, "text": "enableService: It specifies whether displaying services messages are enabled. It is of the boolean data type, the default value is true." }, { "code": null, "e": 26243, "s": 26114, "text": "escape: It specifies whether displaying messages would be escaped or not. It is of boolean data type, the default value is true." }, { "code": null, "e": 26393, "s": 26243, "text": "key: It is the Id to match the key of the message to enable scoping in service-based messaging. It is of string data type, the default value is null." }, { "code": null, "e": 26539, "s": 26393, "text": "showTransitionOptions: It sets the transition options of the show animation. It is of the boolean data type, the default value is 300ms ease-out." }, { "code": null, "e": 26707, "s": 26539, "text": "hideTransitionOptions: It sets the transition options of the hide animation. It is of the boolean data type, the default value is 200ms cubic-bezier(0.86, 0, 0.07, 1)." }, { "code": null, "e": 26739, "s": 26707, "text": "Styling for Messages Component:" }, { "code": null, "e": 26778, "s": 26739, "text": "p-messages: It is a container element." }, { "code": null, "e": 26814, "s": 26778, "text": "p-message: It is a message element." }, { "code": null, "e": 26885, "s": 26814, "text": "p-message-info: It is a message element when displaying info messages." }, { "code": null, "e": 26959, "s": 26885, "text": "p-message-warn: It is a message element when displaying warning messages." }, { "code": null, "e": 27032, "s": 26959, "text": "p-message-error: It is a message element when displaying error messages." }, { "code": null, "e": 27109, "s": 27032, "text": "p-message-success: It is a message element when displaying success messages." }, { "code": null, "e": 27148, "s": 27109, "text": "p-message-close: It is a close button." }, { "code": null, "e": 27190, "s": 27148, "text": "p-message-close-icon: It is a close icon." }, { "code": null, "e": 27229, "s": 27190, "text": "p-message-icon: It is a severity icon." }, { "code": null, "e": 27278, "s": 27229, "text": "p-message-summary: It is a summary of a message." 
}, { "code": null, "e": 27325, "s": 27278, "text": "p-message-detail: It is a detail of a message." }, { "code": null, "e": 27360, "s": 27327, "text": "Properties of Message Component:" }, { "code": null, "e": 27483, "s": 27360, "text": "severity: It is used to specifies the severity level of the message. It is of string data type, the default value is null." }, { "code": null, "e": 27579, "s": 27483, "text": "text: It is used to set the text content. It is of string data type, the default value is null." }, { "code": null, "e": 27653, "s": 27579, "text": "escape: Whether displaying messages would be escaped or not. boolean true" }, { "code": null, "e": 27767, "s": 27653, "text": "style: It is used to set the inline style of the component. It is of string data type, the default value is null." }, { "code": null, "e": 27886, "s": 27767, "text": "styleClass: It is used to sets the style class of the component. It is of string data type, the default value is null." }, { "code": null, "e": 27917, "s": 27886, "text": "Styling for Message Component:" }, { "code": null, "e": 27960, "s": 27917, "text": "p-inline-message: It is a message element." }, { "code": null, "e": 28038, "s": 27960, "text": "p-inline-message-info: It is a message element when displaying info messages." }, { "code": null, "e": 28119, "s": 28038, "text": "p-inline-message-warn: It is a message element when displaying warning messages." }, { "code": null, "e": 28199, "s": 28119, "text": "p-inline-message-error: It is a message element when displaying error messages." }, { "code": null, "e": 28283, "s": 28199, "text": "p-inline-message-success: It is a message element when displaying success messages." }, { "code": null, "e": 28347, "s": 28283, "text": "p-inline-message-icon: It is used to specify the severity icon." }, { "code": null, "e": 28392, "s": 28347, "text": "p-inline-message-text: It is a text message." }, { "code": null, "e": 28444, "s": 28392, "text": "Creating Angular application & module installation:" }, { "code": null, "e": 28511, "s": 28444, "text": "Step 1: Create an Angular application using the following command." }, { "code": null, "e": 28526, "s": 28511, "text": "ng new appname" }, { "code": null, "e": 28623, "s": 28526, "text": "Step 2: After creating your project folder i.e. appname, move to it using the following command." }, { "code": null, "e": 28634, "s": 28623, "text": "cd appname" }, { "code": null, "e": 28683, "s": 28634, "text": "Step 3: Install PrimeNG in your given directory." }, { "code": null, "e": 28740, "s": 28683, "text": "npm install primeng --save\nnpm install primeicons --save" }, { "code": null, "e": 28792, "s": 28740, "text": "Project Structure: It will look like the following:" }, { "code": null, "e": 28881, "s": 28792, "text": "Example 1: This is the basic example that illustrates how to use the Messages component." 
}, { "code": null, "e": 28900, "s": 28881, "text": "app.component.html" }, { "code": "<h2>GeeksforGeeks</h2><h5>PrimeNG Messages Component</h5><p-messages [(value)]=\"gfg\" [enableService]=\"false\"></p-messages>", "e": 29025, "s": 28900, "text": null }, { "code": null, "e": 29042, "s": 29025, "text": "app.component.ts" }, { "code": "import { Component } from \"@angular/core\";import { Message } from \"primeng/api\"; @Component({ selector: \"my-app\", templateUrl: \"./app.component.html\",})export class AppComponent { gfg: Message[]; ngOnInit() { this.gfg = [ { detail: \"This is a message\" }, { detail: \"This is a message\" }, { detail: \"This is a message\" }, { detail: \"This is a message\" }, ]; }}", "e": 29436, "s": 29042, "text": null }, { "code": null, "e": 29450, "s": 29436, "text": "app.module.ts" }, { "code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { MessagesModule } from \"primeng/messages\";import { MessageModule } from \"primeng/message\"; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, MessagesModule, MessageModule, ], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}", "e": 29978, "s": 29450, "text": null }, { "code": null, "e": 29986, "s": 29978, "text": "Output:" }, { "code": null, "e": 30061, "s": 29986, "text": "Example 2: In this example, we have cleared the messages using the button." }, { "code": null, "e": 30080, "s": 30061, "text": "app.component.html" }, { "code": "<h2>GeeksforGeeks</h2><h5>PrimeNG Messages Component</h5><p-messages [(value)]=\"msgs\"></p-messages><button type=\"button\" (click)=\"hide()\">Hide</button>", "e": 30232, "s": 30080, "text": null }, { "code": null, "e": 30249, "s": 30232, "text": "app.component.ts" }, { "code": "import { Component } from \"@angular/core\";import { Message } from \"primeng/api\"; @Component({ selector: \"my-app\", templateUrl: \"./app.component.html\",})export class AppComponent { msgs = [ { severity: \"success\", summary: \"GeeksforGeeks\", detail: \"This is a message\", }, ]; hide() { this.msgs = []; } ngOnInit() {}}", "e": 30596, "s": 30249, "text": null }, { "code": null, "e": 30610, "s": 30596, "text": "app.module.ts" }, { "code": "import { NgModule } from \"@angular/core\";import { BrowserModule } from \"@angular/platform-browser\";import { BrowserAnimationsModule } from \"@angular/platform-browser/animations\"; import { AppComponent } from \"./app.component\";import { MessagesModule } from \"primeng/messages\";import { MessageModule } from \"primeng/message\"; @NgModule({ imports: [ BrowserModule, BrowserAnimationsModule, MessagesModule, MessageModule, ], declarations: [AppComponent], bootstrap: [AppComponent],})export class AppModule {}", "e": 31138, "s": 30610, "text": null }, { "code": null, "e": 31146, "s": 31138, "text": "Output:" }, { "code": null, "e": 31208, "s": 31146, "text": "Reference: https://primefaces.org/primeng/showcase/#/messages" }, { "code": null, "e": 31224, "s": 31208, "text": "Angular-PrimeNG" }, { "code": null, "e": 31234, "s": 31224, "text": "AngularJS" }, { "code": null, "e": 31251, "s": 31234, "text": "Web Technologies" }, { "code": null, "e": 31349, "s": 31251, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 31358, "s": 31349, "text": "Comments" }, { "code": null, "e": 31371, "s": 31358, "text": "Old Comments" }, { "code": null, "e": 31415, "s": 31371, "text": "Top 10 Angular Libraries For Web Developers" }, { "code": null, "e": 31479, "s": 31415, "text": "How to use <mat-chip-list> and <mat-chip> in Angular Material ?" }, { "code": null, "e": 31532, "s": 31479, "text": "How to make a Bootstrap Modal Popup in Angular 9/8 ?" }, { "code": null, "e": 31556, "s": 31532, "text": "Angular 10 (blur) Event" }, { "code": null, "e": 31605, "s": 31556, "text": "How to create module with Routing in Angular 9 ?" }, { "code": null, "e": 31647, "s": 31605, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 31680, "s": 31647, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 31723, "s": 31680, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 31785, "s": 31723, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
Using the Pandas “Resample” Function | by Jeremy Chow | Towards Data Science
This article is an introductory dive into the technical aspects of the pandas resample function for datetime manipulation. I hope it serves as a readable source of pseudo-documentation for those less inclined to dig through the pandas source code!

If you'd like to check out the code used to generate the examples and see more examples that weren't included in this article, follow the link here.

Let's get started!

Resampling is necessary when you're given a data set recorded in some time interval and you want to change the time interval to something else. For example, you could aggregate monthly data into yearly data, or you could upsample hourly data into minute-by-minute data.

The syntax of resample is fairly straightforward:

<DataFrame or Series>.resample(arguments).<aggregate function>

I'll dive into what the arguments are and how to use them, but first here's a basic, out-of-the-box demonstration. You will need a datetime type index or column to do the following:

# Given a Series object called data with some number value per date
>>>
╔═════════════════════╦══════╗
║ date                ║ val  ║
╠═════════════════════╬══════╣
║ 2000-01-01 00:00:00 ║   0  ║
║ 2000-01-01 00:01:00 ║   2  ║
║ 2000-01-01 00:02:00 ║   4  ║
║ 2000-01-01 00:03:00 ║   6  ║
║ 2000-01-01 00:04:00 ║   8  ║
║ 2000-01-01 00:05:00 ║  10  ║
║ 2000-01-01 00:06:00 ║  12  ║
║ 2000-01-01 00:07:00 ║  14  ║
║ 2000-01-01 00:08:00 ║  16  ║
╚═════════════════════╩══════╝
Freq: T, dtype: int64

# We can resample this to every other minute instead and aggregate by summing the intermediate rows:
data.resample('2min').sum()
>>>
╔═════════════════════╦══════╗
║ date                ║ val  ║
╠═════════════════════╬══════╣
║ 2000-01-01 00:00:00 ║   2  ║
║ 2000-01-01 00:02:00 ║  10  ║
║ 2000-01-01 00:04:00 ║  18  ║
║ 2000-01-01 00:06:00 ║  26  ║
║ 2000-01-01 00:08:00 ║  16  ║
╚═════════════════════╩══════╝
Freq: 2T, dtype: int64

Now that we have a basic understanding of what resampling is, let's go into the code!

In the order of the source code:

pd.DataFrame.resample(rule, how=None, axis=0, fill_method=None, closed=None, label=None, convention="start", kind=None, loffset=None, limit=None, base=0, on=None, level=None)

I've bolded the arguments that I will cover. The rest are either deprecated or used for period instead of datetime analysis, which I will not be going over in this article.

rule

string that contains rule aliases and/or numerics

This is the core of resampling. The string you input here determines the interval by which the data will be resampled, as denoted by the bold part in the following line:

data.resample('2min').sum()

As you can see, you can throw in floats or integers before the string to change the frequency. You can even throw multiple float/string pairs together for a very specific timeframe! For example:

'3min' or '3T' = 3 minutes
'SMS' = Two times a month
'1D3H.5min20S' = One Day, 3 hours, .5min(30sec) + 20sec

To save you the pain of trying to look up the resample strings, the common aliases are listed below:

B = business day
D = calendar day
W = weekly
M = month end
SM = semi-month end (15th and end of month)
MS = month start
SMS = semi-month start (1st and 15th)
Q = quarter end
QS = quarter start
A = year end
AS = year start
H = hourly
T or min = minutely
S = secondly
L or ms = milliseconds
U or us = microseconds
N = nanoseconds

Once you put in your rule, you need to decide how you will either reduce the old datapoints or fill in the new ones. This function goes right after the resample function call:

data.resample('2min').sum()

There are two kinds of resampling:

1. Downsampling — Resample to a wider time frame (from months to years)

This is fairly straightforward in that it can use all the groupby aggregate functions including mean(), min(), max(), sum() and so forth.

In downsampling, your total number of rows goes down.
2. Upsampling — Resample to a shorter time frame (from hours to minutes)

This will result in additional empty rows, so you have the following options to fill those with numeric values:

1. ffill() or pad()
2. bfill() or backfill()

'Forward filling' or 'padding' — Use the last known value to fill the new one.

'Backfilling' — Use the next known value to fill the new one.

You can also fill with NaNs using the asfreq() function with no arguments. This will result in new data points having NaNs in them, which you can use a fillna() function on later.

Here are some demonstrations of the forward and back fills:

Starting with months table:
╔════════════╦═════╗
║ date       ║ val ║
╠════════════╬═════╣
║ 2000-01-31 ║  0  ║
║ 2000-02-29 ║  2  ║
║ 2000-03-31 ║  4  ║
║ 2000-04-30 ║  6  ║
║ 2000-05-31 ║  8  ║
╚════════════╩═════╝

print('Forward Fill')
print(months.resample('SMS').ffill())
╔════════════╦═════╗
║ date       ║ val ║
╠════════════╬═════╣
║ 2000-01-15 ║ NaN ║
║ 2000-02-01 ║ 0.0 ║
║ 2000-02-15 ║ 0.0 ║
║ 2000-03-01 ║ 2.0 ║
║ 2000-03-15 ║ 2.0 ║
║ 2000-04-01 ║ 4.0 ║
║ 2000-04-15 ║ 4.0 ║
║ 2000-05-01 ║ 6.0 ║
║ 2000-05-15 ║ 6.0 ║
╚════════════╩═════╝

# Alternative to ffill is bfill (backward fill) that takes value of next existing months point
print('Backward Fill')
print(months.resample('SMS').bfill())
╔════════════╦═════╗
║ date       ║ val ║
╠════════════╬═════╣
║ 2000-01-15 ║  0  ║
║ 2000-02-01 ║  2  ║
║ 2000-02-15 ║  2  ║
║ 2000-03-01 ║  4  ║
║ 2000-03-15 ║  4  ║
║ 2000-04-01 ║  6  ║
║ 2000-04-15 ║  6  ║
║ 2000-05-01 ║  8  ║
║ 2000-05-15 ║  8  ║
╚════════════╩═════╝

closed

'left', 'right', or None

I'm going to include their documentation comment here, since it describes the basics fairly succinctly.

Which side of bin interval is closed. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.

The closed argument tells which side is included, ‘closed’ being the included side (implying the other side is not included) in the calculation for each time interval. You can see how it behaves here:

# Original Table 'minutes'
╔═════════════════════╦═════╗
║ date                ║ val ║
╠═════════════════════╬═════╣
║ 2000-01-01 00:00:00 ║  0  ║
║ 2000-01-01 00:01:00 ║  2  ║
║ 2000-01-01 00:02:00 ║  4  ║
║ 2000-01-01 00:03:00 ║  6  ║
║ 2000-01-01 00:04:00 ║  8  ║
║ 2000-01-01 00:05:00 ║ 10  ║
║ 2000-01-01 00:06:00 ║ 12  ║
║ 2000-01-01 00:07:00 ║ 14  ║
║ 2000-01-01 00:08:00 ║ 16  ║
╚═════════════════════╩═════╝

# The default is closed='left'
df = pd.DataFrame()
df['left'] = minutes.resample('2min').sum()
df['right'] = minutes.resample('2min', closed='right').sum()
df
>>>
╔═════════════════════╦══════╦═══════╗
║ index               ║ left ║ right ║
╠═════════════════════╬══════╬═══════╣
║ 2000-01-01 00:00:00 ║   2  ║   6.0 ║
║ 2000-01-01 00:02:00 ║  10  ║  14.0 ║
║ 2000-01-01 00:04:00 ║  18  ║  22.0 ║
║ 2000-01-01 00:06:00 ║  26  ║  30.0 ║
║ 2000-01-01 00:08:00 ║  16  ║   NaN ║
╚═════════════════════╩══════╩═══════╝

label

'left', 'right', or None

Once again, the documentation is pretty useful.

Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’.

This argument does not change the underlying calculation, it just relabels the output based on the desired edge once the aggregation is performed.
df = pd.DataFrame()
# Label default is left
df['left'] = minutes.resample('2min').sum()
df['right'] = minutes.resample('2min', label='right').sum()
df
>>>
╔═════════════════════╦══════╦═══════╗
║                     ║ left ║ right ║
╠═════════════════════╬══════╬═══════╣
║ 2000-01-01 00:00:00 ║   2  ║   NaN ║
║ 2000-01-01 00:02:00 ║  10  ║   2.0 ║
║ 2000-01-01 00:04:00 ║  18  ║  10.0 ║
║ 2000-01-01 00:06:00 ║  26  ║  18.0 ║
║ 2000-01-01 00:08:00 ║  16  ║  26.0 ║
╚═════════════════════╩══════╩═══════╝

loffset

string that matches rule notation.

This argument is also pretty self explanatory. Instead of changing any of the calculations, it just bumps the labels over by the specified amount of time.

df = pd.DataFrame()
df['no_offset'] = minutes.resample('2min').sum()
df['2min_offset'] = minutes.resample('2min', loffset='2T').sum()
df['4min_offset'] = minutes.resample('2min', loffset='4T').sum()
df
>>>
╔═════════════════════╦═══════════╦═════════════╦═════════════╗
║ index               ║ no_offset ║ 2min_offset ║ 4min_offset ║
╠═════════════════════╬═══════════╬═════════════╬═════════════╣
║ 2000-01-01 00:00:00 ║     2     ║     NaN     ║     NaN     ║
║ 2000-01-01 00:02:00 ║    10     ║     2.0     ║     NaN     ║
║ 2000-01-01 00:04:00 ║    18     ║    10.0     ║     2.0     ║
║ 2000-01-01 00:06:00 ║    26     ║    18.0     ║    10.0     ║
║ 2000-01-01 00:08:00 ║    16     ║    26.0     ║    18.0     ║
╚═════════════════════╩═══════════╩═════════════╩═════════════╝

base

numeric input that correlates with the unit used in the resampling rule

Shifts the base time to calculate from by some time amount. As the documentation describes it, this function moves the ‘origin’.

minutes.head().resample('30S').sum()
>>>
╔═════════════════════╦═════╗
║ date                ║ val ║
╠═════════════════════╬═════╣
║ 2000-01-01 00:00:00 ║  0  ║
║ 2000-01-01 00:00:30 ║  0  ║
║ 2000-01-01 00:01:00 ║  2  ║
║ 2000-01-01 00:01:30 ║  0  ║
║ 2000-01-01 00:02:00 ║  4  ║
║ 2000-01-01 00:02:30 ║  0  ║
║ 2000-01-01 00:03:00 ║  6  ║
║ 2000-01-01 00:03:30 ║  0  ║
║ 2000-01-01 00:04:00 ║  8  ║
╚═════════════════════╩═════╝

minutes.head().resample('30S', base=15).sum()
>>>
╔═════════════════════╦═════╗
║ date                ║ val ║
╠═════════════════════╬═════╣
║ 1999-12-31 23:59:45 ║  0  ║
║ 2000-01-01 00:00:15 ║  0  ║
║ 2000-01-01 00:00:45 ║  2  ║
║ 2000-01-01 00:01:15 ║  0  ║
║ 2000-01-01 00:01:45 ║  4  ║
║ 2000-01-01 00:02:15 ║  0  ║
║ 2000-01-01 00:02:45 ║  6  ║
║ 2000-01-01 00:03:15 ║  0  ║
║ 2000-01-01 00:03:45 ║  8  ║
╚═════════════════════╩═════╝
The table shifted by 15 seconds.

on, level, and axis

These arguments specify what column name or index to base your resampling on.

If your data has the date along the columns instead of down the rows, specify axis = 1

If your date column is not the index, specify that column name using:

on = 'date_column_name'

If you have a multi-level indexed dataframe, use level to specify which level holds the datetime index to resample on. A short sketch of on and level follows at the end of this article.

The rest of the arguments are deprecated or redundant due to functionality being captured using other methods. For example, how and fill_method remove the need for the aggregate function after the resample call, but how is for downsampling and fill_method is for upsampling. You can read more about these arguments in the source documentation if you're interested.

That's all for today! I hope I shed some light on how resample works and what each of its arguments do. Stay tuned for more tutorials and other data science related articles!
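As promised, here is a minimal sketch of the on and level arguments. It assumes a made-up frame whose column and index names (date, store) are purely illustrative:

import pandas as pd

# Hypothetical frame where the datetimes live in a regular column
# named 'date' rather than in the index
df = pd.DataFrame({
    'date': pd.date_range('2000-01-01', periods=6, freq='T'),
    'val': range(6),
})

# on= points resample at the 'date' column
print(df.resample('2min', on='date').sum())

# Hypothetical two-level index ('store', 'date'); level= picks the
# datetime level to resample on
df2 = df.assign(store='A').set_index(['store', 'date'])
print(df2.resample('2min', level='date').sum())

Both calls bucket the six minutely rows into three two-minute bins, exactly as the earlier examples did with a plain datetime index.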
[ { "code": null, "e": 424, "s": 172, "text": "This article is an introductory dive into the technical aspects of the pandas resample function for datetime manipulation. I hope it serves as a readable source of pseudo-documentation for those less inclined to digging through the pandas source code!" }, { "code": null, "e": 573, "s": 424, "text": "If you’d like to check out the code used to generate the examples and see more examples that weren’t included in this article, follow the link here." }, { "code": null, "e": 592, "s": 573, "text": "Let’s get started!" }, { "code": null, "e": 862, "s": 592, "text": "Resampling is necessary when you’re given a data set recorded in some time interval and you want to change the time interval to something else. For example, you could aggregate monthly data into yearly data, or you could upsample hourly data into minute-by-minute data." }, { "code": null, "e": 912, "s": 862, "text": "The syntax of resample is fairly straightforward:" }, { "code": null, "e": 975, "s": 912, "text": "<DataFrame or Series>.resample(arguments).<aggregate function>" }, { "code": null, "e": 1157, "s": 975, "text": "I’ll dive into what the arguments are and how to use them, but first here’s a basic, out-of-the-box demonstration. You will need a datetime type index or column to do the following:" }, { "code": null, "e": 2087, "s": 1157, "text": "# Given a Series object called data with some number value per date>>>╔═══════════════════════╦══════╗║ date ║ val ║╠═══════════════════════╬══════╣║ 2000-01-01 00:00:00 ║ 0 ║║ 2000-01-01 00:01:00 ║ 2 ║║ 2000-01-01 00:02:00 ║ 4 ║║ 2000-01-01 00:03:00 ║ 6 ║║ 2000-01-01 00:04:00 ║ 8 ║║ 2000-01-01 00:05:00 ║ 10 ║║ 2000-01-01 00:06:00 ║ 12 ║║ 2000-01-01 00:07:00 ║ 14 ║║ 2000-01-01 00:08:00 ║ 16 ║╚═══════════════════════╩══════╝Freq: T, dtype: int64# We can resample this to every other minute instead and aggregate by summing the intermediate rows:data.resample('2min').sum()>>>╔═════════════════════╦══════╗║ date ║ val ║╠═════════════════════╬══════╣║ 2000-01-01 00:00:00 ║ 2 ║║ 2000-01-01 00:02:00 ║ 10 ║║ 2000-01-01 00:04:00 ║ 18 ║║ 2000-01-01 00:06:00 ║ 26 ║║ 2000-01-01 00:08:00 ║ 16 ║╚═════════════════════╩══════╝Freq: 2T, dtype: int64" }, { "code": null, "e": 2173, "s": 2087, "text": "Now that we have a basic understanding of what resampling is, let’s go into the code!" }, { "code": null, "e": 2207, "s": 2173, "text": "In the order of the source code :" }, { "code": null, "e": 2382, "s": 2207, "text": "pd.DataFrame.resample(rule, how=None, axis=0, fill_method=None, closed=None, label=None, convention=”start”, kind=None, loffset=None, limit=None, base=0, on=None, level=None)" }, { "code": null, "e": 2555, "s": 2382, "text": "I’ve bolded the arguments that I will cover. The rest are either deprecated or used for period instead of datetime analysis, which I will not be going over in this article." }, { "code": null, "e": 2605, "s": 2555, "text": "string that contains rule aliases and/or numerics" }, { "code": null, "e": 2773, "s": 2605, "text": "This is the core of resampling. The string you input here determines by what interval the data will be resampled by, as denoted by the bold part in the following line:" }, { "code": null, "e": 2801, "s": 2773, "text": "data.resample('2min').sum()" }, { "code": null, "e": 2996, "s": 2801, "text": "As you can see, you can throw in floats or integers before the string to change the frequency. You can even throw multiple float/string pairs together for a very specific timeframe! 
For example:" }, { "code": null, "e": 3103, "s": 2996, "text": "'3min' or '3T' = 3 minutes'SMS' = Two times a month'1D3H.5min20S' = One Day, 3 hours, .5min(30sec) + 20sec" }, { "code": null, "e": 3196, "s": 3103, "text": "To save you the pain of trying to look up the resample strings, I’ve posted the table below:" }, { "code": null, "e": 3372, "s": 3196, "text": "Once you put in your rule, you need to decide how you will either reduce the old datapoints or fill in the new ones. This function goes right after the resample function call:" }, { "code": null, "e": 3400, "s": 3372, "text": "data.resample('2min').sum()" }, { "code": null, "e": 3432, "s": 3400, "text": "There are two kinds resampling:" }, { "code": null, "e": 3501, "s": 3432, "text": "Downsampling — Resample to a wider time frame (from months to years)" }, { "code": null, "e": 3570, "s": 3501, "text": "Downsampling — Resample to a wider time frame (from months to years)" }, { "code": null, "e": 3708, "s": 3570, "text": "This is fairly straightforward in that it can use all the groupby aggregate functions including mean(), min(), max(), sum() and so forth." }, { "code": null, "e": 3762, "s": 3708, "text": "In downsampling, your total number of rows goes down." }, { "code": null, "e": 3835, "s": 3762, "text": "2. Upsampling — Resample to a shorter time frame (from hours to minutes)" }, { "code": null, "e": 3947, "s": 3835, "text": "This will result in additional empty rows, so you have the following options to fill those with numeric values:" }, { "code": null, "e": 3991, "s": 3947, "text": "1. ffill() or pad()2. bfill() or backfill()" }, { "code": null, "e": 4070, "s": 3991, "text": "‘Forward filling’ or ‘padding’ — Use the last known value to fill the new one." }, { "code": null, "e": 4132, "s": 4070, "text": "‘Backfilling’ — Use the next known value to fill the new one." }, { "code": null, "e": 4312, "s": 4132, "text": "You can also fill with NaNs using the asfreq() function with no arguments. This will result in new data points having NaNs in them, which you can use a fillna() function on later." }, { "code": null, "e": 4372, "s": 4312, "text": "Here are some demonstrations of the forward and back fills:" }, { "code": null, "e": 5311, "s": 4372, "text": "Starting with months table:╔════════════╦═════╗║ date ║ val ║╠════════════╬═════╣║ 2000-01-31 ║ 0 ║║ 2000-02-29 ║ 2 ║║ 2000-03-31 ║ 4 ║║ 2000-04-30 ║ 6 ║║ 2000-05-31 ║ 8 ║╚════════════╩═════╝print('Forward Fill')print(months.resample('SMS').ffill())╔════════════╦═════╗║ date ║ val ║╠════════════╬═════╣║ 2000-01-15 ║ NaN ║║ 2000-02-01 ║ 0.0 ║║ 2000-02-15 ║ 0.0 ║║ 2000-03-01 ║ 2.0 ║║ 2000-03-15 ║ 2.0 ║║ 2000-04-01 ║ 4.0 ║║ 2000-04-15 ║ 4.0 ║║ 2000-05-01 ║ 6.0 ║║ 2000-05-15 ║ 6.0 ║╚════════════╩═════╝# Alternative to ffill is bfill (backward fill) that takes value of next existing months pointprint('Backward Fill')print(months.resample('SMS').bfill())╔════════════╦═════╗║ date ║ val ║╠════════════╬═════╣║ 2000-01-15 ║ 0 ║║ 2000-02-01 ║ 2 ║║ 2000-02-15 ║ 2 ║║ 2000-03-01 ║ 4 ║║ 2000-03-15 ║ 4 ║║ 2000-04-01 ║ 6 ║║ 2000-04-15 ║ 6 ║║ 2000-05-01 ║ 8 ║║ 2000-05-15 ║ 8 ║╚════════════╩═════╝" }, { "code": null, "e": 5336, "s": 5311, "text": "'left', 'right', or None" }, { "code": null, "e": 5440, "s": 5336, "text": "I’m going to include their documentation comment here, since it describes the basics fairly succinctly." }, { "code": null, "e": 5613, "s": 5440, "text": "Which side of bin interval is closed. 
The default is ‘left’for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’,‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’." }, { "code": null, "e": 5814, "s": 5613, "text": "The closed argument tells which side is included, ‘closed’ being the included side (implying the other side is not included) in the calculation for each time interval. You can see how it behaves here:" }, { "code": null, "e": 6714, "s": 5814, "text": "# Original Table 'minutes'╔═════════════════════╦═════╗║ date ║ val ║╠═════════════════════╬═════╣║ 2000-01-01 00:00:00 ║ 0 ║║ 2000-01-01 00:01:00 ║ 2 ║║ 2000-01-01 00:02:00 ║ 4 ║║ 2000-01-01 00:03:00 ║ 6 ║║ 2000-01-01 00:04:00 ║ 8 ║║ 2000-01-01 00:05:00 ║ 10 ║║ 2000-01-01 00:06:00 ║ 12 ║║ 2000-01-01 00:07:00 ║ 14 ║║ 2000-01-01 00:08:00 ║ 16 ║╚═════════════════════╩═════╝# The default is closed='left'df=pd.DataFrame()df['left'] = minutes.resample('2min').sum()df['right'] = minutes.resample('2min',closed='right').sum()df>>>╔═════════════════════╦══════╦═══════╗║ index ║ left ║ right ║╠═════════════════════╬══════╬═══════╣║ 2000-01-01 00:00:00 ║ 2 ║ 6.0 ║║ 2000-01-01 00:02:00 ║ 10 ║ 14.0 ║║ 2000-01-01 00:04:00 ║ 18 ║ 22.0 ║║ 2000-01-01 00:06:00 ║ 26 ║ 30.0 ║║ 2000-01-01 00:08:00 ║ 16 ║ NaN ║╚═════════════════════╩══════╩═══════╝" }, { "code": null, "e": 6739, "s": 6714, "text": "'left', 'right', or None" }, { "code": null, "e": 6787, "s": 6739, "text": "Once again, the documentation is pretty useful." }, { "code": null, "e": 6967, "s": 6787, "text": "Which bin edge label to label bucket with. The default is ‘left’ for all frequency offsets except for ‘M’, ‘A’, ‘Q’, ‘BM’, ‘BA’, ‘BQ’, and ‘W’ which all have a default of ‘right’." }, { "code": null, "e": 7114, "s": 6967, "text": "This argument does not change the underlying calculation, it just relabels the output based on the desired edge once the aggregation is performed." }, { "code": null, "e": 7603, "s": 7114, "text": "df=pd.DataFrame()# Label default is leftdf['left'] = minutes.resample('2min').sum()df['right'] = minutes.resample('2min',label='right').sum()df>>>╔═════════════════════╦══════╦═══════╗║ ║ left ║ right ║╠═════════════════════╬══════╬═══════╣║ 2000-01-01 00:00:00 ║ 2 ║ NaN ║║ 2000-01-01 00:02:00 ║ 10 ║ 2.0 ║║ 2000-01-01 00:04:00 ║ 18 ║ 10.0 ║║ 2000-01-01 00:06:00 ║ 26 ║ 18.0 ║║ 2000-01-01 00:08:00 ║ 16 ║ 26.0 ║╚═════════════════════╩══════╩═══════╝" }, { "code": null, "e": 7637, "s": 7603, "text": "stringthat matches rule notation." }, { "code": null, "e": 7792, "s": 7637, "text": "This argument is also pretty self explanatory. Instead of changing any of the calculations, it just bumps the labels over by the specified amount of time." 
}, { "code": null, "e": 8556, "s": 7792, "text": "df=pd.DataFrame()df['no_offset'] = minutes.resample('2min').sum()df['2min_offset'] = minutes.resample('2min',loffset='2T').sum()df['4min_offset'] = minutes.resample('2min',loffset='4T').sum()df>>>╔═════════════════════╦═══════════╦═════════════╦═════════════╗║ index ║ no_offset ║ 2min_offset ║ 4min_offset ║╠═════════════════════╬═══════════╬═════════════╬═════════════╣║ 2000-01-01 00:00:00 ║ 2 ║ NaN ║ NaN ║║ 2000-01-01 00:02:00 ║ 10 ║ 2.0 ║ NaN ║║ 2000-01-01 00:04:00 ║ 18 ║ 10.0 ║ 2.0 ║║ 2000-01-01 00:06:00 ║ 26 ║ 18.0 ║ 10.0 ║║ 2000-01-01 00:08:00 ║ 16 ║ 26.0 ║ 18.0 ║╚═════════════════════╩═══════════╩═════════════╩═════════════╝" }, { "code": null, "e": 8628, "s": 8556, "text": "numeric input that correlates with the unit used in the resampling rule" }, { "code": null, "e": 8757, "s": 8628, "text": "Shifts the base time to calculate from by some time amount. As the documentation describes it, this function moves the ‘origin’." }, { "code": null, "e": 9630, "s": 8757, "text": "minutes.head().resample('30S').sum()>>>╔═════════════════════╦═════╗║ date ║ val ║╠═════════════════════╬═════╣║ 2000-01-01 00:00:00 ║ 0 ║║ 2000-01-01 00:00:30 ║ 0 ║║ 2000-01-01 00:01:00 ║ 2 ║║ 2000-01-01 00:01:30 ║ 0 ║║ 2000-01-01 00:02:00 ║ 4 ║║ 2000-01-01 00:02:30 ║ 0 ║║ 2000-01-01 00:03:00 ║ 6 ║║ 2000-01-01 00:03:30 ║ 0 ║║ 2000-01-01 00:04:00 ║ 8 ║╚═════════════════════╩═════╝minutes.head().resample('30S',base=15).sum()>>>╔═════════════════════╦═════╗║ date ║ val ║╠═════════════════════╬═════╣║ 1999-12-31 23:59:45 ║ 0 ║║ 2000-01-01 00:00:15 ║ 0 ║║ 2000-01-01 00:00:45 ║ 2 ║║ 2000-01-01 00:01:15 ║ 0 ║║ 2000-01-01 00:01:45 ║ 4 ║║ 2000-01-01 00:02:15 ║ 0 ║║ 2000-01-01 00:02:45 ║ 6 ║║ 2000-01-01 00:03:15 ║ 0 ║║ 2000-01-01 00:03:45 ║ 8 ║╚═════════════════════╩═════╝The table shifted by 15 seconds." }, { "code": null, "e": 9708, "s": 9630, "text": "These arguments specify what column name or index to base your resampling on." }, { "code": null, "e": 9795, "s": 9708, "text": "If your data has the date along the columns instead of down the rows, specify axis = 1" }, { "code": null, "e": 9865, "s": 9795, "text": "If your date column is not the index, specify that column name using:" }, { "code": null, "e": 9889, "s": 9865, "text": "on = 'date_column_name'" }, { "code": null, "e": 10009, "s": 9889, "text": "If you have a multi-level indexed dataframe, use level to specify what level the correct datetime index to resample is." }, { "code": null, "e": 10374, "s": 10009, "text": "The rest of the arguments are deprecated or redundant due to functionality being captured using other methods. For example, how and fill_method remove the need for the aggregate function after the resample call, but how is for downsampling and fill_method is for upsampling. You can read more about these arguments in the source documentation if you’re interested." } ]
How to Change Typeface of TextView in Android? - GeeksforGeeks
A typeface is a particular design for alphabets that separates it from other typefaces in terms of style, size, and weight variations. In general, there are a lot of local typefaces available on your device or software for use. However, many more typefaces are available on the Internet that can be downloaded and used for respective works.

Similarly, such typefaces can be introduced for displaying the text inside the TextView. So in this article, we will show you how you could use a downloaded typeface and apply it to the text inside the TextView of your Android application.

Step 1: Create a New Project in Android Studio

To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. We demonstrated the application in Kotlin, so make sure you select Kotlin as the primary language while creating a New Project.

Step 2: Working with the activity_main.xml file

Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file. Add a TextView in the layout file.

XML

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!--TextView to display the text-->
    <TextView
        android:id="@+id/text_view"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_centerInParent="true"
        android:text="Hello Geek!"
        android:textSize="40sp"/>

</RelativeLayout>

Step 3: Download and store the desired font in the assets folder

We download a font from here. However, you can download a font of your choice. Now, just copy the downloaded font file and paste it into the assets folder. In case you have no clue about the assets folder, or if it is missing from your folders, create a new assets folder by following this article on Assets Folder in Android Studio.

Step 4: Working with the MainActivity.kt file

Go to the MainActivity.kt file and refer to the following code. Below is the code for the MainActivity.kt file. Comments are added inside the code to understand the code in more detail.

Kotlin

import android.graphics.Typeface
import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.widget.TextView

class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val mTextView = findViewById<TextView>(R.id.text_view)

        // Creating a typeface from the font file in the assets folder
        val font = Typeface.createFromAsset(assets, "JellyBomb.ttf")

        // Setting the TextView typeface
        mTextView.typeface = font
    }
}

Output:

You can see that the typeface is applied to the text in the TextView.
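On newer projects, fonts are often bundled as font resources instead of raw assets. Below is a hedged sketch of that alternative inside the same onCreate(); it assumes the font file has been copied to res/font/jelly_bomb.ttf (the resource name is hypothetical), and it also derives a bold variant with Typeface.create().

Kotlin

import android.graphics.Typeface
import androidx.core.content.res.ResourcesCompat

// Load the font from res/font instead of assets
val font = ResourcesCompat.getFont(this, R.font.jelly_bomb)

// Derive a bold variant of the same family and apply it
mTextView.typeface = Typeface.create(font, Typeface.BOLD)

ResourcesCompat.getFont() comes from androidx.core and backports font resources to older API levels, which is why it is commonly preferred over the platform Resources.getFont().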
[ { "code": null, "e": 25116, "s": 25088, "text": "\n27 Sep, 2021" }, { "code": null, "e": 25457, "s": 25116, "text": "A typeface is a particular design for alphabets that separates it from other typefaces in terms of style, size, and weight variations. In general, there are a lot of local typefaces available on your device or software for use. However, many more typefaces are available on the Internet that can be downloaded and used for respective works." }, { "code": null, "e": 25695, "s": 25457, "text": "Similarly, such typefaces can be introduced for displaying the text inside the TextView. So in this article, we will show you how you could use a download typeface and apply it to the text inside the TextView of your Android application." }, { "code": null, "e": 25742, "s": 25695, "text": "Step 1: Create a New Project in Android Studio" }, { "code": null, "e": 25981, "s": 25742, "text": "To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. We demonstrated the application in Kotlin, so make sure you select Kotlin as the primary language while creating a New Project." }, { "code": null, "e": 26029, "s": 25981, "text": "Step 2: Working with the activity_main.xml file" }, { "code": null, "e": 26206, "s": 26029, "text": "Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file. Add a TextView in the layout file." }, { "code": null, "e": 26210, "s": 26206, "text": "XML" }, { "code": "<?xml version=\"1.0\" encoding=\"utf-8\"?><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:app=\"http://schemas.android.com/apk/res-auto\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" tools:context=\".MainActivity\"> <!--TextView to display the text--> <TextView android:id=\"@+id/text_view\" android:layout_width=\"wrap_content\" android:layout_height=\"wrap_content\" android:layout_centerInParent=\"true\" android:text=\"Hello Geek!\" android:textSize=\"40sp\"/> </RelativeLayout>", "e": 26851, "s": 26210, "text": null }, { "code": null, "e": 26916, "s": 26851, "text": "Step 3: Download and store the desired font in the assets folder" }, { "code": null, "e": 27251, "s": 26916, "text": "We download a font from here. However, you can download a font of your choice. Now, just copy the downloaded font file and paste it into the assets folder. In case, you have no clue about the assets folder, or if it is missing from your folders, create a new assets folder by following this article on Assets Folder in Android Studio." }, { "code": null, "e": 27297, "s": 27251, "text": "Step 4: Working with the MainActivity.kt file" }, { "code": null, "e": 27483, "s": 27297, "text": "Go to the MainActivity.kt file and refer to the following code. Below is the code for the MainActivity.kt file. Comments are added inside the code to understand the code in more detail." }, { "code": null, "e": 27490, "s": 27483, "text": "Kotlin" }, { "code": "import android.graphics.Typefaceimport androidx.appcompat.app.AppCompatActivityimport android.os.Bundleimport android.widget.TextView class MainActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) 
{ super.onCreate(savedInstanceState) setContentView(R.layout.activity_main) val mTextView = findViewById<TextView>(R.id.text_view) // Creating a typeface val font = Typeface.createFromAsset(assets, \"JellyBomb.ttf\") // Setting the TextView typeface mTextView.typeface = font }}", "e": 28065, "s": 27490, "text": null }, { "code": null, "e": 28073, "s": 28065, "text": "Output:" }, { "code": null, "e": 28143, "s": 28073, "text": "You can see that the typeface is applied to the text in the TextView." }, { "code": null, "e": 28151, "s": 28143, "text": "Android" }, { "code": null, "e": 28158, "s": 28151, "text": "Kotlin" }, { "code": null, "e": 28166, "s": 28158, "text": "Android" }, { "code": null, "e": 28264, "s": 28166, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 28303, "s": 28264, "text": "Flutter - Custom Bottom Navigation Bar" }, { "code": null, "e": 28353, "s": 28303, "text": "How to Read Data from SQLite Database in Android?" }, { "code": null, "e": 28395, "s": 28353, "text": "Retrofit with Kotlin Coroutine in Android" }, { "code": null, "e": 28433, "s": 28395, "text": "Android Listview in Java with Example" }, { "code": null, "e": 28506, "s": 28433, "text": "How to Change the Background Color After Clicking the Button in Android?" }, { "code": null, "e": 28525, "s": 28506, "text": "Android UI Layouts" }, { "code": null, "e": 28538, "s": 28525, "text": "Kotlin Array" }, { "code": null, "e": 28580, "s": 28538, "text": "Retrofit with Kotlin Coroutine in Android" }, { "code": null, "e": 28607, "s": 28580, "text": "Kotlin Setters and Getters" } ]
Method Overriding with Access Modifier - GeeksforGeeks
Prerequisites: Method Overriding in Java and Access Modifiers in Java

Method Overriding
In any object-oriented programming language, overriding is a feature that allows a subclass or child class to provide a specific implementation of a method that is already provided by its super-class or parent class. When a method in a subclass has the same name, same parameters or signature and same return type (or sub-type) as a method in its super-class, then the method in the subclass is said to override the method in the super-class. Method overriding is one of the ways by which Java achieves run-time polymorphism. The version of a method that is executed will be determined by the object that is used to invoke it. If an object of a parent class is used to invoke the method, then the version in the parent class will be executed, but if an object of the subclass is used to invoke the method, then the version in the child class will be executed. In other words, it is the type of the object being referred to (not the type of the reference variable) that determines which version of an overridden method will be executed.

Access Modifiers
As the name suggests, access modifiers in Java help to restrict the scope of a class, constructor, variable, method or data member. There are four types of access modifiers available in Java:

Default – No keyword required
Private
Protected
Public

Method Overriding with Access Modifiers
There is only one rule while doing method overriding with access modifiers, i.e.

If you are overriding any method, the overridden method (i.e. the one declared in the subclass) must not be more restrictive.

Access modifier restrictions in decreasing order:

private
default
protected
public

i.e. private is more restricted than default, and default is more restricted than protected, and so on.

Example 1:

class A {
    protected void method()
    {
        System.out.println("Hello");
    }
}

public class B extends A {
    // Compile Time Error
    void method()
    {
        System.out.println("Hello");
    }

    public static void main(String args[])
    {
        B b = new B();
        b.method();
    }
}

Compile Time Error

Note: In the above example, superclass A defines a method whose access modifier is protected. While overriding it in subclass B we didn't define any access modifier, so the default access modifier is used. By the rule, default is more restricted than protected, so this program gives a compile-time error. Instead of default, we could have used public, which is less restricted than protected.

Example 2:

class A {
    protected void method()
    {
        System.out.println("Hello");
    }
}

public class B extends A {
    public void method()
    {
        System.out.println("Hello");
    }

    public static void main(String args[])
    {
        B b = new B();
        b.method();
    }
}

Hello

Note: In the above example, superclass A defines a method whose access modifier is protected. While overriding it in subclass B we define the access modifier as public. Because the public access modifier is less restricted than protected, this program compiles successfully.
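To round out the rule, here is a hedged third example (not from the original article) going in the opposite, allowed direction: widening a default-access method to protected. It assumes both classes live in the same package so the default method is visible to the subclass.

class A {
    // default (package-private) access
    void method()
    {
        System.out.println("Hello");
    }
}

public class B extends A {
    // Widening default -> protected is allowed,
    // since protected is less restrictive than default
    @Override
    protected void method()
    {
        System.out.println("Hello");
    }

    public static void main(String args[])
    {
        B b = new B();
        b.method();
    }
}

This compiles and prints Hello, confirming that overriding may widen visibility but never narrow it.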
[ { "code": null, "e": 23974, "s": 23946, "text": "\n14 Dec, 2018" }, { "code": null, "e": 24043, "s": 23974, "text": "Prerequisites: Method Overriding in java and Access Modifier in Java" }, { "code": null, "e": 25093, "s": 24043, "text": "Method OverridingIn any object-oriented programming language, Overriding is a feature that allows a subclass or child class to provide a specific implementation of a method that is already provided by its super-class or parent class. When a method in a subclass has the same name, same parameters or signature and same return type(or sub-type) as a method in its super-class, then the method in the subclass is said to override the method in the super-class.Method overriding is one of the ways by which java achieve Run Time Polymorphism. The version of a method that is executed will be determined by the object that is used to invoke it. If an object of a parent class is used to invoke the method, then the version in the parent class will be executed, but if an object of the subclass is used to invoke the method, then the version in the child class will be executed. In other words, it is the type of the object being referred to (not the type of the reference variable) that determines which version of an overridden method will be executed." }, { "code": null, "e": 25301, "s": 25093, "text": "Access ModifiersAs the name suggests, access modifiers in Java help to restrict the scope of a class, constructor, variable, method or data member. There are four types of access modifiers available in java:" }, { "code": null, "e": 25331, "s": 25301, "text": "Default – No keyword required" }, { "code": null, "e": 25339, "s": 25331, "text": "Private" }, { "code": null, "e": 25349, "s": 25339, "text": "Protected" }, { "code": null, "e": 25356, "s": 25349, "text": "Public" }, { "code": null, "e": 25475, "s": 25356, "text": "Method Overriding with Access ModifiersTheir is Only one rule while doing Method overriding with Access modifiers i.e." }, { "code": null, "e": 25585, "s": 25475, "text": "If you are overriding any method, overridden method (i.e. declared in subclass) must not be more restrictive." }, { "code": null, "e": 25635, "s": 25585, "text": "Access modifier restrictions in decreasing order:" }, { "code": null, "e": 25643, "s": 25635, "text": "private" }, { "code": null, "e": 25651, "s": 25643, "text": "default" }, { "code": null, "e": 25661, "s": 25651, "text": "protected" }, { "code": null, "e": 25668, "s": 25661, "text": "public" }, { "code": null, "e": 25770, "s": 25668, "text": "i.e. private is more restricted then default and default is more restricted than protected and so on." }, { "code": null, "e": 25781, "s": 25770, "text": "Example 1:" }, { "code": "class A { protected void method() { System.out.println(\"Hello\"); }} public class B extends A { // Compile Time Error void method() { System.out.println(\"Hello\"); } public static void main(String args[]) { B b = new B(); b.method(); }}", "e": 26079, "s": 25781, "text": null }, { "code": null, "e": 26099, "s": 26079, "text": "Compile Time Error\n" }, { "code": null, "e": 26512, "s": 26099, "text": "Note: In the above Example Superclass class A defined a method whose access modifier is protected. While doing method overriding in SubClass Class B we didn’t define any access modifier so Default access modifier will be used. By the rule, Default is more restricted then Protected so this program will give compile time error. Instead of default, we could’ve used public which is less restricted then protected." 
}, { "code": null, "e": 26523, "s": 26512, "text": "Example 2:" }, { "code": "class A { protected void method() { System.out.println(\"Hello\"); }} public class B extends A { public void method() { System.out.println(\"Hello\"); } public static void main(String args[]) { B b = new B(); b.method(); }}", "e": 26801, "s": 26523, "text": null }, { "code": null, "e": 26808, "s": 26801, "text": "Hello\n" }, { "code": null, "e": 27101, "s": 26808, "text": "Note: In the above Example Superclass class A defined a method whose access modifier is protected. While doing method overriding in SubClass Class B we define access modifier as Public. Because Public access modifier is less restricted than Protected hence this program compiles successfully." }, { "code": null, "e": 27118, "s": 27101, "text": "java-inheritance" }, { "code": null, "e": 27134, "s": 27118, "text": "java-overriding" }, { "code": null, "e": 27139, "s": 27134, "text": "Java" }, { "code": null, "e": 27144, "s": 27139, "text": "Java" }, { "code": null, "e": 27242, "s": 27144, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27251, "s": 27242, "text": "Comments" }, { "code": null, "e": 27264, "s": 27251, "text": "Old Comments" }, { "code": null, "e": 27310, "s": 27264, "text": "Different ways of Reading a text file in Java" }, { "code": null, "e": 27331, "s": 27310, "text": "Constructors in Java" }, { "code": null, "e": 27346, "s": 27331, "text": "Stream In Java" }, { "code": null, "e": 27365, "s": 27346, "text": "Exceptions in Java" }, { "code": null, "e": 27382, "s": 27365, "text": "Generics in Java" }, { "code": null, "e": 27412, "s": 27382, "text": "Functional Interfaces in Java" }, { "code": null, "e": 27455, "s": 27412, "text": "Comparator Interface in Java with Examples" }, { "code": null, "e": 27471, "s": 27455, "text": "Strings in Java" }, { "code": null, "e": 27500, "s": 27471, "text": "HashMap get() Method in Java" } ]
EJB - Stateful Bean
A stateful session bean is a type of enterprise bean which preserves the conversational state with a client. A stateful session bean, as per its name, keeps the associated client state in its instance variables. The EJB Container creates a separate stateful session bean to process each client request. As soon as the request scope is over, the stateful session bean is destroyed.

Following are the steps required to create a stateful EJB −

Create a remote/local interface exposing the business methods. This interface will be used by the EJB client application.

Use the @Local annotation if the EJB client is in the same environment where the EJB session bean needs to be deployed.

Use the @Remote annotation if the EJB client is in a different environment where the EJB session bean needs to be deployed.

Create a stateful session bean implementing the above interface. Use the @Stateful annotation to mark it as a stateful bean. The EJB Container automatically creates the relevant configurations or interfaces required by reading this annotation during deployment.

import javax.ejb.Remote;

@Remote
public interface LibraryStatefulSessionBeanRemote {
   //add business method declarations
}

@Stateful
public class LibraryStatefulSessionBean implements LibraryStatefulSessionBeanRemote {
   //implement business method
}

Let us create a test EJB application to test stateful EJB.

Create a project with a name EjbComponent under a package com.tutorialspoint.stateful as explained in the EJB − Create Application chapter. You can also use the project created in the EJB − Create Application chapter as such for this chapter to understand stateful EJB concepts.

Create LibraryStatefulSessionBean.java and LibraryStatefulSessionBeanRemote.java as explained in the EJB − Create Application chapter. Keep the rest of the files unchanged.

Clean and build the application to make sure the business logic is working as per the requirements.

Finally, deploy the application in the form of a jar file on JBoss Application Server. The JBoss Application Server will get started automatically if it is not started yet.

Now create the EJB client, a console based application, in the same way as explained in the EJB − Create Application chapter under the topic Create Client to access EJB.

LibraryStatefulSessionBeanRemote.java

package com.tutorialspoint.stateful;

import java.util.List;
import javax.ejb.Remote;

@Remote
public interface LibraryStatefulSessionBeanRemote {
   void addBook(String bookName);
   List<String> getBooks();
}

LibraryStatefulSessionBean.java

package com.tutorialspoint.stateful;

import java.util.ArrayList;
import java.util.List;
import javax.ejb.Stateful;

@Stateful
public class LibraryStatefulSessionBean implements LibraryStatefulSessionBeanRemote {
   List<String> bookShelf;

   public LibraryStatefulSessionBean() {
      bookShelf = new ArrayList<String>();
   }

   public void addBook(String bookName) {
      bookShelf.add(bookName);
   }

   public List<String> getBooks() {
      return bookShelf;
   }
}
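The container manages the lifecycle of each stateful instance. The following is a hedged sketch (not part of the original example) extending the bean above with the standard lifecycle annotations; @PostConstruct, @PreDestroy and @Remove are real javax annotations, while the method names and log messages are illustrative.

package com.tutorialspoint.stateful;

import java.util.ArrayList;
import java.util.List;
import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Remove;
import javax.ejb.Stateful;

@Stateful
public class LibraryStatefulSessionBean implements LibraryStatefulSessionBeanRemote {
   List<String> bookShelf = new ArrayList<String>();

   public void addBook(String bookName) {
      bookShelf.add(bookName);
   }

   public List<String> getBooks() {
      return bookShelf;
   }

   @PostConstruct
   public void init() {
      // Runs once the container has created this instance for a client
      System.out.println("Stateful bean created");
   }

   @Remove
   public void checkout() {
      // Invoking this business method tells the container to discard
      // the instance, ending the client's conversation
      System.out.println("Conversation ended by client");
   }

   @PreDestroy
   public void cleanup() {
      // Runs just before the container destroys the instance
      System.out.println("Stateful bean destroyed");
   }
}

To make checkout() callable from the client, it would also be declared on the remote interface.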
As soon as you deploy the EjbComponent project on JBOSS, notice the jboss log.

JBoss has automatically created a JNDI entry for our session bean − LibraryStatefulSessionBean/remote.

We will be using this lookup string to get a remote business object of type − com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote

...
16:30:01,401 INFO  [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
   LibraryStatefulSessionBean/remote - EJB3.x Default Remote Business Interface
   LibraryStatefulSessionBean/remote-com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote - EJB3.x Remote Business Interface
16:30:02,723 INFO  [SessionSpecContainer] Starting jboss.j2ee:jar=EjbComponent.jar,name=LibraryStatefulSessionBean,service=EJB3
16:30:02,723 INFO  [EJBContainer] STARTED EJB: com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote ejbName: LibraryStatefulSessionBean
16:30:02,731 INFO  [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:
   LibraryStatefulSessionBean/remote - EJB3.x Default Remote Business Interface
   LibraryStatefulSessionBean/remote-com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote - EJB3.x Remote Business Interface
...

jndi.properties

java.naming.factory.initial = org.jnp.interfaces.NamingContextFactory
java.naming.factory.url.pkgs = org.jboss.naming:org.jnp.interfaces
java.naming.provider.url = localhost

These properties are used to initialize the InitialContext object of the java naming service.

The InitialContext object will be used to lookup the stateful session bean.

EJBTester.java

package com.tutorialspoint.test;

import com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EJBTester {

   BufferedReader brConsoleReader = null;
   Properties props;
   InitialContext ctx;

   {
      props = new Properties();
      try {
         props.load(new FileInputStream("jndi.properties"));
      } catch (IOException ex) {
         ex.printStackTrace();
      }
      try {
         ctx = new InitialContext(props);
      } catch (NamingException ex) {
         ex.printStackTrace();
      }
      brConsoleReader = new BufferedReader(new InputStreamReader(System.in));
   }

   public static void main(String[] args) {
      EJBTester ejbTester = new EJBTester();
      ejbTester.testStatefulEjb();
   }

   private void showGUI() {
      System.out.println("**********************");
      System.out.println("Welcome to Book Store");
      System.out.println("**********************");
      System.out.print("Options \n1. Add Book\n2. Exit \nEnter Choice: ");
   }
package com.tutorialspoint.test;

import com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.List;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class EJBTester {

   BufferedReader brConsoleReader = null;
   Properties props;
   InitialContext ctx;
   {
      props = new Properties();
      try {
         props.load(new FileInputStream("jndi.properties"));
      } catch (IOException ex) {
         ex.printStackTrace();
      }
      try {
         ctx = new InitialContext(props);
      } catch (NamingException ex) {
         ex.printStackTrace();
      }
      brConsoleReader = new BufferedReader(new InputStreamReader(System.in));
   }

   public static void main(String[] args) {
      EJBTester ejbTester = new EJBTester();
      ejbTester.testStatefulEjb();
   }

   private void showGUI() {
      System.out.println("**********************");
      System.out.println("Welcome to Book Store");
      System.out.println("**********************");
      System.out.print("Options \n1. Add Book\n2. Exit \nEnter Choice: ");
   }

   private void testStatefulEjb() {
      try {
         int choice = 1;

         LibraryStatefulSessionBeanRemote libraryBean =
            (LibraryStatefulSessionBeanRemote)ctx.lookup("LibraryStatefulSessionBean/remote");

         while (choice != 2) {
            String bookName;
            showGUI();
            String strChoice = brConsoleReader.readLine();
            choice = Integer.parseInt(strChoice);
            if (choice == 1) {
               System.out.print("Enter book name: ");
               bookName = brConsoleReader.readLine();
               libraryBean.addBook(bookName);
            } else if (choice == 2) {
               break;
            }
         }

         List<String> booksList = libraryBean.getBooks();
         System.out.println("Book(s) entered so far: " + booksList.size());
         for (int i = 0; i < booksList.size(); ++i) {
            System.out.println((i+1) + ". " + booksList.get(i));
         }

         LibraryStatefulSessionBeanRemote libraryBean1 =
            (LibraryStatefulSessionBeanRemote)ctx.lookup("LibraryStatefulSessionBean/remote");
         List<String> booksList1 = libraryBean1.getBooks();
         System.out.println("***Using second lookup to get library stateful object***");
         System.out.println("Book(s) entered so far: " + booksList1.size());
         for (int i = 0; i < booksList1.size(); ++i) {
            System.out.println((i+1) + ". " + booksList1.get(i));
         }
      } catch (Exception e) {
         System.out.println(e.getMessage());
         e.printStackTrace();
      } finally {
         try {
            if (brConsoleReader != null) {
               brConsoleReader.close();
            }
         } catch (IOException ex) {
            System.out.println(ex.getMessage());
         }
      }
   }
}

EJBTester performs the following tasks −

Load properties from jndi.properties and initialize the InitialContext object.

In the testStatefulEjb() method, a JNDI lookup is done with the name "LibraryStatefulSessionBean/remote" to obtain the remote business object (stateful EJB).

Then the user is shown a library store user interface and asked to enter a choice.

If the user enters 1, the system asks for a book name and saves the book using the stateful session bean's addBook() method. The session bean stores the book in its instance variable.

If the user enters 2, the system retrieves the books using the stateful session bean's getBooks() method and exits.

Then another JNDI lookup is done with the name "LibraryStatefulSessionBean/remote" to obtain the remote business object (stateful EJB) again, and the books are listed.

Locate EJBTester.java in the project explorer. Right-click on the EJBTester class and select run file. Verify the following output in the NetBeans console −

run:
**********************
Welcome to Book Store
**********************
Options 
1. Add Book
2. Exit 
Enter Choice: 1
Enter book name: Learn Java
**********************
Welcome to Book Store
**********************
Options 
1. Add Book
2. Exit 
Enter Choice: 2
Book(s) entered so far: 1
1. Learn Java
***Using second lookup to get library stateful object***
Book(s) entered so far: 0
BUILD SUCCESSFUL (total time: 13 seconds)
Run the EJBTester client a second time in the same way and verify the following output in the NetBeans console −

run:
**********************
Welcome to Book Store
**********************
Options 
1. Add Book
2. Exit 
Enter Choice: 2
Book(s) entered so far: 0
***Using second lookup to get library stateful object***
Book(s) entered so far: 0
BUILD SUCCESSFUL (total time: 12 seconds)

The output shown above states that for each lookup, a different stateful EJB instance is returned. The stateful EJB object keeps its values for a single client session only; that is why the second lookup, and likewise the second run, return no books.
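A consequence worth showing in code: to actually benefit from conversational state, the client must keep reusing the same proxy reference instead of looking the bean up again. A minimal illustrative sketch, reusing the names from this tutorial −

LibraryStatefulSessionBeanRemote bean =
   (LibraryStatefulSessionBeanRemote)ctx.lookup("LibraryStatefulSessionBean/remote");

bean.addBook("Learn Java");
// Same proxy, same bean instance, same conversational state:
System.out.println(bean.getBooks().size());   // prints 1

// A fresh lookup creates a new session with an empty book shelf:
LibraryStatefulSessionBeanRemote other =
   (LibraryStatefulSessionBeanRemote)ctx.lookup("LibraryStatefulSessionBean/remote");
System.out.println(other.getBooks().size());  // prints 0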
[ { "code": null, "e": 2411, "s": 2047, "text": "A stateful session bean is a type of enterprise bean, which preserve the conversational state with client. A stateful session bean as per its name keeps associated client state in its instance variables. EJB Container creates a separate stateful session bean to process client's each request. As soon as request scope is over, statelful session bean is destroyed." }, { "code": null, "e": 2471, "s": 2411, "text": "Following are the steps required to create a stateful EJB −" }, { "code": null, "e": 2534, "s": 2471, "text": "Create a remote/local interface exposing the business methods." }, { "code": null, "e": 2597, "s": 2534, "text": "Create a remote/local interface exposing the business methods." }, { "code": null, "e": 2656, "s": 2597, "text": "This interface will be used by the EJB client application." }, { "code": null, "e": 2715, "s": 2656, "text": "This interface will be used by the EJB client application." }, { "code": null, "e": 2818, "s": 2715, "text": "Use @Local annotation if EJB client is in same environment where EJB session bean need to be deployed." }, { "code": null, "e": 2921, "s": 2818, "text": "Use @Local annotation if EJB client is in same environment where EJB session bean need to be deployed." }, { "code": null, "e": 3030, "s": 2921, "text": "Use @Remote annotation if EJB client is in different environment where EJB session bean need to be deployed." }, { "code": null, "e": 3139, "s": 3030, "text": "Use @Remote annotation if EJB client is in different environment where EJB session bean need to be deployed." }, { "code": null, "e": 3205, "s": 3139, "text": "Create a stateful session bean, implementing the above interface." }, { "code": null, "e": 3271, "s": 3205, "text": "Create a stateful session bean, implementing the above interface." }, { "code": null, "e": 3460, "s": 3271, "text": "Use @Stateful annotation to signify it a stateful bean. EJB Container automatically creates the relevant configurations or interfaces required by reading this annotation during deployment." }, { "code": null, "e": 3649, "s": 3460, "text": "Use @Stateful annotation to signify it a stateful bean. EJB Container automatically creates the relevant configurations or interfaces required by reading this annotation during deployment." }, { "code": null, "e": 3776, "s": 3649, "text": "import javax.ejb.Remote;\n \n@Remote\npublic interface LibraryStatefulSessionBeanRemote {\n //add business method declarations\n}" }, { "code": null, "e": 3906, "s": 3776, "text": "@Stateful\npublic class LibraryStatefulSessionBean implements LibraryStatefulSessionBeanRemote {\n //implement business method \n}" }, { "code": null, "e": 3965, "s": 3906, "text": "Let us create a test EJB application to test stateful EJB." }, { "code": null, "e": 4240, "s": 3965, "text": "Create a project with a name EjbComponent under a package com.tutorialspoint.stateful as explained in the EJB − Create Application chapter. You can also use the project created in EJB - Create Application chapter as such for this chapter to understand stateful EJB concepts." }, { "code": null, "e": 4404, "s": 4240, "text": "Create LibraryStatefulSessionBean.java and LibraryStatefulSessionBeanRemote as explained in the EJB − Create Application chapter. Keep rest of the files unchanged." }, { "code": null, "e": 4500, "s": 4404, "text": "Clean and Build the application to make sure business logic is working as per the requirements." 
}, { "code": null, "e": 4667, "s": 4500, "text": "Finally, deploy the application in the form of jar file on JBoss Application Server. JBoss Application server will get started automatically if it is not started yet." }, { "code": null, "e": 4832, "s": 4667, "text": "Now create the EJB client, a console based application in the same way as explained in the EJB - Create Application chapter under topic Create Client to access EJB." }, { "code": null, "e": 5037, "s": 4832, "text": "package com.tutorialspoint.stateful;\n \nimport java.util.List;\nimport javax.ejb.Remote;\n \n@Remote\npublic interface LibraryStatefulSessionBeanRemote {\n void addBook(String bookName);\n List getBooks();\n}" }, { "code": null, "e": 5532, "s": 5037, "text": "package com.tutorialspoint.stateful;\n \nimport java.util.ArrayList;\nimport java.util.List;\nimport javax.ejb.Stateful;\n \n@Stateful\npublic class LibraryStatefulSessionBean implements LibraryStatefulSessionBeanRemote {\n \n List<String> bookShelf; \n \n public LibraryStatefulSessionBean() {\n bookShelf = new ArrayList<String>();\n }\n \n public void addBook(String bookName) {\n bookShelf.add(bookName);\n } \n \n public List<String> getBooks() {\n return bookShelf;\n }\n}" }, { "code": null, "e": 5611, "s": 5532, "text": "As soon as you deploy the EjbComponent project on JBOSS, notice the jboss log." }, { "code": null, "e": 5690, "s": 5611, "text": "As soon as you deploy the EjbComponent project on JBOSS, notice the jboss log." }, { "code": null, "e": 5793, "s": 5690, "text": "JBoss has automatically created a JNDI entry for our session bean − LibraryStatefulSessionBean/remote." }, { "code": null, "e": 5896, "s": 5793, "text": "JBoss has automatically created a JNDI entry for our session bean − LibraryStatefulSessionBean/remote." }, { "code": null, "e": 6035, "s": 5896, "text": "We will be using this lookup string to get remote business object of type − com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote " }, { "code": null, "e": 6174, "s": 6035, "text": "We will be using this lookup string to get remote business object of type − com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote " }, { "code": null, "e": 7070, "s": 6174, "text": "...\n16:30:01,401 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:\n LibraryStatefulSessionBean/remote - EJB3.x Default Remote Business Interface\n LibraryStatefulSessionBean/remote-com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote - EJB3.x Remote Business Interface\n16:30:02,723 INFO [SessionSpecContainer] Starting jboss.j2ee:jar=EjbComponent.jar,name=LibraryStatefulSessionBean,service=EJB3\n16:30:02,723 INFO [EJBContainer] STARTED EJB: com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote ejbName: LibraryStatefulSessionBean\n16:30:02,731 INFO [JndiSessionRegistrarBase] Binding the following Entries in Global JNDI:\n \n LibraryStatefulSessionBean/remote - EJB3.x Default Remote Business Interface\n LibraryStatefulSessionBean/remote-com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote - EJB3.x Remote Business Interface\n... \n" }, { "code": null, "e": 7244, "s": 7070, "text": "java.naming.factory.initial = org.jnp.interfaces.NamingContextFactory\njava.naming.factory.url.pkgs = org.jboss.naming:org.jnp.interfaces\njava.naming.provider.url = localhost" }, { "code": null, "e": 7334, "s": 7244, "text": "These properties are used to initialize the InitialContext object of java naming service." 
}, { "code": null, "e": 7424, "s": 7334, "text": "These properties are used to initialize the InitialContext object of java naming service." }, { "code": null, "e": 7492, "s": 7424, "text": "InitialContext object will be used to lookup stateful session bean." }, { "code": null, "e": 7560, "s": 7492, "text": "InitialContext object will be used to lookup stateful session bean." }, { "code": null, "e": 10799, "s": 7560, "text": "package com.tutorialspoint.test;\n \nimport com.tutorialspoint.stateful.LibraryStatefulSessionBeanRemote;\nimport java.io.BufferedReader;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.List;\nimport java.util.Properties;\nimport javax.naming.InitialContext;\nimport javax.naming.NamingException;\n \npublic class EJBTester {\n \n BufferedReader brConsoleReader = null; \n Properties props;\n InitialContext ctx;\n {\n props = new Properties();\n try {\n props.load(new FileInputStream(\"jndi.properties\"));\n } catch (IOException ex) {\n ex.printStackTrace();\n }\n try {\n ctx = new InitialContext(props); \n } catch (NamingException ex) {\n ex.printStackTrace();\n }\n brConsoleReader = \n new BufferedReader(new InputStreamReader(System.in));\n }\n \n public static void main(String[] args) {\n \n EJBTester ejbTester = new EJBTester();\n \n ejbTester.testStatelessEjb();\n }\n \n private void showGUI() {\n System.out.println(\"**********************\");\n System.out.println(\"Welcome to Book Store\");\n System.out.println(\"**********************\");\n System.out.print(\"Options \\n1. Add Book\\n2. Exit \\nEnter Choice: \");\n }\n \n private void testStatelessEjb() {\n \n try {\n int choice = 1; \n \n LibraryStatefulSessionBeanRemote libraryBean =\n LibraryStatefulSessionBeanRemote)ctx.lookup(\"LibraryStatefulSessionBean/remote\");\n \n while (choice != 2) {\n String bookName;\n showGUI();\n String strChoice = brConsoleReader.readLine();\n choice = Integer.parseInt(strChoice);\n if (choice == 1) {\n System.out.print(\"Enter book name: \");\n bookName = brConsoleReader.readLine();\n Book book = new Book();\n book.setName(bookName);\n libraryBean.addBook(book); \n } else if (choice == 2) {\n break;\n }\n }\n \n List<Book> booksList = libraryBean.getBooks();\n \n System.out.println(\"Book(s) entered so far: \" + booksList.size());\n int i = 0;\n for (Book book:booksList) {\n System.out.println((i+1)+\". \" + book.getName());\n i++;\n } \n LibraryStatefulSessionBeanRemote libraryBean1 = \n (LibraryStatefulSessionBeanRemote)ctx.lookup(\"LibraryStatefulSessionBean/remote\");\n List<String> booksList1 = libraryBean1.getBooks();\n System.out.println(\n \"***Using second lookup to get library stateful object***\");\n System.out.println(\n \"Book(s) entered so far: \" + booksList1.size());\n for (int i = 0; i < booksList1.size(); ++i) {\n System.out.println((i+1)+\". \" + booksList1.get(i));\n }\t\t \n } catch (Exception e) {\n System.out.println(e.getMessage());\n e.printStackTrace();\n }finally {\n try {\n if(brConsoleReader !=null) {\n brConsoleReader.close();\n }\n } catch (IOException ex) {\n System.out.println(ex.getMessage());\n }\n }\n }\n}" }, { "code": null, "e": 10840, "s": 10799, "text": "EJBTester performs the following tasks −" }, { "code": null, "e": 10919, "s": 10840, "text": "Load properties from jndi.properties and initialize the InitialContext object." }, { "code": null, "e": 10998, "s": 10919, "text": "Load properties from jndi.properties and initialize the InitialContext object." 
}, { "code": null, "e": 11148, "s": 10998, "text": "In testStatefulEjb() method, jndi lookup is done with name - \"LibraryStatefulSessionBean/remote\" to obtain the remote business object (stateful ejb)." }, { "code": null, "e": 11298, "s": 11148, "text": "In testStatefulEjb() method, jndi lookup is done with name - \"LibraryStatefulSessionBean/remote\" to obtain the remote business object (stateful ejb)." }, { "code": null, "e": 11391, "s": 11298, "text": "Then the user is shown a library store User Interface and he/she is asked to enter a choice." }, { "code": null, "e": 11484, "s": 11391, "text": "Then the user is shown a library store User Interface and he/she is asked to enter a choice." }, { "code": null, "e": 11652, "s": 11484, "text": "If user enters 1, system asks for book name and saves the book using stateful session bean addBook() method. Session Bean is storing the book in its instance variable." }, { "code": null, "e": 11820, "s": 11652, "text": "If user enters 1, system asks for book name and saves the book using stateful session bean addBook() method. Session Bean is storing the book in its instance variable." }, { "code": null, "e": 11918, "s": 11820, "text": "If user enters 2, system retrieves books using stateful session bean getBooks() method and exits." }, { "code": null, "e": 12016, "s": 11918, "text": "If user enters 2, system retrieves books using stateful session bean getBooks() method and exits." }, { "code": null, "e": 12189, "s": 12016, "text": "Then another jndi lookup is done with the name - \"LibraryStatefulSessionBean/remote\" to obtain the remote business object (stateful EJB) again and listing of books is done." }, { "code": null, "e": 12362, "s": 12189, "text": "Then another jndi lookup is done with the name - \"LibraryStatefulSessionBean/remote\" to obtain the remote business object (stateful EJB) again and listing of books is done." }, { "code": null, "e": 12457, "s": 12362, "text": "Locate EJBTester.java in project explorer. Right click on EJBTester class and select run file." }, { "code": null, "e": 12507, "s": 12457, "text": "Verify the following output in Netbeans console −" }, { "code": null, "e": 12934, "s": 12507, "text": "run:\n**********************\nWelcome to Book Store\n**********************\nOptions \n1. Add Book\n2. Exit \nEnter Choice: 1\nEnter book name: Learn Java\n**********************\nWelcome to Book Store\n**********************\nOptions \n1. Add Book\n2. Exit \nEnter Choice: 2\nBook(s) entered so far: 1\n1. Learn Java\n***Using second lookup to get library stateful object***\nBook(s) entered so far: 0\nBUILD SUCCESSFUL (total time: 13 seconds)\n" }, { "code": null, "e": 13029, "s": 12934, "text": "Locate EJBTester.java in project explorer. Right click on EJBTester class and select run file." }, { "code": null, "e": 13078, "s": 13029, "text": "Verify the following output in Netbeans console." }, { "code": null, "e": 13349, "s": 13078, "text": "run:\n**********************\nWelcome to Book Store\n**********************\nOptions \n1. Add Book\n2. Exit \nEnter Choice: 2\nBook(s) entered so far: 0\n***Using second lookup to get library stateful object***\nBook(s) entered so far: 0\nBUILD SUCCESSFUL (total time: 12 seconds)\n" }, { "code": null, "e": 13444, "s": 13349, "text": "Output shown above states that for each lookup, a different stateful EJB instance\nis returned." }, { "code": null, "e": 13539, "s": 13444, "text": "Output shown above states that for each lookup, a different stateful EJB instance\nis returned." 
}, { "code": null, "e": 13658, "s": 13539, "text": "Stateful EJB object is keeping value for single session only. As in second run, we\nare not getting any value of books." }, { "code": null, "e": 13777, "s": 13658, "text": "Stateful EJB object is keeping value for single session only. As in second run, we\nare not getting any value of books." }, { "code": null, "e": 13784, "s": 13777, "text": " Print" }, { "code": null, "e": 13795, "s": 13784, "text": " Add Notes" } ]
How to create JFrame with no border and title bar in Java?
To create a JFrame with no border and title bar, use setUndecorated(). Note that setUndecorated(true) must be called before the frame is made displayable (that is, before pack() or setVisible(true)), otherwise it throws an IllegalComponentStateException −

JFrame frame = new JFrame();
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setPreferredSize(new Dimension(400, 300));
frame.setUndecorated(true);

The following is an example to create a JFrame with no border and title bar −

import java.awt.Dimension;
import java.awt.event.ActionEvent;
import javax.swing.AbstractAction;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;

public class SwingDemo {
   public static void main(String[] args) {
      JFrame frame = new JFrame();
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.setPreferredSize(new Dimension(400, 300));
      frame.setUndecorated(true);
      JPanel panel = new JPanel();
      panel.add(new JLabel("Demo!"));
      panel.add(new JButton(new AbstractAction("Close") {
         @Override
         public void actionPerformed(ActionEvent e) {
            System.exit(0);
         }
      }));
      frame.add(panel);
      frame.pack();
      frame.setVisible(true);
   }
}
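An undecorated frame has no title bar, so the user cannot drag it around. One common workaround, not covered by the original snippet, is to move the frame manually from mouse events. An illustrative sketch of that idea, which could be added inside main() after the panel is created (the variable names match the example above) −

// Remember where inside the window the drag started,
// then move the frame so that point follows the mouse.
final java.awt.Point dragOffset = new java.awt.Point();
panel.addMouseListener(new java.awt.event.MouseAdapter() {
   @Override
   public void mousePressed(java.awt.event.MouseEvent e) {
      dragOffset.setLocation(e.getPoint());
   }
});
panel.addMouseMotionListener(new java.awt.event.MouseAdapter() {
   @Override
   public void mouseDragged(java.awt.event.MouseEvent e) {
      java.awt.Point onScreen = e.getLocationOnScreen();
      frame.setLocation(onScreen.x - dragOffset.x, onScreen.y - dragOffset.y);
   }
});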
[ { "code": null, "e": 1134, "s": 1062, "text": "To create a JFrame with no border and title bar, use setUndecorated() −" }, { "code": null, "e": 1294, "s": 1134, "text": "JFrame frame = new JFrame();\nframe.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\nframe.setPreferredSize(new Dimension(400, 300));\nframe.setUndecorated(true);" }, { "code": null, "e": 1370, "s": 1294, "text": "The following is an example to create JFrame with no border and title bar −" }, { "code": null, "e": 2158, "s": 1370, "text": "import java.awt.Dimension;\nimport java.awt.event.ActionEvent;\nimport javax.swing.AbstractAction;\nimport javax.swing.JButton;\nimport javax.swing.JFrame;\nimport javax.swing.JLabel;\nimport javax.swing.JPanel;\npublic class SwingDemo {\n public static void main(String[] args) {\n JFrame frame = new JFrame();\n frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n frame.setPreferredSize(new Dimension(400, 300)); frame.setUndecorated(true);\n JPanel panel = new JPanel();\n panel.add(new JLabel(\"Demo!\"));\n panel.add(new JButton(new AbstractAction(\"Close\") {\n @Override\n public void actionPerformed(ActionEvent e) {\n System.exit(0);\n }\n }));\n frame.add(panel);\n frame.pack();\n frame.setVisible(true);\n }\n}" } ]
Introduction to Classes and Inheritance in Python
Object-oriented programming creates reusable patterns of code to prevent code redundancy in projects. One way that recyclable code is created is through inheritance, when one subclass leverages code from another base class.

Inheritance is when a class uses code written within another class.

Classes called child classes or subclasses inherit methods and variables from parent classes or base classes.

Because the Child subclass is inheriting from the Parent base class, the Child class can reuse the code of Parent, allowing the programmer to use fewer lines of code and decrease redundancy.

Derived classes are declared much like their parent class; however, a list of base classes to inherit from is given after the class name −

class SubClassName (ParentClass1[, ParentClass2, ...]):
   'Optional class documentation string'
   class_suite

class Parent:               # define parent class
   parentAttr = 100
   def __init__(self):
      print("Calling parent constructor")
   def parentMethod(self):
      print("Calling parent method")
   def setAttr(self, attr):
      Parent.parentAttr = attr
   def getAttr(self):
      print("Parent attribute :", Parent.parentAttr)

class Child(Parent):        # define child class
   def __init__(self):
      print("Calling child constructor")
   def childMethod(self):
      print("Calling child method")

c = Child()          # instance of child
c.childMethod()      # child calls its method
c.parentMethod()     # calls parent's method
c.setAttr(200)       # again call parent's method
c.getAttr()          # again call parent's method

When the above code is executed, it produces the following result −

Calling child constructor
Calling child method
Calling parent method
Parent attribute : 200
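Note that Child defines its own __init__, so Parent's constructor is never run in the example above, which is why "Calling parent constructor" does not appear in the output. In modern Python the usual idiom is to chain to the parent constructor with super(). A small illustrative sketch, not part of the original example −

class Parent:
    def __init__(self):
        self.parent_attr = 100
        print("Calling parent constructor")

class Child(Parent):
    def __init__(self):
        super().__init__()   # run Parent.__init__ as well
        print("Calling child constructor")

c = Child()
# Prints:
# Calling parent constructor
# Calling child constructor
print(c.parent_attr)   # 100 -- inherited state is initialized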
[ { "code": null, "e": 1286, "s": 1062, "text": "Object-oriented programming creates reusable patterns of code to prevent code redundancy in projects. One way that recyclable code is created is through inheritance, when one subclass leverages code from another base class." }, { "code": null, "e": 1354, "s": 1286, "text": "Inheritance is when a class uses code written within another class." }, { "code": null, "e": 1464, "s": 1354, "text": "Classes called child classes or subclasses inherit methods and variables from parent classes or base classes." }, { "code": null, "e": 1655, "s": 1464, "text": "Because the Child subclass is inheriting from the Parent base class, the Child class can reuse the code of Parent, allowing the programmer to use fewer lines of code and decrease redundancy." }, { "code": null, "e": 1794, "s": 1655, "text": "Derived classes are declared much like their parent class; however, a list of base classes to inherit from is given after the class name −" }, { "code": null, "e": 1906, "s": 1794, "text": "class SubClassName (ParentClass1[, ParentClass2, ...]):\n 'Optional class documentation string'\n class_suite" }, { "code": null, "e": 2627, "s": 1906, "text": "class Parent: # define parent class\n parentAttr = 100\n def __init__(self):\n print \"Calling parent constructor\"\n def parentMethod(self):\n print 'Calling parent method'\n def setAttr(self, attr):\n Parent.parentAttr = attr\n def getAttr(self):\n print \"Parent attribute :\", Parent.parentAttr\nclass Child(Parent): # define child class\n def __init__(self):\n print \"Calling child constructor\"\n def childMethod(self):\n print 'Calling child method'\nc = Child() # instance of child\nc.childMethod() # child calls its method\nc.parentMethod() # calls parent's method\nc.setAttr(200) # again call parent's method\nc.getAttr() # again call parent's method" }, { "code": null, "e": 2694, "s": 2627, "text": "When the above code is executed, it produces the following result " }, { "code": null, "e": 2787, "s": 2694, "text": "Calling child constructor\nCalling child method\nCalling parent method\nParent attribute :200\n " } ]
How to plot a dataframe using Pandas? - GeeksforGeeks
22 Jul, 2021

Pandas is one of the most popular Python packages used in data science. Pandas offers a powerful and flexible data structure (Dataframe & Series) to manipulate and analyze data. Visualization is the best way to interpret the data.

Python has many popular plotting libraries that make visualization easy. Some of them are matplotlib, seaborn, and plotly. Pandas has great integration with matplotlib. We can plot a dataframe using the plot() method. But we need a dataframe to plot. We can create a dataframe by just passing a dictionary to the DataFrame() method of the pandas library.

Let’s create a simple dataframe:

# importing required library
# In case pandas is not installed on your machine
# use the command 'pip install pandas'.

import pandas as pd
import matplotlib.pyplot as plt

# A dictionary which represents data
data_dict = {
    'name':['p1','p2','p3','p4','p5','p6'],
    'age':[20,20,21,20,21,20],
    'math_marks':[100,90,91,98,92,95],
    'physics_marks':[90,100,91,92,98,95],
    'chem_marks':[93,89,99,92,94,92]
}

# creating a data frame object
df = pd.DataFrame(data_dict)

# show the dataframe
# by default head() shows the
# first five rows from the top
df.head()

Output:

There are a number of plots available to interpret the data, and each graph is used for a purpose. Some of the plots are bar plots, scatter plots, and histograms.

To get the scatterplot of a dataframe all we have to do is to just call the plot() method by specifying some parameters:

kind='scatter', x='some_column', y='some_column', color='somecolor'

# scatter plot
df.plot(kind = 'scatter',
        x = 'math_marks',
        y = 'physics_marks',
        color = 'red')

# set the title
plt.title('ScatterPlot')

# show the plot
plt.show()

Output:

There are many ways to customize plots; this is the basic one.

Similarly, we have to specify some parameters for the plot() method to get the bar plot:

kind='bar', x='some_column', y='some_column', color='somecolor'

# bar plot
df.plot(kind = 'bar',
        x = 'name',
        y = 'physics_marks',
        color = 'green')

# set the title
plt.title('BarPlot')

# show the plot
plt.show()

Output:

The line plot of a single column is not always useful; to get more insights we have to plot multiple columns on the same graph. To do so we have to reuse the axes:

kind='line', x='some_column', y='some_column', color='somecolor', ax='someaxes'

# Get current axis
ax = plt.gca()

# line plot for math marks
df.plot(kind = 'line',
        x = 'name',
        y = 'math_marks',
        color = 'green', ax = ax)

# line plot for physics marks
df.plot(kind = 'line',
        x = 'name',
        y = 'physics_marks',
        color = 'blue', ax = ax)

# line plot for chemistry marks
df.plot(kind = 'line',
        x = 'name',
        y = 'chem_marks',
        color = 'black', ax = ax)

# set the title
plt.title('LinePlots')

# show the plot
plt.show()

Output:
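Histograms were mentioned above as well; for completeness, here is a small illustrative sketch, not part of the original article, showing how the same plot() method can draw one for the df created above:

# histogram of the math marks column;
# bins controls how the value range is divided
df['math_marks'].plot(kind = 'hist', bins = 5, color = 'purple')

# set the title
plt.title('Histogram')

# show the plot
plt.show()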
[ { "code": null, "e": 24292, "s": 24264, "text": "\n22 Jul, 2021" }, { "code": null, "e": 24531, "s": 24292, "text": "Pandas is one of the most popular Python packages used in data science. Pandas offer a powerful, and flexible data structure ( Dataframe & Series ) to manipulate, and analyze the data. Visualization is the best way to interpret the data. " }, { "code": null, "e": 24883, "s": 24531, "text": "Python has many popular plotting libraries that make visualization easy. Some of them are matplotlib, seaborn, and plotly. It has great integration with matplotlib. We can plot a dataframe using the plot() method. But we need a dataframe to plot. We can create a dataframe by just passing a dictionary to the DataFrame() method of the pandas library. " }, { "code": null, "e": 24917, "s": 24883, "text": "Let’s create a simple dataframe: " }, { "code": null, "e": 24924, "s": 24917, "text": "Python" }, { "code": "# importing required library# In case pandas is not installed on your machine# use the command 'pip install pandas'. import pandas as pdimport matplotlib.pyplot as plt # A dictionary which represents datadata_dict = { 'name':['p1','p2','p3','p4','p5','p6'], 'age':[20,20,21,20,21,20], 'math_marks':[100,90,91,98,92,95], 'physics_marks':[90,100,91,92,98,95], 'chem_marks' :[93,89,99,92,94,92] } # creating a data frame objectdf = pd.DataFrame(data_dict) # show the dataframe# bydefault head() show # first five rows from topdf.head()", "e": 25525, "s": 24924, "text": null }, { "code": null, "e": 25535, "s": 25525, "text": "Output: " }, { "code": null, "e": 25699, "s": 25537, "text": "There are a number of plots available to interpret the data. Each graph is used for a purpose. Some of the plots are BarPlots, ScatterPlots, and Histograms, etc." }, { "code": null, "e": 25821, "s": 25699, "text": "To get the scatterplot of a dataframe all we have to do is to just call the plot() method by specifying some parameters. " }, { "code": null, "e": 25887, "s": 25821, "text": "kind='scatter',x= 'some_column',y='some_colum',color='somecolor'\n" }, { "code": null, "e": 25895, "s": 25887, "text": "Python3" }, { "code": "# scatter plotdf.plot(kind = 'scatter', x = 'math_marks', y = 'physics_marks', color = 'red') # set the titleplt.title('ScatterPlot') # show the plotplt.show()", "e": 26078, "s": 25895, "text": null }, { "code": null, "e": 26087, "s": 26078, "text": "Output: " }, { "code": null, "e": 26150, "s": 26087, "text": "There are many ways to customize plots this is the basic one. " }, { "code": null, "e": 26236, "s": 26150, "text": "Similarly, we have to specify some parameters for plot() method to get the bar plot. " }, { "code": null, "e": 26298, "s": 26236, "text": "kind='bar',x= 'some_column',y='some_colum',color='somecolor'\n" }, { "code": null, "e": 26306, "s": 26298, "text": "Python3" }, { "code": "# bar plotdf.plot(kind = 'bar', x = 'name', y = 'physics_marks', color = 'green') # set the titleplt.title('BarPlot') # show the plotplt.show()", "e": 26473, "s": 26306, "text": null }, { "code": null, "e": 26483, "s": 26473, "text": "Output: " }, { "code": null, "e": 26650, "s": 26485, "text": "The line plot of a single column is not always useful, to get more insights we have to plot multiple columns on the same graph. To do so we have to reuse the axes. 
" }, { "code": null, "e": 26727, "s": 26650, "text": "kind=’line’,x= ‘some_column’,y=’some_colum’,color=’somecolor’,ax=’someaxes’ " }, { "code": null, "e": 26735, "s": 26727, "text": "Python3" }, { "code": "#Get current axisax = plt.gca() # line plot for math marksdf.plot(kind = 'line', x = 'name', y = 'math_marks', color = 'green',ax = ax) # line plot for physics marksdf.plot(kind = 'line',x = 'name', y = 'physics_marks', color = 'blue',ax = ax) # line plot for chemistry marksdf.plot(kind = 'line',x = 'name', y = 'chem_marks', color = 'black',ax = ax) # set the titleplt.title('LinePlots') # show the plotplt.show()", "e": 27206, "s": 26735, "text": null }, { "code": null, "e": 27216, "s": 27206, "text": "Output: " }, { "code": null, "e": 27240, "s": 27216, "text": "Python pandas-dataFrame" }, { "code": null, "e": 27254, "s": 27240, "text": "Python-pandas" }, { "code": null, "e": 27261, "s": 27254, "text": "Python" }, { "code": null, "e": 27359, "s": 27261, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 27391, "s": 27359, "text": "How to Install PIP on Windows ?" }, { "code": null, "e": 27433, "s": 27391, "text": "How To Convert Python Dictionary To JSON?" }, { "code": null, "e": 27489, "s": 27433, "text": "How to drop one or multiple columns in Pandas Dataframe" }, { "code": null, "e": 27531, "s": 27489, "text": "Check if element exists in list in Python" }, { "code": null, "e": 27562, "s": 27531, "text": "Python | os.path.join() method" }, { "code": null, "e": 27617, "s": 27562, "text": "Selecting rows in pandas DataFrame based on conditions" }, { "code": null, "e": 27639, "s": 27617, "text": "Defaultdict in Python" }, { "code": null, "e": 27678, "s": 27639, "text": "Python | Get unique values from a list" }, { "code": null, "e": 27707, "s": 27678, "text": "Create a directory in Python" } ]
How to create an infinite loop in C#?
An infinite loop is a loop that never terminates and repeats indefinitely.

Let us see an example to create an infinite loop in C#.

using System;
namespace Demo {
   class Program {
      static void Main(string[] args) {
         for (int a = 0; a < 50; a--) {
            Console.WriteLine("value : {0}", a);
         }
         Console.ReadLine();
      }
   }
}

Above, the loop executes as long as a < 50. The value of a is set to 0 initially.

int a = 0;

The value of a decrements after each iteration, since the loop's iterator is set to the following.

a--;

Therefore the value of a will never rise above 50, the condition a < 50 will always be true, and the loop becomes an infinite loop.
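The decrementing for loop above is a roundabout way to get this behavior. The conventional way to write an intentionally infinite loop in C#, shown here as an illustrative alternative, is while (true) −

using System;
namespace Demo {
   class Program {
      static void Main(string[] args) {
         int a = 0;
         // The condition is always true, so the loop never ends
         // unless we break out of it explicitly.
         while (true) {
            Console.WriteLine("value : {0}", a);
            a++;
         }
      }
   }
}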
[ { "code": null, "e": 1137, "s": 1062, "text": "An infinite loop is a loop that never terminates and repeats indefinitely." }, { "code": null, "e": 1193, "s": 1137, "text": "Let us see an example to create an infinite loop in C#." }, { "code": null, "e": 1427, "s": 1193, "text": "using System;\nnamespace Demo {\n class Program {\n static void Main(string[] args) {\n for (int a = 0; a < 50; a--) {\n Console.WriteLine(\"value : {0}\", a);\n }\n Console.ReadLine();\n }\n }\n}" }, { "code": null, "e": 1502, "s": 1427, "text": "Above, the loop executes until a < 50. The value of is set to 0 initially." }, { "code": null, "e": 1513, "s": 1502, "text": "int a = 0;" }, { "code": null, "e": 1580, "s": 1513, "text": "The value of a decrements after each iteration since it is set to." }, { "code": null, "e": 1585, "s": 1580, "text": "a--;" }, { "code": null, "e": 1720, "s": 1585, "text": "Therefore the value of a will never be above 50 and the condition a <50 will be true always. This will make the loop an infinite loop." } ]
Artificial Intelligence in Mechanical Engineering | by Sunny K. Tuladhar | Towards Data Science
Artificial Intelligence and Machine Learning seem to be the current buzzwords, as everyone seems to be getting into this subject. Artificial Intelligence seems to have a role in all fields of science. According to Britannica, “Artificial intelligence (AI), is broadly defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” By intelligent beings it basically means humans ... but maybe not all humans...so anyway, it is usually classified into three subsets as shown below.

Artificial Intelligence is a broader term which incorporates Machine Learning. Machine learning uses statistical methods to allow machines to improve with experience.

Deep Learning, again, is the subset of Machine Learning which uses multi-layer neural networks that mimic the human brain and can learn incredibly difficult tasks with enough data.

We are going to talk about Deep Learning methods and their possible role in the field of Mechanical Engineering. Some common examples could be Anomaly Detection (Machine Learning) and Image-based Part Classification (Deep Learning). The focus will be on image-based part classifiers and why we need them.

Firstly, what is an image classifier? The ever famous AI which recognizes cat-dog pictures should come to mind. Here’s a link to the code of such a program. The data-set used contains images of cats and dogs, the algorithm learns from it and then is able to guess with 97% accuracy whether a randomly shown image is a cat or a dog.

We will attempt a similar code but using Nuts, Bolts, Washers and Locating Pins as our Cats and Dogs..... because mechanical engineering.

So how does it work? An algorithm is able to classify images (efficiently) by using a Machine Learning method called Convolutional Neural Networks (CNN), a method used in Deep Learning. We will be using a simple version of this model, called Sequential, to let our model distinguish the images into four classes: Nuts, Bolts, Washers and Locating Pins. The model will learn by “observing” a set of training images. After learning, we will see how accurately it can predict an image which it has not seen.

Go directly to the code in github

We downloaded 238 parts each of the 4 classes (total 238 x 4 = 952) from various part libraries available on the internet. Then we took 8 different isometric images of each part. This was done to augment the data available, as only 238 images for each class would not be enough to train a good neural network. A single class now has 1904 images (8 isometric images of 238 parts), a total of 7616 images. Each image is of 224 x 224 pixels.

We then have our labels with the numbers 0, 1, 2, 3; each number corresponds to a particular image and means it belongs to a certain class.

#Integers and their corresponding classes
{0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}

After training on the above images we will then see how well our model predicts a random image it has not seen.

The process took place in 7 steps. We will get to the details later. The brief summary is:

1. Data Collection: The data for each class was collected from various standard part libraries on the internet.
2. Data Preparation: 8 isometric view screenshots were taken from each part and reduced to 224 x 224 pixels.
3. Model Selection: A Sequential CNN model was selected as it was simple and good for image classification.
4. Train the Model: The model was trained on our data of 7616 images with an 80/20 train-test split.
5. Evaluate the Model: The results of the model were evaluated. How well did it predict the classes?
6. Hyperparameter Tuning: This process is done to tune the hyperparameters to get better results. We have already tuned our model in this case.
7. Make Predictions: Check how well the model predicts real-world data.

We downloaded the part data of various nuts and bolts from the different part libraries on the internet. These websites have numerous 3D models for standard parts from various makers in different file formats. Since we will be using the FreeCAD API to extract the images, we downloaded the files in a neutral format (STEP).

As already mentioned earlier, 238 parts from each of the 4 classes were downloaded, a total of 952 parts.

Then we ran a program using the FreeCAD API that automatically took 8 isometric screenshots of 224 x 224 pixels of each part. FreeCAD is a free and open-source general-purpose parametric 3D computer-aided design modeler which is written in Python.

As already mentioned above, each part yields 8 images of 224 x 224 pixels, so we now have a total of 1904 images from each of the 4 classes, thus a total of 7616 images. Each image is treated as a separate data point even though 8 images come from the same part.

The images were kept in separate folders according to their class, i.e. we have four folders: Nut, Bolt, Washer and Locating Pin.

Next, each of these images was converted into an array of its pixel values in grayscale. The value of the pixels ranges from 0 (black) to 255 (white).

Now each of our images becomes a 224 x 224 array, so our entire dataset is a 3D array of 7616 x 224 x 224 dimensions: 7616 (number of images) x 224 x 224 (pixel values of each image).

Similarly, we create the label dataset by giving the value of the following integers, for the shown classes, to the corresponding indexes in the dataset. If the 5th (index) data point in the dataset (X) is a locating pin, the 5th entry in the label set (y) will have the value 0.

#integers and the corresponding classes as already mentioned above
{0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}
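As a rough illustration of this preprocessing step (the original code is in the linked GitHub repository; the folder layout and file names here are hypothetical), the conversion could look like this with Pillow and NumPy −

import os
import numpy as np
from PIL import Image

classes = ['locatingpin', 'washer', 'bolt', 'nut']

X, y = [], []
for label, class_name in enumerate(classes):
    folder = os.path.join('data', class_name)        # e.g. data/nut/...
    for file_name in os.listdir(folder):
        # 'L' converts to single-channel grayscale, values 0-255
        img = Image.open(os.path.join(folder, file_name)).convert('L')
        X.append(np.array(img))                      # 224 x 224 array
        y.append(label)                              # 0, 1, 2 or 3

X = np.array(X)   # shape: (7616, 224, 224)
y = np.array(y)   # shape: (7616,)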
Since this is an image recognition problem, we will be using a Convolutional Neural Network (CNN). A CNN is a type of neural network that handles image data especially well. A neural network is a type of machine learning algorithm that learns in a manner similar to a human brain.

The following summary describes what our CNN looks like. Don’t worry about it if you don’t understand. The idea is that the 224 x 224 features from each of our data points will go through this network and spit out an answer. The model adjusts its weights accordingly and after many iterations will be able to predict a random image’s class.

#Model description
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 222, 222, 128)     1280
_________________________________________________________________
activation_1 (Activation)    (None, 222, 222, 128)     0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 111, 111, 128)     0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 109, 109, 128)     147584
_________________________________________________________________
activation_2 (Activation)    (None, 109, 109, 128)     0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 54, 54, 128)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 373248)            0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                23887936
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 260
_________________________________________________________________
activation_3 (Activation)    (None, 4)                 0
=================================================================
Total params: 24,037,060
Trainable params: 24,037,060
Non-trainable params: 0
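The summary above can be reproduced with the following Keras code. This is our reconstruction from the layer shapes and parameter counts, not the author's actual code: the activation functions and the compile settings are assumptions (relu for the hidden layers, softmax for the output, and the Adam optimizer are the usual choices), so the code in the repository may differ slightly −

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Activation

model = Sequential()

# 128 filters of size 3x3 over 224x224 grayscale images -> 222x222x128
model.add(Conv2D(128, (3, 3), input_shape=(224, 224, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))   # -> 111x111x128

model.add(Conv2D(128, (3, 3)))              # -> 109x109x128
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))   # -> 54x54x128

model.add(Flatten())                        # -> 373248 values
model.add(Dense(64))
model.add(Dense(4))                         # one output per class
model.add(Activation('softmax'))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# X and y are the arrays from the preprocessing sketch above; the
# training settings (15 epochs, batch size 64, 80/20 split) are the
# ones described in the next section.
model.fit(X.reshape(-1, 224, 224, 1), y,
          epochs=15, batch_size=64, validation_split=0.2)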
Here is a YouTube video of Mark Rober (a NASA mechanical engineer) explaining how neural networks work with very little coding involved.

Now the time has finally come to train the model using our dataset of 7616 images. Our [X] is a 3D array of 7616 x 224 x 224 and our [y] label set is a 7616 x 1 array. For all training purposes the data must be split into at least two parts: a training and a validation (test) set (test and validation are used interchangeably when only 2 sets are involved).

The training set is the data the model sees and trains on. It is the data from which it adjusts its weights and learns. The accuracy of our model on this set is the training accuracy. It is generally higher than the validation accuracy.

The validation data usually comes from the same distribution as the training set and is data the model has not seen. After the model has trained on the training set, it will try to predict the data of the validation set. How accurately it predicts this is our validation accuracy. This is more important than the training accuracy, as it shows how well the model generalizes.

In real-life applications it is common to split the data even into three parts: train, validation and test. For our case we will only split it into a training and a test set, with an 80–20 split: 80% of the images will be used for training and 20% will be used for testing. That is, we train on 6092 samples and test on 1524 samples of the total 7616.

For our model we trained for 15 epochs with a batch size of 64.

The number of epochs is a hyperparameter that defines the number of times that the learning algorithm will work through the entire training dataset. One epoch means that each sample in the training dataset has had an opportunity to update the internal model parameters. An epoch is comprised of one or more batches. You can think of a for-loop over the number of epochs where each loop proceeds over the training dataset. Within this for-loop is another nested for-loop that iterates over each batch of samples, where one batch has the specified “batch size” number of samples. [2]

That is, our model will go through our entire 7616 samples 15 times (epochs) in total and adjust its weights each time so the prediction is more accurate each time. In each epoch, it will go through the 7616 samples, 64 samples (batch size) at a time.

The model keeps updating its weights so as to minimize the cost (loss), thus giving us the best accuracy. Cost is a measure of the inaccuracy of the model in predicting the class of the image. Cost functions are used to estimate how badly models are performing. Put simply, a cost function is a measure of how wrong the model is in terms of its ability to estimate the relationship between X and y. [1]

If the algorithm predicts incorrectly the cost increases; if it predicts correctly the cost decreases.

After training for 15 epochs we can see the following graphs of loss and accuracy (cost and loss can be used interchangeably for our case).

The loss decreased as the model trained more times; it becomes better at classifying the images with each epoch. The model is not able to improve its performance much on the validation set.

The accuracy increased as the model trained; with each epoch it becomes better at classifying the images. The accuracy for the validation set is lower than for the training set, as the model has not trained on it directly. The final value is 97.64%, which is not bad.

The next step would be to change the hyperparameters, the learning rate, number of epochs, data size etc. to improve our model. In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training. [3] For our purpose we have already modified these parameters before this article was written, in a way to obtain optimum performance for display in this article. We increased the dataset size and number of epochs to improve the accuracy.

The final step, after making the adjustments to the model, is to make predictions using the actual data that will be used with this model. If the model does not perform well on this, further hyperparameter tuning can commence. Machine learning is a rather iterative and empirical process, and thus the tuning of hyperparameters is often compared to an art rather than a science: although we have an idea of what changes will happen by changing certain hyperparameters, we cannot be certain of it.

This ability to classify mechanical parts could enable us to recommend parts from a standard library based only on an image or a CAD model provided by the customer. Currently, to search for a required part from a standard library you have to go through a catalogue and be able to tell which part you want based on the available options and your knowledge of the catalogue. There are serial codes to remember, as a change in a single digit or letter might mean a different type of part.

If an image can be used to get the required part from the standard library, all we will need to do is to make a rough CAD model of it and send it through our algorithm. The algorithm will decide which parts are best and help narrow down our search significantly. If the classification method gets detailed and fine-tuned enough, it should be able to classify in much detail what type of part you want. The narrowed search saves a lot of time. This is especially useful in a library where there are thousands of similar parts.
Deep Learning (Artificial Intelligence) is a field of study with immense possibilities, as it enables us to extract a lot of knowledge from raw data. At its core it is merely data analysis. In this age of the internet, data is everywhere, and if we are able to extract it efficiently, a lot can be accomplished. This field has many possible applications in the domain of Mechanical Engineering as well. Since almost all studies in Deep Learning need a domain expert, it would be advisable for all engineers with an interest in data analytics, even those who haven't majored in computer science, to learn about data science and machine learning and examine their possibilities. Knowledge of the domain plus the skills of data analysis will really help us excel in our own fields.

I am thankful to Pro-Mech Minds for letting me do this, and especially to its data science team, which includes Gopal Kisi, Bishesh Shakya and Series Chikanbanjar, for their immense help with this project. Pro-Mech Minds & Engineering Services is one of the companies in Nepal working with both mechanical and IT solutions in engineering. This idea popped up as an attempt to combine design engineering with data science. Last but not least, I would like to offer my special thanks to Saugat K.C. for acting as a mentor for our data science team.

Link to the code in github

References

[1] Conor McDonald, Machine learning fundamentals (I): Cost functions and gradient descent (2017), Towards Data Science

[2] Jason Brownlee, Difference Between a Batch and an Epoch in a Neural Network (2018), machinelearningmastery.com

[3] Hyperparameter (machine learning), Wikipedia

[4] Andrew Ng, Convolutional Neural Networks of the Deep Learning Specialization by deeplearning.ai (n.d.). Retrieved from Coursera
[ { "code": null, "e": 725, "s": 171, "text": "Artificial Intelligence and Machine Learning seems to be the current buzzword as everyone seems to be getting into this subject. Artificial Intelligence seems to have a role in all fields of science. According to Britannica , “Artificial intelligence (AI), is broadly defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” By intelligent beings it basically means humans ... but maybe not all humans...so anyway,It is usually classified into three subsets as shown below." }, { "code": null, "e": 893, "s": 725, "text": "Artificial Intelligence is a broader term which in cooperates Machine Learning. Machine learning uses statistical methods to allow machines to improve with experience." }, { "code": null, "e": 1074, "s": 893, "text": "Deep Learning, again, is the subset of Machine Learning which uses multi layer neural networks that mimic the human brain and can learn incredibly difficult tasks with enough data." }, { "code": null, "e": 1375, "s": 1074, "text": "We are going to talk about Deep learning methods and its possible role in the field of Mechanical Engineering. Some common examples could be Anomaly Detection(Machine Learning) and Image based Part Classification(Deep Learning). The focus will be on Image based part classifiers and why we need them." }, { "code": null, "e": 1707, "s": 1375, "text": "Firstly, what is an image classifier? The ever famous AI which recognizes cat-dog pictures should come to mind. Here’s a link to the code of such a program. The data-set used contains images of cats and dogs, the algorithm learns from it and then is able to guess with 97% accuracy whether a randomly shown image is a cat or a dog." }, { "code": null, "e": 1845, "s": 1707, "text": "We will attempt a similar code but using Nuts, Bolts, Washers and Locating Pins as our Cats and Dogs..... because mechanical engineering." }, { "code": null, "e": 2356, "s": 1845, "text": "So how does it work? An algorithm is able to classify images(efficiently) by using a Machine Learning algorithm called Convolutional Neural Networks(CNN) a method used in Deep Learning. We will be using a simple version of this model called Sequential to let our model distinguish the images into four classes Nuts, Bolts, Washers and Locating Pins. The model will learn by “observing” a set of training images. After learning we will see how accurately it can predict what an image (which it has not seen) is." }, { "code": null, "e": 2390, "s": 2356, "text": "Go directly to the code in github" }, { "code": null, "e": 2826, "s": 2390, "text": "We downloaded 238 parts each of the 4 classes (Total 238 x 4 = 952) from various part libraries available on the internet. Then we took 8 different isometric images of each part. This was done to augment the data available, as only 238 images for each part would not be enough to train a good neural network. A single class now has 1904 images(8 isometric images of 238 parts) a total of 7616 images. Each image is of 224 x 224 pixels." 
}, { "code": null, "e": 2955, "s": 2826, "text": "We then have our labels with numbers 0,1,2,3 each number corresponds to a particular image and means it belongs to certain class" }, { "code": null, "e": 3049, "s": 2955, "text": "#Integers and their corresponding classes{0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}" }, { "code": null, "e": 3161, "s": 3049, "text": "After training on the above images we will then see how well our model predicts a random image it has not seen." }, { "code": null, "e": 3251, "s": 3161, "text": "The process took place in 7 steps. We will get to the details later. The brief summary is" }, { "code": null, "e": 3964, "s": 3251, "text": "Data Collection : The data for each class was collected from various standard part libraries on the internet.Data Preparation : 8 Isometric view screenshots were taken from each image and reduced to 224 x 224 pixels.Model Selection : A Sequential CNN model was selected as it was simple and good for image classificationTrain the Model: The model was trained on our data of 7616 images with 80/20 train-test splitEvaluate the Model: The results of the model were evaluated. How well it predicted the classes?Hyperparameter Tuning: This process is done to tune the hyperparameters to get better results . We have already tuned our model in this caseMake Predictions: Check how well it predicts the real world data" }, { "code": null, "e": 4074, "s": 3964, "text": "Data Collection : The data for each class was collected from various standard part libraries on the internet." }, { "code": null, "e": 4182, "s": 4074, "text": "Data Preparation : 8 Isometric view screenshots were taken from each image and reduced to 224 x 224 pixels." }, { "code": null, "e": 4287, "s": 4182, "text": "Model Selection : A Sequential CNN model was selected as it was simple and good for image classification" }, { "code": null, "e": 4381, "s": 4287, "text": "Train the Model: The model was trained on our data of 7616 images with 80/20 train-test split" }, { "code": null, "e": 4477, "s": 4381, "text": "Evaluate the Model: The results of the model were evaluated. How well it predicted the classes?" }, { "code": null, "e": 4618, "s": 4477, "text": "Hyperparameter Tuning: This process is done to tune the hyperparameters to get better results . We have already tuned our model in this case" }, { "code": null, "e": 4683, "s": 4618, "text": "Make Predictions: Check how well it predicts the real world data" }, { "code": null, "e": 5000, "s": 4683, "text": "We downloaded the part data of various nuts and bolts from the different part libraries on the internet. These websites have numerous 3D models for standard parts from various makers in different file formats. Since we will be using FreeCAD API to extract the images we downloaded the files in neutral format (STEP)." }, { "code": null, "e": 5112, "s": 5000, "text": "As already mentioned earlier, 238 parts from each of the 4 class was downloaded, that was a total of 952 parts." }, { "code": null, "e": 5356, "s": 5112, "text": "Then we ran a program using FreeCAD API that automatically took 8 isometric screenshots of 224 x 224 pixels of each part. FreeCAD is a free and open-source general-purpose parametric 3D computer-aided design modeler which is written in Python." }, { "code": null, "e": 5613, "s": 5356, "text": "As already mentioned above, each data creates 8 images of 224 x 224 pixels. So we now have a total of 1904 image from each of the 4 classes, thus a total of 7616 images. 
Each image is treated as a separate data even though 8 images come from the same part." }, { "code": null, "e": 5742, "s": 5613, "text": "The images were kept in separated folders according to their class. i.e. we have four folders Nut,Bolt, Washer and Locating Pin." }, { "code": null, "e": 5931, "s": 5742, "text": "Next, each of these images were converted into an array with their pixel values in grayscale. The value of the pixels range from 0 (black), 255 (white). So its actually 255 shades of gray." }, { "code": null, "e": 6108, "s": 5931, "text": "Now each of our image becomes a 224 x 224 array. So our entire dataset is a 3D array of 7616 x 224 x 224 dimensions.7616 (No. of images) x 224 x 224 (pixel value of each image)" }, { "code": null, "e": 6367, "s": 6108, "text": "Similarly we create a the label dataset by giving the value of the following integers for the shown classes to corresponding indexes in the dataset. If our 5th(index) data in the dataset(X) is a locating pin , the 5th data in label set (Y) will have value 0." }, { "code": null, "e": 6486, "s": 6367, "text": "#integers and the corresponding classes as already mentioned above{0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}" }, { "code": null, "e": 6764, "s": 6486, "text": "Since this is an image recognition problem we will be using a Convolutional Neural Network (CNN). CNN is a type of Neural Network that handles image data especially well. A Neural Network is a type of Machine learning algorithm that learns in a similar manner to a human brain." }, { "code": null, "e": 7088, "s": 6764, "text": "The following code is how our CNN looks like. Don’t worry about it if you don’t understand. The idea is the 224 x 224 features from each of our data will go through these network and spit out an answer. The model will adjusts its weights accordingly and after many iterations will be able to predict a random image’s class." }, { "code": null, "e": 8698, "s": 7088, "text": "#Model descriptionModel: \"sequential_1\"_________________________________________________________________Layer (type) Output Shape Param # =================================================================conv2d_1 (Conv2D) (None, 222, 222, 128) 1280 _________________________________________________________________activation_1 (Activation) (None, 222, 222, 128) 0 _________________________________________________________________max_pooling2d_1 (MaxPooling2 (None, 111, 111, 128) 0 _________________________________________________________________conv2d_2 (Conv2D) (None, 109, 109, 128) 147584 _________________________________________________________________activation_2 (Activation) (None, 109, 109, 128) 0 _________________________________________________________________max_pooling2d_2 (MaxPooling2 (None, 54, 54, 128) 0 _________________________________________________________________flatten_1 (Flatten) (None, 373248) 0 _________________________________________________________________dense_1 (Dense) (None, 64) 23887936 _________________________________________________________________dense_2 (Dense) (None, 4) 260 _________________________________________________________________activation_3 (Activation) (None, 4) 0 =================================================================Total params: 24,037,060Trainable params: 24,037,060Non-trainable params: 0" }, { "code": null, "e": 8835, "s": 8698, "text": "Here is a YouTube video of Mark Rober (a NASA mechanical engineer) explaining how neural networks work with very little coding involved." 
}, { "code": null, "e": 9186, "s": 8835, "text": "Now finally the time has come to train the model using our dataset of 7616 images. So our [X] is a 3D array of 7616 x 224 x224 and [y] label set is a 7616 x 1 array. For all training purposes a data must be split into at least two parts: Training and Validation (Test) set (test and validation are used interchangeably when only 2 sets are involved)." }, { "code": null, "e": 9422, "s": 9186, "text": "The training set is the data the model sees and trains on. It is the data from which it adjusts its weights and learn. The accuracy of our model on this set is the training accuracy. It is generally higher than the validation accuracy." }, { "code": null, "e": 9802, "s": 9422, "text": "The validation data usually comes from the same distribution as the training set and is the data the model has not seen. After the model has trained from the training set, it will try to predict the data of the validation set. How accurately it predicts this, is our validation accuracy. This is more important than the training accuracy. It shows how well the model generalizes." }, { "code": null, "e": 9903, "s": 9802, "text": "In real life application it is common to split it even into three parts. Train, Validation and Test." }, { "code": null, "e": 10146, "s": 9903, "text": "For our case we will only split it into a training and test set. It will be a 80–20 split. 80 % of the images will be used for training and 20% will be used for testing. That is train on 6092 samples, test on 1524 samples from the total 7616." }, { "code": null, "e": 10210, "s": 10146, "text": "For our model we trained for 15 epochs with a batch-size of 64." }, { "code": null, "e": 10356, "s": 10210, "text": "The number of epochs is a hyperparameter that defines the number times that the learning algorithm will work through the entire training dataset." }, { "code": null, "e": 10523, "s": 10356, "text": "One epoch means that each sample in the training dataset has had an opportunity to update the internal model parameters. An epoch is comprised of one or more batches." }, { "code": null, "e": 10789, "s": 10523, "text": "You can think of a for-loop over the number of epochs where each loop proceeds over the training dataset. Within this for-loop is another nested for-loop that iterates over each batch of samples, where one batch has the specified “batch size” number of samples. [2]" }, { "code": null, "e": 11039, "s": 10789, "text": "That is our model will go through our entire 7616 samples 15 times (epoch) in total and adjust its weights each time so the prediction is more accurate each time. In each epoch, it will go through the 7616 samples, 64 samples (batch size) at a time." }, { "code": null, "e": 11436, "s": 11039, "text": "The model keeps updating its weight so as to minimize the cost(loss), thus giving us the best accuracy. Cost is a measure of inaccuracy of the model in predicting the class of the image. Cost functions are used to estimate how badly models are performing. Put simply, a cost function is a measure of how wrong the model is in terms of its ability to estimate the relationship between X and y. [1]" }, { "code": null, "e": 11537, "s": 11436, "text": "If the algorithm predicts incorrectly the cost increases, if it predicts correct the cost decreases." }, { "code": null, "e": 11676, "s": 11537, "text": "After training for 15 epochs we can see the following graph of loss and accuracy. 
(Cost and loss can be used interchangeably for our case)" }, { "code": null, "e": 11866, "s": 11676, "text": "The loss decreased as the model trained more times. It becomes better at classifying the images with each epoch. The model is not able to improve the performance much on the validation set." }, { "code": null, "e": 12122, "s": 11866, "text": "The accuracy increased as the model trains for each epoch. It becomes better at classifying the images. The accuracy is for the validation set is lower than the training set as it has not trained on it directly. The final value is 97.64% which is not bad." }, { "code": null, "e": 12453, "s": 12122, "text": "The next step would to be change the hyperparameters, the learning rate,number of epochs, data size etc. to improve our model. In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. By contrast, the values of other parameters (typically node weights) are derived via training.[3]" }, { "code": null, "e": 12691, "s": 12453, "text": "For our purpose we have already modified these parameters before this article was written, in a way to obtain an optimum performance for display on this article. We increased the dataset size and number of epochs to improve the accuracy." }, { "code": null, "e": 12909, "s": 12691, "text": "The final step after making the adjustments on the model is to make predictions using actual data that will be used on this model. If the model does not perform well on this further hyperparameter tuning can commence." }, { "code": null, "e": 13178, "s": 12909, "text": "Machine Learning is a rather iterative and empirical process and thus the tuning of hyperparameters is often compared to an art rather than science as although we have an idea of what changes will happen by changing certain hyperparameters, we cannot be certain of it." }, { "code": null, "e": 13664, "s": 13178, "text": "This ability to classify mechanical parts could enable us to recommend parts from a standard library based only on an image or a CAD model provided by the customer. Currently to search for a required part from a standard library you have to go through a catalogue and be able to tell which part you want based on the available options and your knowledge of the catalogue. There are serial codes to remember as a change in a single digit or alphabet might mean a different type of part." }, { "code": null, "e": 13927, "s": 13664, "text": "If an image can be used to get the required part from the standard library, all we will need to do is to make a rough CAD model of it and send it through our algorithm. The algorithm will decide which parts are best and help narrow down our search significantly." }, { "code": null, "e": 14191, "s": 13927, "text": "If the classification method gets detailed and fine-tuned enough it should be able to classify with much detail what type of part you want. The narrowed search saves a lot of time. This is especially useful in a library where there are thousands of similar parts." }, { "code": null, "e": 14505, "s": 14191, "text": "Deep Learning (Artificial Intelligence) is a field of study that has immense possibilities as it enables us to extract a lot of knowledge from raw data. At its core it is merely data analysis. In this age of the internet, data is everywhere and if we are able to extract it efficiently, a lot can be accomplished." 
}, { "code": null, "e": 14974, "s": 14505, "text": "This field has a lot of possible applications in the domain of Mechanical Engineering as well. Since almost all studies in Deep Learning need a domain expert it would be advisable for all engineers with interest in Data Analytics, even though they haven’t majored in Computer Sciences, to learn about data science, machine learning and examine its possibilities. The knowledge of the domain plus the skills of data analysis will really help us excel in our own fields." }, { "code": null, "e": 15517, "s": 14974, "text": "I am thankful to Pro-Mech Minds for letting me do this and specially to its data science team which includes Gopal Kisi, Bishesh Shakya and Series Chikanbanjar for their immense help with this project. Pro-Mech Minds & Engineering Services is one of the companies in Nepal working with both mechanical and IT solutions in engineering. This idea popped as an attempt to combine design engineering with Data Science. Last but not the least I would like to offer my special thanks to Saugat K.C. for acting as a mentor for our Data Science team." }, { "code": null, "e": 15544, "s": 15517, "text": "Link to the code in github" }, { "code": null, "e": 15555, "s": 15544, "text": "References" }, { "code": null, "e": 15673, "s": 15555, "text": "[1]Conor McDonald, Machine learning fundamentals (I): Cost functions and gradient descent(2017), Towards data science" }, { "code": null, "e": 15786, "s": 15673, "text": "[2]Jason Brownlee, Difference Between a Batch and an Epoch in a Neural Network(2018), machinelearningmastery.com" }, { "code": null, "e": 15835, "s": 15786, "text": "[3] Hyperparameter_(machine_learning), Wikipedia" } ]
Angular 4 - Data Binding
Data Binding has been available since AngularJS and Angular 2, and it is available in Angular 4 as well. We use curly braces for data binding - {{}}; this process is called interpolation. We have already seen in our previous examples how we declared a value for the variable title and how the same is printed in the browser.

The variable in the app.component.html file is referred to as {{title}}; the value of title is initialized in the app.component.ts file, and the value is displayed in app.component.html.

Let us now create a dropdown of months in the browser. To do that, we have created an array of months in app.component.ts as follows −

import { Component } from '@angular/core';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})
export class AppComponent {
   title = 'Angular 4 Project!';
   // declared array of months.
   months = ["January", "February", "March", "April", "May",
      "June", "July", "August", "September",
      "October", "November", "December"];
}

The months array shown above is to be displayed in a dropdown in the browser. For this, we will use the following lines of code −

<!--The content below is only a placeholder and can be replaced. -->
<div style="text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>

<div> Months :
   <select>
      <option *ngFor="let i of months">{{i}}</option>
   </select>
</div>

We have created a normal select tag with an option. In the option, we have used a for loop. The for loop iterates over the months array, which in turn creates one option tag for each value present in the array.

The syntax of for in Angular is *ngFor = "let i of months", and to get the value of months we display it in {{i}}.

The two curly brackets help with data binding. You declare the variables in your app.component.ts file and they will be substituted using the curly brackets.

Let us see the output of the above months array in the browser.

The variable that is set in app.component.ts can be bound in app.component.html using curly brackets; for example, {{}}.

Let us now display data in the browser based on a condition. Here, we have added a variable and assigned it the value true. Using the if statement, we can hide/show the content to be displayed.

import { Component } from '@angular/core';

@Component({
   selector: 'app-root',
   templateUrl: './app.component.html',
   styleUrls: ['./app.component.css']
})

export class AppComponent {
   title = 'Angular 4 Project!';
   //array of months.
   months = ["January", "February", "March", "April",
      "May", "June", "July", "August", "September",
      "October", "November", "December"];
   isavailable = true; //variable is set to true
}

<!--The content below is only a placeholder and can be replaced.-->
<div style = "text-align:center">
   <h1>
      Welcome to {{title}}.
   </h1>
</div>

<div> Months :
   <select>
      <option *ngFor = "let i of months">{{i}}</option>
   </select>
</div>
<br/>

<div>
   <span *ngIf = "isavailable">Condition is valid.</span>
   <!-- Based on the if condition, the text "Condition is valid." is displayed.
   If the value of isavailable is set to false, the text is not displayed. -->
</div>

Let us try the above example using the IF THEN ELSE condition.
months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]; isavailable = false; } In this case, we have made the isavailable variable as false. To print the else condition, we will have to create the ng-template as follows − <ng-template #condition1>Condition is invalid</ng-template> The full code looks like this − <!--The content below is only a placeholder and can be replaced.--> <div style="text-align:center"> <h1> Welcome to {{title}}. </h1> </div> <div> Months : <select> <option *ngFor="let i of months">{{i}}</option> </select> </div> <br/> <div> <span *ngIf="isavailable; else condition1">Condition is valid.</span> <ng-template #condition1>Condition is invalid</ng-template> </div> If is used with the else condition and the variable used is condition1. The same is assigned as an id to the ng-template, and when the available variable is set to false the text Condition is invalid is displayed. The following screenshot shows the display in the browser. Let us now use the if then else condition. import { Component } from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { title = 'Angular 4 Project!'; //array of months. months = ["January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December"]; isavailable = true; } Now, we will make the variable isavailable as true. In the html, the condition is written in the following way − <!--The content below is only a placeholder and can be replaced.--> <div style="text-align:center"> <h1> Welcome to {{title}}. </h1> </div> <div> Months : <select> <option *ngFor="let i of months">{{i}}</option> </select> </div> <br/> <div> <span *ngIf="isavailable; then condition1 else condition2">Condition is valid.</span> <ng-template #condition1>Condition is valid</ng-template> <ng-template #condition2>Condition is invalid</ng-template> </div> If the variable is true, then condition1, else condition2. Now, two templates are created with id #condition1 and #condition2. The display in the browser is as follows − 16 Lectures 1.5 hours Anadi Sharma 28 Lectures 2.5 hours Anadi Sharma 11 Lectures 7.5 hours SHIVPRASAD KOIRALA 16 Lectures 2.5 hours Frahaan Hussain 69 Lectures 5 hours Senol Atac 53 Lectures 3.5 hours Senol Atac Print Add Notes Bookmark this page
[ { "code": null, "e": 2310, "s": 1992, "text": "Data Binding is available right from AngularJS, Angular 2 and is now available in Angular 4 as well. We use curly braces for data binding - {{}}; this process is called interpolation. We have already seen in our previous examples how we declared the value to the variable title and the same is printed in the browser." }, { "code": null, "e": 2497, "s": 2310, "text": "The variable in the app.component.html file is referred as {{title}} and the value of title is initialized in the app.component.ts file and in app.component.html, the value is displayed." }, { "code": null, "e": 2633, "s": 2497, "text": "Let us now create a dropdown of months in the browser. To do that , we have created an array of months in app.component.ts as follows −" }, { "code": null, "e": 3052, "s": 2633, "text": "import { Component } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\nexport class AppComponent {\n title = 'Angular 4 Project!';\n // declared array of months.\n months = [\"January\", \"Feburary\", \"March\", \"April\", \"May\", \n \"June\", \"July\", \"August\", \"September\",\n \"October\", \"November\", \"December\"];\n}" }, { "code": null, "e": 3190, "s": 3052, "text": "The month’s array that is shown above is to be displayed in a dropdown in the browser. For this, we will use the following line of code −" }, { "code": null, "e": 3445, "s": 3190, "text": "<!--The content below is only a placeholder and can be replaced. -->\n<div style=\"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n\n<div> Months :\n <select>\n <option *ngFor=\"let i of months\">{{i}}</option>\n </select>\n</div>" }, { "code": null, "e": 3669, "s": 3445, "text": "We have created the normal select tag with option. In option, we have used the for loop. The for loop is used to iterate over the months’ array, which in turn will create the option tag with the value present in the months." }, { "code": null, "e": 3787, "s": 3669, "text": "The syntax for in Angular is *ngFor = “let I of months” and to get the value of months we are displaying it in {{i}}." }, { "code": null, "e": 3946, "s": 3787, "text": "The two curly brackets help with data binding. You declare the variables in your app.component.ts file and the same will be replaced using the curly brackets." }, { "code": null, "e": 4010, "s": 3946, "text": "Let us see the output of the above month’s array in the browser" }, { "code": null, "e": 4145, "s": 4010, "text": "The variable that is set in the app.component.ts can be bound with the app.component.html using the curly brackets; for example, {{}}." }, { "code": null, "e": 4341, "s": 4145, "text": "Let us now display the data in the browser based on condition. Here, we have added a variable and assigned the value as true. Using the if statement, we can hide/show the content to be displayed." 
}, { "code": null, "e": 4801, "s": 4341, "text": "import { Component } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\n\nexport class AppComponent {\n title = 'Angular 4 Project!';\n //array of months.\n months = [\"January\", \"February\", \"March\", \"April\",\n \"May\", \"June\", \"July\", \"August\", \"September\",\n \"October\", \"November\", \"December\"];\n isavailable = true; //variable is set to true\n}" }, { "code": null, "e": 5295, "s": 4801, "text": "<!--The content below is only a placeholder and can be replaced.-->\n<div style = \"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n\n<div> Months :\n <select>\n <option *ngFor = \"let i of months\">{{i}}</option>\n </select>\n</div>\n<br/>\n\n<div>\n <span *ngIf = \"isavailable\">Condition is valid.</span> \n //over here based on if condition the text condition is valid is displayed. \n If the value of isavailable is set to false it will not display the text.\n</div>" }, { "code": null, "e": 5358, "s": 5295, "text": "Let us try the above example using the IF THEN ELSE condition." }, { "code": null, "e": 5791, "s": 5358, "text": "import { Component } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\n\nexport class AppComponent {\n title = 'Angular 4 Project!';\n //array of months.\n months = [\"January\", \"February\", \"March\", \"April\",\n \"May\", \"June\", \"July\", \"August\", \"September\",\n \"October\", \"November\", \"December\"];\n isavailable = false;\n}" }, { "code": null, "e": 5934, "s": 5791, "text": "In this case, we have made the isavailable variable as false. To print the else condition, we will have to create the ng-template as follows −" }, { "code": null, "e": 5995, "s": 5934, "text": "<ng-template #condition1>Condition is invalid</ng-template>\n" }, { "code": null, "e": 6027, "s": 5995, "text": "The full code looks like this −" }, { "code": null, "e": 6437, "s": 6027, "text": "<!--The content below is only a placeholder and can be replaced.-->\n<div style=\"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n\n<div> Months :\n <select>\n <option *ngFor=\"let i of months\">{{i}}</option>\n </select>\n</div>\n<br/>\n\n<div>\n <span *ngIf=\"isavailable; else condition1\">Condition is valid.</span>\n <ng-template #condition1>Condition is invalid</ng-template>\n</div>" }, { "code": null, "e": 6651, "s": 6437, "text": "If is used with the else condition and the variable used is condition1. The same is assigned as an id to the ng-template, and when the available variable is set to false the text Condition is invalid is displayed." }, { "code": null, "e": 6710, "s": 6651, "text": "The following screenshot shows the display in the browser." }, { "code": null, "e": 6753, "s": 6710, "text": "Let us now use the if then else condition." 
}, { "code": null, "e": 7185, "s": 6753, "text": "import { Component } from '@angular/core';\n\n@Component({\n selector: 'app-root',\n templateUrl: './app.component.html',\n styleUrls: ['./app.component.css']\n})\n\nexport class AppComponent {\n title = 'Angular 4 Project!';\n //array of months.\n months = [\"January\", \"February\", \"March\", \"April\",\n \"May\", \"June\", \"July\", \"August\", \"September\",\n \"October\", \"November\", \"December\"];\n isavailable = true;\n}" }, { "code": null, "e": 7298, "s": 7185, "text": "Now, we will make the variable isavailable as true. In the html, the condition is written in the following way −" }, { "code": null, "e": 7782, "s": 7298, "text": "<!--The content below is only a placeholder and can be replaced.-->\n<div style=\"text-align:center\">\n <h1>\n Welcome to {{title}}.\n </h1>\n</div>\n\n<div> Months :\n <select>\n <option *ngFor=\"let i of months\">{{i}}</option>\n </select>\n</div>\n<br/>\n\n<div>\n <span *ngIf=\"isavailable; then condition1 else condition2\">Condition is valid.</span>\n <ng-template #condition1>Condition is valid</ng-template>\n <ng-template #condition2>Condition is invalid</ng-template>\n</div>" }, { "code": null, "e": 7909, "s": 7782, "text": "If the variable is true, then condition1, else condition2. Now, two templates are created with id #condition1 and #condition2." }, { "code": null, "e": 7952, "s": 7909, "text": "The display in the browser is as follows −" }, { "code": null, "e": 7987, "s": 7952, "text": "\n 16 Lectures \n 1.5 hours \n" }, { "code": null, "e": 8001, "s": 7987, "text": " Anadi Sharma" }, { "code": null, "e": 8036, "s": 8001, "text": "\n 28 Lectures \n 2.5 hours \n" }, { "code": null, "e": 8050, "s": 8036, "text": " Anadi Sharma" }, { "code": null, "e": 8085, "s": 8050, "text": "\n 11 Lectures \n 7.5 hours \n" }, { "code": null, "e": 8105, "s": 8085, "text": " SHIVPRASAD KOIRALA" }, { "code": null, "e": 8140, "s": 8105, "text": "\n 16 Lectures \n 2.5 hours \n" }, { "code": null, "e": 8157, "s": 8140, "text": " Frahaan Hussain" }, { "code": null, "e": 8190, "s": 8157, "text": "\n 69 Lectures \n 5 hours \n" }, { "code": null, "e": 8202, "s": 8190, "text": " Senol Atac" }, { "code": null, "e": 8237, "s": 8202, "text": "\n 53 Lectures \n 3.5 hours \n" }, { "code": null, "e": 8249, "s": 8237, "text": " Senol Atac" }, { "code": null, "e": 8256, "s": 8249, "text": " Print" }, { "code": null, "e": 8267, "s": 8256, "text": " Add Notes" } ]
How to remove an element from ArrayList or LinkedList in Java?
The ArrayList and LinkedList classes implement the List interface of the java.util package. This interface provides two variants of the remove() method to remove particular elements, as shown below −

E remove(int index)

boolean remove(Object o)

Using one of these methods you can delete a desired element from an ArrayList or LinkedList in Java.

E remove(int index) − This method accepts an integer representing a particular position in the List object and removes the element at the given position. If the remove operation is successful, this method returns the element that has been removed.

If the index value passed to this method is less than 0 or greater than or equal to the size of the list, an IndexOutOfBoundsException exception is raised.

import java.util.ArrayList;
import java.util.LinkedList;
public class RemoveExample {
   public static void main(String[] args) {
      //Instantiating an ArrayList object
      ArrayList<String> arrayList = new ArrayList<String>();
      arrayList.add("JavaFX");
      arrayList.add("Java");
      arrayList.add("WebGL");
      arrayList.add("OpenCV");
      System.out.println("Contents of the Array List: "+arrayList);
      //Removing elements at the given positions
      System.out.println("Elements removed: ");
      System.out.println(arrayList.remove(0));
      System.out.println(arrayList.remove(2));
      System.out.println(" ");
      //Instantiating a LinkedList object
      LinkedList<String> linkedList = new LinkedList<String>();
      linkedList.add("Krishna");
      linkedList.add("Satish");
      linkedList.add("Mohan");
      linkedList.add("Radha");
      System.out.println("Contents of the linked List: "+linkedList);
      //Removing elements at the given positions
      System.out.println("Elements removed: ");
      System.out.println(linkedList.remove(0));
      System.out.println(linkedList.remove(2));
   }
}

Contents of the Array List: [JavaFX, Java, WebGL, OpenCV]
Elements removed:
JavaFX
OpenCV

Contents of the linked List: [Krishna, Satish, Mohan, Radha]
Elements removed:
Krishna
Radha

boolean remove(Object o) − This method accepts an object representing an element in the List and removes the first occurrence of the given element. This method returns a boolean value which is −

true, if the operation is successful.

false, if the operation is unsuccessful.
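Note that these two overloads are easy to confuse when the list holds boxed integers: an int argument always selects the index-based remove(int), while removing an Integer value requires boxing the argument explicitly. The short sketch below (with arbitrary sample values, purely for illustration) shows the difference −

import java.util.ArrayList;
import java.util.List;
public class RemoveOverloadExample {
   public static void main(String[] args) {
      List<Integer> numbers = new ArrayList<Integer>();
      numbers.add(10);
      numbers.add(20);
      numbers.add(30);
      //remove(int) treats the argument as an index:
      //this removes the element at position 1, i.e. the value 20
      numbers.remove(1);
      //remove(Object) removes the first occurrence of the value:
      //boxing the argument selects this overload and removes the value 30
      numbers.remove(Integer.valueOf(30));
      System.out.println(numbers); //prints [10]
   }
}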
The following program demonstrates remove(Object) −

import java.util.ArrayList;
import java.util.LinkedList;
public class RemoveExample {
   public static void main(String[] args) {
      //Instantiating an ArrayList object
      ArrayList<String> arrayList = new ArrayList<String>();
      arrayList.add("JavaFX");
      arrayList.add("Java");
      arrayList.add("WebGL");
      arrayList.add("OpenCV");
      System.out.println("Contents of the Array List: "+arrayList);
      //Removing elements by value
      System.out.println("Elements removed: ");
      System.out.println(arrayList.remove("JavaFX"));
      System.out.println(arrayList.remove("WebGL"));
      System.out.println("Contents of the array List after removing elements: "+arrayList);
      System.out.println(" ");
      //Instantiating a LinkedList object
      LinkedList<String> linkedList = new LinkedList<String>();
      linkedList.add("Krishna");
      linkedList.add("Satish");
      linkedList.add("Mohan");
      linkedList.add("Radha");
      System.out.println("Contents of the linked List: "+linkedList);
      //Removing elements by value
      System.out.println("Elements removed: ");
      System.out.println(linkedList.remove("Satish"));
      System.out.println(linkedList.remove("Mohan"));
      System.out.println("Contents of the linked List after removing elements: "+linkedList);
   }
}

Contents of the Array List: [JavaFX, Java, WebGL, OpenCV]
Elements removed:
true
true
Contents of the array List after removing elements: [Java, OpenCV]

Contents of the linked List: [Krishna, Satish, Mohan, Radha]
Elements removed:
true
true
Contents of the linked List after removing elements: [Krishna, Radha]

In addition to these two methods, you can also remove elements of LinkedList or ArrayList objects using the remove() method of the Iterator class −

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
public class RemoveExample {
   public static void main(String[] args) {
      //Instantiating an ArrayList object
      ArrayList<String> arrayList = new ArrayList<String>();
      arrayList.add("JavaFX");
      arrayList.add("Java");
      arrayList.add("WebGL");
      arrayList.add("OpenCV");
      System.out.println("Contents of the Array List: "+arrayList);
      //Retrieving the Iterator object and removing the first element
      Iterator<String> it1 = arrayList.iterator();
      it1.next();
      it1.remove();
      System.out.println("Contents of the array List after removing elements: ");
      while(it1.hasNext()) {
         System.out.println(it1.next());
      }
      //Instantiating a LinkedList object
      LinkedList<String> linkedList = new LinkedList<String>();
      linkedList.add("Krishna");
      linkedList.add("Satish");
      linkedList.add("Mohan");
      linkedList.add("Radha");
      System.out.println("Contents of the linked List: "+linkedList);
      //Retrieving the Iterator object and removing the first element
      Iterator<String> it2 = linkedList.iterator();
      it2.next();
      it2.remove();
      System.out.println("Contents of the linked List after removing elements: ");
      while(it2.hasNext()) {
         System.out.println(it2.next());
      }
   }
}

Contents of the Array List: [JavaFX, Java, WebGL, OpenCV]
Contents of the array List after removing elements:
Java
WebGL
OpenCV
Contents of the linked List: [Krishna, Satish, Mohan, Radha]
Contents of the linked List after removing elements:
Satish
Mohan
Radha
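Since Java 8, both classes also inherit the removeIf() method from the Collection interface, which deletes every element matching a predicate in a single call. A minimal sketch (the class name and sample values here are only for illustration) −

import java.util.ArrayList;
import java.util.List;
public class RemoveIfExample {
   public static void main(String[] args) {
      List<String> list = new ArrayList<String>();
      list.add("JavaFX");
      list.add("Java");
      list.add("WebGL");
      list.add("OpenCV");
      //Removing every element that starts with "Java" in one call
      list.removeIf(element -> element.startsWith("Java"));
      System.out.println(list); //prints [WebGL, OpenCV]
   }
}

This is generally safer than calling remove() inside a hand-written loop over the same list, because the collection performs the traversal itself.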
[ { "code": null, "e": 1263, "s": 1062, "text": "The ArrayList and, LinkedList classes implements the List interface of the java.util package. This interface provided two variants of the remove() method to remove particular elements as shown below −" }, { "code": null, "e": 1283, "s": 1263, "text": "E remove(int index)" }, { "code": null, "e": 1303, "s": 1283, "text": "E remove(int index)" }, { "code": null, "e": 1330, "s": 1303, "text": "boolean remove(Object o) −" }, { "code": null, "e": 1357, "s": 1330, "text": "boolean remove(Object o) −" }, { "code": null, "e": 1455, "s": 1357, "text": "Using one of these methods you can delete a desired element from the List or, linkedList in Java." }, { "code": null, "e": 1704, "s": 1455, "text": "E remove(int index) − This method accepts an integer representing a particular position in the List object and, removes the element at the given position. If the remove operation is successful, this method returns the element that has been removed." }, { "code": null, "e": 1829, "s": 1704, "text": "If the index value passed to this method is less than 0 or greater than 1, an IndexOutOfBoundsException exception is raised." }, { "code": null, "e": 1840, "s": 1829, "text": " Live Demo" }, { "code": null, "e": 2910, "s": 1840, "text": "import java.util.ArrayList;\npublic class RemoveExample {\n public static void main(String[] args) {\n //Instantiating an ArrayList object\n ArrayList<String> arrayList = new ArrayList<String>();\n arrayList.add(\"JavaFX\");\n arrayList.add(\"Java\");\n arrayList.add(\"WebGL\");\n arrayList.add(\"OpenCV\");\n System.out.println(\"Contents of the Array List: \"+arrayList);\n //Removing elements\n System.out.println(\"Elements removed: \");\n System.out.println(arrayList.remove(0));\n System.out.println(arrayList.remove(2));\n System.out.println(\" \");\n //Instantiating an LinkedList object\n ArrayList<String> linkedList = new ArrayList<String>();\n linkedList.add(\"Krishna\");\n linkedList.add(\"Satish\");\n linkedList.add(\"Mohan\");\n linkedList.add(\"Radha\");\n System.out.println(\"Contents of the linked List: \"+arrayList);\n //Removing elements\n System.out.println(\"Elements removed: \");\n System.out.println(linkedList.remove(0));\n System.out.println(linkedList.remove(2));\n }\n}" }, { "code": null, "e": 3076, "s": 2910, "text": "Contents of the Array List: [JavaFx, Java, WebGL, OpenCV]\nElements removed:\nJavaFX\nOpenCV\n\nContents of the linked List: [Java, WebGL]\nElements removed:\nKrishna\nRadha" }, { "code": null, "e": 3272, "s": 3076, "text": "boolean remove(Object o) − This method accepts an object representing an element in the List and, removes the first occurrence of the given element. This method returns a boolean value which is −" }, { "code": null, "e": 3310, "s": 3272, "text": "true, if the operation is successful." }, { "code": null, "e": 3348, "s": 3310, "text": "true, if the operation is successful." }, { "code": null, "e": 3389, "s": 3348, "text": "false, if the operation is unsuccessful." }, { "code": null, "e": 3430, "s": 3389, "text": "false, if the operation is unsuccessful." 
}, { "code": null, "e": 3441, "s": 3430, "text": " Live Demo" }, { "code": null, "e": 4724, "s": 3441, "text": "import java.util.ArrayList;\npublic class RemoveExample {\n public static void main(String[] args) {\n //Instantiating an ArrayList object\n ArrayList<String> arrayList = new ArrayList<String>();\n arrayList.add(\"JavaFX\");\n arrayList.add(\"Java\");\n arrayList.add(\"WebGL\");\n arrayList.add(\"OpenCV\");\n System.out.println(\"Contents of the Array List: \"+arrayList);\n //Removing elements\n System.out.println(\"Elements removed: \");\n System.out.println(arrayList.remove(\"JavaFX\"));\n System.out.println(arrayList.remove(\"WebGL\"));\n System.out.println(\"Contents of the array List after removing elements: \"+arrayList);\n System.out.println(\" \");\n //Instantiating an LinkedList object\n ArrayList<String> linkedList = new ArrayList<String>();\n linkedList.add(\"Krishna\");\n linkedList.add(\"Satish\");\n linkedList.add(\"Mohan\");\n linkedList.add(\"Radha\");\n System.out.println(\"Contents of the linked List: \"+linkedList);\n //Removing elements\n System.out.println(\"Elements removed: \");\n System.out.println(linkedList.remove(\"Satish\"));\n System.out.println(linkedList.remove(\"Mohan\"));\n System.out.println(\"Contents of the linked List after removing elements: \"+linkedList);\n }\n}" }, { "code": null, "e": 5037, "s": 4724, "text": "Contents of the Array List: [JavaFX, Java, WebGL, OpenCV]\nElements removed:\ntrue\ntrue\nContents of the array List after removing elements: [Java, OpenCV]\n\nContents of the linked List: [Krishna, Satish, Mohan, Radha]\nElements removed:\ntrue\ntrue\nContents of the linked List after removing elements: [Krishna, Radha]" }, { "code": null, "e": 5179, "s": 5037, "text": "In addition to these two method you can also remove elements of the LinkedList or ArrayList objects using the remove() of the Iterator class." }, { "code": null, "e": 5190, "s": 5179, "text": " Live Demo" }, { "code": null, "e": 6497, "s": 5190, "text": "import java.util.ArrayList;\nimport java.util.Iterator;\npublic class RemoveExample {\n public static void main(String[] args) {\n //Instantiating an ArrayList object\n ArrayList<String> arrayList = new ArrayList<String>();\n arrayList.add(\"JavaFX\");\n arrayList.add(\"Java\");\n arrayList.add(\"WebGL\");\n arrayList.add(\"OpenCV\");\n System.out.println(\"Contents of the Array List: \"+arrayList);\n //Retrieving the Iterator object\n Iterator<String> it1 = arrayList.iterator();\n it1.next();\n it1.remove();\n System.out.println(\"Contents of the array List after removing elements: \");\n while(it1.hasNext()) {\n System.out.println(it1.next());\n }\n //Instantiating an LinkedList object\n ArrayList<String> linkedList = new ArrayList<String>();\n linkedList.add(\"Krishna\");\n linkedList.add(\"Satish\");\n linkedList.add(\"Mohan\");\n linkedList.add(\"Radha\");\n System.out.println(\"Contents of the linked List: \"+linkedList);\n //Retrieving the Iterator object\n Iterator<String> it2 = linkedList.iterator();\n it2.next();\n it2.remove();\n System.out.println(\"Contents of the linked List after removing elements: \");\n while(it2.hasNext()) {\n System.out.println(it2.next());\n }\n }\n}" }, { "code": null, "e": 6758, "s": 6497, "text": "Contents of the Array List: [JavaFX, Java, WebGL, OpenCV]\nContents of the array List after removing elements:\nJava\nWebGL\nOpenCV\nContents of the linked List: [Krishna, Satish, Mohan, Radha]\nContents of the linked List after removing elements:\nSatish\nMohan\nRadha" } ]
What is a static storage class in C language?
There are four storage classes in the C programming language, which are as follows −

auto

extern

static

register

The keyword is static.

The scope of a static variable is local to the block in which it is defined, but its lifetime extends throughout the program; it retains its value in between function calls.

Static variables are initialised only once.

The default value is zero.

Following is a C program for the static storage class −

#include <stdio.h>

void inc();

int main(){
   inc();
   inc();
   inc();
   return 0;
}
void inc(){
   static int i = 1; //initialised once, retains its value across calls
   printf("%d ", i);
   i++;
}

The output is stated below −

1 2 3

Following is another C program, this time using an auto variable, which is re-initialised on every call −

#include <stdio.h>

void inc();

int main(){
   inc();
   inc();
   inc();
   return 0;
}
void inc(){
   auto int i = 1; //re-initialised on every call
   printf("%d ", i);
   i++;
}

The output is stated below −

1 1 1

Following is a third example of a C program for the static storage class −

#include <stdio.h>
//function declaration
void function();

int main(){
   function();
   function();
   return 0;
}
//function definition
void function(){
   static int value = 1; //static variable declaration
   printf("\nvalue = %d ", value);
   value++;
}

The output is stated below −

value = 1
value = 2
[ { "code": null, "e": 1143, "s": 1062, "text": "There are four storage classes in C programming language, which are as follows −" }, { "code": null, "e": 1148, "s": 1143, "text": "auto" }, { "code": null, "e": 1155, "s": 1148, "text": "extern" }, { "code": null, "e": 1162, "s": 1155, "text": "static" }, { "code": null, "e": 1171, "s": 1162, "text": "register" }, { "code": null, "e": 1194, "s": 1171, "text": "The keyword is static." }, { "code": null, "e": 1304, "s": 1194, "text": "Scope of a static variable is that it retains its value throughout the program and in between function calls." }, { "code": null, "e": 1414, "s": 1304, "text": "Scope of a static variable is that it retains its value throughout the program and in between function calls." }, { "code": null, "e": 1458, "s": 1414, "text": "Static variables are initialised only once." }, { "code": null, "e": 1502, "s": 1458, "text": "Static variables are initialised only once." }, { "code": null, "e": 1525, "s": 1502, "text": "Default value is zero." }, { "code": null, "e": 1579, "s": 1525, "text": "Following is the C program for static storage class −" }, { "code": null, "e": 1590, "s": 1579, "text": " Live Demo" }, { "code": null, "e": 1716, "s": 1590, "text": "#include<stdio.h>\nmain ( ){\n inc ( );\n inc ( );\n inc ( );\n}\ninc ( ){\n static int i =1;\n printf (\"%d\", i);\n i++;\n}" }, { "code": null, "e": 1745, "s": 1716, "text": "The output is stated below −" }, { "code": null, "e": 1751, "s": 1745, "text": "1 2 3" }, { "code": null, "e": 1809, "s": 1751, "text": "Following is another C program for static storage class −" }, { "code": null, "e": 1820, "s": 1809, "text": " Live Demo" }, { "code": null, "e": 1943, "s": 1820, "text": "#include<stdio.h>\nmain ( ){\n inc ( );\n inc ( );\n inc ( );\n}\ninc ( ){\n auto int i=1;\n printf (\"%d\", i);\n i++;\n}" }, { "code": null, "e": 1972, "s": 1943, "text": "The output is stated below −" }, { "code": null, "e": 1978, "s": 1972, "text": "1 1 1" }, { "code": null, "e": 2053, "s": 1978, "text": "Following is the third example of the C program for static storage class −" }, { "code": null, "e": 2064, "s": 2053, "text": " Live Demo" }, { "code": null, "e": 2322, "s": 2064, "text": "#include <stdio.h>\n//function declaration\nvoid function();\nint main(){\n function();\n function();\n return 0;\n}\n//function definition\nvoid function(){\n static int value= 1; //static variable declaration\n printf(\"\\nvalue = %d \", value);\n value++;\n}" }, { "code": null, "e": 2351, "s": 2322, "text": "The output is stated below −" }, { "code": null, "e": 2370, "s": 2351, "text": "value = 1\nvalue =2" } ]
How to move (translate) a JavaFX node from one position to another?
If you move an object on the XY plane from one position to another, it is known as translation. You can translate an object along either the X-axis or the Y-axis.

In JavaFX, using an object of the javafx.scene.transform.Translate class you can translate a node from one position to another. This class contains three properties (double) representing the distance of the desired position from the original position along the X, Y and Z axes respectively.

Every node in JavaFX contains an observable list that holds all the transforms to be applied to the node. You can get this list using the getTransforms() method.

To move a Node from one position to another −

Instantiate the Translate class.

Set the distance by which you want to move the node in the XYZ plane using the setX(), setY() and setZ() methods respectively.

Get the list of transforms from the node (which you want to move) using the getTransforms() method.

Add the above-created translate object to it.

Add the node to the scene.

The following JavaFX example demonstrates the translation transform. It contains a geometric shape (a sphere) and two sliders representing the x and y translate values. If you move the sliders, the object moves along the chosen axis.

import javafx.application.Application;
import javafx.beans.value.ChangeListener;
import javafx.beans.value.ObservableValue;
import javafx.geometry.Orientation;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.control.Slider;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.VBox;
import javafx.scene.paint.Color;
import javafx.scene.paint.PhongMaterial;
import javafx.scene.shape.CullFace;
import javafx.scene.shape.DrawMode;
import javafx.scene.shape.Sphere;
import javafx.scene.transform.Translate;
import javafx.stage.Stage;
public class TranslateExample extends Application {
   public void start(Stage stage) {
      //Drawing a Sphere
      Sphere sphere = new Sphere();
      sphere.setRadius(50.0);
      sphere.setCullFace(CullFace.BACK);
      sphere.setDrawMode(DrawMode.FILL);
      PhongMaterial material = new PhongMaterial();
      material.setDiffuseColor(Color.BROWN);
      sphere.setMaterial(material);
      //Setting the slider for the horizontal translation
      Slider slider1 = new Slider(0, 500, 0);
      slider1.setOrientation(Orientation.VERTICAL);
      slider1.setShowTickLabels(true);
      slider1.setShowTickMarks(true);
      slider1.setMajorTickUnit(150);
      slider1.setBlockIncrement(150);
      //Creating the translation transformation
      Translate translate = new Translate();
      //Linking the transformation to the horizontal slider
      slider1.valueProperty().addListener(new ChangeListener<Number>() {
         public void changed(ObservableValue<? extends Number> observable, Number oldValue, Number newValue){
            translate.setX(newValue.doubleValue());
            translate.setY(0);
            translate.setZ(0);
         }
      });
      //Setting the slider for the vertical translation
      Slider slider2 = new Slider(0, 200, 0);
      slider2.setOrientation(Orientation.VERTICAL);
      slider2.setShowTickLabels(true);
      slider2.setShowTickMarks(true);
      slider2.setMajorTickUnit(50);
      slider2.setBlockIncrement(50);
      //Linking the transformation to the vertical slider
      slider2.valueProperty().addListener(new ChangeListener<Number>() {
         public void changed(ObservableValue<? extends Number> observable, Number oldValue, Number newValue){
            translate.setX(0);
            translate.setY(newValue.doubleValue());
         }
      });
      //Adding the transformation to the sphere
      sphere.getTransforms().add(translate);
      //Creating the pane
      BorderPane pane = new BorderPane();
      pane.setBottom(new VBox(new Label("Translate along X-Axis"), slider1));
      pane.setRight(new VBox(new Label("Translate along Y-Axis"), slider2));
      pane.setLeft(sphere);
      //Preparing the scene
      Scene scene = new Scene(pane, 595, 300);
      stage.setTitle("Translate Example");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String args[]){
      launch(args);
   }
}
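The Translate transform above repositions the node immediately. If you want the move itself to be animated, JavaFX also provides the javafx.animation.TranslateTransition class. The following is a minimal sketch of that idea; the distances and durations are arbitrary values chosen for illustration −

import javafx.animation.TranslateTransition;
import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.shape.Sphere;
import javafx.stage.Stage;
import javafx.util.Duration;
public class TranslateTransitionExample extends Application {
   public void start(Stage stage) {
      //Drawing a sphere to move
      Sphere sphere = new Sphere(50);
      sphere.setTranslateX(100);
      sphere.setTranslateY(150);
      //Animating the move: slide 300 pixels to the right over 2 seconds
      TranslateTransition transition = new TranslateTransition(Duration.seconds(2), sphere);
      transition.setByX(300);
      transition.setCycleCount(2);
      transition.setAutoReverse(true); //then slide back to the start
      transition.play();
      //Preparing the scene
      Scene scene = new Scene(new Group(sphere), 595, 300);
      stage.setTitle("TranslateTransition Example");
      stage.setScene(scene);
      stage.show();
   }
   public static void main(String args[]){
      launch(args);
   }
}

Unlike the Translate transform, the transition animates the node's translateX/translateY properties over time, so nothing needs to be added to the getTransforms() list.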
[ { "code": null, "e": 1216, "s": 1062, "text": "If you move an object on the XY plane from one position to another it is known as translation. You can translate an object along either X-Axis to Y-Axis." }, { "code": null, "e": 1501, "s": 1216, "text": "In JavaFX using the object of the javafx.scene.transform.Translate class you can translate a node from one position to another. This class contains three properties (double) representing the distance of the desired position from the original position along X, Y, Z plane respectively." }, { "code": null, "e": 1658, "s": 1501, "text": "Every node in JavaFX contains an observable list to hold all the transforms to be applied on a node. You can get this list using the getTransforms() method." }, { "code": null, "e": 1704, "s": 1658, "text": "To move a Node from one position to another −" }, { "code": null, "e": 1737, "s": 1704, "text": "Instantiate the Translate class." }, { "code": null, "e": 1770, "s": 1737, "text": "Instantiate the Translate class." }, { "code": null, "e": 1896, "s": 1770, "text": "Set the distance by which you want to move a node int the XYZ plane using the setX(), setY() and setZ() methods respectively." }, { "code": null, "e": 2022, "s": 1896, "text": "Set the distance by which you want to move a node int the XYZ plane using the setX(), setY() and setZ() methods respectively." }, { "code": null, "e": 2122, "s": 2022, "text": "Get the list of transforms from the node (which you want to move) using the getTransforms() method." }, { "code": null, "e": 2222, "s": 2122, "text": "Get the list of transforms from the node (which you want to move) using the getTransforms() method." }, { "code": null, "e": 2264, "s": 2222, "text": "Add the above-created scale object to it." }, { "code": null, "e": 2306, "s": 2264, "text": "Add the above-created scale object to it." }, { "code": null, "e": 2333, "s": 2306, "text": "Add the node to the scene." }, { "code": null, "e": 2360, "s": 2333, "text": "Add the node to the scene." }, { "code": null, "e": 2586, "s": 2360, "text": "Following the JavaFX example demonstrates the translation transform. It contains a 2D geometric shape and two sliders, representing the x and y translate values. If you move the sliders the object moves along the chosen axis." 
}, { "code": null, "e": 5539, "s": 2586, "text": "import javafx.application.Application;\nimport javafx.beans.value.ChangeListener;\nimport javafx.beans.value.ObservableValue;\nimport javafx.geometry.Orientation;\nimport javafx.scene.Scene;\nimport javafx.scene.control.Label;\nimport javafx.scene.control.Slider;\nimport javafx.scene.layout.BorderPane;\nimport javafx.scene.layout.VBox;\nimport javafx.scene.paint.Color;\nimport javafx.scene.paint.PhongMaterial;\nimport javafx.scene.shape.CullFace;\nimport javafx.scene.shape.DrawMode;\nimport javafx.scene.shape.Sphere;\nimport javafx.scene.transform.Translate;\nimport javafx.stage.Stage;\npublic class TranslateExample extends Application {\n public void start(Stage stage) {\n //Drawing a Sphere\n Sphere sphere = new Sphere();\n sphere.setRadius(50.0);\n sphere.setCullFace(CullFace.BACK);\n sphere.setDrawMode(DrawMode.FILL);\n PhongMaterial material = new PhongMaterial();\n material.setDiffuseColor(Color.BROWN);\n sphere.setMaterial(material);\n //Setting the slider for the horizontal translation\n Slider slider1 = new Slider(0, 500, 0);\n slider1.setOrientation(Orientation.VERTICAL);\n slider1.setShowTickLabels(true);\n slider1.setShowTickMarks(true);\n slider1.setMajorTickUnit(150);\n slider1.setBlockIncrement(150);\n //Creating the translation transformation\n Translate translate = new Translate();\n //Linking the transformation to the slider\n slider1.valueProperty().addListener(new ChangeListener<Number>() {\n public void changed(ObservableValue <?extends Number>observable, Number oldValue, Number newValue){\n translate.setX((double) newValue);\n translate.setY(0);\n translate.setZ(0);\n }\n });\n //Setting the slider for the vertical translation\n Slider slider2 = new Slider(0, 200, 0);\n slider2.setOrientation(Orientation.VERTICAL);\n slider2.setShowTickLabels(true);\n slider2.setShowTickMarks(true);\n slider2.setMajorTickUnit(50);\n slider2.setBlockIncrement(50);\n //Creating the translation transformation\n slider2.valueProperty().addListener(new ChangeListener<Number>() {\n public void changed(ObservableValue <?extends Number>observable, Number oldValue, Number newValue){\n translate.setX(0);\n translate.setY((double) newValue);\n }\n });\n //Adding the transformation to the sphere\n sphere.getTransforms().add(translate);\n //Creating the pane\n BorderPane pane = new BorderPane();\n pane.setBottom(new VBox(new Label(\"Translate along X-Axis\"), slider1));\n pane.setRight(new VBox(new Label(\"Translate along Y-Axis\"), slider2));\n pane.setLeft(sphere);\n //Preparing the scene\n Scene scene = new Scene(pane, 595, 300);\n stage.setTitle(\"Translate Example\");\n stage.setScene(scene);\n stage.show();\n }\n public static void main(String args[]){\n launch(args);\n }\n}" } ]
Height of Binary Tree | Practice | GeeksforGeeks
Given a binary tree, find its height.

Example 1:

Input:
   1
  / \
 2   3
Output: 2

Example 2:

Input:
 2
  \
   1
  /
 3
Output: 3

Your Task:
You don't need to read input or print anything. Your task is to complete the function height() which takes the root node of the tree as input parameter and returns an integer denoting the height of the tree. If the tree is empty, return 0.

Expected Time Complexity: O(N)
Expected Auxiliary Space: O(N)

Constraints:
1 <= Number of nodes <= 10^5
1 <= Data of a node <= 10^5

superrhitik458 · 1 day ago

// Jai Shri Ram..
// Simple recursive solution:

class Solution{
  public:
    //Function to find the height of a binary tree.
    int height(struct Node* node){
        if(node==NULL) return 0;
        return 1+max(height(node->left),height(node->right));
    }
};

alugojuvamshi · 1 day ago

EASY C++ SOLUTION

int height(struct Node* node){
    if(node==NULL) return 0;
    else {
        int l=height(node->left);
        int r=height(node->right);
        if(l>r) return (l+1);
        else return (r+1);
    }
}

shubham21101997 · 1 day ago

int height(Node node) {
    if(node==null) return 0;
    if(node.left==null && node.right==null)
        return 1; //leaf node
    int h1=height(node.left);
    int h2=height(node.right);
    return 1+Math.max(h1,h2);
}

virus1vinayak · 2 days ago

Java solution, 0.29s

int height(Node node) {
    Stack<Node> r=new Stack<Node>();
    r.push(node);
    int h=0;
    while(!r.isEmpty()) {
        Node t=r.pop();
        h+=1;
        if(t.left!=null) r.push(t.left);
        else if(t.right!=null) r.push(t.right);
    }
    return h;
}

bishal1289be21 · 4 days ago

class Solution{
  public:
    //Function to find the height of a binary tree.
    int height(struct Node* node){
        if(node==NULL){
            return 0;
        }
        int left=height(node->left);
        int right=height(node->right);
        int ans=max(left,right)+1;
        return ans;
    }
};

vishrut444 · 5 days ago

JAVA (Recursion)

int height(Node node) {
    if(node==null) return 0;
    else {
        return Math.max(height(node.left),height(node.right))+1;
    }
}

born2code · 1 week ago

int height(Node root) {
    if(root==null)
        return 0;
    if(root.left==null && root.right==null)
        return 1;
    else
        return Math.max(height(root.left), height(root.right))+1;
}

ialtafshaikh · 1 week ago

# Solution 2: using level order traversal
def maxDepth(self, root: Optional[TreeNode]) -> int:
    # Base case
    if root is None:
        return 0
    maxHeight = 0
    # Create an empty queue for level order traversal
    queue = []
    # Enqueue root
    queue.append(root)
    while len(queue):
        for _ in range(len(queue)):
            node = queue.pop(0)
            if node.left is not None:
                queue.append(node.left)
            if node.right is not None:
                queue.append(node.right)
        maxHeight += 1
    return maxHeight

shanu14 · 1 week ago

2-line solution [Recursion]:

int height(struct Node* node){
    if(node==NULL) return 0; // Base condition
    return 1+max(height(node->left),height(node->right));
}

saurabhsrivastva · 1 week ago

Hi... Can someone explain why l and r should be taken as local variables and not as global variables in this code?

int maxi_depth(TreeNode* root) {
    if(!root) return 0;
    if(!root->left && !root->right) return 1;
    int l=maxi_depth(root->left);
    int r=maxi_depth(root->right);
    return 1+max(l,r);
}

→ A global variable does not give the appropriate result here. Why?
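For reference, the recursive approach that most of the comments above converge on can be collected into one self-contained Java sketch. The Node class below is an assumption standing in for the one the judge actually supplies (with data, left and right fields); the main method only rebuilds the tree from Example 1 for a quick local check −

class Node {
   int data;
   Node left, right;
   Node(int data) { this.data = data; }
}

public class TreeHeight {
   //Height counted in nodes: an empty tree has height 0, a single node has height 1.
   static int height(Node node) {
      if (node == null) return 0;
      return 1 + Math.max(height(node.left), height(node.right));
   }

   public static void main(String[] args) {
      //Tree from Example 1: root 1 with children 2 and 3
      Node root = new Node(1);
      root.left = new Node(2);
      root.right = new Node(3);
      System.out.println(height(root)); //prints 2
   }
}

The recursion visits every node exactly once, giving the expected O(N) time; the auxiliary space is the recursion stack, which is O(N) in the worst case of a completely skewed tree.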
Number to word conversion
This algorithm converts a given number into English words; for example, 564 becomes Five Hundred and Sixty-Four. The algorithm draws on a few predefined word lists:

units: the words for a single digit (0 to 9): Zero, One, ... Nine
twoDigits: the words for the numbers 10 to 19: Ten, Eleven, ... Nineteen
tenMul: the multiples of ten (20 to 90): Twenty, Thirty, ... Ninety
tenPower: the words for the powers of ten 10^2 and 10^3: Hundred and Thousand

Input:
The number: 568
Output:
Five Hundred And Sixty Eight

numToWord(num)
Input: the number. Output: the number represented in words.

Begin
   if n >= 0 and n < 10, then
      display units(n)
   else if n >= 10 and n < 20, then
      display twoDigits(n)                 // ten to nineteen
   else if n >= 20 and n < 100, then
      display tenMul(n / 10)
      if n mod 10 ≠ 0, then
         numToWord(n mod 10)
   else if n >= 100 and n < 1000, then
      display units(n / 100)
      display "Hundred"
      if n mod 100 ≠ 0, then
         display "And"
         numToWord(n mod 100)
   else if n >= 1000 and n <= 32767, then
      numToWord(n / 1000)
      display "Thousand"
      if n mod 1000 ≠ 0, then
         numToWord(n mod 1000)
   else
      display invalid number and exit
End

#include <iostream>
using namespace std;

// Word for a single digit (0-9)
string getUnit(int n) {
   string unit[10] = {"Zero", "One", "Two", "Three", "Four",
                      "Five", "Six", "Seven", "Eight", "Nine"};
   return unit[n];
}

// Word for a two-digit number from 10 to 19
string getTwoDigits(int n) {
   string td[10] = {"Ten", "Eleven", "Twelve", "Thirteen", "Fourteen",
                    "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen"};
   return td[n % 10];
}

// Word for a multiple of ten; n is the tens digit (2-9)
string getTenMul(int n) {
   string tm[8] = {"Twenty", "Thirty", "Forty", "Fifty",
                   "Sixty", "Seventy", "Eighty", "Ninety"};
   return tm[n - 2];
}

// Word for a power of ten (10^2 or 10^3)
string getTenPow(int pow) {
   string power[2] = {"Hundred", "Thousand"};
   return power[pow - 2];
}

void printNumToWord(int n) {
   if (n >= 0 && n < 10)
      cout << getUnit(n) << " ";           // unit values to words
   else if (n >= 10 && n < 20)
      cout << getTwoDigits(n) << " ";      // from ten to nineteen
   else if (n >= 20 && n < 100) {
      cout << getTenMul(n / 10) << " ";
      if (n % 10 != 0)
         printNumToWord(n % 10);           // recursive call for the unit digit
   } else if (n >= 100 && n < 1000) {
      cout << getUnit(n / 100) << " ";
      cout << getTenPow(2) << " ";
      if (n % 100 != 0) {
         cout << "And ";
         printNumToWord(n % 100);
      }
   } else if (n >= 1000 && n <= 32767) {
      printNumToWord(n / 1000);
      cout << getTenPow(3) << " ";
      if (n % 1000 != 0)
         printNumToWord(n % 1000);
   } else
      cout << "Invalid Input";
}

int main() {
   int number;
   cout << "Enter a number between 0 to 32767: ";
   cin >> number;
   printNumToWord(number);
   return 0;
}

Enter a number between 0 to 32767: 568
Five Hundred And Sixty Eight
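For comparison, here is a hedged Python sketch of the same recursive decomposition (units / teens / tens / hundreds / thousands); the list contents mirror the C++ arrays above:

units = ["Zero", "One", "Two", "Three", "Four",
         "Five", "Six", "Seven", "Eight", "Nine"]
teens = ["Ten", "Eleven", "Twelve", "Thirteen", "Fourteen",
         "Fifteen", "Sixteen", "Seventeen", "Eighteen", "Nineteen"]
tens = ["Twenty", "Thirty", "Forty", "Fifty",
        "Sixty", "Seventy", "Eighty", "Ninety"]

def num_to_words(n):
    if 0 <= n < 10:
        return units[n]
    if 10 <= n < 20:
        return teens[n - 10]
    if 20 <= n < 100:
        word = tens[n // 10 - 2]
        return word + " " + num_to_words(n % 10) if n % 10 else word
    if 100 <= n < 1000:
        word = units[n // 100] + " Hundred"
        return word + " And " + num_to_words(n % 100) if n % 100 else word
    if 1000 <= n <= 32767:
        word = num_to_words(n // 1000) + " Thousand"
        return word + " " + num_to_words(n % 1000) if n % 1000 else word
    return "Invalid Input"

print(num_to_words(568))  # Five Hundred And Sixty Eight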
Why array index starts from zero in C/C++?
An array access arr[i] is interpreted as *(arr + i). Here, arr denotes the address of the first array element (the element at index 0), so *(arr + i) is the element at a distance of i elements from the first one. The index is therefore an offset from the base address: the first element sits at offset 0, which is why array indexing starts from 0.

A program that demonstrates the equivalence of arr[i] and *(arr + i) in C++ is as follows.

#include <iostream>
using namespace std;

int main() {
   int arr[] = {5, 8, 9, 3, 5};
   int i;

   // Access the elements using subscript notation
   for (i = 0; i < 5; i++)
      cout << arr[i] << " ";
   cout << "\n";

   // Access the same elements using pointer arithmetic
   for (i = 0; i < 5; i++)
      cout << *(arr + i) << " ";

   return 0;
}

The output of the above program is as follows.

5 8 9 3 5
5 8 9 3 5

The array arr[] contains 5 elements. These elements are displayed twice, once with the subscript form arr[i] and once with the pointer form *(arr + i); the results are identical in both cases.
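A hedged illustration of the same offset idea in Python; the base address and element size below are made up for the sake of the example:

base_address = 1000   # assumed address of the first element
int_size = 4          # assumed sizeof(int)

for i in range(5):
    # arr[i] = *(arr + i): the element i slots past the base address
    print(f"arr[{i}] lives at address {base_address + i * int_size}")

# arr[0] -> 1000, arr[1] -> 1004, ...: the index is simply the number of
# element-sized steps from the base address, so the first element
# naturally gets index 0.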
C# program to find common values from two or more Lists
Create two lists −

// two lists
var list1 = new List<int>{3, 4};
var list2 = new List<int>{1, 2, 3};

Now, use the Intersect() method to get the common values −

var res = list1.Intersect(list2);

For more than two lists, chain further Intersect() calls onto the result. The following is the complete code −

using System;
using System.Collections.Generic;
using System.Linq;

public class Demo {
   public static void Main() {

      // two lists
      var list1 = new List<int>{3, 4};
      var list2 = new List<int>{1, 2, 3};

      // common values
      var res = list1.Intersect(list2);

      foreach(int i in res) {
         Console.WriteLine(i);
      }
   }
}

The output is the single common value −

3
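The same idea sketched in Python, where set intersection plays the role of Intersect(); note that converting lists to sets drops duplicates and ordering:

list1 = [3, 4]
list2 = [1, 2, 3]

# Intersection of the two lists as sets
common = set(list1) & set(list2)
print(common)  # {3}

# For more than two lists, keep intersecting:
list3 = [3, 5, 7]
print(set(list1) & set(list2) & set(list3))  # {3}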
How to make a basic dialog using jQuery UI?
In this article, we will create a basic dialog using jQuery UI.

Approach: First, add the jQuery UI scripts needed for your project:

<link href="https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
<script src="https://code.jquery.com/jquery-1.10.2.js"></script>
<script src="https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>

Example 1: With autoOpen set to true, the dialog opens as soon as the page loads; the button re-opens it after it has been closed.

<!doctype html>
<html lang="en">

<head>
    <meta charset="utf-8">
    <link href="https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
    <script src="https://code.jquery.com/jquery-1.10.2.js"></script>
    <script src="https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
    <script>
        $(function () {
            $("#gfg").dialog({
                autoOpen: true,
            });
            $("#geeks").click(function () {
                $("#gfg").dialog("open");
            });
        });
    </script>
</head>

<body>
    <div id="gfg" title="GeeksforGeeks">
        jQuery UI | basic dialog option
    </div>
    <button id="geeks">Open Dialog</button>
</body>

</html>

Example 2: With autoOpen set to false, the dialog stays hidden until the button opens it.

<!doctype html>
<html lang="en">

<head>
    <meta charset="utf-8">
    <link href="https://code.jquery.com/ui/1.10.4/themes/ui-lightness/jquery-ui.css" rel="stylesheet">
    <script src="https://code.jquery.com/jquery-1.10.2.js"></script>
    <script src="https://code.jquery.com/ui/1.10.4/jquery-ui.js"></script>
    <script>
        $(function () {
            $("#gfg").dialog({
                autoOpen: false,
            });
            $("#geeks").click(function () {
                $("#gfg").dialog("open");
            });
        });
    </script>
</head>

<body>
    <div id="gfg" title="GeeksforGeeks">
        jQuery UI | basic dialog option
    </div>
    <button id="geeks">Open Dialog</button>
</body>

</html>
MySQL - Modulus Operator (%, MOD)
This operator returns the remainder of dividing the left operand by the right operand.

Following is an example of the modulus operator −

mysql> SELECT 62555 MOD 59;
+--------------+
| 62555 MOD 59 |
+--------------+
|           15 |
+--------------+
1 row in set (0.00 sec)

Let us see another example −

mysql> SELECT 6255.55855 MOD 987546.965;
+---------------------------+
| 6255.55855 MOD 987546.965 |
+---------------------------+
|                6255.55855 |
+---------------------------+
1 row in set (0.00 sec)

Assume we have created and populated a table in MySQL using the following queries −

CREATE TABLE data (Name VARCHAR(100), Score INT);
INSERT INTO data values('Raman', 545);
INSERT INTO data values('Rahul', 250);
INSERT INTO data values('Mohit', 695);

The following query uses the modulus operator to display whether each score is even or odd −

mysql> SELECT name, IF(score % 2, 'Odd', 'Even') as OddOrEven FROM data;
+-------+-----------+
| name  | OddOrEven |
+-------+-----------+
| Raman | Odd       |
| Rahul | Even      |
| Mohit | Odd       |
+-------+-----------+
3 rows in set (0.00 sec)
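A quick Python sketch of what the IF(score % 2, 'Odd', 'Even') query computes, using the same sample rows:

data = [("Raman", 545), ("Rahul", 250), ("Mohit", 695)]
for name, score in data:
    # score % 2 is 1 (truthy) for odd scores and 0 for even ones
    print(name, "Odd" if score % 2 else "Even")
# Raman Odd
# Rahul Even
# Mohit Odd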
Object Pool Design Pattern
Object pool is a software creational design pattern used in situations where the cost of initializing a class instance is very high.

Basically, an object pool is a container that holds some number of objects. When an object is taken from the pool, it is not available in the pool until it is put back. Objects in the pool have a lifecycle:

Creation
Validation
Destruction

UML Diagram: Object Pool Design Pattern

Client: the class that uses an object of the PooledObject type.
ReusablePool: the PooledObject class is the type that is expensive or slow to instantiate, or that has limited availability, and so is held in the object pool.
ObjectPool: the most important class in the object pool design pattern. The ObjectPool maintains a list of available objects and a collection of objects that have already been requested from the pool.

Let's take the example of database connections. Opening too many connections can hurt performance for several reasons:

Creating a connection is an expensive operation.
When too many connections are open, it takes longer to create a new one and the database server becomes overloaded.

Here the object pool manages the connections, provides a way to reuse and share them, and can also limit the maximum number of objects that can be created.

// Java program to illustrate the
// Object Pool Design Pattern
import java.util.Enumeration;
import java.util.Hashtable;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

abstract class ObjectPool<T> {
    long deadTime;
    Hashtable<T, Long> lock, unlock;

    ObjectPool() {
        deadTime = 50000; // 50 seconds
        lock = new Hashtable<T, Long>();
        unlock = new Hashtable<T, Long>();
    }

    abstract T create();
    abstract boolean validate(T o);
    abstract void dead(T o);

    synchronized T takeOut() {
        long now = System.currentTimeMillis();
        T t;
        if (unlock.size() > 0) {
            Enumeration<T> e = unlock.keys();
            while (e.hasMoreElements()) {
                t = e.nextElement();
                if ((now - unlock.get(t)) > deadTime) {
                    // object has expired
                    unlock.remove(t);
                    dead(t);
                    t = null;
                }
                else {
                    if (validate(t)) {
                        unlock.remove(t);
                        lock.put(t, now);
                        return (t);
                    }
                    else {
                        // object failed validation
                        unlock.remove(t);
                        dead(t);
                        t = null;
                    }
                }
            }
        }
        // no objects available, create a new one
        t = create();
        lock.put(t, now);
        return (t);
    }

    synchronized void takeIn(T t) {
        lock.remove(t);
        unlock.put(t, System.currentTimeMillis());
    }
}

// The three abstract methods must be
// implemented by the subclass
class JDBCConnectionPool extends ObjectPool<Connection> {
    String dsn, usr, pwd;

    JDBCConnectionPool(String driver, String dsn, String usr, String pwd) {
        super();
        try {
            Class.forName(driver).newInstance();
        }
        catch (Exception e) {
            e.printStackTrace();
        }
        this.dsn = dsn;
        this.usr = usr;
        this.pwd = pwd;
    }

    Connection create() {
        try {
            return (DriverManager.getConnection(dsn, usr, pwd));
        }
        catch (SQLException e) {
            e.printStackTrace();
            return (null);
        }
    }

    void dead(Connection o) {
        try {
            o.close();
        }
        catch (SQLException e) {
            e.printStackTrace();
        }
    }

    boolean validate(Connection o) {
        try {
            return (!o.isClosed());
        }
        catch (SQLException e) {
            e.printStackTrace();
            return (false);
        }
    }
}

class Main {
    public static void main(String args[]) {
        // Create the connection pool:
        JDBCConnectionPool pool = new JDBCConnectionPool(
            "org.hsqldb.jdbcDriver",
            "jdbc:hsqldb://localhost/mydb",
            "sa", "password");

        // Get a connection:
        Connection con = pool.takeOut();

        // Return the connection:
        pool.takeIn(con);
    }
}

Advantages:

It offers a significant performance boost.
It manages the connections and provides a way to reuse and share them.

The object pool pattern is used when the cost of initializing an instance of the class is high.

When to use the Object Pool Design Pattern:

When we have work that allocates or deallocates many objects.
When we know that we have a limited number of objects that will be in memory at the same time.

Reference: https://sourcemaking.com/design_patterns/object_pool
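The JDBC example above carries a lot of detail; as a hedged, minimal sketch of the same take-out/take-in lifecycle (the names mirror the Java code, and the expiry and validation hooks are deliberately simplified), here is a language-agnostic version in Python:

import time

class ObjectPool:
    def __init__(self, create, validate, max_idle=50.0):
        self._create = create       # factory for the expensive objects
        self._validate = validate   # health check before an object is reused
        self._max_idle = max_idle   # seconds before an idle object expires
        self._idle = []             # (object, last_returned_at) pairs

    def take_out(self):
        while self._idle:
            obj, returned_at = self._idle.pop()
            fresh = time.time() - returned_at <= self._max_idle
            if fresh and self._validate(obj):
                return obj          # reuse a pooled object
            # expired or invalid: simply drop it
        return self._create()       # nothing reusable: pay the creation cost

    def take_in(self, obj):
        self._idle.append((obj, time.time()))

# Usage sketch with a stand-in "expensive" object:
pool = ObjectPool(create=lambda: object(), validate=lambda o: True)
conn = pool.take_out()           # created, since the pool starts empty
pool.take_in(conn)
assert pool.take_out() is conn   # the same instance is handed out again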
Python – Mapping Matrix with Dictionary
Given a matrix, map its values through a dictionary.

Input : test_list = [[4, 2, 1], [1, 2, 3]], sub_dict = {1 : "gfg", 2 : "best", 3 : "CS", 4 : "Geeks"}
Output : [['Geeks', 'best', 'gfg'], ['gfg', 'best', 'CS']]
Explanation : Matrix elements are substituted using the dictionary.

Input : test_list = [[4, 2, 1]], sub_dict = {1 : "gfg", 2 : "best", 4 : "Geeks"}
Output : [['Geeks', 'best', 'gfg']]
Explanation : Matrix elements are substituted using the dictionary.

Method #1 : Using a loop

This is the brute-force way in which this task can be performed. We iterate over every element of the matrix and map it through the dictionary.

# Python3 code to demonstrate working of
# Mapping Matrix with Dictionary
# Using loop

# initializing list
test_list = [[4, 2, 1], [1, 2, 3], [4, 3, 1]]

# printing original list
print("The original list : " + str(test_list))

# initializing dictionary
sub_dict = {1 : "gfg", 2 : "best", 3 : "CS", 4 : "Geeks"}

# using a loop to perform the required mapping
res = []
for sub in test_list:
    temp = []
    for ele in sub:
        # mapping values from dictionary
        temp.append(sub_dict[ele])
    res.append(temp)

# printing result
print("Converted Mapped Matrix : " + str(res))

Output:

The original list : [[4, 2, 1], [1, 2, 3], [4, 3, 1]]
Converted Mapped Matrix : [['Geeks', 'best', 'gfg'], ['gfg', 'best', 'CS'], ['Geeks', 'CS', 'gfg']]

Method #2 : Using list comprehension

This is yet another way in which this task can be performed; it is simply shorthand for the method above.

# Python3 code to demonstrate working of
# Mapping Matrix with Dictionary
# Using list comprehension

# initializing list
test_list = [[4, 2, 1], [1, 2, 3], [4, 3, 1]]

# printing original list
print("The original list : " + str(test_list))

# initializing dictionary
sub_dict = {1 : "gfg", 2 : "best", 3 : "CS", 4 : "Geeks"}

# using list comprehension to perform the required mapping in one line
res = [[sub_dict[val] for val in sub] for sub in test_list]

# printing result
print("Converted Mapped Matrix : " + str(res))

Output:

The original list : [[4, 2, 1], [1, 2, 3], [4, 3, 1]]
Converted Mapped Matrix : [['Geeks', 'best', 'gfg'], ['gfg', 'best', 'CS'], ['Geeks', 'CS', 'gfg']]
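A hedged third variant: if the matrix might contain values that are missing from the dictionary, dict.get() with a fallback avoids a KeyError (the "unknown" placeholder is just an illustrative choice):

test_list = [[4, 2, 1], [1, 2, 3], [4, 3, 1]]
sub_dict = {1 : "gfg", 2 : "best", 3 : "CS", 4 : "Geeks"}

# get() returns the fallback instead of raising for unmapped values
res = [[sub_dict.get(val, "unknown") for val in sub] for sub in test_list]
print(res)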
How to toggle play/pause in ReactJS with audio?
In this article, we will learn to create a play/pause button for an audio file using ReactJS.

Approach: We are going to use the following steps:

Take a reference to the audio file in ReactJS using the Audio class.
Set the default state of the song as not playing.
Write a function to handle play/pause toggling of the song.
Use the play() and pause() functions of the Audio class for these operations:

let song = new Audio(my_song);
song.play();
song.pause();

Setting up the environment and execution:

Step 1: Create a React app:

npx create-react-app foldername

Step 2: After creating your project folder, i.e. foldername, move into it using the following command:

cd foldername

Step 3: Create a static folder and add an audio file to it.

Example:

import React, { Component } from "react";

// Import your audio file
import song from "./static/a.mp3";

class App extends Component {
  // Create state
  state = {
    // Get audio file in a variable
    audio: new Audio(song),

    // Set initial state of song
    isPlaying: false,
  };

  // Main function to handle both play and pause operations
  playPause = () => {
    // Get state of song
    let isPlaying = this.state.isPlaying;

    if (isPlaying) {
      // Pause the song if it is playing
      this.state.audio.pause();
    } else {
      // Play the song if it is paused
      this.state.audio.play();
    }

    // Change the state of song
    this.setState({ isPlaying: !isPlaying });
  };

  render() {
    return (
      <div>
        {/* Show state of song on website */}
        <p>
          {this.state.isPlaying ? "Song is Playing" : "Song is Paused"}
        </p>

        {/* Button to call our main function */}
        <button onClick={this.playPause}>
          Play | Pause
        </button>
      </div>
    );
  }
}

export default App;

Step to run the application: Run the application using the following command from the root directory of the project:

npm start

Output: Now open your browser and go to http://localhost:3000/, and turn on your speakers to listen to the audio.
Python – Remove Non-English characters Strings from List
Given a list of strings, remove all strings that contain non-English characters.

Input : test_list = ['Good| ????', '??Geeks???']
Output : []
Explanation : Both contain non-English characters.

Input : test_list = ["Gfg", "Best"]
Output : ["Gfg", "Best"]
Explanation : Both are valid English words.

Method #1 : Using regex + findall() + list comprehension

In this, we build a regex of Unicode ranges and, using findall(), keep only the strings that contain no character outside those ranges.

# Python3 code to demonstrate working of
# Remove Non-English characters Strings from List
# Using regex + findall() + list comprehension
import re

# initializing list
test_list = ['Gfg', 'Good| ????', "for", '??Geeks???']

# printing original list
print("The original list is : " + str(test_list))

# using findall() to drop strings containing characters
# outside the allowed Unicode ranges
res = [idx for idx in test_list
       if not re.findall("[^\u0000-\u05C0\u2100-\u214F]+", idx)]

# printing result
print("The extracted list : " + str(res))

Method #2 : Using regex + search() + filter() + lambda

In this, we search each string for a character outside the allowed ranges and keep only the strings in which no such character is found. filter() + lambda pass the filter condition over the list.

# Python3 code to demonstrate working of
# Remove Non-English characters Strings from List
# Using regex + search() + filter() + lambda
import re

# initializing list
test_list = ['Gfg', 'Good| ????', "for", '??Geeks???']

# printing original list
print("The original list is : " + str(test_list))

# using search() to keep only the strings with no character
# outside the allowed Unicode ranges
res = list(filter(lambda ele: re.search(
    "[^\u0000-\u05C0\u2100-\u214F]", ele) is None, test_list))

# printing result
print("The extracted list : " + str(res))
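A hedged, simpler alternative for the common case (Python 3.7+): str.isascii() keeps only strings made entirely of ASCII characters. Note this is stricter than "English letters", since digits and punctuation also pass; the sample words below are purely illustrative:

words = ['Gfg', 'Gooéd', 'for', 'Geeks日本']

# isascii() is False as soon as one non-ASCII character appears
res = [s for s in words if s.isascii()]
print(res)  # ['Gfg', 'for']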
Python – Extract Key’s value from Mixed Dictionaries List
Given a list of dictionaries, with each dictionary having different keys, extract the value of key K.

Input : test_list = [{"Gfg" : 3, "b" : 7}, {"is" : 5, 'a' : 10}, {"Best" : 9, 'c' : 11}], K = 'b'
Output : 7
Explanation : Value of b is 7.

Input : test_list = [{"Gfg" : 3, "b" : 7}, {"is" : 5, 'a' : 10}, {"Best" : 9, 'c' : 11}], K = 'c'
Output : 11
Explanation : Value of c is 11.

Method #1 : Using list comprehension

This is one of the ways in which this task can be performed. We iterate over each dictionary in the list and check whether it contains the key; if it does, the corresponding value is returned.

# Python3 code to demonstrate working of
# Extract Key's value from Mixed Dictionaries List
# Using list comprehension

# initializing list
test_list = [{"Gfg" : 3, "b" : 7}, {"is" : 5, 'a' : 10}, {"Best" : 9, 'c' : 11}]

# printing original list
print("The original list : " + str(test_list))

# initializing K
K = 'Best'

# list comprehension to get key's value,
# using the in operator to check if the key is present in a dictionary
res = [sub[K] for sub in test_list if K in sub][0]

# printing result
print("The extracted value : " + str(res))

Output:

The original list : [{'Gfg': 3, 'b': 7}, {'is': 5, 'a': 10}, {'Best': 9, 'c': 11}]
The extracted value : 9

Method #2 : Using update() + loop

This is yet another way in which this task can be performed. We merge every dictionary into one large dictionary, and then extract the value from it.

# Python3 code to demonstrate working of
# Extract Key's value from Mixed Dictionaries List
# Using update() + loop

# initializing list
test_list = [{"Gfg" : 3, "b" : 7}, {"is" : 5, 'a' : 10}, {"Best" : 9, 'c' : 11}]

# printing original list
print("The original list : " + str(test_list))

# initializing K
K = 'Best'

res = dict()
for sub in test_list:
    # merging all dictionaries into one
    res.update(sub)

# printing result
print("The extracted value : " + str(res[K]))

Output:

The original list : [{'Gfg': 3, 'b': 7}, {'is': 5, 'a': 10}, {'Best': 9, 'c': 11}]
The extracted value : 9
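A hedged variant using next(): it stops at the first dictionary that has the key instead of scanning the whole list, and supplies a default instead of raising an error when no dictionary contains K:

test_list = [{"Gfg" : 3, "b" : 7}, {"is" : 5, 'a' : 10}, {"Best" : 9, 'c' : 11}]
K = 'Best'

# next() takes a generator and a fallback value
res = next((sub[K] for sub in test_list if K in sub), None)
print("The extracted value : " + str(res))  # 9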
Output of Java program | Set 26
Ques1: What is the output of this program?

class A {
    public int i;
    private int j;
}

class B extends A {
    void display() {
        super.j = super.i + 1;
        System.out.println(super.i + " " + super.j);
    }
}

class inheritance {
    public static void main(String args[]) {
        B obj = new B();
        obj.i = 1;
        obj.j = 2;
        obj.display();
    }
}

a) 2 2
b) 3 3
c) Runtime Error
d) Compilation Error

Answer: d

Explanation: Class A contains a private member variable j, which cannot be inherited by subclass B. So in class B we cannot access j, and the program produces a compile-time error.

Ques2: What is the output of this program?

class selection_statements {
    public static void main(String args[]) {
        int var1 = 5;
        int var2 = 6;
        if ((var2 = 1) == var1)
            System.out.print(var2);
        else
            System.out.print(++var2);
    }
}

options:
a) 1
b) 2
c) 3
d) 4

Answer: b

Explanation: In the "if" statement, var2 is first assigned the value 1, and then the condition checks whether var2 is equal to var1. Since var1 is 5 and var2 is now 1, the condition is false and the else part is executed, printing ++var2, which is 2.

Ques3: What is the output of this program?

class comma_operator {
    public static void main(String args[]) {
        int sum = 0;
        for (int i = 0, j = 0; i < 5 & j < 5; ++i, j = i + 1)
            sum += i;
        System.out.println(sum);
    }
}

options:
a) 5
b) 6
c) 14
d) compilation error

Answer: b

Explanation: Using the comma operator, we can include more than one statement in the initialization and iteration portions of the for loop. Both ++i and j = i + 1 are executed on each iteration, so i takes the values 0, 1, 2, 3 and j takes the values 0, 2, 3, 4, 5 before the condition fails. The sum is therefore 0 + 1 + 2 + 3 = 6.

Ques4. What will be the output of the program?

class Geeks {
    public static void main(String[] args) {
        Geeks g = new Geeks();
        g.start();
    }

    void start() {
        long[] a1 = { 3, 4, 5 };
        long[] a2 = fix(a1);
        System.out.print(a1[0] + a1[1] + a1[2] + " ");
        System.out.println(a2[0] + a2[1] + a2[2]);
    }

    long[] fix(long[] a3) {
        a3[1] = 7;
        return a3;
    }
}

options:
a) 12 15
b) 15 15
c) 3 4 5 3 7 5
d) 3 7 5 3 7 5

Answer: b

Explanation: The reference variables a1 and a3 refer to the same long array object. When fix() is called, the reference to array a1 is passed in, so the assignment a3[1] = 7 is reflected in a1 as well. The a1 array becomes {3, 7, 5}, and since fix() returns the same reference, a2 also refers to {3, 7, 5}.

So Output: 3 + 7 + 5 + " " + 3 + 7 + 5 = 15 15

Ques5. What will be the output of the program?

class BitShift {
    public static void main(String[] args) {
        int x = 0x80000000;
        System.out.print(x + " and ");
        x = x >>> 31;
        System.out.println(x);
    }
}

options:
a) -2147483648 and 1
b) 0x80000000 and 0x00000001
c) -2147483648 and -1
d) 1 and -2147483648

Answer: a

Explanation: Option A is correct. The >>> operator moves all bits to the right, zero filling the left bits. The bit transformation looks like this:

Before: 1000 0000 0000 0000 0000 0000 0000 0000
After:  0000 0000 0000 0000 0000 0000 0000 0001

Option C is incorrect because the >>> operator zero fills the left bits, which in this case changes the sign of x, as shown.
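For readers who want to reproduce the Ques5 behaviour outside Java, here is a rough Python sketch. Python integers are arbitrary-precision and have no >>> operator, so the 32-bit masking below is an assumption used to mimic Java's int:

# Emulate Java's 32-bit signed int and the >>> (unsigned right shift)
x = 0x80000000                                    # bit pattern 1000...0

# Interpret the pattern as a signed 32-bit value, as Java does
signed_x = x - (1 << 32) if x & (1 << 31) else x
print(signed_x)                                   # -2147483648

# >>> zero-fills from the left: mask to 32 bits, then shift right
print((x & 0xFFFFFFFF) >> 31)                     # 1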
[ { "code": null, "e": 52, "s": 24, "text": "\n11 Nov, 2021" }, { "code": null, "e": 95, "s": 52, "text": "Ques1: What is the output of this program?" }, { "code": "class A { public int i; private int j;} class B extends A { void display() { super.j = super.i + 1; System.out.println(super.i + \" \" + super.j); }} class inheritance { public static void main(String args[]) { B obj = new B(); obj.i = 1; obj.j = 2; obj.display(); }}", "e": 429, "s": 95, "text": null }, { "code": null, "e": 478, "s": 429, "text": "a) 2 2b) 3 3c) Runtime Errord) Compilation Error" }, { "code": null, "e": 490, "s": 478, "text": " Answer: d " }, { "code": null, "e": 662, "s": 490, "text": "Explanation: Class A contains a private member variable j, this cannot be inherited by subclass B. So in class B we can not access j. So it will give a compile time error." }, { "code": null, "e": 705, "s": 662, "text": "Ques2: What is the output of this program?" }, { "code": "class selection_statements { public static void main(String args[]) { int var1 = 5; int var2 = 6; if ((var2 = 1) == var1) System.out.print(var2); else System.out.print(++var2); }}", "e": 944, "s": 705, "text": null }, { "code": null, "e": 969, "s": 944, "text": "options:a) 1b) 2c) 3d) 4" }, { "code": null, "e": 979, "s": 969, "text": "Answer: b" }, { "code": null, "e": 1198, "s": 979, "text": "Explanation: In “If” statement, first var2 is initialized to 1 and then condition is checked whether var2 is equal to var1. As we know var1 is 5 and var2 is 1, so condition will be false and else part will be executed." }, { "code": null, "e": 1241, "s": 1198, "text": "Ques3: What is the output of this program?" }, { "code": "class comma_operator { public static void main(String args[]) { int sum = 0; for (int i = 0, j = 0; i < 5 & j < 5; ++i, j = i + 1) sum += i; System.out.println(sum); }}", "e": 1451, "s": 1241, "text": null }, { "code": null, "e": 1493, "s": 1451, "text": "options:a) 5b) 6c) 14d) compilation error" }, { "code": null, "e": 1503, "s": 1493, "text": "Answer: b" }, { "code": null, "e": 1756, "s": 1503, "text": "Explanation: Using comma operator, we can include more than one statement in the initialization and iteration portion of the for loop. Therefore both ++i and j = i + 1 is executed i gets the value : 0, 1, 2, 3, and j gets the values : 0, 1, 2, 3, 4, 5." }, { "code": null, "e": 1803, "s": 1756, "text": "Ques4. What will be the output of the program?" }, { "code": "class Geeks { public static void main(String[] args) { Geeks g = new Geeks(); g.start(); } void start() { long[] a1 = { 3, 4, 5 }; long[] a2 = fix(a1); System.out.print(a1[0] + a1[1] + a1[2] + \" \"); System.out.println(a2[0] + a2[1] + a2[2]); } long[] fix(long[] a3) { a3[1] = 7; return a3; }}", "e": 2183, "s": 1803, "text": null }, { "code": null, "e": 2237, "s": 2183, "text": "options:a) 12 15b) 15 15.c) 3 4 5 3 7 5d) 3 7 5 3 7 5" }, { "code": null, "e": 2247, "s": 2237, "text": "Answer: b" }, { "code": null, "e": 2595, "s": 2247, "text": "Explanation: The reference variables a1 and a3 refer to the same long array object. When fix() method is called, array a1 is passed as reference. Hence the value of a3[1] becomes 7 which will be reflected in a1[] as well because of call by reference. So the a1[] array become {3, 7, 5}. WHen this is returned to a2[], it becomes {3, 7, 5} as well." }, { "code": null, "e": 2642, "s": 2595, "text": "So Output: 3 + 7 + 5 + ” ” + 3 + 7 + 5 = 15 15" }, { "code": null, "e": 2689, "s": 2642, "text": "Ques5. What will be the output of the program?" 
}, { "code": "class BitShift { public static void main(String[] args) { int x = 0x80000000; System.out.print(x + \" and \"); x = x >>> 31; System.out.println(x); }}", "e": 2876, "s": 2689, "text": null }, { "code": null, "e": 2974, "s": 2876, "text": "options:a) -2147483648 and 1b) 0x80000000 and 0x00000001c) -2147483648 and -1d) 1 and -2147483648" }, { "code": null, "e": 2984, "s": 2974, "text": "Answer: a" }, { "code": null, "e": 3132, "s": 2984, "text": "Explanation: Option A is correct. The >>> operator moves all bits to the right, zero filling the left bits. The bit transformation looks like this:" }, { "code": null, "e": 3227, "s": 3132, "text": "Before: 1000 0000 0000 0000 0000 0000 0000 0000\nAfter: 0000 0000 0000 0000 0000 0000 0000 0001" }, { "code": null, "e": 3352, "s": 3227, "text": "Option C is incorrect because the >>> operator zero fills the left bits, which in this case changes the sign of x, as shown." }, { "code": null, "e": 3648, "s": 3352, "text": "This article is contributed by Rishabh Jain. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks." }, { "code": null, "e": 3773, "s": 3648, "text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above." }, { "code": null, "e": 3787, "s": 3773, "text": "chhabradhanvi" }, { "code": null, "e": 3799, "s": 3787, "text": "Java-Output" }, { "code": null, "e": 3814, "s": 3799, "text": "Program Output" } ]
Context based Access Control (CBAC)
In recent times, Access-lists (ACLs) have been used for packet filtering and protection. An ACL works on a sequence of rules provided by the administrator, consisting of various permit and deny conditions. The disadvantage of an ACL, however, is that it filters traffic up to the transport layer only.

Therefore, for low-budget firewall functionality, a Cisco router with the proper IOS version is used. We can implement an IOS-based firewall by 2 methods:

1. Context Based Access Control (CBAC) features
2. Zone based firewall

Context based access control (CBAC) – ACLs provide traffic filtering and protection up to the transport layer, while CBAC provides the same function up to the application layer. With the help of CBAC configuration, the router can act as a firewall.

Working – CBAC works like a reflexive Access-list but, in addition, it maintains a state table in which sessions are kept in memory. When a session is initiated by a device within the network, a dynamic entry is put in the state table and the outbound (going out) traffic is allowed to pass through the router (the IOS-based firewall). Because of this entry, the reply to the outbound traffic can pass back through the router, as there is a record of the traffic having been initiated from within the network. The CBAC mechanism achieves this by opening temporary holes in the access list (applied to the inbound traffic) to allow reply packets through. A toy sketch of this state-table idea is given after the feature list below.

Features – Some of the features of CBAC are:

1. Inspecting traffic – CBAC maintains TCP/UDP information which is needed to perform deeper inspection of the packet payload.
2. Filtering traffic – CBAC filters traffic that originates from a trusted network and goes out through the firewall, and allows replies only if there is an entry in the state table. It has the ability to filter traffic intelligently up to layer 7.
3. Detecting intrusion – CBAC examines the rate at which connections are established, by which it can detect attacks like DoS attacks, TCP SYN attacks, etc. On this basis, the CBAC mechanism can cause a connection to be re-established or drop malicious packets.
4. Generating alerts and audits – The router operating the CBAC mechanism logs information about established connections, the number of bytes sent, and source and destination IP addresses.
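To make the state-table behaviour concrete, here is a toy illustration in Python. This is purely conceptual, not how Cisco IOS implements CBAC; the tuple keys and function names are assumptions made for the sketch:

# Toy model of the CBAC state-table idea
state_table = set()

def outbound(src, dst, proto):
    # Traffic initiated from the trusted network creates a state entry,
    # keyed on the expected reply direction (dst -> src)
    state_table.add((dst, src, proto))
    return "allowed"

def inbound(src, dst, proto):
    # Reply traffic passes only if a matching entry exists;
    # unsolicited inbound traffic is dropped
    return "allowed" if (src, dst, proto) in state_table else "dropped"

outbound("10.1.1.1", "10.1.2.2", "ssh")
print(inbound("10.1.2.2", "10.1.1.1", "ssh"))     # allowed (reply to session)
print(inbound("10.1.2.2", "10.1.1.1", "telnet"))  # dropped (no state entry)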
Configuration –

There are 3 routers: router1 (IP address 10.1.1.1/24 on fa0/0), router2 (IP addresses 10.1.1.2/24 on fa0/0 and 10.1.2.1/24 on fa0/1) and router3 (IP address 10.1.2.2/24). First, we will advertise routes through EIGRP to all the routers so that they can ping each other. After that, we will make router3 an SSH server, and router2 (on which CBAC will be operating) will allow the traffic only if it has been inspected by router2.

First, configuring EIGRP on router1:

router1(config)#router eigrp 100
router1(config-router)#network 10.1.1.0
router1(config-router)#no auto-summary

Now, configuring EIGRP on router2 to reach other networks:

router2(config)#router eigrp 100
router2(config-router)#network 10.1.1.0
router2(config-router)#network 10.1.2.0
router2(config-router)#no auto-summary

Now, configuring EIGRP on router3:

router3(config)#router eigrp 100
router3(config-router)#network 10.1.2.0
router3(config-router)#no auto-summary

Now, we will configure SSH on router3:

router3(config)#ip domain name GeeksforGeeks.com
router3(config)#username saurabh password cisco
router3(config)#line vty 0 4
router3(config-line)#transport input ssh
router3(config-line)#login local
router3(config)#crypto key generate rsa label Cisco.com modulus 1024

Now, we will make an Access-list on router2 which denies all traffic except EIGRP, because EIGRP maintains reachability between all the routers:

router2(config)#ip Access-list extended 100
router2(config-ext-nacl)#permit eigrp any any
router2(config-ext-nacl)#deny ip any any

Now, applying it to the interface:

router2(config)#int fa0/1
router2(config-if)#ip access-group 100 in

Now, router1 will not be able to SSH to router3, as we have applied an access-list which accepts EIGRP packets only and denies all other packets. Next, configure CBAC on router2 to inspect the SSH traffic (only traffic that has been inspected by the IOS router operating CBAC will be allowed back in):

router2(config)#!cbac
router2(config)#ip inspect name Cisco ssh

The first command (!cbac) enables the CBAC feature, while the second command inspects the SSH traffic. Now, applying inspection to the interface:

router2(config)#int fa0/1
router2(config-if)#ip inspect cisco out

Now, router1 will be able to SSH to router3, as the SSH packet is first inspected by router2 when it leaves the outbound (fa0/1) interface (as we have configured). This can be verified by:

router2#show ip inspect all

Note – Here, the Access-list has been applied inbound and CBAC has been applied outbound because we want only that traffic to come in from outside the network which has been initiated by the inside network (10.1.1.1). CBAC, applied outbound on the interface (fa0/1), creates temporary holes in the Access-list applied inbound on the interface to allow return packets through the ACL.
Limitations – Some of the limitations of the CBAC mechanism are:

1. CBAC is not simple to understand, i.e. it requires detailed knowledge of the protocols and operations we want to inspect.
2. The CBAC mechanism cannot inspect traffic originating from the router (on which we have configured CBAC) itself.
3. There is no stateful table failover support. If one router fails, another redundant router can be used as a CBAC firewall, but the state table will not be duplicated; it has to be rebuilt, causing some connections to be re-established.
4. It does not inspect encrypted packets, such as IPsec.
[ { "code": null, "e": 28, "s": 0, "text": "\n22 Sep, 2021" }, { "code": null, "e": 316, "s": 28, "text": "In recent times, Access-list (ACL) were used for packet filtering and protection. ACL works on the sequence of rules provided by the administrator. The rules consist of various permit and deny conditions. But disadvantage of ACL is that it filters the traffic upto transport layer only. " }, { "code": null, "e": 471, "s": 316, "text": "Therefore, for a low budget firewall functionality, a Cisco router with the proper IOS version is used. We can implement IOS based firewall by 2 methods: " }, { "code": null, "e": 538, "s": 471, "text": "Context Based Access Control (CBAC) features Zone based firewall " }, { "code": null, "e": 584, "s": 538, "text": "Context Based Access Control (CBAC) features " }, { "code": null, "e": 606, "s": 584, "text": "Zone based firewall " }, { "code": null, "e": 874, "s": 606, "text": "Context access based control (CBAC) – The ACLs provide traffic filtering and protection to the transport layer while on the other hand, CBAC provides the same function upto the application layer. With the help of CBAC configuration, the router can act as a firewall. " }, { "code": null, "e": 1546, "s": 874, "text": "Working – CBAC just works like a reflexive Access-list but in addition to it, it maintains a state table in which the sessions are maintained in memory. When a session is initiated by the device within the network, a dynamic entry is put in the state table and the outbound (going out) traffic is allowed to pass through the router(IoS based firewall). By the help of this entry, the reply of outbound traffic can pass the router (IoS based firewall) as it has an entry for the traffic initiated within the network. This is achieved by IoS based firewall CBAC mechanism as it opens temporary holes on access list (applied to the inbound traffic) to allow reply packets . " }, { "code": null, "e": 1592, "s": 1546, "text": "Features – Some of the features of CBAC are: " }, { "code": null, "e": 2400, "s": 1592, "text": "Inspecting traffic – CBAC maintains TCP /UDP information which is needed to perform deeper inspection in packet payload. Filtering traffic – CBAC filters the traffic which is originated from a trusted network and goes out through the firewall and allows replies only if it has an entry in the state table. It has the ability to filter the traffic intelligently upto layer 7. Detecting intrusion – CBAC examines the rate at which the connection has been established by which it can detect attacks like Dos attack, TCP syn attack etc. On the basis of this, CBAC mechanism can cause a connection to reestablish or drop malicious packets. Generating alerts and audits – The router operating CBAC mechanism log information about connections established, number of bytes sent, source and destination IP address. " }, { "code": null, "e": 2522, "s": 2400, "text": "Inspecting traffic – CBAC maintains TCP /UDP information which is needed to perform deeper inspection in packet payload. " }, { "code": null, "e": 2777, "s": 2522, "text": "Filtering traffic – CBAC filters the traffic which is originated from a trusted network and goes out through the firewall and allows replies only if it has an entry in the state table. It has the ability to filter the traffic intelligently upto layer 7. 
" }, { "code": null, "e": 3038, "s": 2777, "text": "Detecting intrusion – CBAC examines the rate at which the connection has been established by which it can detect attacks like Dos attack, TCP syn attack etc. On the basis of this, CBAC mechanism can cause a connection to reestablish or drop malicious packets. " }, { "code": null, "e": 3211, "s": 3038, "text": "Generating alerts and audits – The router operating CBAC mechanism log information about connections established, number of bytes sent, source and destination IP address. " }, { "code": null, "e": 3228, "s": 3211, "text": "Configuration – " }, { "code": null, "e": 3686, "s": 3230, "text": "There are 3 routers namely router1 (ip address – 10.1.1.1/24 on fa0/0), router2 (ip address-10.1.1.2/24 on fa0/0 and 10.1.2.1/24 on fa0/1) and router3 (ip address – 10.1.2.2/24). First, we will give routes, through EIGRP, to all the routers so that routers will be able to ping each other. After that We will make router3 as ssh server and router2(on which CBAC will be operating) will allow the traffic only if the traffic has been inspected by router2. " }, { "code": null, "e": 3723, "s": 3686, "text": "First configuring EIGRP on router1: " }, { "code": null, "e": 3836, "s": 3723, "text": "router1(config)#router eigrp 100\nrouter1(config-router)#network 10.1.1.0\nrouter1(config-router)#no auto-summary " }, { "code": null, "e": 3896, "s": 3836, "text": "Now, configuring EIGRP on router2 to reach other networks: " }, { "code": null, "e": 4048, "s": 3896, "text": "router2(config)#router eigrp 100\nrouter2(config-router)#network 10.1.1.0\nrouter2(config-router)#network 10.1.2.0\nrouter2(config-router)#no auto-summary" }, { "code": null, "e": 4084, "s": 4048, "text": "Now, configuring eigrp on router3: " }, { "code": null, "e": 4196, "s": 4084, "text": "router3(config)#router eigrp 100\nrouter3(config-router)#network 10.1.2.0\nrouter3(config-router)#no auto-summary" }, { "code": null, "e": 4236, "s": 4196, "text": "Now, we will configure ssh on router3: " }, { "code": null, "e": 4506, "s": 4236, "text": "router3(config)#ip domain name GeeksforGeeks.com\nrouter3(config)#username saurabh password cisco\nrouter3(config)#line vty 0 4\nrouter3(config-line)#transport input ssh\nrouter3(config-line)#login local \nrouter3(config)#crypto key generate rsa label Cisco.com modulus 1024" }, { "code": null, "e": 4667, "s": 4506, "text": "Now, we will make an Access-list on router2 by which we will deny all the traffic except EIGRP because EIGRP will maintain the reachability to all the routers. " }, { "code": null, "e": 4799, "s": 4667, "text": "router2(config)#ip Access-list extended 100\nrouter2(config-ext-nacl)#permit eigrp any any \nrouter2(config-ext-nacl)#deny ip any any" }, { "code": null, "e": 4835, "s": 4799, "text": "Now, applying it to the interface: " }, { "code": null, "e": 4903, "s": 4835, "text": "router2(config)#int fa0/1\nrouter2(config-if)#ip access-group 100 in" }, { "code": null, "e": 5193, "s": 4903, "text": "Now, router1 will not able to ssh router3 as we have applied access-list which will accept Eigrp packets only and deny all other packets. Now, configure CBAC on router2 to inspect the ssh traffic (Only that traffic will be allowed which will be inspected by the IoS router operating CBAC. 
" }, { "code": null, "e": 5257, "s": 5193, "text": "router2(config)#!cbac\nrouter2(config)#ip inspect name Cisco ssh" }, { "code": null, "e": 5407, "s": 5257, "text": "The first command (!cbac) will enable cbac feature while the second command will inspect the ssh traffic. Now, applying inspection to the interface: " }, { "code": null, "e": 5473, "s": 5407, "text": "router2(config)#int fa0/1\nrouter2(config-if)#ip inspect cisco out" }, { "code": null, "e": 5660, "s": 5473, "text": "Now, router1 will able to ssh router3 as the ssh packet is first inspected by the router2 when it leaves the outbound (fa0/1) interface (as we have configured). This can be verified by: " }, { "code": null, "e": 5688, "s": 5660, "text": "router2#show ip inspect all" }, { "code": null, "e": 6072, "s": 5688, "text": "Note – Here, Access-list has been applied inbound and CBAC has been applied out because we want only that traffic to come from outside the network which has been initiated by the inside network (10.1.1.1). CBAC which is applied outbound to the interface (into fa0/1) create temporary holes on the Access-list applied inbound to the interface to allow return packets through the ACL. " }, { "code": null, "e": 6135, "s": 6072, "text": "Limitations – Some of the limitations of cbac mechanisms are: " }, { "code": null, "e": 6655, "s": 6135, "text": "CBAC is not simple to understand i.e it requires detailed knowledge of protocols and operations we want to perform. CBAC mechanism cannot inspect traffic originated from the router (on which we have configured CBAC) itself. No stateful table fail over support. If one router fails then another redundant router can be used as a CBAC firewall but the state table will not get duplicated therefore state table has to be rebuild causing some connection to be rebuilt. It does not inspect encrypted packets such as IPsec. " }, { "code": null, "e": 6772, "s": 6655, "text": "CBAC is not simple to understand i.e it requires detailed knowledge of protocols and operations we want to perform. " }, { "code": null, "e": 6881, "s": 6772, "text": "CBAC mechanism cannot inspect traffic originated from the router (on which we have configured CBAC) itself. " }, { "code": null, "e": 7123, "s": 6881, "text": "No stateful table fail over support. If one router fails then another redundant router can be used as a CBAC firewall but the state table will not get duplicated therefore state table has to be rebuild causing some connection to be rebuilt. " }, { "code": null, "e": 7178, "s": 7123, "text": "It does not inspect encrypted packets such as IPsec. " }, { "code": null, "e": 7194, "s": 7180, "text": "Pushpender007" }, { "code": null, "e": 7212, "s": 7194, "text": "Computer Networks" }, { "code": null, "e": 7230, "s": 7212, "text": "Computer Networks" } ]
String Manipulation in R
String manipulation basically refers to the process of handling and analyzing strings. It involves various operations concerned with the modification and parsing of strings to use and change their data. R offers a series of in-built functions to manipulate the contents of a string. In this article, we will study different functions concerned with the manipulation of strings in R.

String Concatenation is the technique of combining two strings. String Concatenation can be done in many ways:

paste() function

Any number of strings can be concatenated together using the paste() function to form a larger string. This function takes a separator as an argument, which is used between the individual string elements, and another argument 'collapse', which reflects whether we wish to print the strings together as a single larger string. By default, the value of collapse is NULL.

Syntax:

paste(..., sep=" ", collapse = NULL)

Example:

# R program for String concatenation

# Concatenation using paste() function
str <- paste("Learn", "Code")
print (str)

Output:

"Learn Code"

In case no separator is specified, the default separator " " is inserted between individual strings.

Example:

str <- paste(c(1:3), "4", sep = ":")
print (str)

Output:

"1:4" "2:4" "3:4"

Since the objects to be concatenated are of different lengths, the shorter one is recycled against the other input. The first argument is the sequence 1, 2, 3, each element of which is concatenated with the string "4" using the separator ':'.

str <- paste(c(1:4), c(5:8), sep = "--")
print (str)

Output:

"1--5" "2--6" "3--7" "4--8"

Since both vectors are of the same length, the corresponding elements are concatenated, that is, the first element of the first vector is concatenated with the first element of the second vector using the sep '--'.
cat() function

Different types of strings can be concatenated together using the cat() function in R, where sep specifies the separator to place between the strings, and the file argument names a file, in case we wish to write the contents to a file.

Syntax:

cat(..., sep=" ", file)

Example:

# R program for string concatenation

# Concatenation using cat() function
str <- cat("learn", "code", "tech", sep = ":")
print (str)

Output:

learn:code:techNULL

The output string is printed without any quotes, with ':' used as the separator. NULL appears at the end because cat() returns NULL, which print(str) then prints.

Example:

cat(c(1:5), file ='sample.txt')

Output:

1 2 3 4 5

The output is written to a text file sample.txt in the same working directory.

length() function

The length() function determines the number of strings specified in the function.

Example:

# R program to calculate length
print (length(c("Learn to", "Code")))

Output:

2

There are two strings specified in the function.

nchar() function

nchar() counts the number of characters in each of the strings specified as arguments to the function individually.

Example:

print (nchar(c("Learn", "Code")))

Output:

5 4

The output indicates the length of "Learn" and then "Code", separated by " ".

Conversion to upper case

All the characters of the specified strings are converted to upper case.

Example:

print (toupper(c("Learn Code", "hI")))

Output:

"LEARN CODE" "HI"

Conversion to lower case

All the characters of the specified strings are converted to lower case.

Example:

print (tolower(c("Learn Code", "hI")))

Output:

"learn code" "hi"

casefold() function

All the characters of the specified strings are converted to lowercase or uppercase according to the arguments in casefold(..., upper=TRUE).

Examples:

print (casefold(c("Learn Code", "hI")))

Output:

"learn code" "hi"

By default, the strings get converted to lower case.

print (casefold(c("Learn Code", "hI"), upper = TRUE))

Output:

"LEARN CODE" "HI"

Characters can be translated using the chartr(oldchar, newchar, ...) function in R, where every instance of an old character is replaced by the corresponding new character in the specified set of strings.

Example 1:

chartr("a", "A", "An honest man gave that")

Output:

"An honest mAn gAve thAt"

Every instance of 'a' is replaced by 'A'.

Example 2:

chartr("is", "#@", c("This is it", "It is great"))

Output:

"Th#@ #@ #t" "It #@ great"

Every instance of the old characters is replaced by the corresponding new characters: "i" is replaced by "#" and "s" by "@", that is, the characters at corresponding positions of the old string are replaced by those of the new string.

Example 3:

chartr("ate", "#@", "I hate ate")

Output:

Error in chartr("ate", "#@", "I hate ate") : 'old' is longer than 'new'
Execution halted

The old string must not be longer than the new string.

A string can be split into its individual component strings using strsplit() with " " as the separator.

Example:

strsplit("Learn Code Teach !", " ")

Output:

[1] "Learn" "Code" "Teach" "!"

The substr(..., start, end) or substring(..., start, end) function in R extracts substrings out of a string, beginning with the start index and ending with the end index. It can also replace the specified substring with a new set of characters.

Example:

substr("Learn Code Tech", 1, 4)

Output:

"Lear"

Extracts the first four characters from the string.

str <- c("program", "with", "new", "language")
substring(str, 3, 3) <- "%"
print(str)

Output:

"pr%gram" "wi%h" "ne%" "la%guage"

Replaces the third character of every string with the % sign.

str <- c("program", "with", "new", "language")
substr(str, 3, 3) <- c("%", "@")
print(str)

Output:

"pr%gram" "wi@h" "ne%" "la@guage"

Replaces the third character of each string alternately with the specified symbols, because the replacement vector is recycled over the strings.
[ { "code": null, "e": 28, "s": 0, "text": "\n22 Apr, 2020" }, { "code": null, "e": 405, "s": 28, "text": "String manipulation basically refers to the process of handling and analyzing strings. It involves various operations concerned with modification and parsing of strings to use and change its data. R offers a series of in-built functions to manipulate the contents of a string. In this article, we will study different functions concerned with the manipulation of strings in R." }, { "code": null, "e": 519, "s": 405, "text": "String Concatenation is the technique of combining two strings. String Concatenation can be done using many ways:" }, { "code": null, "e": 1840, "s": 519, "text": "paste() functionAny number of strings can be concatenated together using the paste() function to form a larger string. This function takes separator as argument which is used between the individual string elements and another argument ‘collapse’ which reflects if we wish to print the strings together as a single larger string. By default, the value of collapse is NULL.Syntax:paste(..., sep=\" \", collapse = NULL)Example:# R program for String concatenation # Concatenation using paste() functionstr <- paste(\"Learn\", \"Code\")print (str)Output: \"Learn Code\"In case no separator is specified the default separator ” ” is inserted between individual strings.Example:str <- paste(c(1:3), \"4\", sep = \":\")print (str)Output:\"1:4\" \"2:4\" \"3:4\"Since, the objects to be concatenated are of different lengths, a repetition of the string of smaller length is applied with the other input strings. The first string is a sequence of 1, 2, 3 which is then individually concatenated with the other string “4” using separator ‘:’.str <- paste(c(1:4), c(5:8), sep = \"--\")print (str)Output:\"1--5\" \"2--6\" \"3--7\" \"4--8\"Since, both the strings are of the same length, the corresponding elements of both are concatenated, that is the first element of the first string is concatenated with the first element of second-string using the sep '–'." }, { "code": null, "e": 1877, "s": 1840, "text": "paste(..., sep=\" \", collapse = NULL)" }, { "code": null, "e": 1886, "s": 1877, "text": "Example:" }, { "code": "# R program for String concatenation # Concatenation using paste() functionstr <- paste(\"Learn\", \"Code\")print (str)", "e": 2003, "s": 1886, "text": null }, { "code": null, "e": 2011, "s": 2003, "text": "Output:" }, { "code": null, "e": 2025, "s": 2011, "text": " \"Learn Code\"" }, { "code": null, "e": 2125, "s": 2025, "text": "In case no separator is specified the default separator ” ” is inserted between individual strings." }, { "code": null, "e": 2134, "s": 2125, "text": "Example:" }, { "code": "str <- paste(c(1:3), \"4\", sep = \":\")print (str)", "e": 2182, "s": 2134, "text": null }, { "code": null, "e": 2190, "s": 2182, "text": "Output:" }, { "code": null, "e": 2208, "s": 2190, "text": "\"1:4\" \"2:4\" \"3:4\"" }, { "code": null, "e": 2487, "s": 2208, "text": "Since, the objects to be concatenated are of different lengths, a repetition of the string of smaller length is applied with the other input strings. The first string is a sequence of 1, 2, 3 which is then individually concatenated with the other string “4” using separator ‘:’." 
}, { "code": "str <- paste(c(1:4), c(5:8), sep = \"--\")print (str)", "e": 2539, "s": 2487, "text": null }, { "code": null, "e": 2547, "s": 2539, "text": "Output:" }, { "code": null, "e": 2575, "s": 2547, "text": "\"1--5\" \"2--6\" \"3--7\" \"4--8\"" }, { "code": null, "e": 2797, "s": 2575, "text": "Since, both the strings are of the same length, the corresponding elements of both are concatenated, that is the first element of the first string is concatenated with the first element of second-string using the sep '–'." }, { "code": null, "e": 3392, "s": 2797, "text": "cat() functionDifferent types of strings can be concatenated together using the cat()) function in R, where sep specifies the separator to give between the strings and file name, in case we wish to write the contents onto a file.Syntax:cat(..., sep=\" \", file)Example:# R program for string concatenation # Concatenation using cat() functionstr <- cat(\"learn\", \"code\", \"tech\", sep = \":\")print (str)Output:learn:code:techNULLThe output string is printed without any quotes and the default separator is ‘:’.NULL value is appended at the end.Example:cat(c(1:5), file ='sample.txt')Output:1 2 3 4 5" }, { "code": null, "e": 3416, "s": 3392, "text": "cat(..., sep=\" \", file)" }, { "code": null, "e": 3425, "s": 3416, "text": "Example:" }, { "code": "# R program for string concatenation # Concatenation using cat() functionstr <- cat(\"learn\", \"code\", \"tech\", sep = \":\")print (str)", "e": 3557, "s": 3425, "text": null }, { "code": null, "e": 3565, "s": 3557, "text": "Output:" }, { "code": null, "e": 3585, "s": 3565, "text": "learn:code:techNULL" }, { "code": null, "e": 3709, "s": 3585, "text": "The output string is printed without any quotes and the default separator is ‘:’.NULL value is appended at the end.Example:" }, { "code": "cat(c(1:5), file ='sample.txt')", "e": 3741, "s": 3709, "text": null }, { "code": null, "e": 3749, "s": 3741, "text": "Output:" }, { "code": null, "e": 3759, "s": 3749, "text": "1 2 3 4 5" }, { "code": null, "e": 3838, "s": 3759, "text": "The output is written to a text file sample.txt in the same working directory." }, { "code": null, "e": 4071, "s": 3838, "text": "length() functionThe length() function determines the number of strings specified in the function.Example:# R program to calculate length print (length(c(\"Learn to\", \"Code\")))Output:2There are two strings specified in the function." }, { "code": "# R program to calculate length print (length(c(\"Learn to\", \"Code\")))", "e": 4142, "s": 4071, "text": null }, { "code": null, "e": 4150, "s": 4142, "text": "Output:" }, { "code": null, "e": 4152, "s": 4150, "text": "2" }, { "code": null, "e": 4201, "s": 4152, "text": "There are two strings specified in the function." }, { "code": null, "e": 4457, "s": 4201, "text": "nchar() functionnchar() counts the number of characters in each of the strings specified as arguments to the function individually.Example:print (nchar(c(\"Learn\", \"Code\")))Output:5 4The output indicates the length of Learn and then Code separated by ” ” ." }, { "code": "print (nchar(c(\"Learn\", \"Code\")))", "e": 4491, "s": 4457, "text": null }, { "code": null, "e": 4499, "s": 4491, "text": "Output:" }, { "code": null, "e": 4503, "s": 4499, "text": "5 4" }, { "code": null, "e": 4577, "s": 4503, "text": "The output indicates the length of Learn and then Code separated by ” ” ." 
}, { "code": null, "e": 4745, "s": 4577, "text": "Conversion to upper caseAll the characters of the strings specified are converted to upper case.Example:print (toupper(c(\"Learn Code\", \"hI\")))Output :\"LEARN CODE\" \"HI\"" }, { "code": "print (toupper(c(\"Learn Code\", \"hI\")))", "e": 4784, "s": 4745, "text": null }, { "code": null, "e": 4793, "s": 4784, "text": "Output :" }, { "code": null, "e": 4811, "s": 4793, "text": "\"LEARN CODE\" \"HI\"" }, { "code": null, "e": 4979, "s": 4811, "text": "Conversion to lower caseAll the characters of the strings specified are converted to lower case.Example:print (tolower(c(\"Learn Code\", \"hI\")))Output :\"learn code\" \"hi\"" }, { "code": "print (tolower(c(\"Learn Code\", \"hI\")))", "e": 5018, "s": 4979, "text": null }, { "code": null, "e": 5027, "s": 5018, "text": "Output :" }, { "code": null, "e": 5045, "s": 5027, "text": "\"learn code\" \"hi\"" }, { "code": null, "e": 5406, "s": 5045, "text": "casefold() functionAll the characters of the strings specified are converted to lowercase or uppercase according to the arguments in casefold(..., upper=TRUE).Examples:print (casefold(c(\"Learn Code\", \"hI\")))Output:\"learn code\" \"hi\"By default, the strings get converted to lower case.print (casefold(c(\"Learn Code\", \"hI\"), upper = TRUE))Output:\"LEARN CODE\" \"HI\"" }, { "code": "print (casefold(c(\"Learn Code\", \"hI\")))", "e": 5446, "s": 5406, "text": null }, { "code": null, "e": 5454, "s": 5446, "text": "Output:" }, { "code": null, "e": 5472, "s": 5454, "text": "\"learn code\" \"hi\"" }, { "code": null, "e": 5525, "s": 5472, "text": "By default, the strings get converted to lower case." }, { "code": "print (casefold(c(\"Learn Code\", \"hI\"), upper = TRUE))", "e": 5579, "s": 5525, "text": null }, { "code": null, "e": 5587, "s": 5579, "text": "Output:" }, { "code": null, "e": 5605, "s": 5587, "text": "\"LEARN CODE\" \"HI\"" }, { "code": null, "e": 5803, "s": 5605, "text": "Characters can be translated using the chartr(oldchar, newchar, ...) function in R, where every instance of old character is replaced by the new character in the specified set of strings.Example 1:" }, { "code": "chartr(\"a\", \"A\", \"An honest man gave that\")", "e": 5847, "s": 5803, "text": null }, { "code": null, "e": 5855, "s": 5847, "text": "Output:" }, { "code": null, "e": 5881, "s": 5855, "text": "\"An honest mAn gAve thAt\"" }, { "code": null, "e": 5933, "s": 5881, "text": "Every instance of ‘a’ is replaced by ‘A’.Example 2:" }, { "code": "chartr(\"is\", \"#@\", c(\"This is it\", \"It is great\"))", "e": 5984, "s": 5933, "text": null }, { "code": null, "e": 5992, "s": 5984, "text": "Output:" }, { "code": null, "e": 6020, "s": 5992, "text": "\"Th#@ #@ #t\" \"It #@ great\"" }, { "code": null, "e": 6211, "s": 6020, "text": "Every instance of old string is replaced by new specified string. “i” is replaced by “#” by “s” by “@”, that is the corresponding positions of old string is replaced by new string.Example 3:" }, { "code": "chartr(\"ate\", \"#@\", \"I hate ate\")", "e": 6245, "s": 6211, "text": null }, { "code": null, "e": 6253, "s": 6245, "text": "Output:" }, { "code": null, "e": 6352, "s": 6253, "text": "Error in chartr(\"ate\", \"#@\", \"I hate ate\") : 'old' is longer than 'new'\n Execution halted " }, { "code": null, "e": 6417, "s": 6352, "text": "The length of the old string should be less than the new string." 
}, { "code": null, "e": 6518, "s": 6417, "text": "A string can be split into corresponding individual strings using ” ” the default separator.Example:" }, { "code": "strsplit(\"Learn Code Teach !\", \" \")", "e": 6554, "s": 6518, "text": null }, { "code": null, "e": 6562, "s": 6554, "text": "Output:" }, { "code": null, "e": 6594, "s": 6562, "text": "[1] \"Learn\" \"Code\" \"Teach\" \"!\"" }, { "code": null, "e": 6839, "s": 6594, "text": "substr(..., start, end) or substring(..., start, end) function in R extracts substrings out of a string beginning with the start index and ending with the end index. It also replaces the specified substring with a new set of characters.Example:" }, { "code": "substr(\"Learn Code Tech\", 1, 4)", "e": 6871, "s": 6839, "text": null }, { "code": null, "e": 6879, "s": 6871, "text": "Output:" }, { "code": null, "e": 6886, "s": 6879, "text": "\"Lear\"" }, { "code": null, "e": 6938, "s": 6886, "text": "Extracts the first four characters from the string." }, { "code": "str <- c(\"program\", \"with\", \"new\", \"language\")substring(str, 3, 3) <- \"%\"print(str)", "e": 7022, "s": 6938, "text": null }, { "code": null, "e": 7030, "s": 7022, "text": "Output:" }, { "code": null, "e": 7074, "s": 7030, "text": "\"pr%gram\" \"wi%h\" \"ne%\" \"la%guage\"" }, { "code": null, "e": 7132, "s": 7074, "text": "Replaces the third character of every string with % sign." }, { "code": "str <- c(\"program\", \"with\", \"new\", \"language\")substr(str, 3, 3) <- c(\"%\", \"@\")print(str)", "e": 7221, "s": 7132, "text": null }, { "code": null, "e": 7229, "s": 7221, "text": "Output:" }, { "code": null, "e": 7273, "s": 7229, "text": "\"pr%gram\" \"wi@h\" \"ne%\" \"la@guage\"" }, { "code": null, "e": 7359, "s": 7273, "text": "Replaces the third character of each string alternatively with the specified symbols." }, { "code": null, "e": 7366, "s": 7359, "text": "Picked" }, { "code": null, "e": 7376, "s": 7366, "text": "R-strings" }, { "code": null, "e": 7387, "s": 7376, "text": "R Language" } ]
Python | Get first and last elements of a list
Sometimes we might need to get the range between which a number lies in a list; for such applications we require the first and last elements of the list. Let's discuss certain ways to get the first and last element of a list.

Method #1 : Using list index

Using list indices can perform this particular task. This is the most naive method to achieve this particular task one can think of.

# Python3 code to demonstrate
# to get first and last element of list
# using list indexing

# initializing list
test_list = [1, 5, 6, 7, 4]

# printing original list
print ("The original list is : " + str(test_list))

# using list indexing
# to get first and last element of list
res = [ test_list[0], test_list[-1] ]

# printing result
print ("The first and last element of list are : " + str(res))

Output:

The original list is : [1, 5, 6, 7, 4]
The first and last element of list are : [1, 4]

Method #2 : Using list slicing

One can also make use of the list slicing technique to perform the particular task of getting the first and last elements. We can use a step of len(list) - 1 to jump straight from the first element to the last.

# Python3 code to demonstrate
# to get first and last element of list
# using list slicing

# initializing list
test_list = [1, 5, 6, 7, 4]

# printing original list
print ("The original list is : " + str(test_list))

# using list slicing
# to get first and last element of list
res = test_list[::len(test_list)-1]

# printing result
print ("The first and last element of list are : " + str(res))

Output:

The original list is : [1, 5, 6, 7, 4]
The first and last element of list are : [1, 4]

Method #3 : Using list comprehension

List comprehension can be employed to provide a shorthand for the loop technique to find the first and last elements of the list. The naive method is converted to a single line using this method.

# Python3 code to demonstrate
# to get first and last element of list
# using list comprehension

# initializing list
test_list = [1, 5, 6, 7, 4]

# printing original list
print ("The original list is : " + str(test_list))

# using list comprehension
# to get first and last element of list
res = [ test_list[i] for i in (0, -1) ]

# printing result
print ("The first and last element of list are : " + str(res))

Output:

The original list is : [1, 5, 6, 7, 4]
The first and last element of list are : [1, 4]
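One more shorthand, not covered above, uses Python 3's extended iterable unpacking; note that it assumes the list has at least two elements:

# Python3 code to get first and last element of list
# using starred unpacking

test_list = [1, 5, 6, 7, 4]

# the middle elements are collected into _ and discarded
first, *_, last = test_list
res = [first, last]

print ("The first and last element of list are : " + str(res))

Output:

The first and last element of list are : [1, 4]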
[ { "code": null, "e": 53, "s": 25, "text": "\n02 Jan, 2019" }, { "code": null, "e": 298, "s": 53, "text": "Sometimes, there might be a need to get the range between which a number lies in the list, for such applications we require to get the first and last element of the list. Let’s discuss certain ways to get the first and last element of the list." }, { "code": null, "e": 482, "s": 298, "text": "Method #1 : Using list indexUsing the list indices inside the master list can perform this particular task. This is most naive method to achieve this particular task one can think of." }, { "code": "# Python3 code to demonstrate # to get first and last element of list# using list indexing # initializing list test_list = [1, 5, 6, 7, 4] # printing original list print (\"The original list is : \" + str(test_list)) # using list indexing# to get first and last element of listres = [ test_list[0], test_list[-1] ] # printing resultprint (\"The first and last element of list are : \" + str(res))", "e": 882, "s": 482, "text": null }, { "code": null, "e": 970, "s": 882, "text": "The original list is : [1, 5, 6, 7, 4]\nThe first and last element of list are : [1, 4]\n" }, { "code": null, "e": 1198, "s": 970, "text": " Method #2 : Using List slicingOne can also make use of list slicing technique to perform the particular task of getting first and last element. We can use step of whole list to skip to the last element after the first element." }, { "code": "# Python3 code to demonstrate # to get first and last element of list# using List slicing # initializing list test_list = [1, 5, 6, 7, 4] # printing original list print (\"The original list is : \" + str(test_list)) # using List slicing# to get first and last element of listres = test_list[::len(test_list)-1] # printing resultprint (\"The first and last element of list are : \" + str(res))", "e": 1594, "s": 1198, "text": null }, { "code": null, "e": 1682, "s": 1594, "text": "The original list is : [1, 5, 6, 7, 4]\nThe first and last element of list are : [1, 4]\n" }, { "code": null, "e": 1916, "s": 1682, "text": " Method #3 : Using list comprehensionList comprehension can be employed to provide a shorthand to the loop technique to find first and last element of the list. Naive method of finding is converted to a single line using this method." }, { "code": "# Python3 code to demonstrate # to get first and last element of list# using list comprehension # initializing list test_list = [1, 5, 6, 7, 4] # printing original list print (\"The original list is : \" + str(test_list)) # using list comprehension# to get first and last element of listres = [ test_list[i] for i in (0, -1) ] # printing resultprint (\"The first and last element of list are : \" + str(res))", "e": 2328, "s": 1916, "text": null }, { "code": null, "e": 2416, "s": 2328, "text": "The original list is : [1, 5, 6, 7, 4]\nThe first and last element of list are : [1, 4]\n" }, { "code": null, "e": 2437, "s": 2416, "text": "Python list-programs" }, { "code": null, "e": 2444, "s": 2437, "text": "Python" }, { "code": null, "e": 2460, "s": 2444, "text": "Python Programs" } ]
How to Zip a directory in PHP?
ZIP is an archive file format that supports lossless data compression. A ZIP file may contain one or more files or directories that may have been compressed. The PHP ZipArchive class can be used for zipping and unzipping. You might need to install the class if it is not present.

Installation for Linux users: In order to use these functions you must compile PHP with zip support by using the --enable-zip configure option.

Installation for Windows users: As of PHP 5.3 this extension is inbuilt. Before that, Windows users needed to enable php_zip.dll inside of php.ini in order to use its functions.

Example: This example uses the ZipArchive class to create a zipped file.

<?php

// Enter the name of the directory
$pathdir = "Directory Name/";

// Enter the name of the zip file to create
$zipcreated = "Name of Zip.zip";

// Create new ZipArchive object
$zip = new ZipArchive;

if($zip->open($zipcreated, ZipArchive::CREATE) === TRUE) {

    // Open the directory
    $dir = opendir($pathdir);

    // Add each regular file in the directory to the archive
    while($file = readdir($dir)) {
        if(is_file($pathdir.$file)) {
            $zip->addFile($pathdir.$file, $file);
        }
    }
    $zip->close();
}
?>

Example: This example uses the ZipArchive class to unzip a file or directory.

<?php

// Create new ZipArchive object
$zip = new ZipArchive;

// Open the zip file which needs to be unzipped
$zip->open('filename.zip');

// Extract to the current directory
$zip->extractTo('./');

$zip->close();
?>

Steps to run the program: Zip a directory 'zipfile' containing a file 'a.txt'.

1. Save the above code into two files with extension .php, one for zipping and another for unzipping the directory. Also give the appropriate path for the directory.
2. Here we use XAMPP to run a local web server. Place the PHP files along with the directory to be zipped in C:\xampp\htdocs (XAMPP is installed in the C: drive in this case).
3. In the browser, enter https://localhost/zip.php as the URL and the file will be zipped. After this, a new zip file is created named 'file'.
4. Similarly, do the same to unzip. Make sure the filename and path are matching. The text file a.txt is extracted from the zip file.
[ { "code": null, "e": 28, "s": 0, "text": "\n10 Aug, 2021" }, { "code": null, "e": 309, "s": 28, "text": "ZIP is an archive file format that supports lossless data compression. A ZIP file may contain one or more files or directories that may have been compressed. The PHP ZipArchive class can be used to zipping and unzipping. It might be need to install the class if it is not present." }, { "code": null, "e": 620, "s": 309, "text": "Installation for Linux users:In order to use these functions you must compile PHP with zip support by using the –enable-zip configure option.Installation for Windows users:As of PHP 5.3 this extension is inbuilt. Before, Windows users need to enable php_zip.dll inside of php.ini in order to use its functions." }, { "code": null, "e": 690, "s": 620, "text": "Example: This example uses ZipArchive class and create a zipped file." }, { "code": "<?php // Enter the name of directory$pathdir = \"Directory Name/\"; // Enter the name to creating zipped directory$zipcreated = \"Name of Zip.zip\"; // Create new zip class$zip = new ZipArchive; if($zip -> open($zipcreated, ZipArchive::CREATE ) === TRUE) { // Store the path into the variable $dir = opendir($pathdir); while($file = readdir($dir)) { if(is_file($pathdir.$file)) { $zip -> addFile($pathdir.$file, $file); } } $zip ->close();} ?>", "e": 1190, "s": 690, "text": null }, { "code": null, "e": 1266, "s": 1190, "text": "Example: This example uses ZipArchive class to unzip the file or directory." }, { "code": "<?php // Create new zip class$zip = new ZipArchive; // Add zip filename which need// to unzip$zip->open('filename.zip'); // Extracts to current directory$zip->extractTo('./'); $zip->close(); ?>", "e": 1466, "s": 1266, "text": null }, { "code": null, "e": 1545, "s": 1466, "text": "Steps to run the program: Zip a directory ‘zipfile’ containing a file ‘a.txt’." }, { "code": null, "e": 1700, "s": 1545, "text": "Save the above code into two files with extension .php .One for zip and another for unzip the directory. Also give the appropriate path for the directory." }, { "code": null, "e": 1868, "s": 1700, "text": "Here we use XAMPP to run a local web server. Place the php files along with the directory to be zipped in C:\\xampp\\htdocs(XAMPP is installed in C: drive in this case)." }, { "code": null, "e": 1956, "s": 1868, "text": "In the browser, enter https://localhost/zip.php as the url and the file will be zipped." }, { "code": null, "e": 2007, "s": 1956, "text": "After this a new zip file is created named ‘file’." }, { "code": null, "e": 2138, "s": 2007, "text": "Similarly, do the same to unzip. Make sure the filename and path are matching. The text file a.txt is extracted from the zip file." }, { "code": null, "e": 2145, "s": 2138, "text": "Picked" }, { "code": null, "e": 2149, "s": 2145, "text": "PHP" }, { "code": null, "e": 2162, "s": 2149, "text": "PHP Programs" }, { "code": null, "e": 2179, "s": 2162, "text": "Web Technologies" }, { "code": null, "e": 2183, "s": 2179, "text": "PHP" } ]
How to compile & run a Java program using Command Prompt?
While many programming environments allow us to compile and run a program within the environment, we can also compile and run Java programs using the Command Prompt.

After successful installation of the JDK on our system and setting the path, we can compile and execute Java programs using the command prompt.

Step 1 - Create a Java program either in Notepad or another IDE.

Step 2 - Save this Java file as "Demo.java" in a folder.

Step 3 - Compile this Java file from the command prompt using the JAVAC command.

Step 4 - "Demo.java" is compiled successfully, generating a ".class" file.

Step 5 - Execute this Java file using the JAVA command, without the ".java" extension.

Step 6 - You should see "Welcome to Tutorials Point" printed in the console.

class Demo{
   public static void main(String args[]){
      System.out.println("Welcome to Tutorials Point");
   }
}

Welcome to Tutorials Point
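For concreteness, the commands used in Step 3 and Step 5 look like this, assuming the command prompt's working directory is the folder containing Demo.java:

javac Demo.java
java Demo

javac produces Demo.class in the same folder, and java then loads that class and invokes its main method.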
[ { "code": null, "e": 1224, "s": 1062, "text": "While many programming environments allow us to compile and run a program within the environment, we can also compile and run java programs using Command Prompt." }, { "code": null, "e": 1368, "s": 1224, "text": "After successful installation of JDK in our system and set the path, we can able to compile and execute Java programs using the command prompt." }, { "code": null, "e": 1442, "s": 1368, "text": "Step 1 - Need to create a java program either in Notepad or other IDE." }, { "code": null, "e": 1544, "s": 1442, "text": "Step 2 - Need to save this java file in a folder with \"Demo.java\" and it can be saved in a folder." }, { "code": null, "e": 1632, "s": 1544, "text": "Step 3 - Need to compile this java file from the command prompt using JAVAC command." }, { "code": null, "e": 1721, "s": 1632, "text": "Step 4 - \"Demo.java\" file is successfully compiled with a generation of \".class\" file." }, { "code": null, "e": 1810, "s": 1721, "text": "Step 5 - Need to execute this java file using JAVA command without \".java\" extension." }, { "code": null, "e": 1885, "s": 1810, "text": "Step 6 - Able to see \"Welcome to TutorialsPoint\" output in the console." }, { "code": null, "e": 2003, "s": 1885, "text": "class Demo{\n public static void main(String args[]){\n System.out.println(\"Welcome to Tutorials Point\");\n }\n}" }, { "code": null, "e": 2029, "s": 2003, "text": "Welcome to TutorialsPoint" } ]
Gradient Boosted Trees for Classification — One of the Best Machine Learning Algorithms | by Saul Dobilas | Towards Data Science
Gradient Boosted Trees is one of my favorite algorithms, and it’s been widely used by many Data Scientists. Whether you are building a model for your own project or trying to win a Kaggle competition, it is always worth considering Gradient Boosting.

In this story, I will take you through the details of how the algorithm actually works and provide Python code resources for you to use in your own projects. We will cover:

The category of algorithms Gradient Boosted Trees belong to
Definitions and fundamentals
Example-based explanation of Gradient Boosted Trees
Python code resources

Gradient Boosting is a tree-based algorithm, which sits under the supervised branch of Machine Learning. Note that it can be used for both classification and regression problems. In this story, however, I will focus on the classification side.

Side note: I have put Neural Networks in a category of their own due to their unique approach to Machine Learning. However, they can be used to solve a wide range of problems, including but not limited to classification and regression. The below chart is interactive, so make sure to click👇 on different categories to enlarge and reveal more.

If you enjoy Data Science and Machine Learning, please subscribe to get an email whenever I publish a new story.

To understand how Gradient Boosted Trees are formed, we need to familiarize ourselves with logistic regression and decision tree concepts.

Regression Trees — this may sound strange at first, but the Gradient Boost Classification algorithm does not use Classification Trees. Instead, it uses Regression Trees. This is because the target in Gradient Boosted Trees is the residual, not the class label. Don’t worry if you are confused about this. It will become clearer once we start going through the example.

Residual — it is the difference between the observed value and the estimated value (prediction). We will use residuals for calculating MSE and for tree predictions. Note, ‘Observed’ and ‘Predicted’ are the actual and model-predicted values for each data record:

Residual = Observed - Predicted

MSE (Mean Squared Error) — there are multiple ways to find the best split in the tree node, with MSE being one of the most common approaches used for regression trees. Hence, MSE is what we will use in our algorithm:

MSE = (1/n) * Sum of (Observed - Predicted)^2

Odds Ratio — it is simply a ratio between the number of events and non-events. Say, if your data contains 3 spam emails and 2 non-spam emails, the odds would be 3:2 (1.5 in decimal notation).

Log(odds) — is a natural logarithm of the odds ratio. So if the odds are 3:2 = 1.5, then log(odds) = log(1.5) = 0.405...

Probability vs. odds — it is easy to convert odds to probabilities and vice versa. For example, if the odds are 3:2, then the probability is 3/5 = 0.6. You can use the following equations to convert between probability and odds:

Odds = Probability / (1 - Probability)
Probability = Odds / (1 + Odds)

The probability expressed through log(odds) — is the final piece that we will need for our model calculations:

Probability = 1 / (1 + e^(-log(odds)))

Note, if you want to get a deeper understanding of the above concepts, you can refer to my story about Logistic Regression:

towardsdatascience.com

First, let me share with you a process map to help you see how each step fits together to produce a Gradient Boosted Trees model. Then, let’s take a look at the data we will use.

We have 9 observations from the Australian weather dataset, with ‘Wind Gust Speed’ and ‘Humidity at 3 pm’ as our two model features (independent variables). ‘Rain Tomorrow Flag’ is what we want to predict with our model.

Note, we are using a tiny set of data because we want to make the explanation easier. However, the algorithm works the same with a much larger dataset.

Start - Initial Prediction

Our data contains 5 observations where Rain Tomorrow = 1 and 4 observations where Rain Tomorrow = 0. The initial prediction is simply a probability derived from all observations: Probability = 5 / 9 = 0.56. Meanwhile, residuals are calculated for each observation separately and, in this case, take one of two values: Residual = Observed - Predicted = 1 - 0.56 = 0.44 or 0 - 0.56 = -0.56.

Since we are building a classification model, we will also need to calculate log(odds). While the initial prediction is Probability = 0.56, the log(odds) value of the initial leaf is: Log(odds) = log(5/4) = log(1.25) = 0.22. Note that log(odds) from the initial leaf will be used to recalculate residuals after building Tree 1.
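As a quick sanity check of the conversions above, here is a minimal Python sketch (my own illustration, not one of the article’s gists):

import math

def prob_to_odds(p):
    # Odds = Probability / (1 - Probability)
    return p / (1 - p)

def log_odds_to_prob(log_odds):
    # Probability = 1 / (1 + e^(-log(odds)))
    return 1 / (1 + math.exp(-log_odds))

p0 = 5 / 9                              # initial prediction, ~0.56
log_odds0 = math.log(prob_to_odds(p0))  # log(5/4) = log(1.25), ~0.22
print(round(p0, 2), round(log_odds0, 2))      # 0.56 0.22
print(round(log_odds_to_prob(log_odds0), 2))  # back to 0.56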
Let’s now build the first tree. Below are the model hyperparameters that we use for this specific example. I just want to highlight a few of them at this stage, but I will provide the complete Python code in the Python section at the end of this story.

While the tree’s default size has a max depth of 3, we have set it to 1 to make the example simpler. This means that the trees we are building will be decision stumps. In total, our model will contain 3 decision stumps, with the learning rate set to 0.5.

It is important to reiterate that with Gradient Boosting, we are not predicting class labels (rain / no rain) but residuals instead (see below table).

Let’s look at the first decision tree (stump) and work out how it’s been created.

The first question to answer is: “Why has the algorithm selected this particular node split (Humidity at 3 pm ≤ 45.5)?”

The best split is identified by calculating MSE for every possible split using ‘Humidity at 3 pm’ and calculating MSE for every possible split using ‘Wind Gust Speed.’ Then the algorithm compares all of those splits and picks the one with the lowest MSE. You can see the calculation of MSE for the chosen split below.

The second question we need to answer: “What is the purpose of ‘value’ within the tree, and how is it calculated?”

The leaf ‘value’ is what the algorithm uses to recalculate the residuals, which are then used as a target when building the next tree. ‘Value’ is calculated with the below formula, the standard leaf output for a log-loss Gradient Boosting model:

Value = Sum of Residuals / Sum of [Previous Probability * (1 - Previous Probability)]

Now let’s put all of this knowledge to practice and calculate each of the metrics within the tree.

The last part left to do before moving to Tree 2 is recalculating the residuals (model targets). Following the steps in the process map above, this is what we need to do to recalculate the residuals for each observation:

'New Log(odds)' = 'Log(odds) from the initial prediction' + 'Value from Tree 1' * 'Learning Rate'
Obs1 = 0.22 + 0.643 * 0.5 = 0.54   <-- Example for leaf 2
Obs6 = 0.22 + -2.25 * 0.5 = -0.91  <-- Example for leaf 1

Note, the calculation above gives us the new log(odds). However, to get new residuals, we first need to convert log(odds) back to probabilities using this formula:

New Predicted Probability (Obs1) = 1/(1+e^(-0.54)) = 0.63
New Predicted Probability (Obs6) = 1/(1+e^(0.91)) = 0.29

Note how the above probabilities are steps in the right direction for both observations.

Finally, we calculate new residuals using the class label and the new predicted probability:

New Residual (Obs1) = 1 - 0.63 = 0.37
New Residual (Obs6) = 0 - 0.29 = -0.29

Here is a table containing recalculated values for all observations:

One interesting observation is that the new predictions for Obs7 and Obs9 are actually worse than before. This is exactly why we need to build many trees, as having a larger number of trees will lead to improvements in predictions across all observations. You can see this illustrated in the gif image at the beginning of this story.
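The residual update above is easy to verify in a few lines of Python (again my own sketch, using the two leaf values from Tree 1 and the rounded log(odds) of 0.22):

import math

initial_log_odds = 0.22  # log(5/4), rounded as in the walkthrough
learning_rate = 0.5

# (name, class label, Tree 1 leaf value) for Obs1 (leaf 2) and Obs6 (leaf 1)
for name, label, leaf_value in [("Obs1", 1, 0.643), ("Obs6", 0, -2.25)]:
    new_log_odds = initial_log_odds + learning_rate * leaf_value
    new_prob = 1 / (1 + math.exp(-new_log_odds))  # back to a probability
    residual = label - new_prob                   # new target for Tree 2
    print(name, round(new_log_odds, 2), round(new_prob, 2), round(residual, 2))

# prints: Obs1 0.54 0.63 0.37 and Obs6 -0.91 0.29 -0.29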
Now we can use new residuals to build the second tree. Note, I will not list all of the formulas again since those are the same as in Tree 1. However, if you want to attempt the calculations yourself, remember to use the new values from the table created post Tree 1.

The algorithm identified Wind Gust Speed ≤ 49 as the best node split in the second tree, with 8 observations (samples) in leaf 1 and only 1 observation in leaf 2. Note, when building a larger tree from a big dataset, it is advisable to restrict the minimum number of observations allowed in a leaf to reduce the chance of overfitting.

Since we already know how the leaf values are calculated, let’s go straight into finding the new residuals:

'New Log(odds) T2' = 'Log(odds) from the initial prediction' + 'Value from Tree 1' * 'Learning Rate' + 'Value from Tree 2' * 'Learning Rate'
Obs1 = 0.22 + 0.643 * 0.5 + 0.347 * 0.5 = 0.72    <-- T1L2-T2L1
Obs6 = 0.22 + -2.25 * 0.5 + 0.347 * 0.5 = -0.73   <-- T1L1-T2L1
Obs7 = 0.22 + 0.643 * 0.5 + -2.724 * 0.5 = -0.82  <-- T1L2-T2L2

As before, we convert the above to new predicted probabilities:

Pred Prob T2 (Obs1) = 1/(1+e^(-0.72)) = 0.67
Pred Prob T2 (Obs6) = 1/(1+e^(0.73)) = 0.33
Pred Prob T2 (Obs7) = 1/(1+e^(0.82)) = 0.31

Then we calculate new residuals and place them in the table below to be used for a target in Tree number 3.

The same process is repeated when building additional trees until the specified number of trees is reached, or the improvement becomes too small (falls below a certain threshold). For interest, this is what stump number 3 looks like:

Since we restricted our model to only 3 trees, the last part left is to get the final set of predictions by following the above methodology to combine the log(odds). I believe you have got the hang of it by now, so I will not repeat it. Instead, let’s jump into the Python code.

Now that you understand how Gradient Boosted Trees work, let’s build a model using the full set of observations but with the same two features as before. This is so we can generate the below 3D graph to demonstrate model predictions.

We will use the following data and libraries:

Australian weather data from Kaggle
Scikit-learn library for splitting the data into train-test samples, building the Gradient Boost Classification model, and model evaluation
Plotly for data visualizations
Pandas and Numpy for data manipulation

First, we import all the libraries. Then we get the Australian weather data from Kaggle, which you can download following this link: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package. We ingest the data and derive a few new variables for usage in the model.

Next, we define a function to be used for model training and run the model to produce the results:

Step 1 — split data into train and test samples
Step 2 — set model parameters and train (fit) the model
Step 3 — predict class labels on train and test data using our model
Step 4 — generate model summary statistics
Step 5 — run the model and display the results

These are the model evaluation metrics the above function returns. Finally, one more function generates the 3D graph shown above.
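The original code gists are embedded in the article and not reproduced here, but a minimal sketch of what the training function could look like is below (the hyperparameters mirror the walkthrough; the function, column, and parameter names are my own illustrative placeholders rather than the author’s exact code):

from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

def fit_gradient_boosting(df, features, target):
    # Step 1 - split data into train and test samples
    X_train, X_test, y_train, y_test = train_test_split(
        df[features], df[target], test_size=0.2, random_state=0)

    # Step 2 - set model parameters and train (fit) the model
    model = GradientBoostingClassifier(n_estimators=3, learning_rate=0.5,
                                       max_depth=1)
    model.fit(X_train, y_train)

    # Step 3 - predict class labels on train and test data
    pred_train = model.predict(X_train)
    pred_test = model.predict(X_test)

    # Step 4 - generate model summary statistics
    print(classification_report(y_test, pred_test))

    return model, pred_train, pred_test

# Step 5 - run the model and display the results, e.g.:
# model, pred_train, pred_test = fit_gradient_boosting(
#     df, ['WindGustSpeed', 'Humidity3pm'], 'RainTomorrowFlag')

A possible sketch of the 3D-graph function, under the same caveats, evaluates the fitted model over a grid of the two features and plots the predicted probability surface with Plotly:

import numpy as np
import pandas as pd
import plotly.graph_objects as go

def plot_3d_predictions(model, df, feature_x, feature_y):
    # build a mesh over the two model features
    x_range = np.linspace(df[feature_x].min(), df[feature_x].max(), 50)
    y_range = np.linspace(df[feature_y].min(), df[feature_y].max(), 50)
    xx, yy = np.meshgrid(x_range, y_range)

    # predicted probability of rain for every point on the mesh;
    # note: the grid's column order must match the order used to fit the model
    grid = pd.DataFrame({feature_x: xx.ravel(), feature_y: yy.ravel()})
    zz = model.predict_proba(grid)[:, 1].reshape(xx.shape)

    fig = go.Figure(data=[go.Surface(x=x_range, y=y_range, z=zz)])
    fig.show()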
Although this story is quite long and with many details, the Gradient Boost algorithm is one of the most important ones to understand for a Data Scientist. I sincerely hope you had fun reading it and that there is no more mystery left behind this algorithm for you.

Cheers! 👏
Saul Dobilas

If you have already spent your learning budget for this month, please remember me next time. My personalized link to join Medium is: solclover.com

Also, here are some resources on alternative algorithms if you found this article interesting.
[ { "code": null, "e": 422, "s": 171, "text": "Gradient Boosted Trees is one of my favorite algorithms, and it’s been widely used by many Data Scientists. Whether you are building a model for your own project or trying to win a Kaggle competition, it is always worth considering Gradient Boosting." }, { "code": null, "e": 580, "s": 422, "text": "In this story, I will take you through the details of how the algorithm actually works and provide Python code resources for you to use in your own projects." }, { "code": null, "e": 640, "s": 580, "text": "The category of algorithms Gradient Boosted Trees belong to" }, { "code": null, "e": 669, "s": 640, "text": "Definitions and fundamentals" }, { "code": null, "e": 721, "s": 669, "text": "Example based explanation of Gradient Boosted Trees" }, { "code": null, "e": 743, "s": 721, "text": "Python code resources" }, { "code": null, "e": 987, "s": 743, "text": "Gradient Boosting is a tree-based algorithm, which sits under the supervised branch of Machine Learning. Note that it can be used for both classification and regression problems. In this story, however, I will focus on the classification side." }, { "code": null, "e": 1329, "s": 987, "text": "Side note, I have put Neural Networks in a category of their own due to their unique approach to Machine Learning. However, they can be used to solve a wide range of problems, including but not limited to classification and regression. The below chart is interactive so make sure to click👇 on different categories to enlarge and reveal more." }, { "code": null, "e": 1442, "s": 1329, "text": "If you enjoy Data Science and Machine Learning, please subscribe to get an email whenever I publish a new story." }, { "code": null, "e": 1581, "s": 1442, "text": "To understand how Gradient Boosted Trees are formed, we need to familiarize ourselves with logistic regression and decision tree concepts." }, { "code": null, "e": 1950, "s": 1581, "text": "Regression Trees — this may sound strange at first, but the Gradient Boost Classification algorithm does not use Classification Trees. Instead, it uses Regression Trees. This is because the target in Gradient Boosted Trees is the residual, not the class label. Don’t worry if you are confused about this. It will become clearer once we start going through the example." }, { "code": null, "e": 2115, "s": 1950, "text": "Residual — it is the difference between the observed value and the estimated value (prediction). We will use residuals for calculating MSE and for tree predictions." }, { "code": null, "e": 2463, "s": 2115, "text": "MSE (Mean Squared Error) — there are multiple ways to find the best split in the tree node, with MSE being one of the most common approaches used for regression trees. Hence, MSE is what we will use in our algorithm. Note, ‘Observed’ and ‘Predicted’ are the actual and model-predicted values for each data record.Residual = (Observed — Predicted)." }, { "code": null, "e": 2655, "s": 2463, "text": "Odds Ratio — it is simply a ratio between the number of events and non-events. Say, if your data contains 3 spam emails and 2 non-spam emails, the odds would be 3:2 (1.5 in decimal notation)." }, { "code": null, "e": 2777, "s": 2655, "text": "Log(odds) — is a natural logarithm of the odds ratio. So if, the odds are 3:2 = 1.5, then log(odds) = log(1.5) = 0.405..." }, { "code": null, "e": 3005, "s": 2777, "text": "Probability vs. odds — it is easy to convert odds to probabilities and vice versa. 
For example if, the odds are 3:2, then the probability is 3/5 = 0.6.You can use the following equations to convert between probability and odds:" }, { "code": null, "e": 3116, "s": 3005, "text": "The probability expressed through log(odds) — is the final piece that we will need for our model calculations:" }, { "code": null, "e": 3240, "s": 3116, "text": "Note, if you want to get a deeper understanding of the above concepts, you can refer to my story about Logistic Regression:" }, { "code": null, "e": 3263, "s": 3240, "text": "towardsdatascience.com" }, { "code": null, "e": 3393, "s": 3263, "text": "First, let me share with you a process map to help you see how each step fits together to produce a Gradient Boosted Trees model." }, { "code": null, "e": 3443, "s": 3393, "text": "First, let's take a look at the data we will use." }, { "code": null, "e": 3663, "s": 3443, "text": "We have 9 observations from the Australian weather dataset with ‘Wind Gust Speed’ and ‘Humidity at 3 pm’ as our two model features (independent variables). ‘Rain Tomorrow Flag’ is what we want to predict with our model." }, { "code": null, "e": 3815, "s": 3663, "text": "Note, we are using a tiny set of data because we want to make the explanation easier. However, the algorithm works the same with a much larger dataset." }, { "code": null, "e": 3845, "s": 3815, "text": " Start - Initial Prediction " }, { "code": null, "e": 3946, "s": 3845, "text": "Our data contains 5 observations where Rain Tomorrow = 1 and 4 observations where Rain Tomorrow = 0." }, { "code": null, "e": 4237, "s": 3946, "text": "The initial prediction is simply a probability derived from all observations Probability = 5 / 9 = 0.56. Meanwhile, residuals are calculated for each observation separately and, in this case, take one of the two values: Residual = Observed — Predicted = 1 — 0.56 = 0.44 or 0 — 0.56 = -0.56." }, { "code": null, "e": 4564, "s": 4237, "text": "Since we are building a classification model, we will also need to calculate log(odds). While the initial prediction is Probability = 0.56 the log(odds) value of the initial leaf is: Log(odds) = log(5/4) = log(1.25) = 0.22. Note that log(odds) from the initial leaf will be used to recalculate residuals after building Tree 1." }, { "code": null, "e": 4817, "s": 4564, "text": "Let’s now build the first tree. Below are the model hyperparameters that we use for this specific example. I just want to highlight a few of them at this stage, but I will provide the complete Python code in the Python section at the end of this story." }, { "code": null, "e": 5066, "s": 4817, "text": "While the tree's default size has a max depth of 3, we have set it to 1 to make the example simpler. This means that the trees we are building will be decision stumps. In total, our model will contain 3 decision stumps with the learning set to 0.5." }, { "code": null, "e": 5217, "s": 5066, "text": "It is important to reiterate that with Gradient Boosting, we are not predicting class labels (rain / no rain) but residuals instead (see below table)." }, { "code": null, "e": 5247, "s": 5217, "text": " Start - Initial Prediction " }, { "code": null, "e": 5328, "s": 5247, "text": "Let’s look a the first decision tree (stump) and work out how it’s been created." 
}, { "code": null, "e": 5448, "s": 5328, "text": "The first question to answer is: “Why has the algorithm selected this particular node split (Humidity at 3 pm ≤ 45.5)?”" }, { "code": null, "e": 5616, "s": 5448, "text": "The best split is identified by calculating MSE for every possible split using ‘Humidity at 3 pm’ and calculating MSE for every possible split using ‘Wind Gust Speed.’" }, { "code": null, "e": 5766, "s": 5616, "text": "Then the algorithm compares all of those splits and picks the one with the lowest MSE. You can see the calculation of MSE for the chosen split below." }, { "code": null, "e": 5882, "s": 5766, "text": "The second question we need to answer: “ What is the purpose of ‘value’ within the tree, and how is it calculated?”" }, { "code": null, "e": 6017, "s": 5882, "text": "The leaf ‘value’ is what the algorithm uses to recalculate the residuals, which are then used as a target when building the next tree." }, { "code": null, "e": 6063, "s": 6017, "text": "‘Value’ is calculated with the below formula:" }, { "code": null, "e": 6162, "s": 6063, "text": "Now let’s put all of this knowledge to practice and calculate each of the metrics within the tree:" }, { "code": null, "e": 6383, "s": 6162, "text": "The last part left to do before moving to Tree 2 is recalculating the residuals (model targets). Following the steps in the process map above, this is what we need to do to recalculate the residuals for each observation:" }, { "code": null, "e": 6593, "s": 6383, "text": "'New Log(odds)' = 'Log(odds) from the initial prediction' + 'Value from Tree 1' * 'Learning Rate'.Obs1 = 0.22 + 0.643 * 0.5 = 0.54 <-- Example for leaf 2Obs6 = 0.22 + -2.25 * 0.5 = -0.91 <-- Example for leaf 1" }, { "code": null, "e": 6757, "s": 6593, "text": "Note, the calculation above gives us the new log(odds). However, to get new residuals, we first need to convert log(odds) back to probabilities using this formula:" }, { "code": null, "e": 6959, "s": 6757, "text": "New Predicted Probability (Obs1) = 1/(1+e^(-0.54)) = 0.63New Predicted Probability (Obs6) = 1/(1+e^(0.91)) = 0.29Note how the above probabilities are steps in the right direction for both observations." }, { "code": null, "e": 7052, "s": 6959, "text": "Finally, we calculate new residuals using the class label and the new predicted probability." }, { "code": null, "e": 7128, "s": 7052, "text": "New Residual (Obs1) = 1 - 0.63 = 0.37New Residual (Obs6) = 0 - 0.29 = -0.29" }, { "code": null, "e": 7197, "s": 7128, "text": "Here is a table containing recalculated values for all observations:" }, { "code": null, "e": 7531, "s": 7197, "text": "One interesting observation is that the new predictions for Obs7 and Obs9 are actually worse than before. This is exactly why we need to build many trees, as having a larger number of trees will lead to improvements in predictions across all observations. You can see this illustrated in the gif image at the beginning of this story." }, { "code": null, "e": 7793, "s": 7531, "text": "Now we can use new residuals to build the second tree. Note, I will not list all of the formulas again since those are the same as in Tree 1. However, if you want to attempt calculations yourself, remember to use the new values from a table created post Tree 1." }, { "code": null, "e": 7954, "s": 7793, "text": "The algorithm identified Wind Gust Speed ≤49 as the best node split in the second tree with 8 observations (samples) in leaf 1 and only 1 observation in leaf 2." 
}, { "code": null, "e": 8126, "s": 7954, "text": "Note, when building a larger tree from a big dataset, it is advisable to restrict the minimum number of observations allowed in a leaf to reduce the chance of overfitting." }, { "code": null, "e": 8234, "s": 8126, "text": "Since we already know how the leaf values are calculated, let’s go straight into finding the new residuals." }, { "code": null, "e": 8749, "s": 8234, "text": "'New Log(odds) T2' = 'Log(odds) from the initial prediction' + 'Value from Tree 1' * 'Learning Rate' + 'Value from Tree 2' * 'Learning Rate'.Obs1 = 0.22 + 0.643 * 0.5 + 0.347 * 0.5 = 0.72 <-- T1L2-T2L1Obs6 = 0.22 + -2.25 * 0.5 + 0.347 * 0.5 = -0.73 <-- T1L1-T2L1Obs7 = 0.22 + 0.643 * 0.5 + -2.724 * 0.5 = -0.82 <-- T1L2-T2L2As before we convert the above to New Predicted ProbabilitesPred Prob T2 (Obs1) = 1/(1+e^(-0.72)) = 0.67Pred Prob T2 (Obs6) = 1/(1+e^(0.73)) = 0.33Pred Prob T2 (Obs7) = 1/(1+e^(0.82)) = 0.31" }, { "code": null, "e": 8857, "s": 8749, "text": "Then we calculate new residuals and place them in the table below to be used for a target in Tree number 3:" }, { "code": null, "e": 9091, "s": 8857, "text": "The same process is repeated when building additional trees until the specified number of trees is reached, or the improvement becomes too small (falls below a certain threshold). For interest, this is what stump number 3 looks like:" }, { "code": null, "e": 9370, "s": 9091, "text": "Since we restricted our model to only 3 trees, the last part left is to get the final set of predictions by following the above methodology to combine the log(odds). I believe you have got the hang of it by now, so I will not repeat it. Instead, let’s jump into the Python code." }, { "code": null, "e": 9603, "s": 9370, "text": "Now that you understand how Gradient Boosted Trees work let’s build a model using the full set of observations but with the same two features as before. This is so we can generate the below 3D graph to demonstrate model predictions." }, { "code": null, "e": 9649, "s": 9603, "text": "We will use the following data and libraries:" }, { "code": null, "e": 9685, "s": 9649, "text": "Australian weather data from Kaggle" }, { "code": null, "e": 9820, "s": 9685, "text": "Scikit-learn library for splitting the data into train-test samples, building Gradient Boost Classification model and model evaluation" }, { "code": null, "e": 9851, "s": 9820, "text": "Plotly for data visualizations" }, { "code": null, "e": 9890, "s": 9851, "text": "Pandas and Numpy for data manipulation" }, { "code": null, "e": 9922, "s": 9890, "text": "Let’s import all the libraries:" }, { "code": null, "e": 10081, "s": 9922, "text": "Then we get the Australian weather data from Kaggle, which you can download following this link: https://www.kaggle.com/jsphyg/weather-dataset-rattle-package." }, { "code": null, "e": 10155, "s": 10081, "text": "We ingest the data and derive a few new variables for usage in the model." 
}, { "code": null, "e": 10254, "s": 10155, "text": "Next, we define a function to be used for model training and run the model to produce the results:" }, { "code": null, "e": 10301, "s": 10254, "text": "Step 1— split data into train and test samples" }, { "code": null, "e": 10356, "s": 10301, "text": "Step 2— set model parameters and train (fit) the model" }, { "code": null, "e": 10424, "s": 10356, "text": "Step 3— predict class labels on train and test data using our model" }, { "code": null, "e": 10466, "s": 10424, "text": "Step 4— generate model summary statistics" }, { "code": null, "e": 10512, "s": 10466, "text": "Step 5— run the model and display the results" }, { "code": null, "e": 10579, "s": 10512, "text": "These are the model evaluation metrics the above function returns." }, { "code": null, "e": 10657, "s": 10579, "text": "Finally, you can use the below function to generate the 3D graph shown above." }, { "code": null, "e": 10923, "s": 10657, "text": "Although this story is quite long and with many details, the Gradient Boost algorithm is one of the most important ones to understand for a Data Scientist. I sincerely hope you had fun reading it and that there is no more mystery left behind this algorithm for you." }, { "code": null, "e": 10945, "s": 10923, "text": "Cheers! 👏Saul Dobilas" }, { "code": null, "e": 11078, "s": 10945, "text": "If you have already spent your learning budget for this month, please remember me next time. My personalized link to join Medium is:" }, { "code": null, "e": 11092, "s": 11078, "text": "solclover.com" } ]
Fibonacci Cube Graph - GeeksforGeeks
24 Nov, 2021

You are given input as the order of graph n (the highest number of edges connected to a node); you have to find the number of vertices in a Fibonacci cube graph of order n.

Examples:

Input : n = 3
Output : 5
Explanation :
Fib(n + 2) = Fib(5) = 5

Input : n = 2
Output : 3

A Fibonacci Cube Graph is similar to a hypercube graph, but with a Fibonacci number of vertices. In a Fibonacci cube graph, only one vertex has degree n; the rest all have degree less than n. A Fibonacci cube graph of order n has F(n + 2) vertices, where F(n) is the n-th Fibonacci number. Fibonacci series: 1, 1, 2, 3, 5, 8, 13, 21, 34...

For input n as the order of the graph, find the corresponding Fibonacci number at position n + 2, where F(n) = F(n - 1) + F(n - 2).

Approach: Find the (n + 2)-th Fibonacci number. Below is the implementation of the above approach:

C++

// CPP code to find vertices in a fibonacci
// cube graph of order n
#include <iostream>
using namespace std;

// function to find fibonacci number
int fib(int n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

// function for finding number of vertices
// in fibonacci cube graph
int findVertices(int n)
{
    // return fibonacci number for f(n + 2)
    return fib(n + 2);
}

// driver program
int main()
{
    // n is the order of the graph
    int n = 3;
    cout << findVertices(n);
    return 0;
}

Java

// Java code to find vertices in a fibonacci
// cube graph of order n
public class GFG {

    // function to find fibonacci number
    static int fib(int n)
    {
        if (n <= 1)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    // function for finding number of vertices
    // in fibonacci cube graph
    static int findVertices(int n)
    {
        // return fibonacci number for f(n + 2)
        return fib(n + 2);
    }

    public static void main(String args[])
    {
        // n is the order of the graph
        int n = 3;
        System.out.println(findVertices(n));
    }
}

// This code is contributed by Sam007

Python3

# Python3 code to find vertices in
# a fibonacci cube graph of order n

# Function to find fibonacci number
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

# Function for finding number of
# vertices in fibonacci cube graph
def findVertices(n):
    # return fibonacci number
    # for f(n + 2)
    return fib(n + 2)

# Driver Code
if __name__ == "__main__":
    # n is the order of the graph
    n = 3
    print(findVertices(n))

# This code is contributed
# by Rituraj Jain

C#

// C# code to find vertices in a fibonacci
// cube graph of order n
using System;

class GFG {

    // function to find fibonacci number
    static int fib(int n)
    {
        if (n <= 1)
            return n;
        return fib(n - 1) + fib(n - 2);
    }

    // function for finding number of
    // vertices in fibonacci cube graph
    static int findVertices(int n)
    {
        // return fibonacci number for
        // f(n + 2)
        return fib(n + 2);
    }

    // Driver code
    static void Main()
    {
        // n is the order of the graph
        int n = 3;
        Console.Write(findVertices(n));
    }
}

// This code is contributed by Sam007

PHP

<?php
// PHP code to find vertices in a
// fibonacci cube graph of order n

// function to find fibonacci number
function fib($n)
{
    if ($n <= 1)
        return $n;
    return fib($n - 1) + fib($n - 2);
}

// function for finding number of
// vertices in fibonacci cube graph
function findVertices($n)
{
    // return fibonacci number
    // for f(n + 2)
    return fib($n + 2);
}

// Driver Code
// n is the order of the graph
$n = 3;
echo findVertices($n);

// This code is contributed by Sam007
?>

Javascript

<script>
// Javascript code to find vertices in a fibonacci
// cube graph of order n

// function to find fibonacci number
function fib(n)
{
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

// function for finding number of vertices
// in fibonacci cube graph
function findVertices(n)
{
    // return fibonacci number for f(n + 2)
    return fib(n + 2);
}

// driver program
// n is the order of the graph
var n = 3;
document.write(findVertices(n));
</script>

Output:

5

Note that the above code can be optimized to work in O(Log n) using the efficient implementations discussed in Program for Fibonacci numbers.
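For reference, here is a minimal Python sketch of one such O(Log n) approach (my own illustration of the fast-doubling method, which uses the identities F(2k) = F(k) * (2 * F(k + 1) - F(k)) and F(2k + 1) = F(k)^2 + F(k + 1)^2):

# Returns the pair (F(n), F(n + 1)) using fast doubling
def fibPair(n):
    if n == 0:
        return (0, 1)
    a, b = fibPair(n // 2)
    c = a * (2 * b - a)   # F(2k)
    d = a * a + b * b     # F(2k + 1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)

# Number of vertices in a fibonacci cube graph of order n is F(n + 2)
def findVertices(n):
    return fibPair(n + 2)[0]

print(findVertices(3))  # 5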
[ { "code": null, "e": 25090, "s": 25062, "text": "\n24 Nov, 2021" }, { "code": null, "e": 25267, "s": 25090, "text": "You are given input as order of graph n (highest number of edges connected to a node), you have to find the number of vertices in a Fibonacci cube graph of order n.Examples : " }, { "code": null, "e": 25357, "s": 25267, "text": "Input : n = 3\nOutput : 5\nExplanation : \nFib(n + 2) = Fib(5) = 5\n\nInput : n = 2\nOutput : 3" }, { "code": null, "e": 25700, "s": 25359, "text": "A Fibonacci Cube Graph is similar to hypercube graph, but with a fibonacci number of vertices. In fibonacci cube graph only 1 vertex has degree n rest all has degree less than n. Fibonacci cube graph of order n has F(n + 2) vertices, where F(n) is a n-th fibonacci number, Fibonacii series : 1, 1, 2, 3, 5, 8, 13, 21, 34................... " }, { "code": null, "e": 25924, "s": 25700, "text": "For input n as order of graph, find the corresponding fibonacci number at the position n + 2. where F(n) = F(n – 1) + F(n – 2)Approach : Find the (n + 2)-th fibonacci number.Below is the implementation of above approach : " }, { "code": null, "e": 25928, "s": 25924, "text": "C++" }, { "code": null, "e": 25933, "s": 25928, "text": "Java" }, { "code": null, "e": 25941, "s": 25933, "text": "Python3" }, { "code": null, "e": 25944, "s": 25941, "text": "C#" }, { "code": null, "e": 25948, "s": 25944, "text": "PHP" }, { "code": null, "e": 25959, "s": 25948, "text": "Javascript" }, { "code": "// CPP code to find vertices in a fibonacci// cube graph of order n#include<iostream>using namespace std; // function to find fibonacci numberint fib(int n){ if (n <= 1) return n; return fib(n - 1) + fib(n - 2);} // function for finding number of vertices // in fibonacci cube graphint findVertices (int n){ // return fibonacci number for f(n + 2) return fib(n + 2);} // driver programint main(){ // n is the order of the graph int n = 3; cout << findVertices(n); return 0;}", "e": 26469, "s": 25959, "text": null }, { "code": "// java code to find vertices in a fibonacci// cube graph of order npublic class GFG { // function to find fibonacci number static int fib(int n) { if (n <= 1) return n; return fib(n - 1) + fib(n - 2); } // function for finding number of vertices // in fibonacci cube graph static int findVertices (int n) { // return fibonacci number for f(n + 2) return fib(n + 2); } public static void main(String args[]) { // n is the order of the graph int n = 3; System.out.println(findVertices(n)); }} // This code is contributed by Sam007", "e": 27127, "s": 26469, "text": null }, { "code": "# Python3 code to find vertices in # a fibonacci cube graph of order n # Function to find fibonacci number def fib(n): if n <= 1: return n return fib(n - 1) + fib(n - 2) # Function for finding number of # vertices in fibonacci cube graph def findVertices(n): # return fibonacci number # for f(n + 2) return fib(n + 2) # Driver Codeif __name__ == \"__main__\": # n is the order of the graph n = 3 print(findVertices(n)) # This code is contributed # by Rituraj Jain", "e": 27651, "s": 27127, "text": null }, { "code": "// C# code to find vertices in a fibonacci// cube graph of order nusing System; class GFG { // function to find fibonacci number static int fib(int n) { if (n <= 1) return n; return fib(n - 1) + fib(n - 2); } // function for finding number of // vertices in fibonacci cube graph static int findVertices (int n) { // return fibonacci number for // f(n + 2) return fib(n + 2); } // Driver code static void Main() { // n is the order of the graph 
int n = 3; Console.Write(findVertices(n)); }} // This code is contributed by Sam007", "e": 28337, "s": 27651, "text": null }, { "code": "<?php// PHP code to find vertices in a // fibonacci cube graph of order n // function to find fibonacci numberfunction fib($n){ if ($n <= 1) return $n; return fib($n - 1) + fib($n - 2);} // function for finding number of // vertices in fibonacci cube graphfunction findVertices ($n){ // return fibonacci number // for f(n + 2) return fib($n + 2);} // Driver Code // n is the order of the graph$n = 3;echo findVertices($n); // This code is contributed by Sam007?>", "e": 28834, "s": 28337, "text": null }, { "code": "<script> // Javascript code to find vertices in a fibonacci// cube graph of order n // function to find fibonacci numberfunction fib(n){ if (n <= 1) return n; return fib(n - 1) + fib(n - 2);} // function for finding number of vertices // in fibonacci cube graphfunction findVertices (n){ // return fibonacci number for f(n + 2) return fib(n + 2);} // driver program// n is the order of the graphvar n = 3;document.write( findVertices(n)); </script>", "e": 29310, "s": 28834, "text": null }, { "code": null, "e": 29312, "s": 29310, "text": "5" }, { "code": null, "e": 29452, "s": 29314, "text": "Note that the above code can be optimized to work in O(Log n) using efficient implementations discussed in Program for Fibonacci numbers " }, { "code": null, "e": 29459, "s": 29452, "text": "Sam007" }, { "code": null, "e": 29472, "s": 29459, "text": "rituraj_jain" }, { "code": null, "e": 29482, "s": 29472, "text": "rutvik_56" }, { "code": null, "e": 29492, "s": 29482, "text": "Fibonacci" }, { "code": null, "e": 29498, "s": 29492, "text": "Graph" }, { "code": null, "e": 29508, "s": 29498, "text": "Fibonacci" }, { "code": null, "e": 29514, "s": 29508, "text": "Graph" }, { "code": null, "e": 29612, "s": 29514, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29621, "s": 29612, "text": "Comments" }, { "code": null, "e": 29634, "s": 29621, "text": "Old Comments" }, { "code": null, "e": 29670, "s": 29634, "text": "Best First Search (Informed Search)" }, { "code": null, "e": 29740, "s": 29670, "text": "Vertex Cover Problem | Set 1 (Introduction and Approximate Algorithm)" }, { "code": null, "e": 29787, "s": 29740, "text": "Eulerian path and circuit for undirected graph" }, { "code": null, "e": 29852, "s": 29787, "text": "Find if there is a path between two vertices in a directed graph" }, { "code": null, "e": 29933, "s": 29852, "text": "Iterative Deepening Search(IDS) or Iterative Deepening Depth First Search(IDDFS)" }, { "code": null, "e": 29975, "s": 29933, "text": "Graph Coloring | Set 2 (Greedy Algorithm)" }, { "code": null, "e": 30028, "s": 29975, "text": "Printing Paths in Dijkstra's Shortest Path Algorithm" }, { "code": null, "e": 30069, "s": 30028, "text": "Longest Path in a Directed Acyclic Graph" }, { "code": null, "e": 30130, "s": 30069, "text": "Prim’s MST for Adjacency List Representation | Greedy Algo-6" } ]
protected access modifier in Java
Variables, methods, and constructors which are declared protected in a superclass can be accessed only by subclasses in other packages, or by any class within the package of the protected member's class.

The protected access modifier cannot be applied to classes and interfaces. Methods and fields can be declared protected; however, methods and fields in an interface cannot be declared protected.

Protected access gives the subclass a chance to use the helper method or variable, while preventing a non-related class from trying to use it.

The following parent class uses protected access control to allow its child class to override the openSpeaker() method:

class AudioPlayer {
   protected boolean openSpeaker(Speaker sp) {
      // implementation details
   }
}

class StreamingAudioPlayer extends AudioPlayer {
   // an override cannot reduce visibility, so it stays protected
   protected boolean openSpeaker(Speaker sp) {
      // implementation details
   }
}

Here, if we define the openSpeaker() method as private, then it would not be accessible from any class other than AudioPlayer. If we define it as public, then it would become accessible to all the outside world. But our intention is to expose this method to its subclass only; that's why we have used the protected modifier.
[ { "code": null, "e": 1266, "s": 1062, "text": "Variables, methods, and constructors, which are declared protected in a superclass can be accessed only by the subclasses in other package or any class within the package of the protected members' class." }, { "code": null, "e": 1454, "s": 1266, "text": "The protected access modifier cannot be applied to class and interfaces. Methods, fields can be declared protected, however methods and fields in a interface cannot be declared protected." }, { "code": null, "e": 1597, "s": 1454, "text": "Protected access gives the subclass a chance to use the helper method or variable, while preventing a non-related class from trying to use it." }, { "code": null, "e": 1920, "s": 1597, "text": "The following parent class uses protected access control, to allow its child class override openSpeaker() method -\nclass AudioPlayer {\n protected boolean openSpeaker(Speaker sp) {\n // implementation details\n }\n}\nclass StreamingAudioPlayer {\n boolean openSpeaker(Speaker sp) {\n // implementation details\n }\n}" }, { "code": null, "e": 2243, "s": 1920, "text": "Here, if we define openSpeaker() method as private, then it would not be accessible from any other class other than AudioPlayer. If we define it as public, then it would become accessible to all the outside world. But our intention is to expose this method to its subclass only, that’s why we have used protected modifier." } ]
A Free Data Science Portfolio Template to Showcase Your Projects: Use It to Create Your Own | by Zach Alexander | Towards Data Science
Building a portfolio to showcase your data science work can seem like a daunting task. And, after spending countless hours fine-tuning your models, how best do you share your results with the world? Well, before you go running to a website builder, why not try building it yourself! It actually won’t take you too much time, and some of the major advantages to doing so are:

You’ll have a lot more freedom in how you present your projects to the world. Website builders are convenient, but often times you’ll be limited by the types of layouts offered to you, and you may have to restructure your code to get things working.

You’ll pick up a new skill! Data Scientists need to have a good understanding of how web frameworks work. Why not take a few minutes to look through some HTML, CSS, and JavaScript code — it’ll go a long way!

Building out a custom website can show future employers that you are a well-rounded candidate for the job. Inevitably, they’ll be impressed that you took the time and made the effort to deploy your projects and content.

Your projects will be accessible to a much broader audience if they are hosted over the internet.

It’s fun! Building and showcasing your skills is what it’s all about.

If I’ve convinced you, then feel free to continue reading through this step-by-step guide (and video), from downloading the initial template from GitHub all the way through deploying it to the internet.

This walkthrough is coupled with a comprehensive YouTube video that you can find here:

In order to download the repository, you’ll first need a GitHub account and Git. If you already have both of these, feel free to skip to step 2.

If you do not have a GitHub account, you can create one for free here: https://github.com/

Next, you’ll need to install Git on your machine: https://git-scm.com/downloads

With Git successfully installed, you can now access the repository that contains the Angular template: https://github.com/zachalexander/data-science-portfolio-template

To download it to your local machine, you’ll want to open up a terminal window (highly recommend VSCode) and run:

git clone https://github.com/zachalexander/data-science-portfolio-template.git

The Angular application uses certain modules and packages, and similar to other web frameworks, one of the best ways to organize and install these packages is through a package manager called npm. If you already have npm installed, feel free to skip to step 4.

If you do not have npm installed already, you’ll need to install it here: https://www.npmjs.com/get-npm

To run our Angular application locally, and eventually to compile our files for deployment, we’ll need to install the Angular CLI. You can download this using npm by running the following command in the terminal:

npm i @angular/cli

Now comes the exciting part, let’s take a look at the template! In order to do so, you’ll need to navigate into the root folder of our project’s directory by using the command line cd command:

cd data-science-portfolio-template

Once you are inside of the root folder of the template directory, you’ll then want to run this command:

npm install

This will take a few minutes, but will install all of the necessary packages for our application. Once complete, you can then run the following command:

npm start

If all goes well, you should then see your terminal output something like this:

You can then open up a web browser (preferably Google Chrome or Safari), and navigate to http://localhost:4200.
Once there, you should be able to see the Data Science Portfolio template! Here’s the top of the home page:

Once you have the template open in a browser, you’ll notice there are a lot of places that’ll need your edits. For instance, you’ll want to add your own headshot (currently a stock photo), update the project pages with your own content, and link to anything in particular you’d like to showcase.

In the video walkthrough, I go into more detail about some of the places you can truly make this your own! However, here’s a quick list of files that’ll need editing right away:

The home component html

data-science-portfolio-template/src/app/components/home/home.component.html

This’ll be where you’ll edit a lot of the home page html. There are code comments that look like this:

<!-- EDIT CONTENT BELOW -->

These indicate places that’ll need your edits to make it more personalized.

The home component stylings

data-science-portfolio-template/src/app/components/home/home.component.scss

This will contain adjustments to any photos you’d like to add/change. There are code comments that look like this:

// project photo 1 -- feel free to change

That’ll highlight places where you can change the mapping to your photos. For reference, all of the photos that I use for styling are found in the assets folder here:

data-science-portfolio-template/src/assets

So, you can just add any photos you’d like to use to the assets folder, and then change the file name to your photo, instead of sample.jpg in the scss file.

The navbar component html

data-science-portfolio-template/src/app/components/navbar/navbar.component.html

You’ll want to change the “Your Name” text here to personalize it, as well as update the buttons to link to your social media pages.

The footer component html

data-science-portfolio-template/src/app/components/footer/footer.component.html

You’ll want to change any content in the footer, including links to your social media pages, in order to customize this for your own website.

To change color scheme or fonts

data-science-portfolio-template/src/styles.scss

To change the fonts, you can use Google Fonts and import your preferences similar to the current setup in this file. You’ll also have to go through the home.component.scss file to update any font stylings that are cast on specific elements. The color scheme can be changed easily by adjusting this code with the hex values you’d prefer to use.

Update or remove the resume page

You may or may not want to utilize the resume page. To remove it from your application, you can do the following:

1. Navigate to app-routing.module.ts:

data-science-portfolio-template/src/app/app-routing.module.ts

2. Comment or delete out the path to the resume page (commented out below as an example):

3. Either remove the button from the home.component.html page, or link it to your resume elsewhere.

To update the resume page, you’ll want to navigate to it:

data-science-portfolio-template/src/app/components/resume/resume.component.html

And edit any portion of it with your job experience, education, interests, etc.

Update the annotations on the D3.js graphic

For instance, the initial template has a Section 4 annotation which will likely need to be changed.
To do this, you’ll want to navigate to:

data-science-portfolio-template/src/app/components/simplelinechart/simplelinechart.component.ts

And edit lines 106, 123, 139, and 155, where the following code is found:

note: {title: 'Section Four' <- change this to your preferred title}

Adding additional pages to your portfolio

I won’t go through a ton of detail here, but will reference this excellent Medium post that walks you through how to add additional pages to your application in Angular: https://zeroesandones.medium.com/how-to-create-navigation-in-angular-9-application-using-angular-router-6ee3ff182eb6

As you are editing, be sure to save each of your files so you can see them as you develop locally! When you are ready to deploy it to the internet, proceed to the next step.

When you are satisfied with all of your local edits and changes, you’ll then want to open up a separate terminal window (or close out of your development server on your current terminal window) and run:

ng build

This command will compile your code, condense it, and make it ready for deployment! If you get stuck, feel free to watch the video for help.

Wow, congratulations on making it to the final step! Now we’ll need to deploy this to the internet so everyone can witness your greatness. I’d highly recommend following along with the video at this point (starting at 18:45).

To do this, we’ll use Google Firebase. It’s free hosting, so you don’t have to worry about accruing any charges. In order to get started, you’ll need to use npm again to download firebase tools. In a terminal, run:

npm install -g firebase-tools

Then, you’ll want to navigate to https://firebase.google.com/ and sign in to your Google Account. You’ll have to create one if you haven’t already. You can then click “Go to Console” to get started.

You’ll then want to create a new project, which I outline in detail in the video. After doing so, you can navigate back to your terminal window, making sure you are in the root directory of your project folder (i.e. data-science-portfolio-template), and run:

firebase login

You should be able to authenticate with your Firebase account. After this, you can then link your new project to the firebase-tools CLI, and work through the steps outlined in the video (starting at 20:20).

Once you have linked your files correctly and have everything ready for deployment, you’ll just want to run:

firebase deploy

When this command finishes, you should see a Hosting URL in the output that contains the link to your fully deployed portfolio! As mentioned in the video, this URL can be shared with friends, future employers, etc. You can also set up a custom domain for your website through Firebase in a few easy steps as well; however, you will have to purchase the name (this part is not free!).

If you’ve made it through the steps and now have your own portfolio hosted on the internet, congratulations! I hope that this template serves you well, and can help showcase the cool work you do. As mentioned at the end of the video walkthrough, I am not a web developer, and have learned most of this through self-learning. Therefore, for those that are more web-dev savvy, I’d be happy to collaborate in making this project even more accessible and efficient!

If you enjoyed this content, or would like to check out other work I’ve done, feel free to venture over to my website at zach-alexander.com (you’ll notice that the template looks very similar to my own site :), and I have a few more pages built out as an example.
[ { "code": null, "e": 546, "s": 171, "text": "Building a portfolio to showcase your data science work can seem like a daunting task. And, after spending countless hours fine-tuning your models, how best do you share your results with the world? Well, before you go running to a website builder, why not try building it yourself! It actually won’t take you too much time, and some of the major advantages to doing so are:" }, { "code": null, "e": 796, "s": 546, "text": "You’ll have a lot more freedom in how you present your projects to the world. Website builders are convenient, but often times you’ll be limited by the types of layouts offered to you, and you may have to restructure your code to get things working." }, { "code": null, "e": 1004, "s": 796, "text": "You’ll pick up a new skill! Data Scientists need to have a good understanding of how web frameworks work. Why not take a few minutes to look through some HTML, CSS, and JavaScript code — it’ll go a long way!" }, { "code": null, "e": 1224, "s": 1004, "text": "Building out a custom website can show future employers that you are a well-rounded candidate for the job. Inevitably, they’ll be impressed that you took the time and made the effort to deploy your projects and content." }, { "code": null, "e": 1322, "s": 1224, "text": "Your projects will be accessible to a much broader audience if they are hosted over the internet." }, { "code": null, "e": 1392, "s": 1322, "text": "It’s fun! Building and showcasing your skills is what it’s all about." }, { "code": null, "e": 1595, "s": 1392, "text": "If I’ve convinced you, then feel free to continue reading through this step-by-step guide (and video), from downloading the initial template from GitHub all the way through deploying it to the internet." }, { "code": null, "e": 1682, "s": 1595, "text": "This walkthrough is coupled with a comprehensive YouTube video that you can find here:" }, { "code": null, "e": 1827, "s": 1682, "text": "In order to download the repository, you’ll first need a GitHub account and Git. If you already have both of these, feel free to skip to step 2." }, { "code": null, "e": 1918, "s": 1827, "text": "If you do not have a GitHub account, you can create one for free here: https://github.com/" }, { "code": null, "e": 1998, "s": 1918, "text": "Next, you’ll need to install Git on your machine: https://git-scm.com/downloads" }, { "code": null, "e": 2166, "s": 1998, "text": "With Git successfully installed, you can now access the repository that contains the Angular template: https://github.com/zachalexander/data-science-portfolio-template" }, { "code": null, "e": 2280, "s": 2166, "text": "To download it to your local machine, you’ll want to open up a terminal window (highly recommend VSCode) and run:" }, { "code": null, "e": 2359, "s": 2280, "text": "git clone https://github.com/zachalexander/data-science-portfolio-template.git" }, { "code": null, "e": 2620, "s": 2359, "text": "The Angular application uses certain modules and packages, and similar to other web frameworks, one of the best ways to organize and install these packages is through a package manager called npm. If you already have npm installed, feel free to skip to step 4." }, { "code": null, "e": 2724, "s": 2620, "text": "If you do not have npm installed already, you’ll need to install it here: https://www.npmjs.com/get-npm" }, { "code": null, "e": 2937, "s": 2724, "text": "To run our Angular application locally, and eventually to compile our files for deployment, we’ll need to install the Angular CLI. 
You can download this using npm by running the following command in the terminal:" }, { "code": null, "e": 2956, "s": 2937, "text": "npm i @angular/cli" }, { "code": null, "e": 3149, "s": 2956, "text": "Now comes the exciting part, let’s take a look at the template! In order to do so, you’ll need to navigate into the root folder of our project’s directory by using the command line cd command:" }, { "code": null, "e": 3184, "s": 3149, "text": "cd data-science-portfolio-template" }, { "code": null, "e": 3288, "s": 3184, "text": "Once you are inside of the root folder of the template directory, you’ll then want to run this command:" }, { "code": null, "e": 3300, "s": 3288, "text": "npm install" }, { "code": null, "e": 3453, "s": 3300, "text": "This will take a few minutes, but will install all of the necessary packages for our application. Once complete, you can then run the following command:" }, { "code": null, "e": 3463, "s": 3453, "text": "npm start" }, { "code": null, "e": 3543, "s": 3463, "text": "If all goes well, you should then see your terminal output something like this:" }, { "code": null, "e": 3763, "s": 3543, "text": "You can then open up a web browser (preferably Google Chrome or Safari), and navigate to http://localhost:4200. Once there, you should be able to see the Data Science Portfolio template! Here’s the top of the home page:" }, { "code": null, "e": 4059, "s": 3763, "text": "Once you have the template open in a browser, you’ll notice there are a lot of places that’ll need your edits. For instance, you’ll want to add your own headshot (currently a stock photo), update the project pages with your own content, and link to anything in particular you’d like to showcase." }, { "code": null, "e": 4237, "s": 4059, "text": "In the video walkthrough, I go into more detail about some of the places you can truly make this your own! However, here’s a quick list of files that’ll need editing right away:" }, { "code": null, "e": 4261, "s": 4237, "text": "The home component html" }, { "code": null, "e": 4353, "s": 4261, "text": "-data-science-portfolio-template--src---app----components-----home------home.component.html" }, { "code": null, "e": 4456, "s": 4353, "text": "This’ll be where you’ll edit a lot of the home page html. There are code comments that look like this:" }, { "code": null, "e": 4484, "s": 4456, "text": "<!-- EDIT CONTENT BELOW -->" }, { "code": null, "e": 4560, "s": 4484, "text": "These indicate places that’ll need your edits to make it more personalized." }, { "code": null, "e": 4588, "s": 4560, "text": "The home component stylings" }, { "code": null, "e": 4680, "s": 4588, "text": "-data-science-portfolio-template--src---app----components-----home------home.component.scss" }, { "code": null, "e": 4795, "s": 4680, "text": "This will contain adjustments to any photos you’d like to add/change. There are code comments that look like this:" }, { "code": null, "e": 4837, "s": 4795, "text": "// project photo 1 -- feel free to change" }, { "code": null, "e": 5004, "s": 4837, "text": "That’ll highlight places where you can change the mapping to your photos. For reference, all of the photos that I use for styling are found in the assets folder here:" }, { "code": null, "e": 5051, "s": 5004, "text": "-data-science-portfolio-template--src---assets" }, { "code": null, "e": 5208, "s": 5051, "text": "So, you can just add any photos you’d like to use to the assets folder, and then change the file name to your photo, instead of sample.jpg in the scss file." 
}, { "code": null, "e": 5234, "s": 5208, "text": "The navbar component html" }, { "code": null, "e": 5330, "s": 5234, "text": "-data-science-portfolio-template--src---app----components-----navbar------navbar.component.html" }, { "code": null, "e": 5463, "s": 5330, "text": "You’ll want to change the “Your Name” text here to personalize it, as well as update the buttons to link to your social media pages." }, { "code": null, "e": 5489, "s": 5463, "text": "The footer component html" }, { "code": null, "e": 5585, "s": 5489, "text": "-data-science-portfolio-template--src---app----components-----footer------footer.component.html" }, { "code": null, "e": 5727, "s": 5585, "text": "You’ll want to change any content in the footer, including links to your social media pages, in order to customize this for your own website." }, { "code": null, "e": 5759, "s": 5727, "text": "To change color scheme or fonts" }, { "code": null, "e": 5811, "s": 5759, "text": "-data-science-portfolio-template--src---styles.scss" }, { "code": null, "e": 5928, "s": 5811, "text": "To change the fonts, you can use Google Fonts and import your preferences similar to the current setup in this file:" }, { "code": null, "e": 6052, "s": 5928, "text": "You’ll also have to go through the home.component.scss file to update any font stylings that are cast on specific elements." }, { "code": null, "e": 6155, "s": 6052, "text": "The color scheme can be changed easily by adjusting this code with the hex values you’d prefer to use:" }, { "code": null, "e": 6188, "s": 6155, "text": "Update or remove the resume page" }, { "code": null, "e": 6302, "s": 6188, "text": "You may or may not want to utilize the resume page. To remove it from your application, you can do the following:" }, { "code": null, "e": 6337, "s": 6302, "text": "Navigate to app-routing.module.ts:" }, { "code": null, "e": 6372, "s": 6337, "text": "Navigate to app-routing.module.ts:" }, { "code": null, "e": 6441, "s": 6372, "text": "-data-science-portfolio-template--src---app----app-routing.module.ts" }, { "code": null, "e": 6531, "s": 6441, "text": "2. Comment or delete out the path to the resume page (commented out below as an example):" }, { "code": null, "e": 6631, "s": 6531, "text": "3. Either remove the button from the home.component.html page, or link it to your resume elsewhere." }, { "code": null, "e": 6689, "s": 6631, "text": "To update the resume page, you’ll want to navigate to it:" }, { "code": null, "e": 6785, "s": 6689, "text": "-data-science-portfolio-template--src---app----components-----resume------resume.component.html" }, { "code": null, "e": 6865, "s": 6785, "text": "And edit any portion of it with your job experience, education, interests, etc." }, { "code": null, "e": 6909, "s": 6865, "text": "Update the annotations on the D3.js graphic" }, { "code": null, "e": 7049, "s": 6909, "text": "For instance, the initial template has a Section 4 annotation which will likely need to be changed. 
To do this, you’ll want to navigate to:" }, { "code": null, "e": 7161, "s": 7049, "text": "-data-science-portfolio-template--src---app----components-----simplelinechart------simplelinechart.component.ts" }, { "code": null, "e": 7235, "s": 7161, "text": "And edit lines 106, 123, 139, and 155, where the following code is found:" }, { "code": null, "e": 7304, "s": 7235, "text": "note: {title: 'Section Four' <- change this to your preferred title}" }, { "code": null, "e": 7346, "s": 7304, "text": "Adding additional pages to your portfolio" }, { "code": null, "e": 7633, "s": 7346, "text": "I won’t go through a ton of detail here, but will reference this excellent medium post that walks you through how to add additional pages to your application in Angular: https://zeroesandones.medium.com/how-to-create-navigation-in-angular-9-application-using-angular-router-6ee3ff182eb6" }, { "code": null, "e": 7807, "s": 7633, "text": "As you are editing, be sure to save each of your files so you can see them as you develop locally! When you are ready to deploy it to the internet, proceed to the next step." }, { "code": null, "e": 8010, "s": 7807, "text": "When you are satisfied with all of your local edits and changes, you’ll then want to open up a separate terminal window (or close out of your development server on your current terminal window) and run:" }, { "code": null, "e": 8019, "s": 8010, "text": "ng build" }, { "code": null, "e": 8160, "s": 8019, "text": "This command will compile your code, condense it, and make it ready for deployment! If you get stuck, feel free to watch the video for help." }, { "code": null, "e": 8299, "s": 8160, "text": "Wow, congratulations on making it to the final step! Now we’ll need to deploy this to the internet so everyone can witness your greatness." }, { "code": null, "e": 8386, "s": 8299, "text": "I’d highly recommend following along with the video at this point (starting at 18:45)." }, { "code": null, "e": 8581, "s": 8386, "text": "To do this, we’ll use Google Firebase. It’s free hosting, so you don’t have to worry about accruing any charges. In order to get started, you’ll need to use npm again to download firebase tools." }, { "code": null, "e": 8601, "s": 8581, "text": "In a terminal, run:" }, { "code": null, "e": 8631, "s": 8601, "text": "npm install -g firebase-tools" }, { "code": null, "e": 8832, "s": 8631, "text": "Then, you’ll want to navigate to: https://firebase.google.com/, and sign in to your Google Account. You’ll have to create one if you haven’t already. You can then click “Go to Console” to get started." }, { "code": null, "e": 9091, "s": 8832, "text": "You’ll then want to create a new project, which I outline in detail in the video. After doing so, you can navigate back to your terminal window, making sure you are in the root directory of your project folder (i.e. data-science-portfolio-template), and run:" }, { "code": null, "e": 9106, "s": 9091, "text": "firebase login" }, { "code": null, "e": 9313, "s": 9106, "text": "You should be able to authenticate with your firebase account. After this, you can then link your new project to the firebase-tools CLI, and work through the steps outlined in the video (starting at 20:20)." 
}, { "code": null, "e": 9422, "s": 9313, "text": "Once you have linked your files correctly and have everything ready for deployment, you’ll just want to run:" }, { "code": null, "e": 9438, "s": 9422, "text": "firebase deploy" }, { "code": null, "e": 9566, "s": 9438, "text": "When this command finishes, you should see a Hosting URL in the output that contains the link to your fully deployed portfolio!" }, { "code": null, "e": 9822, "s": 9566, "text": "As mentioned in the video, this url can be shared with friends, future employers, etc. You can also set up a custom domain for your website through Firebase in a few easy steps as well, however, you will have to purchase the name (this part is not free!)." }, { "code": null, "e": 10018, "s": 9822, "text": "If you’ve made it through the steps and now have your own portfolio hosted on the internet, congratulations! I hope that this template serves you well, and can help showcase the cool work you do." }, { "code": null, "e": 10284, "s": 10018, "text": "As mentioned at the end of the video walkthrough, I am not a web developer, and have learned most of this through self-learning. Therefore, for those that are more web-dev savvy, I’d be happy to collaborate in making this project even more accessible and efficient!" } ]
Spectral graph clustering and optimal number of clusters estimation | by Madalina Ciortan | Towards Data Science
This post explains the functioning of the spectral graph clustering algorithm, then it looks at a variant named self-tuned graph clustering. This adaptation has the advantage of providing an estimation for the optimal number of clusters and also for the similarity measure between data points. Next, we will provide an implementation of the eigengap heuristic, which computes the optimal number of clusters in a dataset based on the largest distance between consecutive eigenvalues of the input data's Laplacian.

Now let's start by introducing some basic graph theory notions.

Given a graph with n vertices and m edges, the adjacency matrix is a square n*n matrix with the property:

A[i][j] = 1 if there is an edge between node i and node j, 0 otherwise

Because A is symmetric, its eigenvectors are real and orthogonal (their dot product is 0).

The degree matrix is an n*n diagonal matrix with the properties:

d[i][i] = the number of edges adjacent to node i, i.e. the degree of node i
d[i][j] = 0 for i != j

The Laplacian matrix is an n*n matrix defined as L = D - A. Its eigenvalues are non-negative real numbers and its eigenvectors are real and orthogonal (the dot product of any 2 distinct eigenvectors is 0).

The conductance of a cluster is a measure of the connectivity of a group to the rest of the network relative to the density of the group (the number of edges that point outside the cluster divided by the sum of the degrees of the nodes in the cluster). The lower the conductance, the better the cluster.

The spectrum is obtained by calculating the eigenvalues and eigenvectors of A, with x an n-dimensional vector holding the values of the nodes: A * x = lambda * x. The spectrum of a matrix representing the graph G is the set of eigenvectors x_i of the graph, ordered by their corresponding eigenvalues lambda_i.

Now that we have introduced the most important building blocks of graph theory, we are ready to summarize the spectral clustering steps:

1. Compute the Laplacian matrix L of the input graph G
2. Compute the eigenvalues (lambda) and eigenvectors (x) such that L * x = lambda * x
3. Select the n eigenvectors corresponding to the largest eigenvalues and redefine the input space as an n-dimensional subspace
4. Find clusters in this subspace using various clustering algorithms, such as k-means

It is also possible to use, instead of the adjacency matrix defined above, an affinity matrix, which determines how close or similar 2 points in our space are. As defined in the sklearn implementation:

similarity = np.exp(-beta * distance / distance.std())

A good resource demoing the creation of the affinity matrix is this youtube video.

Both Spectral Clustering and affinity propagation have been implemented in python. This Jupyter notebook shows a quick demo of their usage:

clustering = SpectralClustering(n_clusters=nb_clusters, assign_labels="discretize", random_state=0).fit(X)
y_pred = clustering.labels_
plt.title('Spectral clustering results')
plt.scatter(X[:, 0], X[:, 1], s=50, c=y_pred)

Spectral clustering is a technique known to perform well particularly in the case of non-gaussian clusters, where the most common clustering algorithms such as K-Means fail to give good results. However, it needs to be given the expected number of clusters and a parameter for the similarity threshold.

The idea behind self-tuning spectral clustering is to determine the optimal number of clusters and also the similarity metric σi used in the computation of the affinity matrix.
As explained in this paper, the affinity matrix is defined as

A(i, j) = exp(-d(s_i, s_j)^2 / σ^2)

where d(s_i, s_j) is some distance function, often just the Euclidean distance between the vectors s_i and s_j, and σ is the scale parameter, a measure of the similarity between points. Usually it is selected manually. It can also be set automatically by running the clustering many times with different values and selecting the one producing the least distorted clusters. This paper suggests calculating a local scaling parameter σ_i for each data point s_i instead of a single scaling parameter, so that the affinity becomes A(i, j) = exp(-d(s_i, s_j)^2 / (σ_i * σ_j)). The paper proposes to analyse the neighborhood of each point s_i and thus define: σ_i = d(s_i, s_K), where s_K is the K'th neighbor of point s_i (the original paper illustrates this selection for a value of K=7). The affinity matrix with local scaling can be implemented as shown in the sketch below.

A second way to estimate the number of clusters is to analyze the eigenvalues (the largest eigenvalue of L will be a repeated eigenvalue of magnitude 1, with multiplicity equal to the number of groups C; this implies one could estimate C by counting the number of eigenvalues equaling 1). Another type of analysis can be performed on the eigenvectors, but it is not in the scope of this post.

The paper A Tutorial on Spectral Clustering — Ulrike von Luxburg proposes an approach based on perturbation theory and spectral graph theory to calculate the optimal number of clusters. The eigengap heuristic suggests that the number of clusters k is usually given by the value of k that maximizes the eigengap (the difference between consecutive eigenvalues). The larger this eigengap is, the closer the eigenvectors are to those of the ideal case, and hence the better spectral clustering works.

The code base for this post can be found on Github.
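As an illustration, here is a minimal NumPy/SciPy sketch of both ideas: the locally scaled affinity matrix and the eigengap estimate. The function names, the K and max_k defaults, and the use of the normalized Laplacian are my own assumptions, not necessarily the exact choices of the linked code base:

import numpy as np
from scipy.spatial.distance import cdist

def local_scaling_affinity(X, K=7):
    # pairwise Euclidean distances between all points
    D = cdist(X, X)
    # local scale sigma_i = distance to the K-th nearest neighbour
    # (column 0 of the sorted distances is the point itself, at distance 0)
    sigma = np.sort(D, axis=1)[:, K]
    # locally scaled affinity: A(i, j) = exp(-d(s_i, s_j)^2 / (sigma_i * sigma_j))
    A = np.exp(-D ** 2 / np.outer(sigma, sigma))
    np.fill_diagonal(A, 0.0)
    return A

def eigengap_nb_clusters(A, max_k=10):
    # normalized graph Laplacian: L = I - D^(-1/2) * A * D^(-1/2)
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))
    L = np.eye(A.shape[0]) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    # eigenvalues of the symmetric Laplacian, in ascending order
    eigenvalues = np.linalg.eigvalsh(L)
    # eigengap heuristic: pick the k that maximizes the gap between
    # consecutive eigenvalues among the first max_k of them
    gaps = np.diff(eigenvalues[:max_k + 1])
    return int(np.argmax(gaps)) + 1

For instance, eigengap_nb_clusters(local_scaling_affinity(X)) returns an estimate of the number of clusters for a point set X with more than K points.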
[ { "code": null, "e": 683, "s": 171, "text": "This post explains the functioning of the spectral graph clustering algorithm, then it looks at a variant named self tuned graph clustering. This adaptation has the advantage of providing an estimation for the optimal number of clusters and also for the similarity measure between data points. Next, we will provide an implementation for the eigengap heuristic computing of the optimal number of clusters in a dataset based on the largest distance between consecutive eigen values of the input data’s laplacian." }, { "code": null, "e": 748, "s": 683, "text": "Now let’s start by introducing some basing graph theory notions." }, { "code": null, "e": 854, "s": 748, "text": "Given a graph with n vertices and m nodes, the adjacency matrix is a square n*n matrix with the property:" }, { "code": null, "e": 925, "s": 854, "text": "A[i][j] = 1 if there is an edge between node i and node j, 0 otherwise" }, { "code": null, "e": 1015, "s": 925, "text": "Because A is symmetric, its eigen vectors are real and orthogonal (the dot product is 0)." }, { "code": null, "e": 1076, "s": 1015, "text": "The degree matrix is a n*n diagonal matrix with the property" }, { "code": null, "e": 1149, "s": 1076, "text": "d[i][i] = the number of adjacent edges in node i or the degree of node i" }, { "code": null, "e": 1161, "s": 1149, "text": "d[i][j] = 0" }, { "code": null, "e": 1219, "s": 1161, "text": "The laplacian matrix is a n*n matrix defined as: L = D -A" }, { "code": null, "e": 1348, "s": 1219, "text": "Its eigen values are positive real numbers and the eigen vectors are real and orthogonal (the dot product of the 2 vectors is 0)" }, { "code": null, "e": 1620, "s": 1348, "text": "A measure of the connectivity of a group to the rest of the network relative to the density of the group (the number of edges that point outside the cluster divided by the sum of the degrees of the nodes in the cluster). The lower the conductance, the better the cluster." }, { "code": null, "e": 1752, "s": 1620, "text": "Calculating the eigen values and eigen vectors of A with x ( n dimensional vector with the values of the nodes): A * x = lambda * x" }, { "code": null, "e": 1895, "s": 1752, "text": "The spectrum of a matrix representing the graph G is a set of eigenvectors xi of the graph ordered by the corresponding eigen values lambda i." }, { "code": null, "e": 2027, "s": 1895, "text": "Now that we introduced the most important building blocks of graph theory, we are ready to summarize the spectral clustering steps:" }, { "code": null, "e": 2144, "s": 2027, "text": "Compute the Laplacian matrix L of the input graph GCompute the eigen values (lambda) and eigen vectors (x) such that" }, { "code": null, "e": 2196, "s": 2144, "text": "Compute the Laplacian matrix L of the input graph G" }, { "code": null, "e": 2262, "s": 2196, "text": "Compute the eigen values (lambda) and eigen vectors (x) such that" }, { "code": null, "e": 2280, "s": 2262, "text": "L* x = lambda * x" }, { "code": null, "e": 2403, "s": 2280, "text": "3. Select n eigenvectors corresponding to the largest eigenvalues and redefine the input space as a n dimensional subspace" }, { "code": null, "e": 2490, "s": 2403, "text": "4. Find clusters in this subspace using various clustering algorithms, such as k-means" }, { "code": null, "e": 2690, "s": 2490, "text": "It is also possible to use instead of the adjacency matrix defined above an affinity matrix which determines how close or similar are 2 points in our space. 
As defined in the sklearn implemenatation:" }, { "code": null, "e": 2745, "s": 2690, "text": "similarity = np.exp(-beta * distance / distance.std())" }, { "code": null, "e": 2828, "s": 2745, "text": "A good resource demoing the creation of the affinity matrix is this youtube video." }, { "code": null, "e": 2968, "s": 2828, "text": "Both Spectral Clustering and affinity propagation have been implemented in python. This jupiter notebook shows a quick demo of their usage." }, { "code": null, "e": 3192, "s": 2968, "text": "clustering = SpectralClustering(n_clusters=nb_clusters, assign_labels=\"discretize\", random_state=0).fit(X)y_pred = clustering.labels_plt.title(f'Spectral clustering results ')plt.scatter(X[:, 0], X[:, 1], s=50, c = y_pred);" }, { "code": null, "e": 3494, "s": 3192, "text": "Spectral clustering is a technique known to perform well particularly in the case of non-gaussian clusters where the most common clustering algorithms such as K-Means fail to give good results. However, it needs to be given the expected number of clusters and a parameter for the similarity threshold." }, { "code": null, "e": 3672, "s": 3494, "text": "The idea behind the self tuning spectral clustering is determine the optimal number of clusters and also the similarity metric σi used in the computation of the affinity matrix." }, { "code": null, "e": 3733, "s": 3672, "text": "As explained in this paper the affinity matrix is defined as" }, { "code": null, "e": 4458, "s": 3733, "text": "where d(si, sj ) is some distance function, often just the Euclidean distance between the vectors si and sj. σ is the scale parameter and is a measure of the similarity between points. Usually it is selected manually. It can also be set automatically by running the clustering many times with different values and selecting the one producing the least distorted cluster. This paper suggest to calculate a local scaling parameter σi for each data point si instead of a single scaling parameter. The paper proposes to analyse the neighborhood of each point si and thus define: σi = d(si, sK) where sK is the K’th neighbor of point si. This is illustrated in the figure below, taken from the original paper, for a value of K=7." }, { "code": null, "e": 4528, "s": 4458, "text": "The affinity matrix with local scaling can be implemented as follows:" }, { "code": null, "e": 4817, "s": 4528, "text": "A second way to estimate the number of clusters is to analyze the eigenvalues ( the largest eigenvalue of L will be a repeated eigenvalue of magnitude 1 with multiplicity equal to the number of groups C. This implies one could estimate C by counting the number of eigenvalues equaling 1)." }, { "code": null, "e": 4840, "s": 4817, "text": "As shown in the paper:" }, { "code": null, "e": 4943, "s": 4840, "text": "Another type of analysis can be performed on the eigenvectors but it is not in the scope of this post." }, { "code": null, "e": 5415, "s": 4943, "text": "This paper A Tutorial on Spectral Clustering — Ulrike von Luxburg proposes an approach based on perturbation theory and spectral graph theory to calculate the optimal number of clusters. Eigengap heuristic suggests the number of clusters k is usually given by the value of k that maximizes the eigengap (difference between consecutive eigenvalues). The larger this eigengap is, the closer the eigenvectors of the ideal case and hence the better spectral clustering works." } ]
NLP | Training a tokenizer and filtering stopwords in a sentence - GeeksforGeeks
28 Jan, 2019

Why do we need to train a sentence tokenizer?
In NLTK, the default sentence tokenizer works very well for general-purpose text. But there is a chance that it won't work best for some kinds of text, as such text may use nonstandard punctuation or may have a unique format. To handle such cases, training your own sentence tokenizer can result in much more accurate sentence tokenization.

Let us consider the following text to understand the concept. This kind of text is very common in any web text corpus.

Example of TEXT:
A guy: So, what are your plans for the party?
B girl: well! I am not going!
A guy: Oh, but u should enjoy.

To download the text file, click here.

Code #1 : Training Tokenizer

# Loading Libraries
from nltk.tokenize import PunktSentenceTokenizer
from nltk.corpus import webtext

text = webtext.raw('C:\\Geeksforgeeks\\data_for_training_tokenizer.txt')
sent_tokenizer = PunktSentenceTokenizer(text)
sents_1 = sent_tokenizer.tokenize(text)

print(sents_1[0])
print("\n", sents_1[678])

Output:

'White guy: So, do you have any plans for this evening?'

'Hobo: Got any spare change?'

Code #2 : Default Sentence Tokenizer

from nltk.tokenize import sent_tokenize
sents_2 = sent_tokenize(text)

print(sents_2[0])
print("\n", sents_2[678])

Output:

'White guy: So, do you have any plans for this evening?'

'Girl: But you already have a Big Mac...\r\nHobo: Oh, this is all theatrical.'

This difference in the second output is a good demonstration of why it can be useful to train your own sentence tokenizer, especially when your text isn't in the typical paragraph-sentence structure.

How does training work?
The PunktSentenceTokenizer class follows an unsupervised learning algorithm to learn what constitutes a sentence break. It is unsupervised in the sense that one need not give it any labelled training data, just raw text.

Stopwords are common words that are present in the text but generally do not contribute to the meaning of a sentence. They hold almost no importance for the purposes of information retrieval and natural language processing. For example – 'the' and 'a'. Most search engines will filter out stop words from search queries and documents. The NLTK library comes with a stopwords corpus – nltk_data/corpora/stopwords/ – that contains word lists for many languages.

Code #3 : Stopwords with Python

# Loading Library
from nltk.corpus import stopwords

# Using stopwords from the English language
english_stops = set(stopwords.words('english'))

# Printing the word list before and after stopword removal
words = ["Let's", 'see', 'how', "it's", 'working']

print("Before stopwords removal: ", words)
print("\nAfter stopwords removal : ", [word for word in words if word not in english_stops])

Output:

Before stopwords removal: ["Let's", 'see', 'how', "it's", 'working']

After stopwords removal : ["Let's", 'see', 'working']

Code #4 : Complete list of languages used in NLTK stopwords.

stopwords.fileids()

Output:

['danish', 'dutch', 'english', 'finnish', 'french', 'german',
'hungarian', 'italian', 'norwegian', 'portuguese', 'russian',
'spanish', 'swedish', 'turkish']
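Putting the two pieces together, the short sketch below trains a Punkt tokenizer on raw text and then strips English stopwords from each sentence. The variable names are mine, and training on such a tiny string is for demonstration only; Punkt normally needs a reasonably large corpus:

from nltk.tokenize import PunktSentenceTokenizer, word_tokenize
from nltk.corpus import stopwords

raw_text = "A guy: So, what are your plans for the party? B girl: well! I am not going!"

# Train an unsupervised sentence tokenizer on the raw text itself
tokenizer = PunktSentenceTokenizer(raw_text)
english_stops = set(stopwords.words('english'))

for sentence in tokenizer.tokenize(raw_text):
    # Split the sentence into words and drop the stopwords
    content_words = [w for w in word_tokenize(sentence) if w.lower() not in english_stops]
    print(content_words)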
How to deal with 'Boolean' values in PHP & MySQL?
We are using MySQL version 8.0.12. Let us first check the MySQL version:

mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.12    |
+-----------+
1 row in set (0.00 sec)

To deal with Boolean in MySQL, you can use BOOL or BOOLEAN or TINYINT(1). If you use BOOL or BOOLEAN, then MySQL internally converts it into TINYINT(1). In the BOOL or BOOLEAN data type, if you use the true literal then MySQL represents it as 1, and the false literal as 0, just as in the PHP/C/C++ languages.

To prove that MySQL converts BOOL or BOOLEAN to TINYINT(1), let us create a table with a BOOLEAN or BOOL column. Here, we are creating a table with a BOOLEAN column. The query to create the table is as follows:

mysql> create table BoolOrBooleanOrTinyintDemo
   -> (
   -> Id int NOT NULL AUTO_INCREMENT,
   -> isvalidAddress BOOLEAN,
   -> PRIMARY KEY(Id)
   -> );
Query OK, 0 rows affected (0.74 sec)

Now check the DDL of the above table using the SHOW CREATE command. The query is as follows:

mysql> show create table BoolOrBooleanOrTinyintDemo\G

The following is the output:

*************************** 1. row ***************************
Table: BoolOrBooleanOrTinyintDemo
Create Table: CREATE TABLE `boolorbooleanortinyintdemo` (
`Id` int(11) NOT NULL AUTO_INCREMENT,
`isvalidAddress` tinyint(1) DEFAULT NULL,
PRIMARY KEY (`Id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
1 row in set (0.00 sec)

Look at the column isvalidAddress: the datatype BOOLEAN is converted into tinyint(1) internally. Now you can check that the true literal will be represented by 1 and the false literal by 0. Insert some records into the table with true and false literal values. The queries to insert records are as follows:

mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);
Query OK, 1 row affected (0.43 sec)
mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(false);
Query OK, 1 row affected (0.17 sec)
mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);
Query OK, 1 row affected (0.29 sec)
mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(false);
Query OK, 1 row affected (0.12 sec)
mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);
Query OK, 1 row affected (0.33 sec)

Display all records from the table using the SELECT statement. The query to display all records is as follows:

mysql> select *from BoolOrBooleanOrTinyintDemo;

The following is the output:

+----+----------------+
| Id | isvalidAddress |
+----+----------------+
| 1  | 1              |
| 2  | 0              |
| 3  | 1              |
| 4  | 0              |
| 5  | 1              |
+----+----------------+
5 rows in set (0.00 sec)

Look at the above sample output: true is represented as 1 and false as 0.

In PHP too, true will be represented as 1 and false as 0. Look at the following PHP code. Here, I have set the variable 'isValidAddress' to the value 1, which means the if condition evaluates to true and only the body of the if statement executes. Check the following code:

$isValidAddress = 1;
if($isValidAddress)
{
   echo 'true is represented as ';
   echo ($isValidAddress);
}
else
{
   echo 'false is represented as ';
   echo ($isValidAddress);
}

The following is the output:

true is represented as 1

If you change the variable 'isValidAddress' to the value 0, the if condition evaluates to false and only the body of the else statement executes.
The following is the code:

$isValidAddress = 0;
if($isValidAddress)
{
   echo 'true is represented as ';
   echo ($isValidAddress);
}
else
{
   echo 'false is represented as ';
   echo ($isValidAddress);
}

The following is the output:

false is represented as 0
[ { "code": null, "e": 1135, "s": 1062, "text": "We are using MySQL version 8.0.12. Let us first check the MySQL version:" }, { "code": null, "e": 1254, "s": 1135, "text": "mysql> select version();\n+-----------+\n| version() |\n+-----------+\n| 8.0.12 |\n+-----------+\n1 row in set (0.00 sec)" }, { "code": null, "e": 1407, "s": 1254, "text": "To deal with Boolean in MySQL, you can use BOOL or BOOLEAN or TINYINT(1). If you use BOOL or BOOLEAN, then MySQL internally converts it into TINYINT(1)." }, { "code": null, "e": 1544, "s": 1407, "text": "In BOOL or BOOLEAN data type, if you use true literal then MySQL represents it as 1 and false literal as 0 like in PHP/ C/ C++ language." }, { "code": null, "e": 1658, "s": 1544, "text": "To proof that MySQL convert the BOOL or BOOLEAN to TINYINT(1), let us create a table with BOOLEAN or BOOL column." }, { "code": null, "e": 1752, "s": 1658, "text": "Here, we are creating a table with column BOOLEAN. The query to create a table is as follows:" }, { "code": null, "e": 1943, "s": 1752, "text": "mysql> create table BoolOrBooleanOrTinyintDemo\n -> (\n -> Id int NOT NULL AUTO_INCREMENT,\n -> isvalidAddress BOOLEAN,\n -> PRIMARY KEY(Id)\n -> );\nQuery OK, 0 rows affected (0.74 sec)" }, { "code": null, "e": 2032, "s": 1943, "text": "Now check the DDL of the above table using SHOW CREATE command. The query is as follows:" }, { "code": null, "e": 2086, "s": 2032, "text": "mysql> show create table BoolOrBooleanOrTinyintDemo\\G" }, { "code": null, "e": 2115, "s": 2086, "text": "The following is the output:" }, { "code": null, "e": 2486, "s": 2115, "text": "*************************** 1. row ***************************\nTable: BoolOrBooleanOrTinyintDemo\nCreate Table: CREATE TABLE `boolorbooleanortinyintdemo` (\n `Id` int(11) NOT NULL AUTO_INCREMENT,\n `isvalidAddress` tinyint(1) DEFAULT NULL,\n PRIMARY KEY (`Id`)\n) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci\n1 row in set (0.00 sec)" }, { "code": null, "e": 2778, "s": 2486, "text": "Look at the column isvalidAddress, the datatype BOOLEAN is converted into tinyint(1) internally. Now you can check the true literal will be represented by 1 and false literal by 0. Insert some records in the table with true and false literal values. The query to insert record is as follows:" }, { "code": null, "e": 3340, "s": 2778, "text": "mysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);\nQuery OK, 1 row affected (0.43 sec)\nmysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(false);\nQuery OK, 1 row affected (0.17 sec)\nmysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);\nQuery OK, 1 row affected (0.29 sec)\nmysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(false);\nQuery OK, 1 row affected (0.12 sec)\nmysql> insert into BoolOrBooleanOrTinyintDemo(isvalidAddress) values(true);\nQuery OK, 1 row affected (0.33 sec)" }, { "code": null, "e": 3447, "s": 3340, "text": "Display all records from the table using select statement. 
The query to display all records is as follows:" }, { "code": null, "e": 3495, "s": 3447, "text": "mysql> select *from BoolOrBooleanOrTinyintDemo;" }, { "code": null, "e": 3524, "s": 3495, "text": "The following is the output:" }, { "code": null, "e": 3765, "s": 3524, "text": "+----+----------------+\n| Id | isvalidAddress |\n+----+----------------+\n| 1 | 1 |\n| 2 | 0 |\n| 3 | 1 |\n| 4 | 0 |\n| 5 | 1 |\n+----+----------------+\n5 rows in set (0.00 sec)" }, { "code": null, "e": 3846, "s": 3765, "text": "Look at the above sample output, true represents as 1 and false represents as 0." }, { "code": null, "e": 3939, "s": 3846, "text": "In PHP, if you use true then it will be represented as 1 and false will be represented as 0." }, { "code": null, "e": 4150, "s": 3939, "text": "Look at the following PHP code. Here, I have set the variable ‘isValidAddress’. The value is 1, that means it evaluates the if condition true and execute the body of if statement only. Check the following code:" }, { "code": null, "e": 4329, "s": 4150, "text": "$isValidAddress = 1;\nif($isValidAddress)\n{\n echo 'true is represented as ';\n echo ($isValidAddress);\n}\nelse\n{\n echo 'false is represented as ';\n echo ($isValidAddress);\n}" }, { "code": null, "e": 4359, "s": 4329, "text": "Here is the snapshot of code:" }, { "code": null, "e": 4388, "s": 4359, "text": "The following is the output:" }, { "code": null, "e": 4563, "s": 4388, "text": "If you change the variable ‘isValidAddress’ to value 0. That means it evaluates the if condition false and execute the body of else statement only. The following is the code:" }, { "code": null, "e": 4740, "s": 4563, "text": "$isValidAddress=0;\nif($isValidAddress)\n{\n echo 'true is represented as ';\n echo ($isValidAddress);\n}\nelse\n{\n echo 'false is represented as ';\n echo ($isValidAddress);\n}" }, { "code": null, "e": 4770, "s": 4740, "text": "Here is the snapshot of code:" }, { "code": null, "e": 4799, "s": 4770, "text": "The following is the output:" } ]
How to Detect Objects in Real-Time Using OpenCV and Python | by Vipul Kumar | Towards Data Science
For the uninitiated, Real-Time Object Detection might sound quite a mouthful. However, with a few awesome libraries at hand, the job becomes much easier than it sounds. In this article, we will be using one such library in Python, namely OpenCV, to create a generalized program that can be used to detect any object in a video feed.

OpenCV is an open-source library dedicated to solving computer vision problems. Assuming you have Python 3 pre-installed on your machine, the easiest way of installing OpenCV for Python is via pip. You can do it by typing the below command line in your command prompt:

pip3 install opencv-python

The object detection works on the Viola-Jones algorithm, which was proposed by Paul Viola and Michael Jones. The aforementioned algorithm is based on machine learning. The first step involves training a cascade function with a large number of negative and positive labeled images. Once the classifier is trained, identifying features, namely "HAAR features," are extracted from these training images. HAAR features are essentially rectangular features with regions of bright and dark pixels. Each feature's value is calculated as the difference between the sum of pixel intensities under the bright region and the sum of pixel intensities under the dark region (a small numeric sketch of this computation follows below). All possible sizes and locations within the image are used to calculate these features.

An image might contain many irrelevant features and only a few relevant features which can be used to identify the object. The classifier is trained on the pre-labeled dataset to extract the useful features that yield the minimum error, by applying appropriate weights to each feature. An individual feature is called a weak feature; the final classifier is the weighted sum of the weak features.

A large region of the image contains the background; only a certain region contains the object to be detected. To increase the detection speed, cascading of classifiers is implemented. In this process, if a region of an image gives even a single negative feature, that region is disregarded for further processing, and the algorithm moves on to the next region. The only region which contains all the identifying features is outlined as the required object in the image.

The above explanation is an oversimplified version. Even a simple digital image contains hundreds of thousands of pixels, and applying the algorithm straight away would require huge computational power. Much more mathematical trickery goes into simplifying the calculation to make it computationally feasible. We will be discussing this in more detail in an upcoming article, primarily focused on the Viola-Jones algorithm.
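To make the feature-value computation concrete, here is a tiny NumPy sketch. The pixel values and the two-rectangle layout are made up purely for illustration:

import numpy as np

# a toy 4x4 grayscale patch (arbitrary pixel intensities)
patch = np.array([[200, 210, 40, 30],
                  [190, 205, 35, 25],
                  [195, 215, 45, 20],
                  [205, 200, 30, 35]])

# a two-rectangle HAAR-like feature: bright region = left half,
# dark region = right half of the patch
bright = patch[:, :2]
dark = patch[:, 2:]

# feature value = sum of intensities under the bright region
# minus the sum of intensities under the dark region
feature_value = bright.sum() - dark.sum()
print(feature_value)  # a large positive value signals a strong vertical edge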
In this article, we will focus on the programming bit, using the readily available library. OpenCV has a bunch of pre-trained classifiers that can be used to identify objects such as trees, number plates, faces, eyes, etc. We can use any of these classifiers to detect the object as per our need.

After you have installed the OpenCV package, open the Python IDE of your choice and import OpenCV:

import cv2

Since we want to detect the objects in real-time, we will be using the webcam feed. Use the below code to initiate the webcam:

# Enable webcam
# '0' is the default ID for the built-in web cam
# for an external web cam the ID can be 1 or -1
imcap = cv2.VideoCapture(0)
imcap.set(3, 640)  # set width as 640
imcap.set(4, 480)  # set height as 480

As mentioned earlier, OpenCV has various pre-trained HAAR classifiers stored as XML files. In this example, I am using the haarcascade_frontalface_default classifier for face detection. You can check other pre-trained classifiers in the opencv/data/haarcascades/ folder.

# importing cascade
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

The next step will be to run the classifier to detect the face in the video feed from the webcam. The basic steps are: first, we capture a frame from the video feed; next, the captured frame is converted to grayscale; finally, the grayscale image is passed through the classifier to detect the required object.

while True:
    success, img = imcap.read()  # capture frame from video

    # converting image from color to grayscale
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # Getting corners around the face
    # 1.3 = scale factor, 5 = minimum neighbors each detection must have
    faces = faceCascade.detectMultiScale(imgGray, 1.3, 5)

    # drawing bounding box around face
    for (x, y, w, h) in faces:
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)

    # displaying image with bounding box
    cv2.imshow('face_detect', img)

    # loop will be broken when 'q' is pressed on the keyboard
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

imcap.release()
cv2.destroyWindow('face_detect')

It should be clear that the Viola-Jones algorithm is not restricted to face detection only. Multiple types and numbers of objects in a single frame can be detected using this algorithm. All you need to do is add multiple layers of cascade classifiers to the program, as per your requirement. You can experiment with different classifiers using the same basic template.

I hope you enjoyed this quick tutorial. This is just a basic example of object detection using OpenCV and Python; the applications are immense. More advanced techniques such as CNNs and deep learning can be used to solve more complex computer vision problems. More on that in later articles. The complete code is assembled below; an additional example, layering a second classifier on the same image, is included as a comment.
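This reconstruction is assembled from the snippets above; the commented lines show how OpenCV's bundled eye cascade (haarcascade_eye.xml) could be layered in:

import cv2

# initialize the webcam ('0' is the default ID for the built-in camera)
imcap = cv2.VideoCapture(0)
imcap.set(3, 640)  # set width as 640
imcap.set(4, 480)  # set height as 480

# load the pre-trained frontal-face HAAR cascade bundled with OpenCV
faceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# additional example: a second classifier layered on the same image
# eyeCascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

while True:
    success, img = imcap.read()                      # capture a frame
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert to grayscale

    # detect faces: 1.3 = scale factor, 5 = minimum neighbors
    faces = faceCascade.detectMultiScale(imgGray, 1.3, 5)
    for (x, y, w, h) in faces:
        img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3)

    # eyes = eyeCascade.detectMultiScale(imgGray, 1.3, 5)
    # for (x, y, w, h) in eyes:
    #     img = cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imshow('face_detect', img)
    if cv2.waitKey(10) & 0xFF == ord('q'):  # quit on 'q'
        break

imcap.release()
cv2.destroyWindow('face_detect')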
[ { "code": null, "e": 505, "s": 172, "text": "For the uninitiated, Real-Time Object Detection might sound quite a mouthful. However, with a few awesome libraries at hand, the job becomes much easier than it sounds. In this article, we will be using one such library in python, namely OpenCV, to create a generalized program that can be used to detect any object in a video feed." }, { "code": null, "e": 774, "s": 505, "text": "OpenCV is an open-source library dedicated to solving computer vision problems. Assuming you have python 3 pre-installed on your machines, the easiest way of installing OpenCV to python is via pip. You can do it by typing the below command line in your command prompt." }, { "code": null, "e": 801, "s": 774, "text": "pip3 install opencv-python" }, { "code": null, "e": 1293, "s": 801, "text": "The object detection works on the Viola-Jones algorithm, which was proposed by Paul Viola and Michael Jones. The aforementioned algorithm is based on machine learning. The first step involves training a cascade function with a large amount of negative and positive labeled images. Once the classifier is trained, identifying features, namely “HAAR Features,” are extracted from these training images. HAAR features are essentially rectangular features with regions of bright and dark pixels." }, { "code": null, "e": 2395, "s": 1293, "text": "Each feature's value is calculated as a difference between the sum of pixel intensity under the bright region and the pixel intensity under the dark region. All the possible sizes and location of the image is used to calculate these features. An image might contain many irrelevant features and few relevant features which can be used to identify the object. The classifier is trained with the pre-labeled dataset to extract the useful features to get the minimum errors by applying appropriate weights to each feature. An individual feature is called a weak feature. The final classifier is the weighted sum of the weak features. A large region of the image contains the background; only a certain region contains the object to be detected. To increase the detection speed, cascading of classifiers is implemented. In this process, if a region of an image gives even a single negative feature, that region is disregarded for further processing, and the algorithm moves on to the next region. The only region which contains all the identifying features is outlined as the required object in the image." }, { "code": null, "e": 2816, "s": 2395, "text": "The above explanation is an oversimplified version. Even a simple digital image contains hundreds and thousands of pixels. Applying the algorithm straight away will require a huge computational power. Much more mathematical trickery goes in to simplify the calculation to make it computationally feasible. We will be discussing this in more detail in the upcoming article, primarily focused on the Viola-Jones Algorithm." }, { "code": null, "e": 3113, "s": 2816, "text": "In this article, we will focus on the programming bit, using the readily available library. OpenCV has a bunch of pre-trained classifiers that can be used to identify objects such as trees, number plates, faces, eyes, etc. We can use any of these classifiers to detect the object as per our need." }, { "code": null, "e": 3207, "s": 3113, "text": "After you installed the OpenCV package, open the python IDE of your choice and import OpenCV." 
}, { "code": null, "e": 3219, "s": 3207, "text": "import CV2 " }, { "code": null, "e": 3346, "s": 3219, "text": "Since we want to detect the objects in real-time, we will be using the webcam feed. Use the below code to initiate the webcam." }, { "code": null, "e": 3537, "s": 3346, "text": "# Enable we# '0' is default ID for builtin web cam# for external web cam ID can be 1 or -1imcap = cv2.VideoCapture(0)imcap.set(3, 640) # set width as 640imcap.set(4, 480) # set height as 480" }, { "code": null, "e": 3801, "s": 3537, "text": "As mentioned earlier, OpenCV has various pre-trained HAAR classifiers stored as XML files. In this example, I am using haarcascade_frontalface_defaul a classifier for face detection. You can check other pre-trained classifiers in opencv/data/harrcascades/ folder." }, { "code": null, "e": 3919, "s": 3801, "text": "# importing cascadefaceCascade = cv2.CascadeClassifier(cv2.data.haarcascades + \"haarcascade_frontalface_default.xml\")" }, { "code": null, "e": 4231, "s": 3919, "text": "The next step will be to run the classifier to detect the face in the video feed and the webcam. The basic steps are; first, we capture the frame from the video feed. Next, the captured frame is converted to grayscale. Finally, the grayscale image is passed through the classifier to detect the required object." }, { "code": null, "e": 4934, "s": 4231, "text": "while True: success, img = imcap.read() # capture frame from video # converting image from color to grayscale imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # Getting corners around the face # 1.3 = scale factor, 5 = minimum neighbor can be detected faces = faceCascade.detectMultiScale(imgGray, 1.3, 5) # drawing bounding box around face for (x, y, w, h) in faces: img = cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 3) # displaying image with bounding box cv2.imshow('face_detect', img) # loop will be broken when 'q' is pressed on the keyboard if cv2.waitKey(10) & 0xFF == ord('q'): breakcap.release()cv2.destroyWindow('face_detect')" }, { "code": null, "e": 5457, "s": 4934, "text": "It should be clear that the Viola-Jones algorithm is not restricted to face detection only. Multiple types and numbers of objects in a single frame can be detected using this algorithm. All you need to do is to add multiple layers of cascade classifiers in the program, as per your requirement. Below is the complete code. I have added an additional example to include an additional layer of the classifier on the same image as a comment. You can experiment around with different classifiers using the same basic template." } ]
Bootstrap Collapsible panel
To create a collapsible panel in Bootstrap, use the panel-collapse class.

You can try to run the following code to create a collapsible panel:

Live Demo

<!DOCTYPE html>
<html>
   <head>
      <title>Bootstrap Example</title>
      <link href = "/bootstrap/css/bootstrap.min.css" rel = "stylesheet">
      <script src = "/scripts/jquery.min.js"></script>
      <script src = "/bootstrap/js/bootstrap.min.js"></script>
   </head>
   <body>
      <div class = "container">
         <h2>Tutorials</h2>
         <p>Click below...</p>
         <div class = "panel-group">
            <div class = "panel panel-default">
               <div class = "panel-heading">
                  <h4 class = "panel-title">
                     <a data-toggle = "collapse" href = "#test">Free learning content</a>
                  </h4>
               </div>
               <div id = "test" class = "panel-collapse collapse">
                  <div class = "panel-body">We provide free learning content and quizzes as well.</div>
                  <div class = "panel-footer">Copyright 2018</div>
               </div>
            </div>
         </div>
      </div>
   </body>
</html>
[ { "code": null, "e": 1136, "s": 1062, "text": "To create a collapsible panel in Bootstrap, use the panel-collapse class." }, { "code": null, "e": 1204, "s": 1136, "text": "You can try to run the following code to create a collapsible panel" }, { "code": null, "e": 1214, "s": 1204, "text": "Live Demo" }, { "code": null, "e": 2228, "s": 1214, "text": "<!DOCTYPE html>\n<html>\n <head>\n <title>Bootstrap Example</title>\n <link href = \"/bootstrap/css/bootstrap.min.css\" rel = \"stylesheet\">\n <script src = \"/scripts/jquery.min.js\"></script>\n <script src = \"/bootstrap/js/bootstrap.min.js\"></script>\n </head>\n <body>\n <div class = \"container\">\n <h2>Tutorials</h2>\n <p>Click below...</p>\n <div class = \"panel-group\">\n <div class = \"panel panel-default\">\n <div class = \"panel-heading\">\n <h4 class = \"panel-title\">\n <a data-toggle = \"collapse\" href = \"#test\">Free learning content</a>\n </h4>\n </div>\n <div id = \"test\" class = \"panel-collapse collapse\">\n <div class = \"panel-body\">We provide free learning content and quizzes as well.</div>\n <div class = \"panel-footer\">Copyright 2018</div>\n </div>\n </div>\n </div>\n </div>\n </body>\n</html>" } ]
Convolutional Neural Networks from the ground up | by Alejandro Escontrela | Towards Data Science
When Yann LeCun published his work on the development of a new kind of neural network architecture [1], the Convolutional Neural Network (CNN), his work went largely unnoticed. It took 14 years and a team of researchers from The University of Toronto to bring CNNs into the public's view during the 2012 ImageNet Computer Vision competition. Their entry, which they named AlexNet after chief architect Alex Krizhevsky, achieved an error of only 15.8% when tasked with classifying millions of images from thousands of categories [2]. Fast forward to 2018, and the current state-of-the-art Convolutional Neural Networks achieve accuracies that surpass human-level performance [3].

Motivated by these promising results, I set out to understand how CNNs function, and how it is that they perform so well. As Richard Feynman pointed out, "What I cannot build, I do not understand", and so to gain a well-rounded understanding of this advancement in AI, I built a convolutional neural network from scratch in NumPy. After finishing this project, I feel that there's a disconnect between how complex convolutional neural networks appear to be, and how complex they really are. Hopefully, you too will share this feeling after building your own network from scratch. Code for this project can be found here.

CNNs are best known for their ability to recognize patterns present in images, and so the task chosen for the network described in this post was that of image classification. One of the most common benchmarks for gauging how well a computer vision algorithm performs is to train it on the MNIST handwritten digit database: a collection of 70,000 handwritten digits and their corresponding labels. The goal is to train a CNN to be as accurate as possible when labeling handwritten digits (ranging from 0–9). After about five hours of training and two loops over the training set, the network presented here was able to achieve an accuracy of 98% on the test data, meaning it could correctly guess almost every handwritten digit shown to it.

Let's go over the individual components that form the network and how they link together to form predictions from the input data. After explaining each component, we will code its functionality. In the last section of this post, we'll program every piece of the network and train it using NumPy (code here). It is important to note that this section assumes at least a working knowledge of linear algebra and calculus, as well as familiarity with the Python programming language. If you are unfamiliar with these domains or are in need of a tune-up, check out this publication to learn about linear algebra in the scope of machine learning and this resource to start programming with Python. Without further ado, let's get into it.

CNNs make use of filters (also known as kernels) to detect what features, such as edges, are present throughout an image. A filter is just a matrix of values, called weights, that are trained to detect specific features. The filter moves over each part of the image to check if the feature it is meant to detect is present. To provide a value representing how confident it is that a specific feature is present, the filter carries out a convolution operation, which is an element-wise product and sum between two matrices. When the feature is present in part of an image, the convolution operation between the filter and that part of the image results in a real number with a high value. If the feature is not present, the resulting value is low.
In the following example, a filter that is in charge of checking for right-hand curves is passed over a part of the image. Since that part of the image contains the same curve that the filter is looking for, the result of the convolution operation is a large number (6600). But when that same filter is passed over a part of the image with a considerably different set of edges, the convolution's output is small, meaning that there is no strong presence of a right-hand curve. The result of passing this filter over the entire image is an output matrix that stores the convolutions of this filter over various parts of the image.

The filter must have the same number of channels as the input image so that the element-wise multiplication can take place. For instance, if the input image contains three channels (RGB, for example), then the filter must contain three channels as well.

[Figure: the convolution of a filter over a 2D image.]

Additionally, a filter can be slid over the input image at varying intervals, using a stride value. The stride value dictates by how much the filter should move at each step. The output dimensions of a strided convolution can be calculated using the following equation:

n_out = floor((n_in - f) / s) + 1

where n_in denotes the dimension of the input image, f denotes the window size, and s denotes the stride.

So that the Convolutional Neural Network can learn the values for a filter that detect features present in the input data, the filter must be passed through a non-linear mapping. The output of the convolution operation between the filter and the input image is summed with a bias term and passed through a non-linear activation function. The purpose of the activation function is to introduce non-linearity into our network. Since our input data is non-linear (it is infeasible to model the pixels that form a handwritten signature linearly), our model needs to account for that. To do so, we use the Rectified Linear Unit (ReLU) activation function:

ReLU(x) = max(0, x)

As you can see, the ReLU function is quite simple; values that are less than or equal to zero become zero, and all positive values remain the same.

Usually, a network utilizes more than one filter per layer. When that is the case, the outputs of each filter's convolution over the input image are concatenated along the last axis, forming a final 3D output.

The Code

Using NumPy, we can program the convolution operation quite easily. The convolution function makes use of a for-loop to convolve all the filters over the image. Within each iteration of the for-loop, two while-loops are used to pass the filter over the image. At each step, the filter is multiplied element-wise (*) with a section of the input image. The result of this element-wise multiplication is then summed to obtain a single value using NumPy's sum method, and then added with a bias term. The filt input is initialized using a standard normal distribution and bias is initialized to be a vector of zeros.
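A minimal sketch of that convolution function: the structure follows the description above, while the exact argument shapes and the default stride are my assumptions:

import numpy as np

def convolution(image, filt, bias, s=1):
    # number of filters, filter channels, and filter size
    (n_f, n_c_f, f, _) = filt.shape
    # image channels and image dimension
    n_c, in_dim, _ = image.shape

    # output dimension: n_out = (n_in - f)/s + 1
    out_dim = int((in_dim - f) / s) + 1
    out = np.zeros((n_f, out_dim, out_dim))

    # convolve each filter over the image
    for curr_f in range(n_f):
        curr_y = out_y = 0
        # slide the filter vertically across the image
        while curr_y + f <= in_dim:
            curr_x = out_x = 0
            # slide the filter horizontally across the image
            while curr_x + f <= in_dim:
                # element-wise product and sum, plus the bias term
                out[curr_f, out_y, out_x] = np.sum(
                    filt[curr_f] * image[:, curr_y:curr_y + f, curr_x:curr_x + f]
                ) + bias[curr_f]
                curr_x += s
                out_x += 1
            curr_y += s
            out_y += 1
    return out

# filters drawn from a standard normal distribution, biases as zeros
filt = np.random.standard_normal((8, 1, 5, 5))
bias = np.zeros(8)
image = np.random.standard_normal((1, 28, 28))  # a dummy single-channel image
out = convolution(image, filt, bias)            # -> shape (8, 24, 24)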
After multiple convolutional layers and downsampling operations, the 3D image representation is converted into a feature vector that is passed into a Multi-Layer Perceptron, which is merely a neural network with at least three layers. This is referred to as a Fully-Connected Layer.

In the fully-connected operation of a neural network, the input representation is flattened into a feature vector and passed through a network of neurons to predict the output probabilities. The following image describes the flattening operation:

The rows are concatenated to form a long feature vector. If multiple input layers are present, their rows are also concatenated to form an even longer feature vector.

The feature vector is then passed through multiple dense layers. At each dense layer, the feature vector is multiplied by the layer’s weights, summed with its biases, and passed through a non-linearity. The following image visualizes the fully connected operation and dense layers:

It is worth noting that, according to this Facebook post by Yann LeCun, “there is no such thing as a fully connected layer,” and he’s right. When thinking back to the convolutional layer, one realizes that a fully connected layer is a convolution operation with a 1x1 output kernel. That is, if we pass 128 n-by-n filters over an image of dimensions n-by-n, what we end up with is a vector of length 128.

The Code

NumPy makes it quite simple to program the fully connected layer of a CNN. As a matter of fact, you can do it in a single line of code using NumPy’s reshape method:
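A sketch of that operation, where pooled stands in for the output of the last max pooling layer (the variable names here are illustrative):

import numpy as np

pooled = np.random.randn(8, 10, 10)        # example output of a max pooling layer
n_c, dim, _ = pooled.shape                 # gather the number of channels and the height/width
fc = pooled.reshape((n_c * dim * dim, 1))  # flatten into a single column feature vector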
In this code snippet, we gather the dimensions of the previous layer (number of channels and height/width) and then use them to flatten the previous layer into a fully connected layer. This fully connected layer is followed by multiple dense layers of neurons that eventually produce raw predictions:

The output layer of a CNN is in charge of producing the probability of each class (each digit) given the input image. To obtain these probabilities, we initialize our final dense layer to contain the same number of neurons as there are classes. The output of this dense layer then passes through the Softmax activation function, which maps all the final dense layer outputs to a vector whose elements sum up to one:

softmax(x_i) = e^(x_i) / Σ_j e^(x_j)

Where x denotes each element in the final layer’s outputs.

The Code

Once again, the softmax function can be written in a few lines of simple code:
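For instance (in practice, subtracting np.max(x) before exponentiating is a common trick for numerical stability, omitted here for simplicity):

import numpy as np

def softmax(x):
    out = np.exp(x)           # exponentiate each raw score
    return out / np.sum(out)  # normalize so the elements sum up to one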
To measure how accurate our network was in predicting the handwritten digit from the input image, we make use of a loss function. The loss function assigns a real-valued number to define the model’s accuracy when predicting the output digit. A common loss function to use when predicting multiple output classes is the Categorical Cross-Entropy Loss function, defined as follows:

L = -Σ_i y_i · log(ŷ_i)

Here, ŷ is the CNN’s prediction, and y is the desired output label. When making predictions over multiple examples, we take the average of the loss over all examples.

The Code

The Categorical Cross-Entropy loss function can be easily programmed using two simple lines of code, which mirror the equation shown above:
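A sketch of that function, with probs standing in for the softmax output and label for a one-hot encoded target (both names are illustrative):

import numpy as np

def categoricalCrossEntropy(probs, label):
    # mirrors L = -Σ y · log(ŷ)
    return -np.sum(label * np.log(probs))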
[ { "code": null, "e": 851, "s": 172, "text": "When Yann LeCun published his work on the development of a new kind of neural network architecture [1], the Convolutional Neural Network (CNN), his work went largely unnoticed. It took 14 years and a team of researchers from The University of Toronto to bring CNN’s into the public’s view during the 2012 ImageNet Computer Vision competition. Their entry, which they named AlexNet after chief architect Alex Krizhevsky, achieved an error of only 15.8% when tasked with classifying millions of images from thousands of categories [2]. Fast forward to 2018 and the current state-of-the-art Convolutional Neural Networks achieve accuracies that surpass human-level performance [3]." }, { "code": null, "e": 1431, "s": 851, "text": "Motivated by these promising results, I set out to understand how CNN’s function, and how it is that they perform so well. As Richard Feynman pointed out, “What I cannot build, I do not understand”, and so to gain a well-rounded understanding of this advancement in AI, I built a convolutional neural network from scratch in NumPy. After finishing this project I feel that there’s a disconnect between how complex convolutional neural networks appear to be, and how complex they really are. Hopefully, you too will share this feeling after building your own network from scratch." }, { "code": null, "e": 1472, "s": 1431, "text": "Code for this project can be found here." }, { "code": null, "e": 2213, "s": 1472, "text": "CNN’s are best known for their ability to recognize patterns present in images, and so the task chosen for the network described in this post was that of image classification. One of the most common benchmarks for gauging how well a computer vision algorithm performs is to train it on the MNIST handwritten digit database: a collection of 70,000 handwritten digits and their corresponding labels. The goal is to train a CNN to be as accurate as possible when labeling handwritten digits (ranging from 0–9). After about five hours of training and two loops over the training set, the network presented here was able to achieve an accuracy of 98% on the test data, meaning it could correctly guess almost every handwritten digit shown to it." }, { "code": null, "e": 2945, "s": 2213, "text": "Let’s go over the individual components that form the network and how they link together to form predictions from the input data. After explaining each component, we will code its functionality. In the last section of this post, we’ll program every piece of the network and train it using NumPy (Code here). It is important to note that this section assumes at least a working knowledge of linear algebra and calculus, as well as familiarity with the Python programming language. If you are unfamiliar with these domains or are in need of a tune-up, check out this publication to learn about linear algebra in the scope of machine learning and this resource to start programming with Python. Without further ado, let’s get into it." }, { "code": null, "e": 3470, "s": 2945, "text": "CNN’s make use of filters (also known as kernels), to detect what features, such as edges, are present throughout an image. A filter is just a matrix of values, called weights, that are trained to detect specific features. The filter moves over each part of the image to check if the feature it is meant to detect is present. 
To provide a value representing how confident it is that a specific feature is present, the filter carries out a convolution operation, which is an element-wise product and sum between two matrices." }, { "code": null, "e": 3694, "s": 3470, "text": "When the feature is present in part of an image, the convolution operation between the filter and that part of the image results in a real number with a high value. If the feature is not present, the resulting value is low." }, { "code": null, "e": 3968, "s": 3694, "text": "In the following example, a filter that is in charge of checking for right-hand curves is passed over a part of the image. Since that part of the image contains the same curve that the filter is looking for, the result of the convolution operation is a large number (6600)." }, { "code": null, "e": 4173, "s": 3968, "text": "But when that same filter is passed over a part of the image with a considerably different set of edges, the convolution’s output is small, meaning that there was no strong presence of a right hand curve." }, { "code": null, "e": 4580, "s": 4173, "text": "The result of passing this filter over the entire image is an output matrix that stores the convolutions of this filter over various parts of the image. The filter must have the same number of channels as the input image so that the element-wise multiplication can take place. For instance, if the input image contains three channels (RGB, for example), then the filter must contain three channels as well." }, { "code": null, "e": 4625, "s": 4580, "text": "The convolution of a filter over a 2D image:" }, { "code": null, "e": 4895, "s": 4625, "text": "Additionally, a filter can be slid over the input image at varying intervals, using a stride value. The stride value dictates by how much the filter should move at each step. The output dimensions of a strided convolution can be calculated using the following equation:" }, { "code": null, "e": 5001, "s": 4895, "text": "Where n_in denotes the dimension of the input image, f denotes the window size, and s denotes the stride." }, { "code": null, "e": 5652, "s": 5001, "text": "So that the Convolutional Neural Network can learn the values for a filter that detect features present in the input data, the filter must be passed through a non-linear mapping. The output of the convolution operation between the filter and the input image is summed with a bias term and passed through a non-linear activation function. The purpose of the activation function is to introduce non-linearity into our network. Since our input data is non-linear (it is infeasible to model the pixels that form a handwritten signature linearly), our model needs to account for that. To do so, we use the Rectified Linear Unit (ReLU) activation function:" }, { "code": null, "e": 5799, "s": 5652, "text": "As you can see, the ReLU function is quite simple; values that are less than or equal to zero become zero and all positive values remain the same." }, { "code": null, "e": 6009, "s": 5799, "text": "Usually, a network utilizes more than one filter per layer. When that is the case, the outputs of each filter’s convolution over the input image are concatenated along the last axis, forming a final 3D output." }, { "code": null, "e": 6018, "s": 6009, "text": "The Code" }, { "code": null, "e": 6513, "s": 6018, "text": "Using NumPy, we can program the convolution operation quite easily. The convolution function makes use of a for-loop to convolve all the filters over the image. 
Within each iteration of the for-loop, two while-loops are used to pass the filter over the image. At each step, the filter is multipled element-wise(*) with a section of the input image. The result of this element-wise multiplication is then summed to obtain a single value using NumPy’s sum method, and then added with a bias term." }, { "code": null, "e": 6629, "s": 6513, "text": "The filt input is initialized using a standard normal distribution and bias is initialized to be a vector of zeros." }, { "code": null, "e": 6829, "s": 6629, "text": "After one or two convolutional layers, it is common to reduce the size of the representation produced by the convolutional layer. This reduction in the representation’s size is known as downsampling." }, { "code": null, "e": 7105, "s": 6829, "text": "To speed up the training process and reduce the amount of memory consumed by the network, we try to reduce the redundancy present in the input feature. There are a couple of ways we can downsample an image, but for this post, we will look at the most common one: max pooling." }, { "code": null, "e": 7327, "s": 7105, "text": "In max pooling, a window passes over an image according to a set stride (how many units to move on each pass). At each step, the maximum value within the window is pooled into an output matrix, hence the name max pooling." }, { "code": null, "e": 7613, "s": 7327, "text": "In the following visual, a window of size f=2 passes over an image with a stride of 2. f denotes the dimensions of the max pooling window (red box) and s denotes the number of units the window moves in the x and y-direction. At each step, the maximum value within the window is chosen:" }, { "code": null, "e": 7873, "s": 7613, "text": "Max pooling significantly reduces the representation size, in turn reducing the amount of memory required and the number of operations performed later in the network. The output size of the max pooling operation can be calculated using the following equation:" }, { "code": null, "e": 7979, "s": 7873, "text": "Where n_in denotes the dimension of the input image, f denotes the window size, and s denotes the stride." }, { "code": null, "e": 8220, "s": 7979, "text": "An added benefit of max pooling is that it forces the network to focus on a few neurons instead of all of them which has a regularizing effect on the network, making it less likely to overfit the training data and hopefully generalize well." }, { "code": null, "e": 8229, "s": 8220, "text": "The Code" }, { "code": null, "e": 8510, "s": 8229, "text": "The max pooling operation boils down to a for loop and a couple of while loops. The for-loop is used pass through each layer of the input image, and the while-loops slide the window over every part of the image. At each step, we use NumPy’s max method to obtain the maximum value:" }, { "code": null, "e": 8793, "s": 8510, "text": "After multiple convolutional layers and downsampling operations, the 3D image representation is converted into a feature vector that is passed into a Multi-Layer Perceptron, which merely is a neural network with at least three layers. This is referred to as a Fully-Connected Layer." }, { "code": null, "e": 9040, "s": 8793, "text": "In the fully-connected operation of a neural network, the input representation is flattened into a feature vector and passed through a network of neurons to predict the output probabilities. 
The following image describes the flattening operation:" }, { "code": null, "e": 9205, "s": 9040, "text": "The rows are concatenated to form a long feature vector. If multiple input layers are present, its rows are also concatenated to form an even longer feature vector." }, { "code": null, "e": 9408, "s": 9205, "text": "The feature vector is then passed through multiple dense layers. At each dense layer, the feature vector is multiplied by the layer’s weights, summed with its biases, and passed through a non-linearity." }, { "code": null, "e": 9487, "s": 9408, "text": "The following image visualizes the fully connected operation and dense layers:" }, { "code": null, "e": 9900, "s": 9487, "text": "It is worth noting that, according to this Facebook post by Yann LeCun, “there is no such thing as a fully connected layer,” and he’s right. When thinking back to the convolutional layer, one realizes that a fully connected layer is a convolutional operation with a 1x1 output kernel. That is, If we pass 128 n-by-n filters over an image of dimensions n-by-n, what we would end up with is a vector of length 128." }, { "code": null, "e": 9909, "s": 9900, "text": "The Code" }, { "code": null, "e": 10074, "s": 9909, "text": "NumPy makes it quite simple to program the fully connected layer of a CNN. As a matter of fact, you can do it in a single line of code using NumPy’s reshape method:" }, { "code": null, "e": 10372, "s": 10074, "text": "In this code snippet, we gather the dimensions of the previous layer (number of channels and height/width) then use them to flatten the previous layer into a fully connected layer. This fully connected layer is proceeded by multiple dense layers of neurons that eventually produce raw predictions:" }, { "code": null, "e": 10788, "s": 10372, "text": "The output layer of a CNN is in charge of producing the probability of each class (each digit) given the input image. To obtain these probabilities, we initialize our final Dense layer to contain the same number of neurons as there are classes. The output of this dense layer then passes through the Softmax activation function, which maps all the final dense layer outputs to a vector whose elements sum up to one:" }, { "code": null, "e": 10847, "s": 10788, "text": "Where x denotes each element in the final layer’s outputs." }, { "code": null, "e": 10856, "s": 10847, "text": "The Code" }, { "code": null, "e": 10935, "s": 10856, "text": "Once again, the softmax function can be written in a few lines of simple code:" }, { "code": null, "e": 11315, "s": 10935, "text": "To measure how accurate our network was in predicting the handwritten digit from the input image, we make use of a loss function. The loss function assigns a real-valued number to define the model’s accuracy when predicting the output digit. A common loss function to use when predicting multiple output classes is the Categorical Cross-Entropy Loss function, defined as follows:" }, { "code": null, "e": 11483, "s": 11315, "text": "Here, ŷ is the CNN’s prediction, and y is the desired output label. When making predictions over multiple examples, we take the average of the loss over all examples." 
}, { "code": null, "e": 11492, "s": 11483, "text": "The Code" }, { "code": null, "e": 11641, "s": 11492, "text": "The Categorical Cross-Entropy loss function can be easily programmed using two simple lines of code, which are a mirror of the equation shown above:" }, { "code": null, "e": 11772, "s": 11641, "text": "This about wraps up the operations that compose a convolutional neural network. Let us join these operations to construct the CNN." }, { "code": null, "e": 12252, "s": 11772, "text": "Given the relatively low amount of classes (10 in total) and the small size of each training image (28x28px.), a simple network architecture was chosen to solve the task of digit recognition. The network uses two consecutive convolutional layers followed by a max pooling operation to extract features from the input image. After the max pooling operation, the representation is flattened and passed through a Multi-Layer Perceptron (MLP) to carry out the task of classification." }, { "code": null, "e": 12364, "s": 12252, "text": "Now that we have gone over the elementary operations that form a Convolutional Neural Network, let’s create it." }, { "code": null, "e": 12413, "s": 12364, "text": "Feel free to use this repo when following along." }, { "code": null, "e": 12643, "s": 12413, "text": "The MNIST handwritten digit training and test data can be obtained here. The files store image and label data as tensors, so the files must be read through their bytestream. We define two helper methods to perform the extraction:" }, { "code": null, "e": 12880, "s": 12643, "text": "We first define methods to initialize both the filters for the convolutional layers and the weights for the dense layers. To make for a smoother training process, we initialize each filter with a mean of 0 and a standard deviation of 1." }, { "code": null, "e": 13311, "s": 12880, "text": "To compute the gradients that will force the network to update its weights and optimize its objective, we need to define methods that backpropagate gradients through the convolutional and max pooling layers. To keep this post (relatively) short, I won’t go into the derivation of these gradients but, If you would like me to make a post that describes backpropagation through a convolutional neural network, leave a comment below." }, { "code": null, "e": 13539, "s": 13311, "text": "In the spirit abstraction, we now define a method that combines the forward and backward operations of a convolutional neural network. It takes the network’s parameters and hyperparameters as inputs and spits out the gradients:" }, { "code": null, "e": 13930, "s": 13539, "text": "To efficiently force the network’s parameters to learn meaningful representations, we use the Adam optimization algorithm. I won’t go into much detail regarding this algorithm, but it can be thought of this way: if stochastic gradient descent is a drunk college student stumbling down a hill, then Adam is a bowling ball beaming down that same hill. A better explanation of Adam found here." }, { "code": null, "e": 14069, "s": 13930, "text": "That about wraps up the development of the network. To train it locally, download this repo and run the following command in the terminal:" }, { "code": null, "e": 14110, "s": 14069, "text": "$ python3 train_cnn.py '<file_name>.pkl'" }, { "code": null, "e": 14299, "s": 14110, "text": "Replace <file_name> with whatever file name you would like. 
The terminal should display the following progress bar to training progress, as well as the cost for the current training batch." }, { "code": null, "e": 14436, "s": 14299, "text": "After the CNN has finished training, a .pkl file containing the network’s parameters is saved to the directory where the script was run." }, { "code": null, "e": 14622, "s": 14436, "text": "The network takes about 5 hours to train on my macbook pro. I included the trained params in the GitHub repo under the name params.pkl. To use them, replace <file_name> with params.pkl." }, { "code": null, "e": 14700, "s": 14622, "text": "To measure the network’s accuracy, run the following command in the terminal:" }, { "code": null, "e": 14751, "s": 14700, "text": "$ python3 measure_performance.py '<file_name>.pkl'" }, { "code": null, "e": 14966, "s": 14751, "text": "This command will use the trained parameters to run predictions on all 10,000 digits in the test dataset. After all predictions are made, a value displaying the network’s accuracy will appear in the command prompt:" }, { "code": null, "e": 15085, "s": 14966, "text": "If you run into any issues regarding dependencies, the following command can be used to install the required packages:" }, { "code": null, "e": 15119, "s": 15085, "text": "$ pip install -r requirements.txt" }, { "code": null, "e": 15479, "s": 15119, "text": "After two epochs over the training set, the network’s accuracy on the test set averaged 98%, which I would say is quite decent. After extending the training time by 2–3 epochs, I found that the test set performance decreased. I speculate that on the third to fourth training loop, the network begins overfitting the training set and is no longer generalizing." }, { "code": null, "e": 15656, "s": 15479, "text": "Because we were passing in batches of data, the network had to account for the variability in every new batch, which is why the cost fluctuated so heavily during training time:" }, { "code": null, "e": 15956, "s": 15656, "text": "Additionally, we measure the network’s recall to understand how well it is able to predict each digit. Recall is a measure of accuracy, and it can be understood with the following example: of all the digits labeled ‘7’(or any other digit) in our test set, how many did our network correctly predict?" }, { "code": null, "e": 16016, "s": 15956, "text": "The following bar graph displays the recall for each digit:" }, { "code": null, "e": 16138, "s": 16016, "text": "This indicates that our network learned meaningful representations for all the digits. Overall, the CNN generalized well." }, { "code": null, "e": 16370, "s": 16138, "text": "Hopefully this post provided you with a richer understanding of convolutional neural networks, and perhaps even removed their perceived complexity. If you have any questions or would like to know a bit more, drop a comment below :)" }, { "code": null, "e": 16538, "s": 16370, "text": "[1]: Lecun, Y., et al. “Gradient-Based Learning Applied to Document Recognition.” Proceedings of the IEEE, vol. 86, no. 11, 1998, pp. 2278–2324., doi:10.1109/5.726791." }, { "code": null, "e": 16717, "s": 16538, "text": "[2]: Krizhevsky, Alex, et al. “ImageNet Classification with Deep Convolutional Neural Networks.” Communications of the ACM, vol. 60, no. 6, 2017, pp. 84–90., doi:10.1145/3065386." } ]
Get reverse order using Comparator in Java
The objects of a user-defined class can be ordered using the Comparator interface in Java. The java.util.Collections.reverseOrder() method returns a Comparator that imposes the reverse of the natural ordering on a collection of elements.

A program that demonstrates this is given as follows −

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
public class Demo {
   public static void main(String args[]) throws Exception {
      Comparator<String> comparator = Collections.reverseOrder();
      String str[] = { "John", "Amy", "Susan", "Peter" };
      int n = str.length;
      System.out.println("The array elements are: ");
      for (int i = 0; i < n; i++) {
         System.out.println(str[i]);
      }
      Arrays.sort(str, comparator);
      System.out.println("\nThe array elements sorted in reverse order are: ");
      for (int i = 0; i < n; i++) {
         System.out.println(str[i]);
      }
   }
}

The output of the above program is as follows −

The array elements are:
John
Amy
Susan
Peter

The array elements sorted in reverse order are:
Susan
Peter
John
Amy

Now let us understand the above program.

The Comparator is created using the Collections.reverseOrder() method. Then the string array str[] is defined, and its elements are displayed using a for loop. A code snippet which demonstrates this is as follows −

Comparator<String> comparator = Collections.reverseOrder();
String str[] = { "John", "Amy", "Susan", "Peter" };
int n = str.length;
System.out.println("The array elements are: ");
for (int i = 0; i < n; i++) {
   System.out.println(str[i]);
}

The string array is sorted in reverse order using the java.util.Arrays.sort() method. Then the array elements are displayed using a for loop. A code snippet which demonstrates this is as follows −

Arrays.sort(str, comparator);
System.out.println("\nThe array elements sorted in reverse order are: ");
for (int i = 0; i < n; i++) {
   System.out.println(str[i]);
}
[ { "code": null, "e": 1265, "s": 1062, "text": "The objects of a user defined class can be ordered using the Comparator interface in Java. The java.util.Collections.reverseOrder() method reverses the order of an element collection using a Comparator." }, { "code": null, "e": 1320, "s": 1265, "text": "A program that demonstrates this is given as follows −" }, { "code": null, "e": 1945, "s": 1320, "text": "import java.util.Arrays;\nimport java.util.Collections;\nimport java.util.Comparator;\npublic class Demo {\n public static void main(String args[]) throws Exception {\n Comparator comparator = Collections.reverseOrder(); { \"John\", \"Amy\", \"Susan\", \"Peter\" };\n int n = str.length;\n System.out.println(\"The array elements are: \");\n for (int i = 0; i < n; i++) {\n System.out.println(str[i]);\n }\n Arrays.sort(str, comparator);\n System.out.println(\"\\nThe array elements sorted in reverse order are: \");\n for (int i = 0; i < n; i++) {\n System.out.println(str[i]);\n }\n }\n}" }, { "code": null, "e": 1993, "s": 1945, "text": "The output of the above program is as follows −" }, { "code": null, "e": 2107, "s": 1993, "text": "The array elements are:\nJohn\nAmy\nSusan\nPeter\nThe array elements sorted in reverse order are:\nSusan\nPeter\nJohn\nAmy" }, { "code": null, "e": 2148, "s": 2107, "text": "Now let us understand the above program." }, { "code": null, "e": 2356, "s": 2148, "text": "The Comparator is created along with reverseOrder() method. Then the string array str[] is defined and then the elements are displayed using a for loop. A code snippet which demonstrates this is as follows −" }, { "code": null, "e": 2573, "s": 2356, "text": "Comparator comparator = Collections.reverseOrder(); { \"John\", \"Amy\", \"Susan\", \"Peter\" };\nint n = str.length;\nSystem.out.println(\"The array elements are: \");\nfor (int i = 0; i < n; i++) {\nSystem.out.println(str[i]);\n}" }, { "code": null, "e": 2770, "s": 2573, "text": "The string array is sorted in reverse order using the java.util.Arrays.sort() method. Then the array elements are displayed using a for loop. A code snippet which demonstrates this is as follows −" }, { "code": null, "e": 2937, "s": 2770, "text": "Arrays.sort(str, comparator);\nSystem.out.println(\"\\nThe array elements sorted in reverse order are: \");\nfor (int i = 0; i < n; i++) {\n System.out.println(str[i]);\n}" } ]
Tcl - Nested Switch Statement
It is possible to have a switch as part of the statement sequence of an outer switch. Even if the case constants of the inner and outer switch contain common values, no conflicts will arise.

The syntax for a nested switch statement is as follows −

switch switchingString {
   matchString1 {
      body1
      switch switchingString {
         matchString1 {
            body1
         }
         matchString2 {
            body2
         }
         ...
         matchStringn {
            bodyn
         }
      }
   }
   matchString2 {
      body2
   }
   ...
   matchStringn {
      bodyn
   }
}

#!/usr/bin/tclsh

set a 100
set b 200

switch $a {
   100 {
      puts "This is part of outer switch"
      switch $b {
         200 {
            puts "This is part of inner switch!"
         }
      }
   } 
}
puts "Exact value of a is : $a"
puts "Exact value of b is : $b"

When the above code is executed, it produces the following result −

This is part of outer switch
This is part of inner switch!
Exact value of a is : 100
Exact value of b is : 200
[ { "code": null, "e": 2392, "s": 2201, "text": "It is possible to have a switch as part of the statement sequence of an outer switch. Even if the case constants of the inner and outer switch contain common values, no conflicts will arise." }, { "code": null, "e": 2449, "s": 2392, "text": "The syntax for a nested switch statement is as follows −" }, { "code": null, "e": 2797, "s": 2449, "text": "switch switchingString {\n matchString1 {\n body1\n switch switchingString {\n matchString1 {\n body1\n }\n matchString2 {\n body2\n }\n ...\n matchStringn {\n bodyn\n }\n }\n }\n matchString2 {\n body2\n }\n...\n matchStringn {\n bodyn\n }\n}\n" }, { "code": null, "e": 3074, "s": 2797, "text": "#!/usr/bin/tclsh\n\nset a 100\nset b 200\n\nswitch $a {\n 100 {\n puts \"This is part of outer switch\"\n switch $b {\n 200 {\n puts \"This is part of inner switch!\"\n }\n }\n } \n}\nputs \"Exact value of a is : $a\"\nputs \"Exact value of a is : $b\"" }, { "code": null, "e": 3155, "s": 3074, "text": "When the above code is compiled and executed, it produces the following result −" }, { "code": null, "e": 3267, "s": 3155, "text": "This is part of outer switch\nThis is part of inner switch!\nExact value of a is : 100\nExact value of a is : 200\n" }, { "code": null, "e": 3274, "s": 3267, "text": " Print" }, { "code": null, "e": 3285, "s": 3274, "text": " Add Notes" } ]
How can I set the 'backend' in matplotlib in Python?
We can use matplotlib.rcParams['backend'] to override the backend value.

Use the get_backend() method to return the name of the current backend, i.e., the default name.

Now override the backend name.

Use the get_backend() method again to return the name of the current backend, i.e., the updated name.

import matplotlib
print("Before, Backend used by matplotlib is: ", matplotlib.get_backend())
matplotlib.rcParams['backend'] = 'TkAgg'
print("After, Backend used by matplotlib is: ", matplotlib.get_backend())

Before, Backend used by matplotlib is: GTK3Agg
After, Backend used by matplotlib is: TkAgg
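Alternatively, matplotlib provides the use() function for the same purpose. Note that the backend should generally be selected before matplotlib.pyplot is imported, and 'TkAgg' here is just an example — the chosen backend must be available on your system:

import matplotlib
matplotlib.use('TkAgg')  # select the backend before pyplot is imported
import matplotlib.pyplot as plt
print("Backend used by matplotlib is: ", matplotlib.get_backend())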
[ { "code": null, "e": 1135, "s": 1062, "text": "We can use matplotlib.rcParams['backend'] to override the backend value." }, { "code": null, "e": 1223, "s": 1135, "text": "Using get_backend() method, return the name of the current backend, i.e., default name." }, { "code": null, "e": 1311, "s": 1223, "text": "Using get_backend() method, return the name of the current backend, i.e., default name." }, { "code": null, "e": 1342, "s": 1311, "text": "Now override the backend name." }, { "code": null, "e": 1373, "s": 1342, "text": "Now override the backend name." }, { "code": null, "e": 1461, "s": 1373, "text": "Using get_backend() method, return the name of the current backend, i.e., updated name." }, { "code": null, "e": 1549, "s": 1461, "text": "Using get_backend() method, return the name of the current backend, i.e., updated name." }, { "code": null, "e": 1757, "s": 1549, "text": "import matplotlib\nprint(\"Before, Backend used by matplotlib is: \", matplotlib.get_backend())\nmatplotlib.rcParams['backend'] = 'TkAgg'\nprint(\"After, Backend used by matplotlib is: \", matplotlib.get_backend())" }, { "code": null, "e": 1848, "s": 1757, "text": "Before, Backend used by matplotlib is: GTK3Agg\nAfter, Backend used by matplotlib is: TkAgg" } ]
Changing the primary key on a MongoDB collection?
To change the primary key (_id), you need to first remove the document and re-insert it without the old key; MongoDB then generates a new one. Use forEach() along with delete to drop the old _id and get a new primary key. Let us create a collection with documents −

> db.demo41.insertOne({"StudentName":"Carol"});
{
   "acknowledged" : true,
   "insertedId" : ObjectId("5e25ce4acfb11e5c34d898e3")
}

Display all documents from the collection with the help of the find() method −

> db.demo41.find();

This will produce the following output −

{ "_id" : ObjectId("5e25ce4acfb11e5c34d898e3"), "StudentName" : "Carol" }

Here is the query to change the primary key on a MongoDB collection −

> var next = db.demo41.find()
>
> next.forEach(function(s) {
... var prevId=s._id;
... delete s._id;
... db.demo41.insert(s);
... db.demo41.remove({_id: prevId});
... });

Let us check the primary key once again −

> db.demo41.find();

This will produce the following output displaying a new primary key −

{ "_id" : ObjectId("5e25cee5cfb11e5c34d898e4"), "StudentName" : "Carol" }
[ { "code": null, "e": 1236, "s": 1062, "text": "To change the primary key, you need to first delete it. Use forEach() along with delete to remove and then get a new primary key. Let us create a collection with documents −" }, { "code": null, "e": 1369, "s": 1236, "text": "> db.demo41.insertOne({\"StudentName\":\"Carol\"});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e25ce4acfb11e5c34d898e3\")\n}" }, { "code": null, "e": 1442, "s": 1369, "text": "Display all documents from a collection with the help of find() method −" }, { "code": null, "e": 1462, "s": 1442, "text": "> db.demo41.find();" }, { "code": null, "e": 1503, "s": 1462, "text": "This will produce the following output −" }, { "code": null, "e": 1577, "s": 1503, "text": "{ \"_id\" : ObjectId(\"5e25ce4acfb11e5c34d898e3\"), \"StudentName\" : \"Carol\" }" }, { "code": null, "e": 1647, "s": 1577, "text": "Here is the query to change the primary key on a MongoDB collection −" }, { "code": null, "e": 1823, "s": 1647, "text": "> var next = db.demo41.find()\n>\n> next.forEach(function(s) {\n... var prevId=s._id;\n... delete s._id;\n... db.demo41.insert(s);\n... db.demo41.remove(prevId);\n... });" }, { "code": null, "e": 1865, "s": 1823, "text": "Let us check the primary key once again −" }, { "code": null, "e": 1885, "s": 1865, "text": "> db.demo41.find();" }, { "code": null, "e": 1955, "s": 1885, "text": "This will produce the following output displaying a new primary key −" }, { "code": null, "e": 2029, "s": 1955, "text": "{ \"_id\" : ObjectId(\"5e25cee5cfb11e5c34d898e4\"), \"StudentName\" : \"Carol\" }" } ]
Introducing Label Studio, a swiss army knife of data labeling | by Nikolai Liubimov | Towards Data Science
I’ve experienced the lack of tools myself while working at one of the enterprises on a personal virtual assistant project, which was used by around 20 million people. Our team was continually looking for ways to improve quality, handle edge cases, and test hypotheses. That usually required working with unstructured data, labeling it, visually examining and exploring model predictions. We’d hack and glue together a set of internal tools, with tons of boilerplate code made to work once. Needless to say, sharing those tools or trying to extend them or embed them into the main application would be nearly impossible.

Moreover, this process was about quantitative analysis of the product. While trying to improve models, machine learning engineers heavily rely on precision/recall statistics computed on a fixed dataset, ignoring how this dataset aligns with actual real-world data. Eventually, it leads to systematic errors in production, which could be identified only by qualitative analysis — basically looking with your eyes at the model's predictions.

A couple of my friends and I got thinking, can we do better? And this is how Label Studio was born. It is intended to save prototyping/experimenting time for individual machine learning practitioners, as well as reducing the ML product release life cycle for technical teams.

Some of the principles we were following while working on it:

Making it simple. No complicated configs, and ease of integration into Machine Learning pipelines. Label Studio can be used in different places, depending on the use-case:

Quickly configurable for many data types. Get the tool ready in 10 minutes. There should be an easy way to switch between labeling texts, audios or images, or even annotating all three types at the same time.

Machine learning integration. It should be able to integrate with the plethora of machine learning frameworks and models. There are numerous applications of ML with different constraints, and Label Studio has to be flexible enough to handle them and help, not complicate.

If that sounds entertaining, let’s install it and get you up and running!

Starting Label Studio is extremely easy:

pip install label-studio
label-studio start my_project --init

It automatically opens up the web app in your browser. Just configure what you want to label and how. That’s it.

Many existing labeling frameworks accept only one data type, and it becomes tedious to learn a new app each time. Right out of the box Label Studio works with Texts, Images, Audios, HTML documents (called Object components), and any imaginable combination of annotation tasks like classification, regression, tagging, spanning, pairwise comparison, object detection, segmentation and so on (defined in the Annotation components).

Let’s explore how you can configure the Label Studio server for your use case.

Configuring Labeling Interface

Label Studio’s interface is not pre-built. Instead, you create it yourself, in the same sense as you’d create a webpage. But instead of using HTML tags, you get jsx-based components. Fear not! You don’t need to write JavaScript unless you want to. The components know how to connect based on their type and name. Here is an example of image classification where the Choices (Annotation component) gets connected to Image (Object component) by specifying its name using the toName attribute.
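A minimal config of that kind might look like the snippet below (the choice values here are made up for illustration):

<View>
  <Image name="image" value="$url"/>
  <Choices name="choice" toName="image">
    <Choice value="Cat"/>
    <Choice value="Dog"/>
  </Choices>
</View>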
Note the dollar sign variable $url. It specifies that the value for this component comes from your dataset: Label Studio expects the record with the key url to provide the URL for the image. Go ahead to the setup page and play around with the examples that we have in there. Or, if you’re not running it locally yet, you can check out the playground.

Currently, there are about 20 components covering different types of labeling, for example, Bbox for the Images or NER for the Text. A full list can be found on the website.

Data Import and Export

After configuring what the labeling interfaces should look like, you can import your data. The web import supports multiple formats: JSON, CSV, TSV, and archives consisting of those. Each data point that you want to label is called a task. If your task contains images, you have to host them somewhere and expose URLs in data keys like {"url": "https://labelstud.io/opossum.jpg"} in the example above. The task is then loaded into the labeling interface and waits for you to label it.

Exports are done using label-studio-converter. It’s a library that can take the internal Label Studio JSON-based format and output either some general-purpose format (JSON, CSV, TSV) or model-specific formats like CONLL for textual taggers, or Pascal VOC or COCO for computer vision models.

Underlying storage is plain files, which makes integration very easy — you only need to put the data in a format that Label Studio can parse. You can find more about the formats on the website.

Alright! Now you have the data in, you know how to label and how to export. Let’s go over a few use cases showing how you can integrate Label Studio into your pipelines.

Label Studio is, first and foremost, a data labeling tool, which makes it very usable for all the essential data labeling activities, like labeling the entire dataset or creating ground truth labels for verification. Besides those basics, here are a few other use cases that are pretty unique to its functionality.

Embed into your app

For some applications, some of the best labels you can get may come from the users of your products. You can ask them to label tasks from scratch, or provide a model prediction and request to adjust it. Then use that to update models and continually improve application efficiency.

The unique feature of Label Studio is that it comes as a frontend NPM package that you can include in your applications. Here is an example:

You can check it live here. Or look at the source of the gist. You can save it locally and open it in your browser!

Machine Learning Integration, connecting the model

You can easily connect your favorite machine learning framework with Label Studio by using the SDK.

That gives you the opportunity to use:

Auto labeling. Use model predictions for labeling data. It helps to annotate faster by pre-labeling tasks, as well as using pseudo-labeling for further training.

Continual Learning. Continuously annotate and train your model on a stream of data, potentially with changing annotation targets.

Active Learning. Implement techniques for smarter selection of the tasks to be labeled.

Prediction Service. Instantly create and deploy a REST API prediction service.

To learn more about getting up and running, check out the example from the README.
Comparing predictions

You can load multiple predictions from different model architectures or versions into the interface, visually verify them, find discrepancies, and edit what your models are predicting:

We’ve covered just a handful of use cases that we have found exciting ourselves and where we’ve seen Label Studio provide an advantage over existing solutions, but while we’re working on the next article, here are a few more ideas for you to explore:

Monitoring model prediction errors

Can you always trust your model predictions? How do you know when would be a good time to retrain and redeploy a model? You can integrate Label Studio into the model monitoring pipeline. And as in the example above, you can show multiple versions of your model predictions and check that the prediction quality has not decreased.

Human-in-the-loop prediction pipeline

For applications where predictive quality is mission-critical, Label Studio can be used to adjust the model prediction when the model is uncertain about it or acts as a low-precision detector. In these scenarios, the model prediction is first sent to the annotator, and the annotator can manually adjust it.

Collect results from multiple people

Label Studio supports multiple results per task (called completions), which can be very valuable if you need to distribute the same task to numerous annotators and then verify or combine their results. It works similarly to the model prediction visualization, with the difference that a completion can be set as ground truth and edited.

Incremental dataset labeling

After deploying the model into production or while developing it, you may realize that labeling one more attribute can enhance the model results. That might be a hypothesis you want to test or a predefined strategy — start with a small number of attributes and add more with time. You can use Label Studio for incremental labeling: modify the labeling config and add new classes on the go.

That is just a start. We’re planning to cover more and more data types and implement more components covering different labeling scenarios.

Visit the main repository on GitHub

Explore the templates we’ve created for some popular labeling cases https://labelstud.io/templates/

Check the tags https://labelstud.io/tags/

Try out running a demo where you can create your annotation playground persistent within a browser session

We would always be happy to learn more about possible human-in-the-loop use cases and look forward to implementing them within Label Studio. If you have suggestions and/or feedback, please share it with us by opening an issue on GitHub or joining our growing Slack community!
[ { "code": null, "e": 793, "s": 172, "text": "I’ve experienced the lack of tools myself while working at one of the enterprises on a personal virtual assistant project, which was used by around 20 million people. Our team was continually looking for ways to improve quality, handle edge cases, and test hypotheses. That usually required working with unstructured data, labeling it, visually examining and exploring models predictions. We’d hack and glue together a set of internal tools, with tons of boilerplate code made to work once. Needless to say that sharing those tools or trying to extend it or embed it into the main application would be nearly impossible." }, { "code": null, "e": 1233, "s": 793, "text": "Moreover, this process was about quantitative analysis of the product. While trying to improve models, machine learning engineers heavily rely on precision/recall statistics computed on a fixed dataset, ignoring how this dataset aligns with actual real-world data. Eventually, it leads to systematic errors in production, which could be identified only by qualitative analysis — basically looking with your eyes at the model's predictions." }, { "code": null, "e": 1509, "s": 1233, "text": "A couple of my friends and I got thinking, can we do better? And this is how Label Studio was born. It is intended to save prototyping/experimenting time for individual machine learning practitioners, as well as reducing the ML product release life cycle for technical teams." }, { "code": null, "e": 1571, "s": 1509, "text": "Some of the principals we were following while working on it:" }, { "code": null, "e": 1743, "s": 1571, "text": "Making it simple. No complicated configs, and ease of integration into Machine Learning pipelines. Label Studio can be used in different places, depending on the use-case:" }, { "code": null, "e": 1952, "s": 1743, "text": "Quickly configurable for many data types. Get the tool ready in 10 minutes. There should be an easy way to switch between labeling texts, audios or images, or even annotating all three types at the same time." }, { "code": null, "e": 2228, "s": 1952, "text": "Machine learning integration. It should be able to integrate with all the plethora of machine learning frameworks and models. There are numerous applications of ML with different constraints, and Label Studio has to be flexible enough to handle them and help, not complicate." }, { "code": null, "e": 2302, "s": 2228, "text": "If that sounds entertaining, let’s install it and get you up and running!" }, { "code": null, "e": 2343, "s": 2302, "text": "Starting Label Studio is extremely easy:" }, { "code": null, "e": 2404, "s": 2343, "text": "pip install label-studiolabel-studio start my_project --init" }, { "code": null, "e": 2517, "s": 2404, "text": "It automatically opens up the web app in your browser. Just configure what you want to label and how. That’s it." }, { "code": null, "e": 2947, "s": 2517, "text": "Many existing labeling frameworks accept only one data type, and it becomes tedious to learn a new app each time. Right out of the box Label Studio works with Texts, Images, Audios, HTML documents (called Object components), and any imaginable combination of annotation tasks like classification, regression, tagging, spanning, pairwise comparison, object detection, segmentation and so on (defined in the Annotation components)." 
}, { "code": null, "e": 3025, "s": 2947, "text": "Let’s explore how you can configure the Label Studio server for your use case" }, { "code": null, "e": 3056, "s": 3025, "text": "Configuring Labeling Interface" }, { "code": null, "e": 3547, "s": 3056, "text": "Label Studio’s interface is not pre-built. Instead, you create it yourself, in the same sense as you’d create a webpage. But instead of using HTML tags, you get jsx-based components. Fear not! You don’t need to write JavaScript unless you want to. The components know how to connect based on their type and name. Here is an example of Image classification where the Choices (Annotation component) gets connected to Image (Object component) by specifying its name using the toName attribute." }, { "code": null, "e": 3889, "s": 3547, "text": "Note the dollar sign variable $url. It specifies that the value for this component is coming from your dataset, it would expect the record with key url provide the URL for the image. Go ahead to the setup page and play around with the examples that we have in there. Or if you’re not running it locally yet, you can check out the playground." }, { "code": null, "e": 4060, "s": 3889, "text": "Currently, there are about 20 components covering different types of labeling, for example, Bbox for the Images or NER for the Text. A full list can found on the website." }, { "code": null, "e": 4083, "s": 4060, "text": "Data Import and Export" }, { "code": null, "e": 4568, "s": 4083, "text": "After configuring what the labeling interfaces should look like, you can import your data. The web import supports multiple formats: JSON, CSV, TSV, and archives consisting of those. Each data point that you want to label is called a task. If your task contains images, you have to host them somewhere and expose URLs in data keys like {\"url\": \"https://labelstud.io/opossum.jpg\"} in the example above. The task is then loaded into the labeling interface and waits for you to label it." }, { "code": null, "e": 4847, "s": 4568, "text": "Exports are done using label-studio-converter. It’s a library that can take internal Label Studio JSON based format and output either some general-purpose (JSON, CSV, TSV) or model-specific formats like CONLL for textual taggers or Pascal VOC or COCO for computer vision models." }, { "code": null, "e": 5041, "s": 4847, "text": "Underlying storage is plain files. Making it very easy to integrate, you only need to put the data in the format that Label Studio can parse. You can find more about the formats on the website." }, { "code": null, "e": 5203, "s": 5041, "text": "Alright! Now you have the data in, you know how to label and how to export, let’s go over a few use cases how you can integrate Label Studio into your pipelines." }, { "code": null, "e": 5514, "s": 5203, "text": "Label Studio is, first and foremost, a data labeling tool, which makes it very usable for all the essential data labeling activities, like labeling the entire dataset or creating ground truth labels for verification. Besides those basics, here a few other use-cases that are pretty unique to its functionality." }, { "code": null, "e": 5534, "s": 5514, "text": "Embed into your app" }, { "code": null, "e": 5815, "s": 5534, "text": "For some applications, one of the best labels you can get may come from the users of your products. You can ask them to label tasks from scratch, or provide a model prediction and request to adjust it. Then use that to update models and improve application efficiency continually." 
}, { "code": null, "e": 5952, "s": 5815, "text": "The unique feature of Label Studio is, it comes as a frontend NPM package that you can include in your applications. Here is an example:" }, { "code": null, "e": 6068, "s": 5952, "text": "You can check it live here. Or look at the source of the gist. You can save it locally and open it in your browser!" }, { "code": null, "e": 6119, "s": 6068, "text": "Machine Learning Integration, connecting the model" }, { "code": null, "e": 6219, "s": 6119, "text": "You can easily connect your favorite machine learning framework with Label Studio by using the SDK." }, { "code": null, "e": 6260, "s": 6219, "text": "That gives you the opportunities to use:" }, { "code": null, "e": 6420, "s": 6260, "text": "Auto labeling. Use model predictions for labeling data. It helps to annotate faster by task pre-labeling, as well as using pseudo labeling for further training" }, { "code": null, "e": 6551, "s": 6420, "text": "Continual Learning. Continuously annotate and train your model from a stream of data, potentially with changing annotation targets" }, { "code": null, "e": 6634, "s": 6551, "text": "Active Learning. Implement techniques for smarter selection of tasks to be labeled" }, { "code": null, "e": 6710, "s": 6634, "text": "Prediction Service. Instantly create and deploy REST API prediction service" }, { "code": null, "e": 6793, "s": 6710, "text": "To learn more about getting up and running, check out the example from the README." }, { "code": null, "e": 6815, "s": 6793, "text": "Comparing predictions" }, { "code": null, "e": 6996, "s": 6815, "text": "You can load multiple predictions from different models architectures or versions into the interface, visually verify, find discrepancies, and edit what your models are predicting:" }, { "code": null, "e": 7251, "s": 6996, "text": "We’ve covered just a handful of use cases that we have found exciting ourselves and where we’ve seen Label Studio can provide an advantage over existing solutions, but while we’re working on the next article, here are a few more ideas for you to explore:" }, { "code": null, "e": 7286, "s": 7251, "text": "Monitoring model prediction errors" }, { "code": null, "e": 7620, "s": 7286, "text": "Can you always trust your model predictions? How do you understand when would be a good time to retrain and redeploy a model? You can integrate Label Studio into the model monitoring pipeline. And as in the example above, you can show multiple versions of your model predictions and check if the prediction quality has not decreased." }, { "code": null, "e": 7660, "s": 7620, "text": "Human in the (loop) prediction pipeline" }, { "code": null, "e": 7972, "s": 7660, "text": "For the applications where predictive quality is mission-critical, Label Studio can be used to adjust the model prediction when the model is uncertain about it or acts as a low-precision detector. In these scenarios, the model prediction is first sent to the annotator, and the annotator can manually adjust it." }, { "code": null, "e": 8009, "s": 7972, "text": "Collect results from multiple people" }, { "code": null, "e": 8341, "s": 8009, "text": "Label Studio supports multiple results per task (called completions), which can be very valuable if you need to distribute the same task to numerous annotators and then verify or combine their results. It works similarly to the model prediction visualization with a difference that completion can be set as ground truth and edited." 
}, { "code": null, "e": 8370, "s": 8341, "text": "Incremental dataset labeling" }, { "code": null, "e": 8764, "s": 8370, "text": "After deploying the model into production or while developing it, you may realize that labeling for one more attribute can enhance the model results. That might be a hypothesis you want to test or a predefined strategy — start with a small number of attributes and add more with time. You can use Label Studio for incremental labeling: modify the labeling config and add new classes on the go." }, { "code": null, "e": 8904, "s": 8764, "text": "That is just a start. We’re planning to cover more and more data types and implement more components covering different labeling scenarios." }, { "code": null, "e": 8940, "s": 8904, "text": "Visit the main repository on GitHub" }, { "code": null, "e": 9040, "s": 8940, "text": "Explore the templates we’ve created for some popular labeling cases https://labelstud.io/templates/" }, { "code": null, "e": 9082, "s": 9040, "text": "Check the tags https://labelstud.io/tags/" }, { "code": null, "e": 9189, "s": 9082, "text": "Try out running a demo where you can create your annotation playground persistent within a browser session" } ]
SQL Tryit Editor v1.6
SELECT * FROM Customers
WHERE City IN ('Paris','London');

Edit the SQL statement, and click "Run SQL" to see the result.

This SQL statement is not supported in the WebSQL database. The example still works, because it uses a modified version of SQL.

Your browser does not support WebSQL. You are now using a light version of the Try-SQL Editor, with a read-only database. If you switch to a browser with WebSQL support, you can try any SQL statement and play with the database as much as you like. The database can also be restored at any time.

Our Try-SQL Editor uses WebSQL to demonstrate SQL. A database object is created in your browser, for testing purposes. You can try any SQL statement and play with the database as much as you like. The database can be restored at any time, simply by clicking the "Restore Database" button.

WebSQL stores a database locally, on the user's computer. Each user gets their own database object. WebSQL is supported in Chrome, Safari, Opera, and Edge (79). If you use another browser you will still be able to use our Try-SQL Editor, but a different version, using a server-based ASP application with a read-only Access database, where users are not allowed to make any changes to the data.
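For reference, the IN operator above is just shorthand for a chain of OR comparisons; assuming the same sample Customers table, the following statement returns the same rows:

SELECT * FROM Customers
WHERE City = 'Paris' OR City = 'London';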
[ { "code": null, "e": 24, "s": 0, "text": "SELECT * FROM Customers" }, { "code": null, "e": 58, "s": 24, "text": "WHERE City IN ('Paris','London');" }, { "code": null, "e": 60, "s": 58, "text": "​" }, { "code": null, "e": 123, "s": 60, "text": "Edit the SQL Statement, and click \"Run SQL\" to see the result." }, { "code": null, "e": 183, "s": 123, "text": "This SQL-Statement is not supported in the WebSQL Database." }, { "code": null, "e": 251, "s": 183, "text": "The example still works, because it uses a modified version of SQL." }, { "code": null, "e": 289, "s": 251, "text": "Your browser does not support WebSQL." }, { "code": null, "e": 374, "s": 289, "text": "Your are now using a light-version of the Try-SQL Editor, with a read-only Database." }, { "code": null, "e": 548, "s": 374, "text": "If you switch to a browser with WebSQL support, you can try any SQL statement, and play with the Database as much as you like. The Database can also be restored at any time." }, { "code": null, "e": 599, "s": 548, "text": "Our Try-SQL Editor uses WebSQL to demonstrate SQL." }, { "code": null, "e": 667, "s": 599, "text": "A Database-object is created in your browser, for testing purposes." }, { "code": null, "e": 838, "s": 667, "text": "You can try any SQL statement, and play with the Database as much as you like. The Database can be restored at any time, simply by clicking the \"Restore Database\" button." }, { "code": null, "e": 938, "s": 838, "text": "WebSQL stores a Database locally, on the user's computer. Each user gets their own Database object." }, { "code": null, "e": 998, "s": 938, "text": "WebSQL is supported in Chrome, Safari, Opera, and Edge(79)." } ]
How to get ArrayList<String> to ArrayList<Object> and vice versa in Java?
Instead of the typed parameter in generics (T), you can also use "?", which represents an unknown type. These are known as wild cards. You can use a wild card as the type of a parameter, a field, or a local variable. Using wild cards, you can convert an ArrayList<Object> to an ArrayList<String> as −

ArrayList<String> stringList = (ArrayList<String>)(ArrayList<?>)(list);

import java.util.ArrayList;
import java.util.Iterator;

public class ArrayListExample {
   public static void main(String args[]) {
      // Instantiating an ArrayList object
      ArrayList<Object> list = new ArrayList<Object>();
      // Populating the ArrayList
      list.add("apples");
      list.add("mangoes");
      list.add("oranges");
      // Converting the ArrayList of Object type into String type
      ArrayList<String> stringList = (ArrayList<String>)(ArrayList<?>)(list);
      // Listing the contents of the obtained list
      Iterator<String> it = stringList.iterator();
      while(it.hasNext()) {
         System.out.println(it.next());
      }
   }
}

apples
mangoes
oranges

To convert an ArrayList<String> to an ArrayList<Object> −

Create/Get an ArrayList object of String type.

Create a new ArrayList object of Object type by passing the above obtained/created object as a parameter to its constructor.

import java.util.ArrayList;
import java.util.Iterator;

public class ArrayListExample {
   public static void main(String args[]) {
      // Instantiating an ArrayList object
      ArrayList<String> stringList = new ArrayList<String>();
      // Populating the ArrayList
      stringList.add("apples");
      stringList.add("mangoes");
      stringList.add("oranges");
      // Converting the ArrayList of String type to Object type
      ArrayList<Object> objectList = new ArrayList<Object>(stringList);
      // Listing the contents of the obtained list
      Iterator<Object> it = objectList.iterator();
      while(it.hasNext()) {
         System.out.println(it.next());
      }
   }
}

apples
mangoes
oranges
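As a side note on why the intermediate ArrayList<?> cast is needed in the first example: a direct cast between the two parameterized types is rejected by the compiler. A minimal sketch (not from the original example) of what fails to compile:

ArrayList<Object> list = new ArrayList<Object>();
// Compile-time error: incompatible types, which is why the ArrayList<?> step is required
// ArrayList<String> stringList = (ArrayList<String>) list;

Even with the wildcard, the compiler still emits an unchecked-cast warning, because the element types cannot be verified at runtime.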
[ { "code": null, "e": 1347, "s": 1062, "text": "Instead of the typed parameter in generics (T) you can also use “?”, representing an unknown type. These are known as wild cards you can use a wild card as − Type of parameter or, a Field or, a Local field. Using wild cards, you can convert ArrayList<String> to ArrayList<Object> as −" }, { "code": null, "e": 1419, "s": 1347, "text": "ArrayList<String> stringList = (ArrayList<String>)(ArrayList<?>)(list);" }, { "code": null, "e": 1430, "s": 1419, "text": " Live Demo" }, { "code": null, "e": 2132, "s": 1430, "text": "import java.util.ArrayList;\nimport java.util.Iterator;\nimport java.util.ListIterator;\npublic class ArrayListExample {\n public static void main(String args[]) {\n //Instantiating an ArrayList object\n ArrayList<Object> list = new ArrayList<Object>();\n //populating the ArrayList\n list.add(\"apples\");\n list.add(\"mangoes\");\n list.add(\"oranges\");\n //Converting the Array list of object type into String type\n ArrayList<String> stringList = (ArrayList<String>)(ArrayList<?>)(list);\n //listing the contenmts of the obtained list\n Iterator<String> it = stringList.iterator();\n while(it.hasNext()) {\n System.out.println(it.next());\n }\n }\n}" }, { "code": null, "e": 2155, "s": 2132, "text": "apples\nmangoes\noranges" }, { "code": null, "e": 2211, "s": 2155, "text": "To convert the ArrayList<Object> to ArrayList<String> −" }, { "code": null, "e": 2258, "s": 2211, "text": "Create/Get an ArrayList object of String type." }, { "code": null, "e": 2305, "s": 2258, "text": "Create/Get an ArrayList object of String type." }, { "code": null, "e": 2430, "s": 2305, "text": "Create a new ArrayList object of Object type by passing the above obtained/created object as a parameter to its constructor." }, { "code": null, "e": 2555, "s": 2430, "text": "Create a new ArrayList object of Object type by passing the above obtained/created object as a parameter to its constructor." }, { "code": null, "e": 2566, "s": 2555, "text": " Live Demo" }, { "code": null, "e": 3283, "s": 2566, "text": "import java.util.ArrayList;\nimport java.util.Iterator;\nimport java.util.ListIterator;\npublic class ArrayListExample {\n public static void main(String args[]) {\n //Instantiating an ArrayList object\n ArrayList<String> stringList = new ArrayList<String>();\n //populating the ArrayList\n stringList.add(\"apples\");\n stringList.add(\"mangoes\");\n stringList.add(\"oranges\");\n //Converting the Array list of String type to object type\n ArrayList<Object> objectList = new ArrayList<Object>(stringList);\n //listing the contents of the obtained list\n Iterator<String> it = stringList.iterator();\n while(it.hasNext()) {\n System.out.println(it.next());\n }\n }\n}" }, { "code": null, "e": 3306, "s": 3283, "text": "apples\nmangoes\noranges" } ]
Java Examples - Summation of Numbers
How to print the summation of numbers?

The following example demonstrates how to add the first n natural numbers by using the concept of a stack.

import java.io.IOException;

public class AdditionStack {
   static int num;
   static int ans;
   static Stack theStack;

   public static void main(String[] args) throws IOException {
      num = 50;
      stackAddition();
      System.out.println("Sum = " + ans);
   }
   public static void stackAddition() {
      theStack = new Stack(10000);
      ans = 0;
      while (num > 0) {
         theStack.push(num);
         --num;
      }
      while (!theStack.isEmpty()) {
         int newN = theStack.pop();
         ans += newN;
      }
   }
}

class Stack {
   private int maxSize;
   private int[] data;
   private int top;

   public Stack(int s) {
      maxSize = s;
      data = new int[maxSize];
      top = -1;
   }
   public void push(int p) {
      data[++top] = p;
   }
   public int pop() {
      return data[top--];
   }
   public int peek() {
      return data[top];
   }
   public boolean isEmpty() {
      return (top == -1);
   }
}

The above code sample will produce the following result.

Sum = 1275

The following is another example of summing the first n natural numbers.

public class Demo {
   public static void main(String[] args) {
      int sum = 0;
      int n = 50;
      for (int i = 1; i <= n; i++) {
         sum = sum + i;
      }
      System.out.println("The Sum Of " + n + " is " + sum);
   }
}

The above code sample will produce the following result.

The Sum Of 50 is 1275
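Since the sum of the first n natural numbers has the closed form n * (n + 1) / 2, the same result can also be computed without any loop or stack; a minimal sketch:

public class SumFormula {
   public static void main(String[] args) {
      int n = 50;
      // Gauss' formula: 50 * 51 / 2 = 1275
      int sum = n * (n + 1) / 2;
      System.out.println("Sum = " + sum);
   }
}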
[ { "code": null, "e": 2104, "s": 2068, "text": "How to print summation of numbers ?" }, { "code": null, "e": 2201, "s": 2104, "text": "Following example demonstrates how to add first n natural numbers by using the concept of stack." }, { "code": null, "e": 3165, "s": 2201, "text": "import java.io.IOException;\n\npublic class AdditionStack {\n static int num;\n static int ans;\n static Stack theStack;\n public static void main(String[] args)\n \n throws IOException {\n num = 50;\n stackAddition();\n System.out.println(\"Sum = \" + ans);\n }\n public static void stackAddition() {\n theStack = new Stack(10000); \n ans = 0; \n while (num > 0) {\n theStack.push(num); \n --num; \n }\n while (!theStack.isEmpty()) {\n int newN = theStack.pop(); \n ans += newN; \n }\n }\n}\nclass Stack {\n private int maxSize; \n private int[] data;\n private int top; \n public Stack(int s) {\n maxSize = s;\n data = new int[maxSize];\n top = -1;\n }\n public void push(int p) {\n data[++top] = p;\n }\n public int pop() {\n return data[top--];\n }\n public int peek() {\n return data[top];\n }\n public boolean isEmpty() {\n return (top == -1);\n }\n}" }, { "code": null, "e": 3222, "s": 3165, "text": "The above code sample will produce the following result." }, { "code": null, "e": 3234, "s": 3222, "text": "Sum = 1275\n" }, { "code": null, "e": 3297, "s": 3234, "text": "The following is an another example of first n natural numbers" }, { "code": null, "e": 3533, "s": 3297, "text": "public class Demo {\n public static void main(String[] args) {\n int sum = 0;\n int n = 50;\n for (int i = 1; i <= n; i++) {\n sum = sum + i;\n } \n System.out.println(\"The Sum Of \" + n + \"is\" + sum);\n }\n}" }, { "code": null, "e": 3590, "s": 3533, "text": "The above code sample will produce the following result." }, { "code": null, "e": 3613, "s": 3590, "text": "The Sum Of 50 is 1275\n" }, { "code": null, "e": 3620, "s": 3613, "text": " Print" }, { "code": null, "e": 3631, "s": 3620, "text": " Add Notes" } ]
Firebase - Facebook Authentication
In this chapter, we will authenticate users with Firebase Facebook authentication.

We need to open the Firebase dashboard and click Auth in the side menu. Next, we need to choose SIGN-IN-METHOD in the tab bar. We will enable Facebook auth and leave this open, since we need to add the App ID and App Secret when we finish step 2.

To enable Facebook authentication, we need to create the Facebook app. Click on this link to start. Once the app is created, we need to copy the App ID and App Secret to the Firebase page, which we left open in step 1. We also need to copy the OAuth Redirect URI from this window into the Facebook app. You can find + Add Product inside the side menu of the Facebook app dashboard. Choose Facebook Login and it will appear in the side menu. You will find the input field Valid OAuth redirect URIs where you need to copy the OAuth Redirect URI from Firebase.

Copy the following code at the beginning of the body tag in index.html. Be sure to replace 'APP_ID' with your app id from the Facebook dashboard. Let us consider the following example.

<script>
   window.fbAsyncInit = function() {
      FB.init ({
         appId : 'APP_ID',
         xfbml : true,
         version : 'v2.6'
      });
   };

   (function(d, s, id) {
      var js, fjs = d.getElementsByTagName(s)[0];
      if (d.getElementById(id)) {return;}
      js = d.createElement(s); js.id = id;
      js.src = "//connect.facebook.net/en_US/sdk.js";
      fjs.parentNode.insertBefore(js, fjs);
   } (document, 'script', 'facebook-jssdk'));
</script>

We set everything up in the first three steps; now we can create two buttons for login and logout.

<button onclick = "facebookSignin()">Facebook Signin</button>
<button onclick = "facebookSignout()">Facebook Signout</button>

This is the last step. Open index.js and copy the following code.

var provider = new firebase.auth.FacebookAuthProvider();

function facebookSignin() {
   firebase.auth().signInWithPopup(provider)
   .then(function(result) {
      var token = result.credential.accessToken;
      var user = result.user;

      console.log(token)
      console.log(user)
   }).catch(function(error) {
      console.log(error.code);
      console.log(error.message);
   });
}

function facebookSignout() {
   firebase.auth().signOut()
   .then(function() {
      console.log('Signout successful!')
   }, function(error) {
      console.log('Signout failed')
   });
}
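A common follow-up, not covered in this chapter, is reacting to the resulting sign-in state; assuming the same namespaced Firebase JavaScript SDK used above, a minimal sketch could look like this:

// Fires on page load and whenever the user signs in or out
firebase.auth().onAuthStateChanged(function(user) {
   if (user) {
      console.log('Signed in as ' + user.displayName);
   } else {
      console.log('Signed out');
   }
});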
[ { "code": null, "e": 2249, "s": 2166, "text": "In this chapter, we will authenticate users with Firebase Facebook authentication." }, { "code": null, "e": 2479, "s": 2249, "text": "We need to open Firebase dashboard and click Auth in side menu. Next, we need to choose SIGN-IN-METHOD in tab bar. We will enable Facebook auth and leave this open since we need to add App ID and App Secret when we finish step 2." }, { "code": null, "e": 2849, "s": 2479, "text": "To enable Facebook authentication, we need to create the Facebook app. Click on this link to start. Once the app is created, we need to copy App ID and App Secret to the Firebase page, which we left open in step 1. We also need to copy OAuth Redirect URI from this window into the Facebook app. You can find + Add Product inside side menu of the Facebook app dashboard." }, { "code": null, "e": 3021, "s": 2849, "text": "Choose Facebook Login and it will appear in the side menu. You will find input field Valid OAuth redirect URIs where you need to copy the OAuth Redirect URI from Firebase." }, { "code": null, "e": 3165, "s": 3021, "text": "Copy the following code at the beginning of the body tag in index.html. Be sure to replace the 'APP_ID' to your app id from Facebook dashboard." }, { "code": null, "e": 3204, "s": 3165, "text": "Let us consider the following example." }, { "code": null, "e": 3689, "s": 3204, "text": "<script>\n window.fbAsyncInit = function() {\n FB.init ({\n appId : 'APP_ID',\n xfbml : true,\n version : 'v2.6'\n });\n };\n\n (function(d, s, id) {\n var js, fjs = d.getElementsByTagName(s)[0];\n if (d.getElementById(id)) {return;}\n js = d.createElement(s); js.id = id;\n js.src = \"//connect.facebook.net/en_US/sdk.js\";\n fjs.parentNode.insertBefore(js, fjs);\n } (document, 'script', 'facebook-jssdk'));\n\t\n</script>" }, { "code": null, "e": 3781, "s": 3689, "text": "We set everything in first three steps, now we can create two buttons for login and logout." }, { "code": null, "e": 3907, "s": 3781, "text": "<button onclick = \"facebookSignin()\">Facebook Signin</button>\n<button onclick = \"facebookSignout()\">Facebook Signout</button>" }, { "code": null, "e": 3973, "s": 3907, "text": "This is the last step. Open index.js and copy the following code." 
}, { "code": null, "e": 4566, "s": 3973, "text": "var provider = new firebase.auth.FacebookAuthProvider();\n\nfunction facebookSignin() {\n firebase.auth().signInWithPopup(provider)\n \n .then(function(result) {\n var token = result.credential.accessToken;\n var user = result.user;\n\t\t\n console.log(token)\n console.log(user)\n }).catch(function(error) {\n console.log(error.code);\n console.log(error.message);\n });\n}\n\nfunction facebookSignout() {\n firebase.auth().signOut()\n \n .then(function() {\n console.log('Signout successful!')\n }, function(error) {\n console.log('Signout failed')\n });\n}" }, { "code": null, "e": 4599, "s": 4566, "text": "\n 60 Lectures \n 5 hours \n" }, { "code": null, "e": 4616, "s": 4599, "text": " University Code" }, { "code": null, "e": 4651, "s": 4616, "text": "\n 28 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4662, "s": 4651, "text": " Appeteria" }, { "code": null, "e": 4698, "s": 4662, "text": "\n 85 Lectures \n 14.5 hours \n" }, { "code": null, "e": 4709, "s": 4698, "text": " Appeteria" }, { "code": null, "e": 4744, "s": 4709, "text": "\n 46 Lectures \n 2.5 hours \n" }, { "code": null, "e": 4761, "s": 4744, "text": " Gautham Vijayan" }, { "code": null, "e": 4796, "s": 4761, "text": "\n 13 Lectures \n 1.5 hours \n" }, { "code": null, "e": 4811, "s": 4796, "text": " Nishant Kumar" }, { "code": null, "e": 4847, "s": 4811, "text": "\n 85 Lectures \n 16.5 hours \n" }, { "code": null, "e": 4862, "s": 4847, "text": " Rahul Agarwal" }, { "code": null, "e": 4869, "s": 4862, "text": " Print" }, { "code": null, "e": 4880, "s": 4869, "text": " Add Notes" } ]
Traversing Diagonally in a matrix in JavaScript
We are required to write a JavaScript function that takes in a square matrix (an array of arrays having the same number of rows and columns). The function should traverse diagonally through that array of arrays and prepare a new array of the elements in the order in which they were encountered while traversing.

For example, if the input to the function is −

const arr = [
   [1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]
];

Then the output should be −

const output = [1, 2, 4, 7, 5, 3, 6, 8, 9];

The code for this will be −

const arr = [
   [1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]
];
const findDiagonalOrder = (arr = []) => {
   if(!arr.length){
      return [];
   };
   let ind = 0;
   let colBegin = 0, rowBegin = 0;
   let rowMax = arr.length, colMax = arr[0].length;
   const res = [], stack = [];
   while(rowBegin < rowMax || colBegin < colMax) {
      for(let row = rowBegin, col = colBegin; row < rowMax && col >= 0; row++, col--){
         if(ind % 2 === 0){
            stack.push(arr[row][col]);
         }else{
            res.push(arr[row][col]);
         };
      };
      ind++;
      while(stack.length){
         res.push(stack.pop());
      };
      colBegin++;
      if(colBegin > colMax - 1 && rowBegin < rowMax){
         colBegin = colMax - 1;
         rowBegin++;
      }
   };
   return res;
};
console.log(findDiagonalOrder(arr));

The steps we took are −

Traversed in one direction, keeping track of the starting point.

If the diagonal index is even, we push elements onto a stack and pop them once we reach the end of the diagonal, adding the popped elements to our output array; this reverses every other diagonal.

We keep incrementing the index as we move to the next diagonal.

We increment the column begin index until it reaches the last column; from that point on it stays fixed at the last column index, and we increment the row begin index instead.

And the output in the console will be −

[ 1, 2, 4, 7, 5, 3, 6, 8, 9 ]
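Although the prompt assumes a square matrix, nothing in the function above actually requires one; as a quick usage check (my own example, not from the original), a 3 × 2 input also traverses correctly:

console.log(findDiagonalOrder([
   [1, 2],
   [3, 4],
   [5, 6]
])); // [ 1, 2, 3, 5, 4, 6 ]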
[ { "code": null, "e": 1362, "s": 1062, "text": "We are required to write a JavaScript function that takes in a square matrix (an array of arrays having the same number of rows and columns). The function should traverse diagonally through that array of array and prepare a new array of elements placed in that order it encountered while traversing." }, { "code": null, "e": 1409, "s": 1362, "text": "For example, if the input to the function is −" }, { "code": null, "e": 1467, "s": 1409, "text": "const arr = [\n [1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]\n];" }, { "code": null, "e": 1495, "s": 1467, "text": "Then the output should be −" }, { "code": null, "e": 1539, "s": 1495, "text": "const output = [1, 2, 4, 7, 5, 3, 6, 8, 9];" }, { "code": null, "e": 1550, "s": 1539, "text": " Live Demo" }, { "code": null, "e": 1578, "s": 1550, "text": "The code for this will be −" }, { "code": null, "e": 2402, "s": 1578, "text": "const arr = [\n [1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]\n];\nconst findDiagonalOrder = (arr = []) => {\n if(!arr.length){\n return [];\n };\n let ind = 0;\n let colBegin = 0, rowBegin = 0;\n let rowMax = arr.length, colMax = arr[0].length;\n const res = [], stack = [];\n while(rowBegin< rowMax || colBegin<colMax) {\n for(let row = rowBegin, col = colBegin; row < rowMax && col >=0 ;\n row++,col--){\n if(ind%2 === 0){\n stack.push((arr[row][col]));\n }else{\n res.push(arr[row][col]);\n };\n };\n ind++;\n while(stack.length){\n res.push(stack.pop());\n };\n colBegin++\n if(colBegin> colMax-1 && rowBegin < rowMax){\n colBegin = colMax-1\n rowBegin++\n }\n };\n return res\n};\nconsole.log(findDiagonalOrder(arr));" }, { "code": null, "e": 2426, "s": 2402, "text": "The steps we took are −" }, { "code": null, "e": 2486, "s": 2426, "text": "Traversed in one direction keeping track of starting point." }, { "code": null, "e": 2546, "s": 2486, "text": "Traversed in one direction keeping track of starting point." }, { "code": null, "e": 2662, "s": 2546, "text": "If index is even, we'll push to a stack and pop once it reaches end of diagonal, adding popped to our output array." }, { "code": null, "e": 2778, "s": 2662, "text": "If index is even, we'll push to a stack and pop once it reaches end of diagonal, adding popped to our output array." }, { "code": null, "e": 2834, "s": 2778, "text": "We keep incrementing index as we move to next diagonal." }, { "code": null, "e": 2890, "s": 2834, "text": "We keep incrementing index as we move to next diagonal." }, { "code": null, "e": 3073, "s": 2890, "text": "We increment column begin index until it reaches end, as it for the next iterations it will be stopped at last index and we'll be incrementing row begin index moving from this point." }, { "code": null, "e": 3256, "s": 3073, "text": "We increment column begin index until it reaches end, as it for the next iterations it will be stopped at last index and we'll be incrementing row begin index moving from this point." }, { "code": null, "e": 3296, "s": 3256, "text": "And the output in the console will be −" }, { "code": null, "e": 3332, "s": 3296, "text": "[\n 1, 2, 4, 7, 5,\n 3, 6, 8, 9\n]" } ]
Python - tensorflow.executing_eagerly() - GeeksforGeeks
10 Jul, 2020

TensorFlow is an open-source Python library designed by Google to develop Machine Learning models and deep learning neural networks.

executing_eagerly() is used to check if eager execution is enabled or disabled in the current thread. By default, eager execution is enabled, so in most cases it will return true. It will return false in the following cases:

Executing inside tensorflow.function, unless tf.init_scope is entered or tf.config.experimental_run_functions_eagerly(True) was previously called.

Executing inside a transformation function for tensorflow.dataset.

tensorflow.compat.v1.disable_eager_execution() has been called.

Syntax: tensorflow.executing_eagerly()

Parameters: This doesn't accept any parameters.

Returns: It returns true if eager execution is enabled, otherwise it returns false.

Example 1:

# Importing the library
import tensorflow as tf

# Checking eager execution
res = tf.executing_eagerly()

# Printing the result
print('res: ', res)

Output:

res: True

Example 2: This example checks eager execution for tensorflow.function with and without init_scope.

# Importing the library
import tensorflow as tf

@tf.function
def gfg():
    with tf.init_scope():
        # Checking eager execution inside init_scope
        res = tf.executing_eagerly()
        print("res 1:", res)

    # Checking eager execution outside init_scope
    res = tf.executing_eagerly()
    print("res 2:", res)

gfg()

Output:

res 1: True
res 2: False
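The third case above is easy to verify directly; a minimal sketch (assuming TensorFlow 2.x, where the compat.v1 module provides the switch):

# Importing the library
import tensorflow as tf

# Disabling eager execution for the current program
tf.compat.v1.disable_eager_execution()

# Now reports False
print(tf.executing_eagerly())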
[ { "code": null, "e": 24145, "s": 24117, "text": "\n10 Jul, 2020" }, { "code": null, "e": 24277, "s": 24145, "text": "TensorFlow is open-source Python library designed by Google to develop Machine Learning models and deep learning neural networks. " }, { "code": null, "e": 24491, "s": 24277, "text": "executing_eagerly() is used check if eager execution is enabled or disabled in current thread. By default eager execution is enabled so in most cases it will return true. This will return false in following cases:" }, { "code": null, "e": 24636, "s": 24491, "text": "If it is executing inside tensorflow.function and tf.init_scope or tf.config.experimental_run_functions_eagerly(True) is not called previously." }, { "code": null, "e": 24703, "s": 24636, "text": "Executing inside a transformation function for tensorflow.dataset." }, { "code": null, "e": 24761, "s": 24703, "text": "tensorflow.compat.v1.disable_eager_execution() is called." }, { "code": null, "e": 24800, "s": 24761, "text": "Syntax: tensorflow.executing_eagerly()" }, { "code": null, "e": 24848, "s": 24800, "text": "Parameters: This doesn’t accept any parameters." }, { "code": null, "e": 24935, "s": 24848, "text": "Returns: It returns true is eager execution is enabled otherwise it will return false." }, { "code": null, "e": 24946, "s": 24935, "text": "Example 1:" }, { "code": null, "e": 24954, "s": 24946, "text": "Python3" }, { "code": "# Importing the libraryimport tensorflow as tf # Checking eager executionres = tf.executing_eagerly() # Printing the resultprint('res: ', res)", "e": 25099, "s": 24954, "text": null }, { "code": null, "e": 25107, "s": 25099, "text": "Output:" }, { "code": null, "e": 25119, "s": 25107, "text": "res: True\n" }, { "code": null, "e": 25219, "s": 25119, "text": "Example 2: This example checks eager execution for tensorflow.function with and without init_scope." }, { "code": null, "e": 25227, "s": 25219, "text": "Python3" }, { "code": "# Importing the libraryimport tensorflow as tf @tf.functiondef gfg(): with tf.init_scope(): # Checking eager execution inside init_scope res = tf.executing_eagerly() print(\"res 1:\", res) # Checking eager execution outside init_scope res = tf.executing_eagerly() print(\"res 2:\", res)gfg()", "e": 25531, "s": 25227, "text": null }, { "code": null, "e": 25539, "s": 25531, "text": "Output:" }, { "code": null, "e": 25567, "s": 25539, "text": "res 1: True\nres 2: False\n\n\n" }, { "code": null, "e": 25585, "s": 25567, "text": "Python-Tensorflow" }, { "code": null, "e": 25596, "s": 25585, "text": "Tensorflow" }, { "code": null, "e": 25603, "s": 25596, "text": "Python" }, { "code": null, "e": 25701, "s": 25603, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 25710, "s": 25701, "text": "Comments" }, { "code": null, "e": 25723, "s": 25710, "text": "Old Comments" }, { "code": null, "e": 25741, "s": 25723, "text": "Python Dictionary" }, { "code": null, "e": 25776, "s": 25741, "text": "Read a file line by line in Python" }, { "code": null, "e": 25798, "s": 25776, "text": "Enumerate() in Python" }, { "code": null, "e": 25830, "s": 25798, "text": "How to Install PIP on Windows ?" 
}, { "code": null, "e": 25860, "s": 25830, "text": "Iterate over a list in Python" }, { "code": null, "e": 25902, "s": 25860, "text": "Different ways to create Pandas Dataframe" }, { "code": null, "e": 25928, "s": 25902, "text": "Python String | replace()" }, { "code": null, "e": 25971, "s": 25928, "text": "Python program to convert a list to string" }, { "code": null, "e": 26015, "s": 25971, "text": "Reading and Writing to text files in Python" } ]
C library function - rename()
The C library function int rename(const char *old_filename, const char *new_filename) causes the filename referred to by old_filename to be changed to new_filename.

Following is the declaration for the rename() function.

int rename(const char *old_filename, const char *new_filename)

old_filename − This is the C string containing the name of the file to be renamed and/or moved.

new_filename − This is the C string containing the new name for the file.

On success, zero is returned. On error, -1 is returned, and errno is set appropriately.

The following example shows the usage of the rename() function.

#include <stdio.h>

int main () {
   int ret;
   char oldname[] = "file.txt";
   char newname[] = "newfile.txt";

   ret = rename(oldname, newname);

   if(ret == 0) {
      printf("File renamed successfully");
   } else {
      printf("Error: unable to rename the file");
   }

   return(0);
}

Let us assume we have a text file file.txt, having some content, which we are going to rename using the above program. Let us compile and run the above program to produce the following message; the file will be renamed to newfile.txt.

File renamed successfully
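Since errno is set on error, a natural extension of the example (my own sketch, assuming the file missing.txt does not exist) is to report the reason for a failure with perror():

#include <stdio.h>

int main () {
   // rename() fails here because the source file does not exist
   if (rename("missing.txt", "other.txt") != 0) {
      perror("rename"); /* e.g. "rename: No such file or directory" */
   }
   return 0;
}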
[ { "code": null, "e": 2172, "s": 2007, "text": "The C library function int rename(const char *old_filename, const char *new_filename) causes the filename referred to by old_filename to be changed to new_filename." }, { "code": null, "e": 2224, "s": 2172, "text": "Following is the declaration for rename() function." }, { "code": null, "e": 2287, "s": 2224, "text": "int rename(const char *old_filename, const char *new_filename)" }, { "code": null, "e": 2383, "s": 2287, "text": "old_filename − This is the C string containing the name of the file to be renamed and/or moved." }, { "code": null, "e": 2479, "s": 2383, "text": "old_filename − This is the C string containing the name of the file to be renamed and/or moved." }, { "code": null, "e": 2553, "s": 2479, "text": "new_filename − This is the C string containing the new name for the file." }, { "code": null, "e": 2627, "s": 2553, "text": "new_filename − This is the C string containing the new name for the file." }, { "code": null, "e": 2715, "s": 2627, "text": "On success, zero is returned. On error, -1 is returned, and errno is set appropriately." }, { "code": null, "e": 2775, "s": 2715, "text": "The following example shows the usage of rename() function." }, { "code": null, "e": 3077, "s": 2775, "text": "#include <stdio.h>\n\nint main () {\n int ret;\n char oldname[] = \"file.txt\";\n char newname[] = \"newfile.txt\";\n \n ret = rename(oldname, newname);\n\t\n if(ret == 0) {\n printf(\"File renamed successfully\");\n } else {\n printf(\"Error: unable to rename the file\");\n }\n \n return(0);\n}" }, { "code": null, "e": 3329, "s": 3077, "text": "Let us assume we have a text file file.txt, having some content. So, we are going to rename this file, using the above program. Let us compile and run the above program to produce the following message and the file will be renamed to newfile.txt file." }, { "code": null, "e": 3356, "s": 3329, "text": "File renamed successfully\n" }, { "code": null, "e": 3389, "s": 3356, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 3404, "s": 3389, "text": " Nishant Malik" }, { "code": null, "e": 3439, "s": 3404, "text": "\n 12 Lectures \n 2.5 hours \n" }, { "code": null, "e": 3454, "s": 3439, "text": " Nishant Malik" }, { "code": null, "e": 3489, "s": 3454, "text": "\n 48 Lectures \n 6.5 hours \n" }, { "code": null, "e": 3503, "s": 3489, "text": " Asif Hussain" }, { "code": null, "e": 3536, "s": 3503, "text": "\n 12 Lectures \n 2 hours \n" }, { "code": null, "e": 3554, "s": 3536, "text": " Richa Maheshwari" }, { "code": null, "e": 3589, "s": 3554, "text": "\n 20 Lectures \n 3.5 hours \n" }, { "code": null, "e": 3608, "s": 3589, "text": " Vandana Annavaram" }, { "code": null, "e": 3641, "s": 3608, "text": "\n 44 Lectures \n 1 hours \n" }, { "code": null, "e": 3653, "s": 3641, "text": " Amit Diwan" }, { "code": null, "e": 3660, "s": 3653, "text": " Print" }, { "code": null, "e": 3671, "s": 3660, "text": " Add Notes" } ]
C++ Algorithm Library - find() Function
The C++ function std::find() finds the first occurrence of an element in a range. It uses operator == for comparison.

Following is the declaration for the std::find() function from the <algorithm> header.

template <class InputIterator, class T>
InputIterator find (InputIterator first, InputIterator last, const T& val);

first − Input iterator to the initial position.

last − Input iterator to the final position.

val − Value to compare the elements against.

If the element is found, it returns an iterator pointing to the first occurrence of the element; otherwise it returns last.

Throws an exception if either an element comparison or an operation on an iterator throws an exception. Please note that invalid parameters cause undefined behavior.

Complexity: linear.

The following example shows the usage of the std::find() function.

#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

int main(void) {
   int val = 5;
   vector<int> v = {1, 2, 3, 4, 5};

   auto result = find(v.begin(), v.end(), val);

   if (result != end(v))
      cout << "Vector contains element " << val << endl;

   val = 15;

   result = find(v.begin(), v.end(), val);

   if (result == end(v))
      cout << "Vector doesn't contain element " << val << endl;

   return 0;
}

Let us compile and run the above program; this will produce the following result −

Vector contains element 5
Vector doesn't contain element 15
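A common follow-up is converting the returned iterator into an index; a minimal sketch (my own addition) using std::distance:

#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>

int main() {
   std::vector<int> v = {1, 2, 3, 4, 5};

   auto it = std::find(v.begin(), v.end(), 4);

   if (it != v.end())
      std::cout << "Found at index " << std::distance(v.begin(), it) << std::endl; // prints 3

   return 0;
}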
[ { "code": null, "e": 2721, "s": 2603, "text": "The C++ function std::algorithm::find() finds the first occurrence of the element. It uses operator = for comparison." }, { "code": null, "e": 2814, "s": 2721, "text": "Following is the declaration for std::algorithm::find() function form std::algorithm header." }, { "code": null, "e": 2931, "s": 2814, "text": "template <class InputIterator, class T>\nInputIterator find (InputIterator first, InputIterator last, const T& val);\n" }, { "code": null, "e": 2979, "s": 2931, "text": "first − Input iterator to the initial position." }, { "code": null, "e": 3027, "s": 2979, "text": "first − Input iterator to the initial position." }, { "code": null, "e": 3072, "s": 3027, "text": "last − Input iterator to the final position." }, { "code": null, "e": 3117, "s": 3072, "text": "last − Input iterator to the final position." }, { "code": null, "e": 3154, "s": 3117, "text": "val − Value to compare the elements." }, { "code": null, "e": 3191, "s": 3154, "text": "val − Value to compare the elements." }, { "code": null, "e": 3303, "s": 3191, "text": "If element found it returns an iterator pointing to the first occurrence of the element otherwise returns last." }, { "code": null, "e": 3398, "s": 3303, "text": "Throws exception if either element comparison or an operation on an iterator throws exception." }, { "code": null, "e": 3460, "s": 3398, "text": "Please note that invalid parameters cause undefined behavior." }, { "code": null, "e": 3468, "s": 3460, "text": "Linear." }, { "code": null, "e": 3542, "s": 3468, "text": "The following example shows the usage of std::algorithm::find() function." }, { "code": null, "e": 3989, "s": 3542, "text": "#include <iostream>\n#include <vector>\n#include <algorithm>\n\nusing namespace std;\n\nint main(void) {\n int val = 5;\n vector<int> v = {1, 2, 3, 4, 5};\n\n auto result = find(v.begin(), v.end(), val);\n\n if (result != end(v))\n cout << \"Vector contains element \" << val << endl;\n\n val = 15;\n\n result = find(v.begin(), v.end(), val);\n\n if (result == end(v))\n cout << \"Vector doesn't contain element \" << val << endl;\n\n return 0;\n}" }, { "code": null, "e": 4072, "s": 3989, "text": "Let us compile and run the above program, this will produce the following result −" }, { "code": null, "e": 4133, "s": 4072, "text": "Vector contains element 5\nVector doesn't contain element 15\n" }, { "code": null, "e": 4140, "s": 4133, "text": " Print" }, { "code": null, "e": 4151, "s": 4140, "text": " Add Notes" } ]
OpenCV - Box Filter
The Box Filter operation is similar to the averaging blur operation; it convolves the image with a box-shaped kernel. Here, you can choose whether the box should be normalized or not.

You can perform this operation on an image using the boxFilter() method of the imgproc class. Following is the syntax of this method −

boxFilter(src, dst, ddepth, ksize, anchor, normalize, borderType)

This method accepts the following parameters −

src − A Mat object representing the source (input image) for this operation.

dst − A Mat object representing the destination (output image) for this operation.

ddepth − A variable of the type integer representing the depth of the output image (-1 keeps the depth of the source image).

ksize − A Size object representing the size of the blurring kernel.

anchor − A Point object representing the anchor point; Point(-1, -1) means the kernel center.

normalize − A variable of the type boolean specifying whether the kernel should be normalized.

borderType − An integer object representing the type of the border used.

The following program demonstrates how to perform the Box Filter operation on an image.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class BoxFilterTest {
   public static void main( String[] args ) {
      // Loading the OpenCV core library
      System.loadLibrary( Core.NATIVE_LIBRARY_NAME );

      // Reading the image from the file and storing it in to a Matrix object
      String file = "E:/OpenCV/chap11/filter_input.jpg";
      Mat src = Imgcodecs.imread(file);

      // Creating an empty matrix to store the result
      Mat dst = new Mat();

      // Creating the objects for Size and Point
      Size size = new Size(45, 45);
      Point point = new Point(-1, -1);

      // Applying the Box Filter effect on the image (ddepth -1 keeps the source depth)
      Imgproc.boxFilter(src, dst, -1, size, point, true, Core.BORDER_DEFAULT);

      // Writing the image
      Imgcodecs.imwrite("E:/OpenCV/chap11/boxfilter.jpg", dst);

      System.out.println("Image Processed");
   }
}

Assume that following is the input image filter_input.jpg specified in the above program.

On executing the program, you will get the following output −

Image Processed

If you open the specified path, you can observe the output image as follows −
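When normalize is true, the box filter is a plain averaging filter (each kernel coefficient is 1 / (ksize.width * ksize.height)), so the call above behaves like OpenCV's blur() shorthand; a minimal sketch:

// Equivalent to the normalized boxFilter() call above
Imgproc.blur(src, dst, new Size(45, 45));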
[ { "code": null, "e": 3181, "s": 3004, "text": "The Box Filter operation is similar to the averaging blur operation; it applies a bilateral image to a filter. Here, you can choose whether the box should be normalized or not." }, { "code": null, "e": 3316, "s": 3181, "text": "You can perform this operation on an image using the boxFilter() method of the imgproc class. Following is the syntax of this method −" }, { "code": null, "e": 3383, "s": 3316, "text": "boxFilter(src, dst, ddepth, ksize, anchor, normalize, borderType)\n" }, { "code": null, "e": 3430, "s": 3383, "text": "This method accepts the following parameters −" }, { "code": null, "e": 3507, "s": 3430, "text": "src − A Mat object representing the source (input image) for this operation." }, { "code": null, "e": 3584, "s": 3507, "text": "src − A Mat object representing the source (input image) for this operation." }, { "code": null, "e": 3667, "s": 3584, "text": "dst − A Mat object representing the destination (output image) for this operation." }, { "code": null, "e": 3750, "s": 3667, "text": "dst − A Mat object representing the destination (output image) for this operation." }, { "code": null, "e": 3834, "s": 3750, "text": "ddepth − A variable of the type integer representing the depth of the output image." }, { "code": null, "e": 3918, "s": 3834, "text": "ddepth − A variable of the type integer representing the depth of the output image." }, { "code": null, "e": 3986, "s": 3918, "text": "ksize − A Size object representing the size of the blurring kernel." }, { "code": null, "e": 4054, "s": 3986, "text": "ksize − A Size object representing the size of the blurring kernel." }, { "code": null, "e": 4125, "s": 4054, "text": "anchor − A variable of the type integer representing the anchor point." }, { "code": null, "e": 4196, "s": 4125, "text": "anchor − A variable of the type integer representing the anchor point." }, { "code": null, "e": 4291, "s": 4196, "text": "Normalize − A variable of the type boolean specifying weather the kernel should be normalized." }, { "code": null, "e": 4386, "s": 4291, "text": "Normalize − A variable of the type boolean specifying weather the kernel should be normalized." }, { "code": null, "e": 4459, "s": 4386, "text": "borderType − An integer object representing the type of the border used." }, { "code": null, "e": 4532, "s": 4459, "text": "borderType − An integer object representing the type of the border used." }, { "code": null, "e": 4620, "s": 4532, "text": "The following program demonstrates how to perform the Box Filter operation on an image." 
}, { "code": null, "e": 5633, "s": 4620, "text": "import org.opencv.core.Core;\nimport org.opencv.core.Mat;\nimport org.opencv.core.Point;\nimport org.opencv.core.Size;\nimport org.opencv.imgcodecs.Imgcodecs;\nimport org.opencv.imgproc.Imgproc;\n\npublic class BoxFilterTest {\n public static void main( String[] args ) {\n // Loading the OpenCV core library\n System.loadLibrary( Core.NATIVE_LIBRARY_NAME );\n\n // Reading the Image from the file and storing it in to a Matrix object\n String file = \"E:/OpenCV/chap11/filter_input.jpg\";\n Mat src = Imgcodecs.imread(file);\n\n // Creating an empty matrix to store the result\n Mat dst = new Mat();\n\n // Creating the objects for Size and Point\n Size size = new Size(45, 45);\n Point point = Point(-1, -1);\n\n // Applying Box Filter effect on the Image\n Imgproc.boxFilter(src, dst, 50, size, point, true, Core.BORDER_DEFAULT);\n\n // Writing the image\n Imgcodecs.imwrite(\"E:/OpenCV/chap11/boxfilterjpg\", dst);\n\n System.out.println(\"Image Processed\");\n }\n}" }, { "code": null, "e": 5723, "s": 5633, "text": "Assume that following is the input image filter_input.jpg specified in the above program." }, { "code": null, "e": 5785, "s": 5723, "text": "On executing the program, you will get the following output −" }, { "code": null, "e": 5802, "s": 5785, "text": "Image Processed\n" }, { "code": null, "e": 5880, "s": 5802, "text": "If you open the specified path, you can observe the output image as follows −" }, { "code": null, "e": 5913, "s": 5880, "text": "\n 70 Lectures \n 9 hours \n" }, { "code": null, "e": 5930, "s": 5913, "text": " Abhilash Nelson" }, { "code": null, "e": 5963, "s": 5930, "text": "\n 41 Lectures \n 4 hours \n" }, { "code": null, "e": 5980, "s": 5963, "text": " Abhilash Nelson" }, { "code": null, "e": 6013, "s": 5980, "text": "\n 20 Lectures \n 2 hours \n" }, { "code": null, "e": 6027, "s": 6013, "text": " Spotle Learn" }, { "code": null, "e": 6059, "s": 6027, "text": "\n 12 Lectures \n 46 mins\n" }, { "code": null, "e": 6076, "s": 6059, "text": " Srikanth Guskra" }, { "code": null, "e": 6109, "s": 6076, "text": "\n 19 Lectures \n 2 hours \n" }, { "code": null, "e": 6124, "s": 6109, "text": " Haithem Gasmi" }, { "code": null, "e": 6159, "s": 6124, "text": "\n 67 Lectures \n 6.5 hours \n" }, { "code": null, "e": 6177, "s": 6159, "text": " Gianluca Mottola" }, { "code": null, "e": 6184, "s": 6177, "text": " Print" }, { "code": null, "e": 6195, "s": 6184, "text": " Add Notes" } ]
Laravel - Encryption
Encryption is a process of converting a plain text to a message using some algorithms such that any third user cannot read the information. This is helpful for transmitting sensitive information because there are fewer chances for an intruder to target the information transferred.

Encryption is performed using a process called Cryptography. The text which is to be encrypted is termed as Plain Text and the text or the message obtained after the encryption is called Cipher Text. The process of converting cipher text to plain text is called Decryption.

Laravel uses the AES-256 and AES-128 encrypters, which use OpenSSL for encryption. All the values included in Laravel are signed using a Message Authentication Code (MAC) so that the underlying value cannot be tampered with once it is encrypted.

The command used to generate the key in Laravel is shown below −

php artisan key:generate

Please note that this command uses the PHP secure random bytes generator. The command given above generates the key which can then be used in your web application.

The values for encryption are properly aligned in the config/app.php file, which includes two parameters for encryption, namely key and cipher. If the value using this key is not properly aligned, all the values encrypted in Laravel will be insecure.

Encryption of a value can be done by using the encrypt helper in the controllers of a Laravel class. These values are encrypted using OpenSSL and the AES-256 cipher. All the encrypted values are signed with a Message Authentication Code (MAC) to check for any modifications of the encrypted string.

The code shown below is mentioned in a controller and is used to store a secret or a sensitive message.

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Controllers\Controller;
use App\User;

class DemoController extends Controller {
   /**
    * Store a secret message for the user.
    *
    * @param Request $request
    * @param int $id
    * @return Response
    */
   public function storeSecret(Request $request, $id) {
      $user = User::findOrFail($id);
      $user->fill([
         'secret' => encrypt($request->secret)
      ])->save();
   }
}

Decryption of the values is done with the decrypt helper. Observe the following lines of code −

use Illuminate\Contracts\Encryption\DecryptException;

// Exception for decryption thrown in facade
try {
   $decrypted = decrypt($encryptedValue);
} catch (DecryptException $e) {
   //
}

Please note that if the process of decryption is not successful because of an invalid MAC being used, then an appropriate exception is thrown.
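To round out the example above, reading the secret back uses the decrypt helper in the same way; a minimal sketch (my own, with a hypothetical showSecret() action in the same controller):

/**
 * Retrieve and decrypt the user's secret message.
 */
public function showSecret($id) {
   $user = User::findOrFail($id);
   return decrypt($user->secret);
}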
[ { "code": null, "e": 2754, "s": 2472, "text": "Encryption is a process of converting a plain text to a message using some algorithms such that any third user cannot read the information. This is helpful for transmitting sensitive information because there are fewer chances for an intruder to target the information transferred." }, { "code": null, "e": 3028, "s": 2754, "text": "Encryption is performed using a process called Cryptography. The text which is to be encrypted is termed as Plain Text and the text or the message obtained after the encryption is called Cipher Text. The process of converting cipher text to plain text is called Decryption." }, { "code": null, "e": 3276, "s": 3028, "text": "Laravel uses AES-256 and AES-128 encrypter, which uses Open SSL for encryption. All the values included in Laravel are signed using the protocol Message Authentication Code so that the underlying value cannot be tampered with once it is encrypted." }, { "code": null, "e": 3341, "s": 3276, "text": "The command used to generate the key in Laravel is shown below −" }, { "code": null, "e": 3367, "s": 3341, "text": "php artisan key:generate\n" }, { "code": null, "e": 3509, "s": 3367, "text": "Please note that this command uses the PHP secure random bytes’ generator and you can see the output as shown in the screenshot given below −" }, { "code": null, "e": 3636, "s": 3509, "text": "The command given above helps in generating the key which can be used in web application. Observe the screenshot shown below −" }, { "code": null, "e": 3886, "s": 3636, "text": "The values for encryption are properly aligned in the config/app.php file, which includes two parameters for encryption namely key and cipher. If the value using this key is not properly aligned, all the values encrypted in Laravel will be insecure." }, { "code": null, "e": 4177, "s": 3886, "text": "Encryption of a value can be done by using the encrypt helper in the controllers of Laravel class. These values are encrypted using OpenSSL and AES-256 cipher. All the encrypted values are signed with Message Authentication code (MAC) to check for any modifications of the encrypted string." }, { "code": null, "e": 4281, "s": 4177, "text": "The code shown below is mentioned in a controller and is used to store a secret or a sensitive message." }, { "code": null, "e": 4763, "s": 4281, "text": "<?php\n\nnamespace App\\Http\\Controllers;\n\nuse Illuminate\\Http\\Request;\nuse App\\Http\\Controllers\\Controller;\n\nclass DemoController extends Controller{\n **\n * Store a secret message for the user.\n *\n * @param Request $request\n * @param int $id\n * @return Response\n */\n \n public function storeSecret(Request $request, $id) {\n $user = User::findOrFail($id);\n $user->fill([\n 'secret' => encrypt($request->secret)\n ])->save();\n }\n}" }, { "code": null, "e": 4859, "s": 4763, "text": "Decryption of the values is done with the decrypt helper. Observe the following lines of code −" }, { "code": null, "e": 5047, "s": 4859, "text": "use Illuminate\\Contracts\\Encryption\\DecryptException;\n\n// Exception for decryption thrown in facade\ntry {\n $decrypted = decrypt($encryptedValue);\n} catch (DecryptException $e) {\n //\n}" }, { "code": null, "e": 5187, "s": 5047, "text": "Please note that if the process of decryption is not successful because of invalid MAC being used, then an appropriate exception is thrown." 
}, { "code": null, "e": 5220, "s": 5187, "text": "\n 13 Lectures \n 3 hours \n" }, { "code": null, "e": 5240, "s": 5220, "text": " Sebastian Sulinski" }, { "code": null, "e": 5275, "s": 5240, "text": "\n 35 Lectures \n 3.5 hours \n" }, { "code": null, "e": 5289, "s": 5275, "text": " Antonio Papa" }, { "code": null, "e": 5323, "s": 5289, "text": "\n 7 Lectures \n 1.5 hours \n" }, { "code": null, "e": 5343, "s": 5323, "text": " Sebastian Sulinski" }, { "code": null, "e": 5376, "s": 5343, "text": "\n 42 Lectures \n 1 hours \n" }, { "code": null, "e": 5396, "s": 5376, "text": " Skillbakerystudios" }, { "code": null, "e": 5431, "s": 5396, "text": "\n 165 Lectures \n 13 hours \n" }, { "code": null, "e": 5454, "s": 5431, "text": " Paul Carlo Tordecilla" }, { "code": null, "e": 5489, "s": 5454, "text": "\n 116 Lectures \n 13 hours \n" }, { "code": null, "e": 5509, "s": 5489, "text": " Hafizullah Masoudi" }, { "code": null, "e": 5516, "s": 5509, "text": " Print" }, { "code": null, "e": 5527, "s": 5516, "text": " Add Notes" } ]
MLflow Part 3: Logging Models to a Tracking Server! | by David Hundley | Towards Data Science
Hey there, friends, and welcome back to another post in our series on MLflow. If this is the first post you’ve seen and would like to catch up, be sure to check out the previous posts here:

Part 1: Getting Started with MLflow!

Part 2: Deploying a Tracking Server to Minikube!

As always, if you would like to see the code mentioned in this post, please be sure to check out my GitHub repo here.

This latest post is going to build right on top of part 2, so please do check that out if you missed it. Just to quickly recap what we did in that post, we deployed an MLflow tracking server to Kubernetes with Minikube on our local machines. Behind the scenes, the MLflow tracking server is supported by a Postgres metadata store and an AWS S3-like artifact store called Minio. That post was quite meaty, so I’m happy to share this one is much simpler by comparison. Phew!

I personally like pictures much better than a static explanation, so integrating what we just mentioned in the last paragraph, this architectural image summarizes how we’ll be interacting with MLflow in this particular post. (In case you’re not familiar with the icons, the elephant is PostgreSQL, and the red flamingo-like icon is Minio.)

Of course, we already did everything in the right side of the image in our last post, so this post is all about focusing on what you need to include in the Python file on your client machine. Everything coincidentally happens to be hosted on the local machine since we’re making use of Minikube, but if you ever use a legit Kubernetes environment, your client will most likely be separate from the MLflow tracking server.

This code is actually pretty simple, and a lot of it is bound to look familiar to you. Partially because a lot of it is basic machine learning stuff, and partially because you probably already saw much of this when looking at Part 1 of this series. Because the code is so simple, I’m going to paste the full thing here and walk through the special stuff you might be new to.
# Importing in necessary libraries
import os

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.linear_model import ElasticNet

import mlflow
import mlflow.sklearn

# PROJECT SETUP
# ------------------------------------------------------------------------------
# Setting the MLflow tracking server
mlflow.set_tracking_uri('http://mlflow-server.local')

# Setting the required environment variables
os.environ['MLFLOW_S3_ENDPOINT_URL'] = 'http://mlflow-minio.local/'
os.environ['AWS_ACCESS_KEY_ID'] = 'minio'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'minio123'

# Loading data from a CSV file
df_wine = pd.read_csv('../data/wine/train.csv')

# Separating the target class ('quality') from remainder of the training data
X = df_wine.drop(columns = 'quality')
y = df_wine[['quality']]

# Splitting the data into training and validation sets
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state = 42)

# MODEL TRAINING AND LOGGING
# ------------------------------------------------------------------------------
# Defining model parameters
alpha = 1
l1_ratio = 1

# Running MLFlow script
with mlflow.start_run():

    # Instantiating model with model parameters
    model = ElasticNet(alpha = alpha, l1_ratio = l1_ratio)

    # Fitting training data to the model
    model.fit(X_train, y_train)

    # Getting predictions on the validation dataset
    preds = model.predict(X_val)

    # Getting metrics on the validation dataset
    # (taking the square root of the MSE so 'rmse' truly is the root mean squared error)
    rmse = mean_squared_error(y_val, preds) ** 0.5
    abs_error = mean_absolute_error(y_val, preds)
    r2 = r2_score(y_val, preds)

    # Logging params and metrics to MLFlow
    mlflow.log_param('alpha', alpha)
    mlflow.log_param('l1_ratio', l1_ratio)
    mlflow.log_metric('rmse', rmse)
    mlflow.log_metric('abs_error', abs_error)
    mlflow.log_metric('r2', r2)

    # Logging training data
    mlflow.log_artifact(local_path = '../data/wine/train.csv')

    # Logging training code
    mlflow.log_artifact(local_path = './mlflow-wine.py')

    # Logging model to MLFlow
    mlflow.sklearn.log_model(sk_model = model,
                             artifact_path = 'wine-pyfile-model',
                             registered_model_name = 'wine-pyfile-model')

Right at the top, you’ll notice that we first have to make sure that the script is pointing to the proper tracking server. When we deployed our tracking server in the last post, you might recall we had the tracking server itself behind the URI mlflow-server.local and the artifact store (Minio) served out behind mlflow-minio.local. As a reminder, Minio intentionally emulates AWS’s S3, so in case you’re wondering why we’re setting AWS-like environment variables, that is why.

After loading in the data and doing some basic modeling, we come down to all the MLflow tracking goodness. MLflow is pretty flexible here, so you’ll notice we’re logging / uploading all this great stuff including...

Parameters

Metrics

The code we used to run this model

The training data itself

The model itself

Speaking of that last one, you’ll notice some special syntax around model naming. This is because in addition to getting the model artifacts in the artifact registry, MLflow will also create a formal model in its MLflow Model Registry. We’ll briefly touch on that below, but we’ll explore that further in a future post. (Stay tuned!)

Alright, if everything is set up properly, all you need to do is run the following command:

python mlflow-wine.py

I’ve run this particular file multiple times now, so here is the output my terminal is showing me.
If you’re running this for the first time, you’ll see something similar but obviously just a tiny bit different:

Registered model 'wine-pyfile-model' already exists. Creating a new version of this model...
Created version '3' of model 'wine-pyfile-model'.

And friends, that’s really it! But before we wrap up this post, let’s jump into the UI just to see that everything worked properly. Fire up your browser and jump on over to mlflow-server.local. You should be greeted with a screen that looks like this:

If you followed along with Part 1 of this series, this is going to look really familiar. Go ahead and open one of those runs by clicking on the proper hyperlink. If all is well, you should see all the proper information you just logged, including the model artifacts we just created. Nice!

One other thing we couldn’t cover in Part 1 was the new Models tab located to the top left of the UI. Click on that, and you should see something like this:

This UI is pretty cool. Not only can you provide a thorough description of the model, but you can also set specific versions of the model to different stages. For example, you can set “Version 2” to staging, “Version 3” to production, and “Version 1” to archive. This functionality is really awesome, and it will come very much in handy when we explore more things in future posts.

Of course, as great as the UI is, we really should be doing everything programmatically, and MLflow has our back in that space, too. (If you’re curious, I’ve left a tiny sketch of that down in the P.S. below.) But I think we’re at a good stopping place today, so we’ll go ahead and wrap up. In the next post, we’ll look at how to interact with this UI from a more programmatic perspective, and then in 2 posts from now, we’ll start really cooking with gas by showing how we might deploy a model from this Model Registry into a real production environment.

Until then, thanks for reading! Stay safe, friends!
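P.S. For those who want a sneak peek at that programmatic side, here is a minimal, illustrative sketch of promoting and then loading our registered model with MLflow’s MlflowClient. Treat this as a sketch rather than the final word: it assumes the same tracking URI and Minio environment variables we set in the training script above, and version '3' is just the version number from my own runs.

import mlflow
import mlflow.sklearn
from mlflow.tracking import MlflowClient

# Assumes the Minio environment variables from the training script are already set
mlflow.set_tracking_uri('http://mlflow-server.local')
client = MlflowClient()

# Promoting version '3' of our registered model into the Production stage
client.transition_model_version_stage(name = 'wine-pyfile-model',
                                      version = '3',
                                      stage = 'Production')

# Loading whichever model version currently sits in the Production stage
prod_model = mlflow.sklearn.load_model('models:/wine-pyfile-model/Production')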
[ { "code": null, "e": 362, "s": 172, "text": "Hey there, friends, and welcome back to another post in our series on MLflow. If this is the first post you’ve seen and would like to catch up, be sure to check out the previous posts here:" }, { "code": null, "e": 399, "s": 362, "text": "Part 1: Getting Started with MLflow!" }, { "code": null, "e": 448, "s": 399, "text": "Part 2: Deploying a Tracking Server to Minikube!" }, { "code": null, "e": 566, "s": 448, "text": "As always, if you would like to see the code mentioned in this post, please be sure to check out my GitHub repo here." }, { "code": null, "e": 1039, "s": 566, "text": "This latest post is going to build right on top of part 2, so please do check that out if you missed it. Just to quickly recap what we did in that post, we deployed an MLflow tracking server to Kubernetes with Minikube on our local machines. Behind the scenes, the MLflow tracking server is supported by a Postgres metadata store and an AWS S3-like artifact store called Minio. That post was quite meaty, so I’m happy to share this one is much simpler by comparison. Phew!" }, { "code": null, "e": 1379, "s": 1039, "text": "I personally like pictures much better than a static explanation, so integrating what we just mentioned in the last paragraph, this architectural image summarizes how we’ll be interacting with MLflow in this particular post. (In case you’re not familiar with the icons, the elephant is PostgreSQL, and the red flamingo-like icon is Minio.)" }, { "code": null, "e": 1817, "s": 1379, "text": "Of course, we already did everything in the right side of the image in our last post, so this post is all about focusing on what you need to include for the Python file from your client machine. Of course, everything is coincidentally is all hosted on the local machine since we’re using making use of Minikube, but if you ever use a legit Kubernetes environment, your client will most likely be separate from the MLflow tracking server." }, { "code": null, "e": 2192, "s": 1817, "text": "This code is actually pretty simple, and a lot of it is bound to look familiar to you. Partially because a lot of it is basic machine learning stuff, and partially because you probably already saw much of this when looking at Part 1 of this series. Because the code is so simple, I’m going to paste the full thing here and walk through the special stuff you might be new to." 
}, { "code": null, "e": 4421, "s": 2192, "text": "# Importing in necessary librariesimport osimport pandas as pdfrom sklearn.model_selection import train_test_splitfrom sklearn.metrics import mean_squared_error, mean_absolute_error, r2_scorefrom sklearn.linear_model import ElasticNetimport mlflowimport mlflow.sklearn# PROJECT SETUP# ------------------------------------------------------------------------------# Setting the MLflow tracking servermlflow.set_tracking_uri('http://mlflow-server.local')# Setting the requried environment variablesos.environ['MLFLOW_S3_ENDPOINT_URL'] = 'http://mlflow-minio.local/'os.environ['AWS_ACCESS_KEY_ID'] = 'minio'os.environ['AWS_SECRET_ACCESS_KEY'] = 'minio123'# Loading data from a CSV filedf_wine = pd.read_csv('../data/wine/train.csv')# Separating the target class ('quality') from remainder of the training dataX = df_wine.drop(columns = 'quality')y = df_wine[['quality']]# Splitting the data into training and validation setsX_train, X_val, y_train, y_val = train_test_split(X, y, random_state = 42)# MODEL TRAINING AND LOGGING# ------------------------------------------------------------------------------# Defining model parametersalpha = 1l1_ratio = 1# Running MLFlow scriptwith mlflow.start_run():# Instantiating model with model parameters model = ElasticNet(alpha = alpha, l1_ratio = l1_ratio)# Fitting training data to the model model.fit(X_train, y_train)# Running prediction on validation dataset preds = model.predict(X_val)# Getting metrics on the validation dataset rmse = mean_squared_error(preds, y_val) abs_error = mean_absolute_error(preds, y_val) r2 = r2_score(preds, y_val)# Logging params and metrics to MLFlow mlflow.log_param('alpha', alpha) mlflow.log_param('l1_ratio', l1_ratio) mlflow.log_metric('rmse', rmse) mlflow.log_metric('abs_error', abs_error) mlflow.log_metric('r2', r2)# Logging training data mlflow.log_artifact(local_path = '../data/wine/train.csv')# Logging training code mlflow.log_artifact(local_path = './mlflow-wine.py')# Logging model to MLFlow mlflow.sklearn.log_model(sk_model = model, artifact_path = 'wine-pyfile-model', registered_model_name = 'wine-pyfile-model')" }, { "code": null, "e": 4899, "s": 4421, "text": "Right at the top, you’ll notice that we first have to make sure that the script is pointing to the proper tracking server. When we deployed our tracking server in the last post, you might recall we had the tracking server itself behind the URI mlflow-server.local and the artifact store (Minio) served out behind mlflow-minio.local. As a reminder, Minio intentionally emulates AWS’s S3, so in case you’re wondering why we’re setting AWS-like environment variables, that is why." }, { "code": null, "e": 5115, "s": 4899, "text": "After loading in the data and doing some basic modeling, we come down to all the MLflow tracking goodness. MLflow is pretty flexible here, so you’ll notice we’re logging / uploading all this great stuff including..." }, { "code": null, "e": 5126, "s": 5115, "text": "Parameters" }, { "code": null, "e": 5134, "s": 5126, "text": "Metrics" }, { "code": null, "e": 5169, "s": 5134, "text": "The code we used to run this model" }, { "code": null, "e": 5194, "s": 5169, "text": "The training data itself" }, { "code": null, "e": 5211, "s": 5194, "text": "The model itself" }, { "code": null, "e": 5545, "s": 5211, "text": "Speaking of that last one, you’ll notice some special syntax around model naming. 
This is because in addition to getting the model artifacts in the artifact registry, MLflow will also create a formal model in its MLflow Model Registry. We’ll briefly touch on that below, but we’ll explore that further in a future post. (Stay tuned!)" }, { "code": null, "e": 5632, "s": 5545, "text": "Alright, if everything works alright, all you need to do is run the following command:" }, { "code": null, "e": 5654, "s": 5632, "text": "python mlflow-wine.py" }, { "code": null, "e": 5862, "s": 5654, "text": "I’ve run this particular file multiple times now, so here is the output my terminal is showing me. If you’re running this for the first time, you’ll see something similar but obviously just a tiny different:" }, { "code": null, "e": 6004, "s": 5862, "text": "Registered model 'wine-pyfile-model' already exists. Creating a new version of this model...Created version '3' of model 'wine-pyfile-model'." }, { "code": null, "e": 6256, "s": 6004, "text": "And friends, that’s really it! But before we wrap up this post, let’s jump into the UI just to see that everything worked properly. Fire up your browser and jump on over to mlflow-server.local. You should be greeted with a screen that looks like this:" }, { "code": null, "e": 6546, "s": 6256, "text": "If you followed along with Part 1 of this series, this is going to look really familiar. Go ahead and open one of those runs by clicking on the proper hyperlink. If all is well, you should see all the proper information you just logged, including the model artifacts we just created. Nice!" }, { "code": null, "e": 6703, "s": 6546, "text": "One other thing we couldn’t cover in Part 1 was the new Models tab located to the top left of the UI. Click on that, and you should see something like this:" }, { "code": null, "e": 7085, "s": 6703, "text": "This UI is pretty cool. Not only can you provide a thorough description of the model, but you can also set specific versions of the model to different stages. For example, you can set “Version 2” to staging, “Version 3” to production, and “Version 1” to archive. This functionality is really awesome, and it will come very much in handy when we explore more things in future posts." } ]
Flask – WTF
One of the essential aspects of a web application is to present a user interface for the user. HTML provides a <form> tag, which is used to design an interface. A form’s elements, such as text input, radio buttons, and select boxes, can be used as appropriate.

Data entered by a user is submitted in the form of an HTTP request message to the server side script by either the GET or POST method.

The server side script has to recreate the form elements from the HTTP request data. So in effect, form elements have to be defined twice – once in HTML and again in the server side script.

Another disadvantage of using an HTML form is that it is difficult (if not impossible) to render the form elements dynamically. HTML itself provides no way to validate a user’s input.

This is where WTForms, a flexible form rendering and validation library, comes in handy. The Flask-WTF extension provides a simple interface to this WTForms library.

Using Flask-WTF, we can define the form fields in our Python script and render them using an HTML template. It is also possible to apply validation to the WTF fields.

Let us see how this dynamic generation of HTML works.

First, the Flask-WTF extension needs to be installed.

pip install flask-WTF

The installed package contains a Form class, which has to be used as a parent for user-defined forms.

The WTForms package contains definitions of various form fields. Some standard form fields are listed below.

TextField: Represents <input type = 'text'> HTML form element
BooleanField: Represents <input type = 'checkbox'> HTML form element
DecimalField: Text field for displaying a number with decimals
IntegerField: Text field for displaying an integer
RadioField: Represents <input type = 'radio'> HTML form element
SelectField: Represents a select form element
TextAreaField: Represents <textarea> HTML form element
PasswordField: Represents <input type = 'password'> HTML form element
SubmitField: Represents <input type = 'submit'> form element

For example, a form containing a text field can be designed as below −

from flask_wtf import Form
from wtforms import TextField

class ContactForm(Form):
   name = TextField("Name Of Student")

In addition to the ‘name’ field, a hidden field for a CSRF token is created automatically. This is to prevent Cross Site Request Forgery attacks.

When rendered, this will result in an equivalent HTML script as shown below.

<input id = "csrf_token" name = "csrf_token" type = "hidden" />
<label for = "name">Name Of Student</label><br>
<input id = "name" name = "name" type = "text" value = "" />

A user-defined form class is used in a Flask application and the form is rendered using a template.

from flask import Flask, render_template
from forms import ContactForm
app = Flask(__name__)
app.secret_key = 'development key'

@app.route('/contact')
def contact():
   form = ContactForm()
   return render_template('contact.html', form = form)

if __name__ == '__main__':
   app.run(debug = True)

The WTForms package also contains a validator class. It is useful in applying validation to form fields. The following list shows commonly used validators.
DataRequired: Checks whether the input field is empty
Email: Checks whether text in the field follows email ID conventions
IPAddress: Validates the IP address in the input field
Length: Verifies that the length of the string in the input field is in the given range
NumberRange: Validates that a number in the input field is within a given range
URL: Validates the URL entered in the input field

We shall now apply the ‘DataRequired’ validation rule to the name field in the contact form (the validators.Required used in the code below is an older alias of DataRequired).

name = TextField("Name Of Student",[validators.Required("Please enter your name.")])

The validate() function of the form object validates the form data and throws validation errors if validation fails. The error messages are sent to the template. In the HTML template, error messages are rendered dynamically.

{% for message in form.name.errors %}
   {{ message }}
{% endfor %}

The following example demonstrates the concepts given above. The design of the contact form is given below (forms.py).

from flask_wtf import Form
from wtforms import TextField, IntegerField, TextAreaField, SubmitField, RadioField, SelectField
from wtforms import validators, ValidationError

class ContactForm(Form):
   name = TextField("Name Of Student", [validators.Required("Please enter your name.")])
   Gender = RadioField('Gender', choices = [('M', 'Male'), ('F', 'Female')])
   Address = TextAreaField("Address")
   email = TextField("Email", [validators.Required("Please enter your email address."),
      validators.Email("Please enter your email address.")])
   Age = IntegerField("age")
   language = SelectField('Languages', choices = [('cpp', 'C++'), ('py', 'Python')])
   submit = SubmitField("Send")

Validators are applied to the Name and Email fields.

Given below is the Flask application script (formexample.py).

from flask import Flask, render_template, request, flash
from forms import ContactForm
app = Flask(__name__)
app.secret_key = 'development key'

@app.route('/contact', methods = ['GET', 'POST'])
def contact():
   form = ContactForm()
   
   if request.method == 'POST':
      if form.validate() == False:
         flash('All fields are required.')
         return render_template('contact.html', form = form)
      else:
         return render_template('success.html')
   elif request.method == 'GET':
      return render_template('contact.html', form = form)

if __name__ == '__main__':
   app.run(debug = True)

The script of the template (contact.html) is as follows −

<!doctype html>
<html>
   <body>
      <h2 style = "text-align: center;">Contact Form</h2>
      
      {% for message in form.name.errors %}
         <div>{{ message }}</div>
      {% endfor %}
      
      {% for message in form.email.errors %}
         <div>{{ message }}</div>
      {% endfor %}
      
      <form action = "http://localhost:5000/contact" method = "post">
         <fieldset>
            <legend>Contact Form</legend>
            {{ form.hidden_tag() }}
            
            <div style = "font-size:20px; font-weight:bold; margin-left:150px;">
               {{ form.name.label }}<br>
               {{ form.name }}
               <br>
               
               {{ form.Gender.label }} {{ form.Gender }}
               {{ form.Address.label }}<br>
               {{ form.Address }}
               <br>
               
               {{ form.email.label }}<br>
               {{ form.email }}
               <br>
               
               {{ form.Age.label }}<br>
               {{ form.Age }}
               <br>
               
               {{ form.language.label }}<br>
               {{ form.language }}
               <br>
               {{ form.submit }}
            </div>
            
         </fieldset>
      </form>
   </body>
</html>

Run formexample.py and visit the URL http://localhost:5000/contact in your browser. The contact form will be displayed as shown below.

If there are any errors, the page will look like this −

If there are no errors, ‘success.html’ will be rendered.
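Apart from the built-in validators used above, WTForms also supports custom validation logic. The snippet below is an illustrative sketch (the AgeRangeForm class is hypothetical and not part of the example application): an inline validator, i.e. a form method named validate_<fieldname>, can raise the ValidationError class imported in forms.py to reject a value.

from flask_wtf import Form
from wtforms import IntegerField, SubmitField
from wtforms import validators, ValidationError

class AgeRangeForm(Form):
   Age = IntegerField("age", [validators.Required("Please enter your age.")])
   submit = SubmitField("Send")
   
   # WTForms automatically runs any method named validate_<fieldname>
   # after the field's own validators have passed
   def validate_Age(self, field):
      if field.data is not None and not (5 <= field.data <= 100):
         raise ValidationError("Age must be between 5 and 100.")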
[ { "code": null, "e": 2278, "s": 2033, "text": "One of the essential aspects of a web application is to present a user interface for the user. HTML provides a <form> tag, which is used to design an interface. A Form’s elements such as text input, radio, select etc. can be used appropriately." }, { "code": null, "e": 2406, "s": 2278, "text": "Data entered by a user is submitted in the form of Http request message to the server side script by either GET or POST method." }, { "code": null, "e": 2592, "s": 2406, "text": "The Server side script has to recreate the form elements from http request data. So in effect, form elements have to be defined twice – once in HTML and again in the server side script." }, { "code": null, "e": 2778, "s": 2592, "text": "The Server side script has to recreate the form elements from http request data. So in effect, form elements have to be defined twice – once in HTML and again in the server side script." }, { "code": null, "e": 2959, "s": 2778, "text": "Another disadvantage of using HTML form is that it is difficult (if not impossible) to render the form elements dynamically. HTML itself provides no way to validate a user’s input." }, { "code": null, "e": 3140, "s": 2959, "text": "Another disadvantage of using HTML form is that it is difficult (if not impossible) to render the form elements dynamically. HTML itself provides no way to validate a user’s input." }, { "code": null, "e": 3301, "s": 3140, "text": "This is where WTForms, a flexible form, rendering and validation library comes handy. Flask-WTF extension provides a simple interface with this WTForms library." }, { "code": null, "e": 3467, "s": 3301, "text": "Using Flask-WTF, we can define the form fields in our Python script and render them using an HTML template. It is also possible to apply validation to the WTF field." }, { "code": null, "e": 3521, "s": 3467, "text": "Let us see how this dynamic generation of HTML works." }, { "code": null, "e": 3571, "s": 3521, "text": "First, Flask-WTF extension needs to be installed." }, { "code": null, "e": 3594, "s": 3571, "text": "pip install flask-WTF\n" }, { "code": null, "e": 3696, "s": 3594, "text": "The installed package contains a Form class, which has to be used as a parent for user- defined form." }, { "code": null, "e": 3801, "s": 3696, "text": "WTforms package contains definitions of various form fields. Some Standard form fields are listed below." 
}, { "code": null, "e": 3811, "s": 3801, "text": "TextField" }, { "code": null, "e": 3862, "s": 3811, "text": "Represents <input type = 'text'> HTML form element" }, { "code": null, "e": 3875, "s": 3862, "text": "BooleanField" }, { "code": null, "e": 3930, "s": 3875, "text": "Represents <input type = 'checkbox'> HTML form element" }, { "code": null, "e": 3943, "s": 3930, "text": "DecimalField" }, { "code": null, "e": 3989, "s": 3943, "text": "Textfield for displaying number with decimals" }, { "code": null, "e": 4002, "s": 3989, "text": "IntegerField" }, { "code": null, "e": 4035, "s": 4002, "text": "TextField for displaying integer" }, { "code": null, "e": 4046, "s": 4035, "text": "RadioField" }, { "code": null, "e": 4098, "s": 4046, "text": "Represents <input type = 'radio'> HTML form element" }, { "code": null, "e": 4110, "s": 4098, "text": "SelectField" }, { "code": null, "e": 4141, "s": 4110, "text": "Represents select form element" }, { "code": null, "e": 4155, "s": 4141, "text": "TextAreaField" }, { "code": null, "e": 4195, "s": 4155, "text": "Represents <testarea> html form element" }, { "code": null, "e": 4209, "s": 4195, "text": "PasswordField" }, { "code": null, "e": 4264, "s": 4209, "text": "Represents <input type = 'password'> HTML form element" }, { "code": null, "e": 4276, "s": 4264, "text": "SubmitField" }, { "code": null, "e": 4324, "s": 4276, "text": "Represents <input type = 'submit'> form element" }, { "code": null, "e": 4395, "s": 4324, "text": "For example, a form containing a text field can be designed as below −" }, { "code": null, "e": 4517, "s": 4395, "text": "from flask_wtf import Form\nfrom wtforms import TextField\n\nclass ContactForm(Form):\n name = TextField(\"Name Of Student\")" }, { "code": null, "e": 4660, "s": 4517, "text": "In addition to the ‘name’ field, a hidden field for CSRF token is created automatically. This is to prevent Cross Site Request Forgery attack." }, { "code": null, "e": 4739, "s": 4660, "text": "When rendered, this will result into an equivalent HTML script as shown below." }, { "code": null, "e": 4912, "s": 4739, "text": "<input id = \"csrf_token\" name = \"csrf_token\" type = \"hidden\" />\n<label for = \"name\">Name Of Student</label><br>\n<input id = \"name\" name = \"name\" type = \"text\" value = \"\" />" }, { "code": null, "e": 5012, "s": 4912, "text": "A user-defined form class is used in a Flask application and the form is rendered using a template." }, { "code": null, "e": 5311, "s": 5012, "text": "from flask import Flask, render_template\nfrom forms import ContactForm\napp = Flask(__name__)\napp.secret_key = 'development key'\n\[email protected]('/contact')\ndef contact():\n form = ContactForm()\n return render_template('contact.html', form = form)\n\nif __name__ == '__main__':\n app.run(debug = True)" }, { "code": null, "e": 5457, "s": 5311, "text": "WTForms package also contains validator class. It is useful in applying validation to form fields. Following list shows commonly used validators." 
}, { "code": null, "e": 5470, "s": 5457, "text": "DataRequired" }, { "code": null, "e": 5506, "s": 5470, "text": "Checks whether input field is empty" }, { "code": null, "e": 5512, "s": 5506, "text": "Email" }, { "code": null, "e": 5574, "s": 5512, "text": "Checks whether text in the field follows email ID conventions" }, { "code": null, "e": 5584, "s": 5574, "text": "IPAddress" }, { "code": null, "e": 5620, "s": 5584, "text": "Validates IP address in input field" }, { "code": null, "e": 5627, "s": 5620, "text": "Length" }, { "code": null, "e": 5689, "s": 5627, "text": "Verifies if length of string in input field is in given range" }, { "code": null, "e": 5701, "s": 5689, "text": "NumberRange" }, { "code": null, "e": 5754, "s": 5701, "text": "Validates a number in input field within given range" }, { "code": null, "e": 5758, "s": 5754, "text": "URL" }, { "code": null, "e": 5795, "s": 5758, "text": "Validates URL entered in input field" }, { "code": null, "e": 5881, "s": 5795, "text": "We shall now apply ‘DataRequired’ validation rule for the name field in contact form." }, { "code": null, "e": 5967, "s": 5881, "text": "name = TextField(\"Name Of Student\",[validators.Required(\"Please enter your name.\")])\n" }, { "code": null, "e": 6192, "s": 5967, "text": "The validate() function of form object validates the form data and throws the validation errors if validation fails. The Error messages are sent to the template. In the HTML template, error messages are rendered dynamically." }, { "code": null, "e": 6261, "s": 6192, "text": "{% for message in form.name.errors %}\n {{ message }}\n{% endfor %}\n" }, { "code": null, "e": 6376, "s": 6261, "text": "The following example demonstrates the concepts given above. The design of Contact form is given below (forms.py)." }, { "code": null, "e": 7094, "s": 6376, "text": "from flask_wtf import Form\nfrom wtforms import TextField, IntegerField, TextAreaField, SubmitField, RadioField,\n SelectField\n\nfrom wtforms import validators, ValidationError\n\nclass ContactForm(Form):\n name = TextField(\"Name Of Student\",[validators.Required(\"Please enter \n your name.\")])\n Gender = RadioField('Gender', choices = [('M','Male'),('F','Female')])\n Address = TextAreaField(\"Address\")\n \n email = TextField(\"Email\",[validators.Required(\"Please enter your email address.\"),\n validators.Email(\"Please enter your email address.\")])\n \n Age = IntegerField(\"age\")\n language = SelectField('Languages', choices = [('cpp', 'C++'), \n ('py', 'Python')])\n submit = SubmitField(\"Send\")" }, { "code": null, "e": 7147, "s": 7094, "text": "Validators are applied to the Name and Email fields." }, { "code": null, "e": 7209, "s": 7147, "text": "Given below is the Flask application script (formexample.py)." 
}, { "code": null, "e": 7828, "s": 7209, "text": "from flask import Flask, render_template, request, flash\nfrom forms import ContactForm\napp = Flask(__name__)\napp.secret_key = 'development key'\n\[email protected]('/contact', methods = ['GET', 'POST'])\ndef contact():\n form = ContactForm()\n \n if request.method == 'POST':\n if form.validate() == False:\n flash('All fields are required.')\n return render_template('contact.html', form = form)\n else:\n return render_template('success.html')\n elif request.method == 'GET':\n return render_template('contact.html', form = form)\n\nif __name__ == '__main__':\n app.run(debug = True)" }, { "code": null, "e": 7886, "s": 7828, "text": "The Script of the template (contact.html) is as follows −" }, { "code": null, "e": 9161, "s": 7886, "text": "<!doctype html>\n<html>\n <body>\n <h2 style = \"text-align: center;\">Contact Form</h2>\n\t\t\n {% for message in form.name.errors %}\n <div>{{ message }}</div>\n {% endfor %}\n \n {% for message in form.email.errors %}\n <div>{{ message }}</div>\n {% endfor %}\n \n <form action = \"http://localhost:5000/contact\" method = post>\n <fieldset>\n <legend>Contact Form</legend>\n {{ form.hidden_tag() }}\n \n <div style = font-size:20px; font-weight:bold; margin-left:150px;>\n {{ form.name.label }}<br>\n {{ form.name }}\n <br>\n \n {{ form.Gender.label }} {{ form.Gender }}\n {{ form.Address.label }}<br>\n {{ form.Address }}\n <br>\n \n {{ form.email.label }}<br>\n {{ form.email }}\n <br>\n \n {{ form.Age.label }}<br>\n {{ form.Age }}\n <br>\n \n {{ form.language.label }}<br>\n {{ form.language }}\n <br>\n {{ form.submit }}\n </div>\n \n </fieldset>\n </form>\n </body>\n</html>" }, { "code": null, "e": 9292, "s": 9161, "text": "Run formexample.py in Python shell and visit URL http://localhost:5000/contact. The Contact form will be displayed as shown below." }, { "code": null, "e": 9348, "s": 9292, "text": "If there are any errors, the page will look like this −" }, { "code": null, "e": 9405, "s": 9348, "text": "If there are no errors, ‘success.html’ will be rendered." }, { "code": null, "e": 9438, "s": 9405, "text": "\n 22 Lectures \n 6 hours \n" }, { "code": null, "e": 9454, "s": 9438, "text": " Malhar Lathkar" }, { "code": null, "e": 9489, "s": 9454, "text": "\n 21 Lectures \n 1.5 hours \n" }, { "code": null, "e": 9500, "s": 9489, "text": " Jack Chan" }, { "code": null, "e": 9533, "s": 9500, "text": "\n 16 Lectures \n 4 hours \n" }, { "code": null, "e": 9549, "s": 9533, "text": " Malhar Lathkar" }, { "code": null, "e": 9582, "s": 9549, "text": "\n 54 Lectures \n 6 hours \n" }, { "code": null, "e": 9599, "s": 9582, "text": " Srikanth Guskra" }, { "code": null, "e": 9634, "s": 9599, "text": "\n 88 Lectures \n 3.5 hours \n" }, { "code": null, "e": 9649, "s": 9634, "text": " Jorge Escobar" }, { "code": null, "e": 9683, "s": 9649, "text": "\n 80 Lectures \n 12 hours \n" }, { "code": null, "e": 9706, "s": 9683, "text": " Stone River ELearning" }, { "code": null, "e": 9713, "s": 9706, "text": " Print" }, { "code": null, "e": 9724, "s": 9713, "text": " Add Notes" } ]
CSS | background-clip Property - GeeksforGeeks
Last Updated: 17 Dec, 2021

The background-clip property in CSS defines how far the background (color or image) extends within an element.

Default Value: border-box

Syntax:

background-clip: border-box|padding-box|content-box|initial|inherit;

Property values:

border-box: The border-box value extends the background over the whole division, out to the outside edge of the border.

Syntax:

background-clip: border-box;

Example:

html

<!DOCTYPE html>
<html>
   <head>
      <title>Border Box</title>
      <style>
         .gfg {
            background-color: green;
            background-clip: border-box;
            text-align: center;
            border: 10px dashed black;
         }
      </style>
   </head>
   <body>
      <div class = "gfg">
         <h2> GeeksforGeeks </h2>
         <p> background-clip: border-box; </p>
      </div>
   </body>
</html>

Output:

padding-box: The padding-box value extends the background only to the outside edge of the padding, so it does not show behind the border.

Syntax:

background-clip: padding-box;

Example:

html

<!DOCTYPE html>
<html>
   <head>
      <title>padding-box property</title>
      <style>
         .gfg {
            background-color: green;
            background-clip: padding-box;
            padding: 25px;
            border: 10px dashed black;
         }
      </style>
   </head>
   <body style = "text-align:center">
      <div class = "gfg">
         <h2> GeeksforGeeks </h2>
         <p> background-clip: padding-box; </p>
      </div>
   </body>
</html>

Output:

content-box: The content-box value clips the background to the content area only, excluding both the padding and the border.

Syntax:

background-clip: content-box;

Example:

html

<!DOCTYPE html>
<html>
   <head>
      <title>content-box property</title>
      <style>
         .gfg {
            background-color: green;
            background-clip: content-box;
            padding: 15px;
            border: 10px dashed black;
         }
      </style>
   </head>
   <body style = "text-align:center">
      <div class = "gfg">
         <h2> GeeksforGeeks </h2>
         <p> background-clip: content-box; </p>
      </div>
   </body>
</html>

Output:

initial: This is the default keyword value. It resets the property to its initial value (border-box), so the background again extends over the whole division.
Syntax:

background-clip: initial;

Example:

html

<!DOCTYPE html>
<html>
   <head>
      <title>Initial</title>
      <style>
         .gfg {
            background-color: green;
            background-clip: initial;
            padding: 15px;
            border: 10px dashed black;
         }
      </style>
   </head>
   <body style = "text-align:center">
      <div class = "gfg">
         <h2> GeeksforGeeks </h2>
         <p> background-clip: initial; </p>
      </div>
   </body>
</html>

Output:

Supported Browser: The browsers that support the background-clip property are listed below:

Google Chrome 4.0
Internet Explorer 9.0
Firefox 4.0
Opera 10.5
Safari 3.0
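To make the difference between the three box values easier to see, here is a small illustrative snippet (not part of the examples above; the class names are arbitrary) that renders all three values side by side on one page:

html

<!DOCTYPE html>
<html>
   <head>
      <style>
         /* Shared styling so only background-clip differs between the boxes */
         div {
            background-color: green;
            border: 10px dashed black;
            padding: 15px;
            margin-bottom: 10px;
         }
         .clip-border  { background-clip: border-box; }   /* fills border + padding + content */
         .clip-padding { background-clip: padding-box; }  /* fills padding + content only */
         .clip-content { background-clip: content-box; }  /* fills content only */
      </style>
   </head>
   <body>
      <div class = "clip-border">background-clip: border-box;</div>
      <div class = "clip-padding">background-clip: padding-box;</div>
      <div class = "clip-content">background-clip: content-box;</div>
   </body>
</html>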
[ { "code": null, "e": 23684, "s": 23656, "text": "\n17 Dec, 2021" }, { "code": null, "e": 23799, "s": 23684, "text": "The background-clip property in CSS is used to define how to extend background (color or image) within an element." }, { "code": null, "e": 23814, "s": 23799, "text": "Default Value:" }, { "code": null, "e": 23825, "s": 23814, "text": "border-box" }, { "code": null, "e": 23833, "s": 23825, "text": "Syntax:" }, { "code": null, "e": 23903, "s": 23833, "text": "background-clip: border-box|padding-box|content-box|initial|inherit;\n" }, { "code": null, "e": 23919, "s": 23903, "text": "Property value:" }, { "code": null, "e": 24023, "s": 23919, "text": "border-box: The border-box property is used to set the background color spread over the whole division." }, { "code": null, "e": 24059, "s": 24023, "text": "Syntax:background-clip: border-box;" }, { "code": null, "e": 24088, "s": 24059, "text": "background-clip: border-box;" }, { "code": null, "e": 24655, "s": 24088, "text": "Example:htmlhtml<!DOCTYPE html><html> <head> <title>Border Box</title> <style> .gfg { background-color: green; background-clip:border-box; text-align:center; border:10px dashed black; } </style> </head> <body> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: border-box; </p> </div> </body> </html> " }, { "code": null, "e": 24660, "s": 24655, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title>Border Box</title> <style> .gfg { background-color: green; background-clip:border-box; text-align:center; border:10px dashed black; } </style> </head> <body> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: border-box; </p> </div> </body> </html> ", "e": 25211, "s": 24660, "text": null }, { "code": null, "e": 25219, "s": 25211, "text": "Output:" }, { "code": null, "e": 25306, "s": 25219, "text": "padding-box: The padding-box property is used to set the background inside the border." }, { "code": null, "e": 25342, "s": 25306, "text": "Syntax:background-clip:padding-box;" }, { "code": null, "e": 25371, "s": 25342, "text": "background-clip:padding-box;" }, { "code": null, "e": 25972, "s": 25371, "text": "Example:htmlhtml<!DOCTYPE html><html> <head> <title>padding-box property</title> <style> .gfg { background-color: green; background-clip:padding-box; padding: 25px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: padding-box; </p> </div> </body> </html> " }, { "code": null, "e": 25977, "s": 25972, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title>padding-box property</title> <style> .gfg { background-color: green; background-clip:padding-box; padding: 25px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: padding-box; </p> </div> </body> </html> ", "e": 26562, "s": 25977, "text": null }, { "code": null, "e": 26570, "s": 26562, "text": "Output:" }, { "code": null, "e": 26667, "s": 26570, "text": "content-box: The content-box property is used to set the background color upto the content only." 
}, { "code": null, "e": 26703, "s": 26667, "text": "Syntax:background-clip:content-box;" }, { "code": null, "e": 26732, "s": 26703, "text": "background-clip:content-box;" }, { "code": null, "e": 27340, "s": 26732, "text": "Example:htmlhtml<!DOCTYPE html><html> <head> <title>content-box property</title> <style> .gfg { background-color: green; background-clip:content-box; padding: 15px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: content-box; </p> </div> </body> </html> " }, { "code": null, "e": 27345, "s": 27340, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title>content-box property</title> <style> .gfg { background-color: green; background-clip:content-box; padding: 15px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: content-box; </p> </div> </body> </html> ", "e": 27937, "s": 27345, "text": null }, { "code": null, "e": 27945, "s": 27937, "text": "Output:" }, { "code": null, "e": 28050, "s": 27945, "text": "initial: It is the default value. It is used to set the background color spread over the whole division." }, { "code": null, "e": 28665, "s": 28050, "text": "Syntax:background-clip:initial-box;Example:htmlhtml<!DOCTYPE html><html> <head> <title>Initial</title> <style> .gfg { background-color: green; background-clip:initial; padding: 15px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: initial; </p> </div> </body> </html> " }, { "code": null, "e": 28694, "s": 28665, "text": "background-clip:initial-box;" }, { "code": null, "e": 28703, "s": 28694, "text": "Example:" }, { "code": null, "e": 28708, "s": 28703, "text": "html" }, { "code": "<!DOCTYPE html><html> <head> <title>Initial</title> <style> .gfg { background-color: green; background-clip:initial; padding: 15px; border: 10px dashed black; } </style> </head> <body style = \"text-align:center\"> <div class = \"gfg\"> <h2> GeeksforGeeks </h2> <p> background-clip: initial; </p> </div> </body> </html> ", "e": 29272, "s": 28708, "text": null }, { "code": null, "e": 29470, "s": 29272, "text": "Output:Supported Browser: The browser supported by background-clip property are listed below:Google Chrome 4.0Internet Explorer 9.0Firefox 4.0Opera 10.5Safari 3.0My Personal Notes\narrow_drop_upSave" }, { "code": null, "e": 29557, "s": 29470, "text": "Supported Browser: The browser supported by background-clip property are listed below:" }, { "code": null, "e": 29575, "s": 29557, "text": "Google Chrome 4.0" }, { "code": null, "e": 29597, "s": 29575, "text": "Internet Explorer 9.0" }, { "code": null, "e": 29609, "s": 29597, "text": "Firefox 4.0" }, { "code": null, "e": 29620, "s": 29609, "text": "Opera 10.5" }, { "code": null, "e": 29631, "s": 29620, "text": "Safari 3.0" }, { "code": null, "e": 29642, "s": 29631, "text": "skyridetim" }, { "code": null, "e": 29662, "s": 29642, "text": "hritikbhatnagar2182" }, { "code": null, "e": 29677, "s": 29662, "text": "prachisoda1234" }, { "code": null, "e": 29692, "s": 29677, "text": "CSS-Properties" }, { "code": null, "e": 29699, "s": 29692, "text": "Picked" }, { "code": null, "e": 29723, "s": 29699, "text": "Technical Scripter 2018" }, { "code": null, "e": 29727, "s": 29723, "text": "CSS" }, { "code": null, "e": 29746, "s": 29727, "text": "Technical Scripter" }, { "code": null, "e": 
29763, "s": 29746, "text": "Web Technologies" }, { "code": null, "e": 29861, "s": 29763, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 29923, "s": 29861, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 29973, "s": 29923, "text": "How to insert spaces/tabs in text using HTML/CSS?" }, { "code": null, "e": 30031, "s": 29973, "text": "How to create footer to stay at the bottom of a Web page?" }, { "code": null, "e": 30079, "s": 30031, "text": "How to update Node.js and NPM to next version ?" }, { "code": null, "e": 30129, "s": 30079, "text": "CSS to put icon inside an input element in a form" }, { "code": null, "e": 30171, "s": 30129, "text": "Roadmap to Become a Web Developer in 2022" }, { "code": null, "e": 30204, "s": 30171, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 30247, "s": 30204, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 30309, "s": 30247, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" } ]
Windows Taskbar Operations in ElectronJS - GeeksforGeeks
Last Updated: 29 Jul, 2020

ElectronJS is an Open Source Framework used for building Cross-Platform native desktop applications using web technologies such as HTML, CSS, and JavaScript which are capable of running on Windows, macOS, and Linux operating systems. It combines the Chromium engine and NodeJS into a single runtime.

According to the official definition, the taskbar is an element of the operating system located at the bottom of the screen. It allows us to locate and launch programs through the Start Menu, or view any program that’s currently open. On the right side of the taskbar is the Notifications Area that allows us to check the date and time, items running in the background, etc. All modern desktop applications that are supported on the Windows OS platform can interact with this Windows taskbar. Some of the more common taskbar operations include displaying an overlay over the original icon or flashing the icon of the application to notify the user. Electron also provides us with a way by which we can interact with this Windows taskbar using the Instance methods of the BrowserWindow object. This tutorial will demonstrate these common Windows taskbar operations in Electron. For more information on how Electron interacts with the Notification Area, refer to the article: Custom Notifications in ElectronJS. More complex Windows taskbar operations such as displaying a custom thumbnail toolbar for the application, etc., will be covered in separate articles.

We assume that you are familiar with the prerequisites as covered in the above-mentioned link. For Electron to work, node and npm need to be pre-installed in the system.

Project Structure:

Example: Follow the steps given in Desktop Operations in ElectronJS to set up the basic Electron application. Copy the boilerplate code for the main.js file and the index.html file as provided in the article. Also, perform the necessary changes mentioned for the package.json file to launch the Electron application. We will continue building our application using the same code base. The basic steps required to set up the Electron application remain the same. Create the assets folder according to the project structure. This assets folder will contain the image.png file which will be used as an overlay image for the window’s taskbar. For this tutorial, we have used the Electron logo as the image.png file.

package.json:

{
  "name": "electron-taskbar",
  "version": "1.0.0",
  "description": "Windows Taskbar Operations in Electron",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "keywords": [
    "electron"
  ],
  "author": "Radhesh Khanna",
  "license": "ISC",
  "dependencies": {
    "electron": "^8.3.0"
  }
}

Output:

Windows Taskbar Operations in Electron: The BrowserWindow Instance is part of the Main Process. To import and use BrowserWindow in the Renderer Process, we will be using the Electron remote module. As mentioned above, the two most common Windows taskbar operations are icon overlays and the flashing icon effect. We will cover both of these operations in detail via code.
Overlay icons are intended to supply important, long-standing status or notifications such as network status, messenger status, or new mail. The user should not be presented with constantly changing overlays or animations.” In Electron, we can use a small overlay icon to set over the original application Icon. This can be used to display the application status and can be changed accordingly. When the Electron application is originally launched, it displays a single application Icon in the Windows Taskbar as shown below: index.js: Add the following snippet in that file. javascript const electron = require('electron')const path = require('path'); // Import BrowserWindow using Electron remoteconst BrowserWindow = electron.remote.BrowserWindow;let win = BrowserWindow.getFocusedWindow(); win.setOverlayIcon( path.join(__dirname, '../assets/image.png'), 'Overlay Icon Description'); setTimeout(() => { win.setOverlayIcon(null, '');}, 5000); Explanation: The win.setOverlayIcon(overlay, description) Instance method of the BrowserWindow object is supported in Windows OS only. This Instance method when called sets a 16 x 16-pixel overlay image over the current taskbar icon. As mentioned above, it is usually used to convey some sort of application status or to passively notify the user. This Instance method does not have a return type. It takes in the following parameters: overlay: NativeImage This parameter represents the NativeImage icon to display in the bottom right corner of the application icon in the Windows taskbar. If this parameter is set to null, the overlay icon is cleared. The NativeImage Instance is specially designed for Electron applications to create system tray, dock, taskbar, and application icons using PNG or JPG files. description: String This parameter provides a description for the overlay icon which will be provided to Accessibility screen readers. When clearing the overlay from the taskbar icon, an empty String value can be passed to this parameter. In the above code, we have used the path.join() method of the NodeJS path module to fetch the image.png file from the assets folder. We have also used the setTimeout() function to simulate the removal of the overlay icon from the Windows taskbar after 5s. Output: Flash Frame Effect in Windows Taskbar According to the official MSDN: “Typically, a window is flashed to inform the user that the window requires attention but that it does not currently have the keyboard focus.” index.js: In Electron, we can highlight the taskbar icon of the application and make it flash for notifying the user. The flashing effect of the icon can occur for either a specific time interval or until a specific event occurs. If the notification is urgent, we can even make the icon flash until the user does not explicitly focus on the application window. This is similar to bouncing the dock icon on macOS. javascript const electron = require('electron')const path = require('path'); // Import BrowserWindow using Electron remoteconst BrowserWindow = electron.remote.BrowserWindow;let win = BrowserWindow.getFocusedWindow(); setTimeout(() => { win.flashFrame(true);}, 5000) win.once('focus', () => win.flashFrame(false)); Explanation: The win.flashFrame(flag) Instance method of the BrowserWindow object starts or stops the flashing of the application icon in the Windows taskbar based on the flag: Boolean parameter provided. This is an effective way to catch the user’s attention and this is used in all modern desktop applications. 
This Instance method does not have any return type. In the above code, we have used the setTimeout() function to simulate the flashing icon effect in the Windows taskbar. The flashing effect will be activated 5s after the application is launched.

Note: If this Instance method is not called again with the flag parameter set to false, the flashing will continue indefinitely. In the above code, we have called it with flag: false when the application window comes into focus, i.e. when the focus event is emitted on the current BrowserWindow Instance. For a detailed explanation of the BrowserWindow.getFocusedWindow() static method used to fetch the current BrowserWindow Instance, refer to the article: Emulation in ElectronJS.

Output:
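As a closing illustration, the two operations above are often combined: a background event badges the taskbar icon with an overlay and flashes the frame, and both indicators are cleared once the user focuses the window. The sketch below is illustrative rather than part of this tutorial's project code; the notifyUser() helper is hypothetical and would be wired up to whatever event your application cares about. It also shows the overlay being built explicitly as a NativeImage, resized to the 16 x 16 pixels the overlay is designed for.

javascript

const electron = require('electron');
const path = require('path');

// Import BrowserWindow using Electron remote; nativeImage is available
// directly in the Renderer Process
const BrowserWindow = electron.remote.BrowserWindow;
const { nativeImage } = electron;
let win = BrowserWindow.getFocusedWindow();

// Building the overlay explicitly as a NativeImage, resized to 16 x 16 pixels
const overlay = nativeImage
   .createFromPath(path.join(__dirname, '../assets/image.png'))
   .resize({ width: 16, height: 16 });

// Hypothetical helper: call this whenever the app needs the user's attention
function notifyUser() {
   win.setOverlayIcon(overlay, 'Pending notification');
   win.flashFrame(true);
}

// Clear both indicators once the user focuses the application window
win.on('focus', () => {
   win.flashFrame(false);
   win.setOverlayIcon(null, '');
});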
Customer Segmentation: K-Means Clustering & A/B Testing | by Andrew Nguyen | Towards Data Science
I have been working in Advertising, specifically Digital Media and Performance, for nearly 3 years, and customer behaviour analysis is one of the core concentrations in my day-to-day job. With the help of different analytics platforms (e.g. Google Analytics, Adobe Analytics), my life has been made easier than before, since these platforms come with a built-in segmentation function that analyses user behaviours across dimensions and metrics.

However, despite the convenience provided, I was hoping to leverage Machine Learning to do customer segmentation that is scalable and applicable to other optimizations in Data Science (e.g. A/B Testing). Then, I came across the dataset provided by Google Analytics for a Kaggle competition and decided to use it for this project.

Feel free to check out the dataset here if you're keen! Beware that the dataset has several sub-datasets and each has more than 900k rows!

This always remains an essential step in every Data Science project: ensuring the dataset is clean and properly pre-processed before it is used for modelling.

First of all, let's import all the necessary libraries and read the csv file:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns

df_raw = pd.read_csv("google-analytics.csv")
df_raw.head()

As you can see, the raw dataset above is a bit "messy" and not digestible at all, since some variables are formatted as JSON fields which compress different values of different sub-variables into one field. For example, for the geoNetwork variable, we can tell that there are several sub-variables such as continent, subContinent, etc. that are grouped together.

Thanks to the help of a Kaggler, I was able to convert these variables into more digestible ones by flattening those JSON fields:

import os
import json
from pandas import json_normalize

def load_df(csv_path="google-analytics.csv", nrows=None):
    json_columns = ['device', 'geoNetwork', 'totals', 'trafficSource']
    df = pd.read_csv(csv_path, converters={column: json.loads for column in json_columns}, dtype={'fullVisitorId': 'str'}, nrows=nrows)
    for column in json_columns:
        column_converted = json_normalize(df[column])
        column_converted.columns = [f"{column}_{subcolumn}" for subcolumn in column_converted.columns]
        df = df.drop(column, axis=1).merge(column_converted, right_index=True, left_index=True)
    return df

After flattening those JSON fields, we are able to see a much cleaner dataset, with the JSON variables split into sub-variables (e.g. device split into device_browser, device_browserVersion, etc.). For this project, I have chosen the variables that I believe have a stronger impact on, or correlation with, user behaviour:

df = df.loc[:, ['channelGrouping', 'date', 'fullVisitorId', 'sessionId', 'visitId', 'visitNumber', 'device_browser', 'device_operatingSystem', 'device_isMobile', 'geoNetwork_country', 'trafficSource_source', 'totals_visits', 'totals_hits', 'totals_pageviews', 'totals_bounces', 'totals_transactionRevenue']]
df = df.fillna(value=0)
df.head()

Moving on, since the new dataset has fewer variables but they vary in data type, I took some time to analyze each and every variable to ensure the data is "clean enough" prior to modelling.
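Before diving into individual fixes, it helps to audit every column at once. The snippet below is a minimal sketch of one way to do this with standard pandas methods; nothing in it is specific to this dataset:

#Quick audit of each column: data type, missing values and cardinality
summary = pd.DataFrame({
    'dtype': df.dtypes,           #spot booleans, strings, JSON leftovers
    'missing': df.isna().sum(),   #columns that may need fillna()
    'unique': df.nunique()        #candidates for categorization
})
print(summary)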
Below are some quick examples of un-clean data to be cleaned:

#Format the values
df.channelGrouping.unique()
df.channelGrouping = df.channelGrouping.replace("(Other)", "Others")

#Convert boolean type to string
df.device_isMobile.unique()
df.device_isMobile = df.device_isMobile.astype(str)
df.loc[df.device_isMobile == "False", "device"] = "Desktop"
df.loc[df.device_isMobile == "True", "device"] = "Mobile"

#Categorize similar values
df['traffic_source'] = df.trafficSource_source
main_traffic_source = ["google", "baidu", "bing", "yahoo", ...., "pinterest", "yandex"]
df.traffic_source[df.traffic_source.str.contains("google")] = "google"
df.traffic_source[df.traffic_source.str.contains("baidu")] = "baidu"
df.traffic_source[df.traffic_source.str.contains("bing")] = "bing"
df.traffic_source[df.traffic_source.str.contains("yahoo")] = "yahoo"
.....
df.traffic_source[~df.traffic_source.isin(main_traffic_source)] = "Others"

After re-formatting, I found that fullVisitorId has fewer unique values than the total number of rows in the dataset, meaning some fullVisitorIds were recorded multiple times. Hence, I proceeded to group the variables by fullVisitorId and sort by Revenue:

df_groupby = df.groupby(['fullVisitorId', 'channelGrouping', 'geoNetwork_country', 'traffic_source', 'device', 'device_browser', 'device_operatingSystem']) \
             .agg({'totals_hits': 'sum', 'totals_pageviews': 'sum', 'totals_bounces': 'sum', 'totals_transactionRevenue': 'sum'}) \
             .reset_index()
df_groupby = df_groupby.sort_values(by='totals_transactionRevenue', ascending=False).reset_index(drop=True)

The last step of any EDA process that cannot be overlooked is detecting and handling outliers in the dataset. The reason is that outliers, especially marginally extreme ones, impact the performance of a machine learning model, mostly negatively. That said, we need to either remove those outliers from the dataset or convert them (by mean or mode) to fit them into the range where the majority of the data points lie:

#Seaborn Boxplot to see how far outliers lie compared to the rest
sns.boxplot(df_groupby.totals_transactionRevenue)

As you can see, most of the data points in Revenue lie below USD200,000 and there's only one extreme outlier that hits nearly USD600,000. If we don't remove this outlier, the model will also take it into consideration and produce a less objective reflection of the data.

So let's go ahead and remove it, and please do so for other variables. Just a quick note: there are several methods of dealing with outliers (such as the inter-quartile range). However, in my case, there's only one, so I just went ahead and defined the range that I believe fits well:

df_groupby = df_groupby.loc[df_groupby.totals_transactionRevenue < 200000]

What is K-Means Clustering and how does it help with customer segmentation?

Clustering is the most well-known unsupervised learning technique that finds structure in unlabeled data by identifying similar groups/clusters, particularly with the help of K-Means. K-Means tries to address two questions: (1) K: the number of clusters (groups) we expect to find in the dataset, and (2) Means: the average distance of the data to each cluster center (centroid), which we try to minimize.
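To make those two ideas concrete, here is a minimal, illustrative sketch of a single K-Means iteration in NumPy. The scikit-learn implementation used below additionally handles initialization, convergence checks and edge cases such as empty clusters:

def kmeans_step(X, centroids):
    #X: (n_samples, n_features) array; centroids: (k, n_features) array
    #Assign every point to its nearest centroid...
    distances = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = distances.argmin(axis=1)
    #...then move each centroid to the mean of its assigned points
    new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(len(centroids))])
    return labels, new_centroids

Repeating these two steps until the centroids stop moving is the whole algorithm.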
Also, one thing of note is that K-Means comes with several variations, typically:

init = 'random': randomly selects the centroids of each cluster

init = 'k-means++': selects only the 1st centroid at random, while the other centroids are placed as far away from it as possible

In this project, I'll use the second option to ensure that each cluster is well-distinguished from one another:

from sklearn.cluster import KMeans

data = df_groupby.iloc[:, 7:]
kmeans = KMeans(n_clusters=3, init="k-means++")
kmeans.fit(data)
labels = kmeans.predict(data)
labels = pd.DataFrame(data=labels, index=df_groupby.index, columns=["labels"])

Before applying the algorithm, we need to define "n_clusters", which is the number of groups we expect to get out of the modelling. In this case, I randomly put n_clusters = 3. Then, I joined the labels back onto df_groupby to create df_kmeans, and visualized how the dataset is grouped using 2 variables: Revenue and PageViews:

plt.scatter(df_kmeans.totals_transactionRevenue[df_kmeans.labels == 0], df_kmeans.totals_pageviews[df_kmeans.labels == 0], c='blue')
plt.scatter(df_kmeans.totals_transactionRevenue[df_kmeans.labels == 1], df_kmeans.totals_pageviews[df_kmeans.labels == 1], c='green')
plt.scatter(df_kmeans.totals_transactionRevenue[df_kmeans.labels == 2], df_kmeans.totals_pageviews[df_kmeans.labels == 2], c='orange')
plt.show()

As you can see, the x-axis stands for Revenue while the y-axis stands for PageViews. After modelling, we can tell a certain degree of difference between the 3 clusters. However, I was not sure whether 3 is the "right" number of clusters or not. That said, we can rely on an estimator of the K-Means algorithm, inertia_, which is the sum of squared distances from each sample to its closest centroid.
In particular, we will compare the inertia for cluster counts ranging from 1 to 9, in my case, and look for the point where it stops falling sharply:

#Find the best number of clusters
num_clusters = [x for x in range(1, 10)]
inertia = []
for i in num_clusters:
    model = KMeans(n_clusters=i, init="k-means++")
    model.fit(data)
    inertia.append(model.inertia_)

plt.plot(num_clusters, inertia)
plt.show()

From the chart above, inertia starts to fall slowly from the 4th or 5th cluster onwards, meaning additional clusters bring little further improvement, so I decided to go with "n_clusters=4":

plt.scatter(df_kmeans_n4.totals_pageviews[df_kmeans_n4.labels == 0], df_kmeans_n4.totals_transactionRevenue[df_kmeans_n4.labels == 0], c='blue')
plt.scatter(df_kmeans_n4.totals_pageviews[df_kmeans_n4.labels == 1], df_kmeans_n4.totals_transactionRevenue[df_kmeans_n4.labels == 1], c='green')
plt.scatter(df_kmeans_n4.totals_pageviews[df_kmeans_n4.labels == 2], df_kmeans_n4.totals_transactionRevenue[df_kmeans_n4.labels == 2], c='orange')
plt.scatter(df_kmeans_n4.totals_pageviews[df_kmeans_n4.labels == 3], df_kmeans_n4.totals_transactionRevenue[df_kmeans_n4.labels == 3], c='red')
plt.xlabel("Page Views")
plt.ylabel("Revenue")
plt.show()

The clusters now look a lot more distinguishable from one another:

Cluster 0 (Blue): high PageViews yet little-to-none Revenue

Cluster 1 (Red): medium PageViews, low Revenue

Cluster 2 (Orange): medium PageViews, medium Revenue

Cluster 3 (Green): unclear trend of PageViews, high Revenue

Except for clusters 0 and 3 (unclear pattern), which are beyond our control, clusters 1 and 2 can tell a story here as they seem to share some similarities. To understand which factors might impact each cluster, I segmented each cluster by Channel, Device and Operating System:

As seen from above, in Cluster 1, the Referral channel contributed the highest Revenue, followed by Direct and Organic Search. In contrast, it's Direct that made the highest contribution in Cluster 2. Similarly, while Macintosh is the dominant operating system in Cluster 1, it's Windows in Cluster 2 that achieved higher revenue. The only similarity between the 2 clusters is the Device Browser, where Chrome is widely used.

Voila! This further segmentation helps us tell which factor (in this case, Channel, Device Browser, Operating System) works better for each cluster, hence we can better evaluate our investment moving forward!

What is A/B Testing and how can Hypothesis Testing come into play to complement the process?
Assume that I run a promotion campaign that exposes an ad to the Test group. Here's a quick summary of the steps that need to be followed to test the result with Hypothesis Testing:

Sample Size Determination

Pre-requisite Requirements: Normality and Correlation Tests

Hypothesis Testing

For the 1st step, we can rely on Power Analysis, which helps determine the sample size to draw from a population. Power Analysis requires 3 parameters: (1) effect size, (2) power and (3) alpha. If you are looking for details on how Power Analysis works, please refer to an in-depth article here that I wrote some time ago.

Below is a quick note on each parameter for your quick understanding:

#Effect Size: (expected mean - actual mean) / actual_std
effect_size = (280000 - df_group1_ab.revenue.mean())/df_group1_ab.revenue.std() #set expected mean to $280,000
print(effect_size)

#Power
power = 0.9 #the probability of detecting a true effect (rejecting a false null hypothesis)

#Alpha
alpha = 0.05 #the significance level, i.e. the Type I error rate

After having the 3 parameters ready, we use TTestPower() to determine the sample size:

import statsmodels.stats.power as sms

n = sms.TTestPower().solve_power(effect_size=effect_size, power=power, alpha=alpha)
print(n)

The result is 279, meaning we need to draw 279 data points from each group: Test and Control. As I don't have real data, I used np.random.normal to generate a list of revenue data with sample size = 279 for each group (control_rev and test_rev below are the revenue series of the Control and Test groups):

#Take the samples out of each group: control vs test
control_sample = np.random.normal(control_rev.mean(), control_rev.std(), size=279)
test_sample = np.random.normal(test_rev.mean(), test_rev.std(), size=279)

Moving to the 2nd step, we need to ensure the samples are (1) normally distributed and (2) independent (not correlated). Again, if you want a refresher on the tests used in this step, refer to my article as above. In short, we are going to use (1) Shapiro as the normality test and (2) Pearson as the correlation test.

#Step 2. Pre-requisite: Normality, Correlation
from scipy.stats import shapiro, pearsonr

stat1, p1 = shapiro(control_sample)
stat2, p2 = shapiro(test_sample)
print(p1, p2)

stat3, p3 = pearsonr(control_sample, test_sample)
print(p3)

The p-values of Shapiro are 0.129 and 0.539 for the Control and Test groups respectively, both of which are > 0.05. Hence, we don't reject the null hypothesis and are able to say that the 2 groups are normally distributed.

The p-value of Pearson is 0.98, which is > 0.05, meaning there is no evidence of correlation and the 2 groups can be treated as independent of each other.

The final step is here! As there are 2 groups to be tested against each other (Test vs Control), we use a T-Test to see if there's any significant discrepancy in Revenue after running the A/B Test:

#Step 3. Hypothesis Testing
from scipy.stats import ttest_ind

tstat, p4 = ttest_ind(control_sample, test_sample)
print(p4)

The result is 0.35, which is > 0.05. Hence, the A/B Test conducted indicates that the Test Group exposed to the ads doesn't show any superiority over the Control Group with no ad exposure.

Voila! That's the end of this project: Customer Segmentation & A/B Testing! I hope you find this article useful and easy to follow.

Do look out for my upcoming projects in Data Science and Machine Learning in the near future! In the meantime, feel free to check out my Github here for the complete repository:

Github: https://github.com/andrewnguyen07
LinkedIn: www.linkedin.com/in/andrewnguyen07
Apache Pig - PigStorage()
The PigStorage() function loads and stores data as structured text files. It takes as a parameter the delimiter used to separate each entity of a tuple. By default, it takes '\t' as the delimiter.

Given below is the syntax of the PigStorage() function.

grunt> PigStorage(field_delimiter)

Let us suppose we have a file named student_data.txt in the HDFS directory named /data/ with the following content.

001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.

We can load the data using the PigStorage function as shown below.

grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')
   as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );

In the above example, we have used the comma (',') delimiter. Therefore, we have separated the values of a record using (,).

In the same way, we can use the PigStorage() function to store the data into an HDFS directory as shown below.

grunt> STORE student INTO ' hdfs://localhost:9000/pig_Output/ ' USING PigStorage (',');

This will store the data into the given directory. You can verify the stored data as shown below. First of all, list out the files in the directory named pig_Output using the ls command as shown below.

$ hdfs dfs -ls 'hdfs://localhost:9000/pig_Output/'

Found 2 items
rw-r--r- 1 Hadoop supergroup 0 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/_SUCCESS
rw-r--r- 1 Hadoop supergroup 224 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/part-m-00000

You can observe that two files were created after executing the Store statement. Then, using the cat command, list the contents of the file named part-m-00000 as shown below.

$ hdfs dfs -cat 'hdfs://localhost:9000/pig_Output/part-m-00000'

1,Rajiv,Reddy,9848022337,Hyderabad
2,siddarth,Battacharya,9848022338,Kolkata
3,Rajesh,Khanna,9848022339,Delhi
4,Preethi,Agarwal,9848022330,Pune
5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
6,Archana,Mishra,9848022335,Chennai
Porosity-Permeability Relationships Using Linear Regression in Python | by Andy McDonald | Towards Data Science
Core data analysis is a key component in the evaluation of a field or discovery, as it provides direct samples of the geological formations in the subsurface over the interval of interest. It is often considered the 'ground truth' by many and is used as a reference for calibrating well log measurements and petrophysical analysis. Core data is expensive to obtain and not acquired on every well at every depth. Instead, it may be acquired at discrete intervals on a small number of wells within a field and then used as a reference for other wells.

Once the core data has been extracted from the well, it is taken to a lab to be analysed. Along the length of the retrieved core sample, a number of measurements are made, two of which are porosity and permeability, both key components of a petrophysical analysis.

Porosity is defined as the volume of space between the solid grains relative to the total rock volume. It provides an indication of the potential storage space for hydrocarbons.

Permeability provides an indication of how easily fluids can flow through the rock.

Porosity is a key control on permeability, with larger pores resulting in wider pathways for the reservoir fluids to flow through.

Well logging tools do not provide a direct measurement for permeability and therefore it has to be inferred through relationships with core data from the same field or well, or from empirically derived equations.

One common method is to plot porosity (on a linear scale) against permeability (on a logarithmic scale) and observe the trend. From this, a regression can be applied to the porosity-permeability (poro-perm) crossplot to derive an equation, which can subsequently be used to predict a continuous permeability from a computed porosity in any well.

In this article, I will cover how to carry out a porosity-permeability regression using two methods within Python: numpy's polyfit and statsmodels' Ordinary Least Squares regression.

The notebook for this article can be found in my Python and Petrophysics Github series, which can be accessed at the link below:

https://github.com/andymcdgeo/Petrophysics-Python-Series

Additionally, a list of previous articles, notebooks and blog posts can be found on my website here:

http://andymcdonald.scot/python-and-petrophysics

To begin, we will import a number of common libraries before we start working with the actual data. For this article we will be using pandas, matplotlib and numpy. These three libraries allow us to load, work with and visualise our data.

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

The dataset we are using comes from the publicly available Equinor Volve Field dataset released in 2018. The files used in this tutorial are from well 15/9-19A, which contain full regular core analysis data and well log data. To load this data in, we can use pd.read_csv and pass in the file name. This dataset has already been depth aligned to the well log data, so no adjustments to the sample depth are required.

When core slabs are analysed, a limited number of measurements are made at irregular intervals. In some cases, measurements may not be possible, for example in really tight (low permeability) sections. As a result, we can tell pandas to load any missing values / blank cells as Not a Number (NaN) by adding the argument na_values=' '.

core_data = pd.read_csv("Data/15_9-19A-CORE.csv", na_values=' ')

Once we have the data loaded, we can view the details of what is in it by calling upon the .head() and .describe() methods.
The .head() method returns the first five rows of the dataframe and the header row.

core_data.head()

The .describe() method returns useful statistics about the numeric data contained within the dataframe, such as the mean, standard deviation, maximum and minimum values.

core_data.describe()

Using our core_data dataframe, we can simply and quickly plot our data by adding .plot to the end of our dataframe and supplying some arguments. In this case we want a scatter plot (also known in petrophysics as a crossplot), with CPOR (Core Porosity) on the x-axis and CKH (Core Permeability) on the y-axis.

core_data.plot(kind="scatter", x="CPOR", y="CKH")

From this scatter plot, we notice that there is a large concentration of points at low permeabilities with a few points at the higher end. We can tidy up our plot by converting the y-axis to a logarithmic scale and adding a grid. This generates the poro-perm crossplot that we are familiar with in petrophysics.

core_data.plot(kind="scatter", x="CPOR", y="CKH")
plt.yscale('log')
plt.grid(True)

We can agree that this looks much better now. We can further tidy up the plot by:

Switching to matplotlib for making our plot

Adding labels by using ax.set_ylabel() and ax.set_xlabel()

Setting ranges for the axes using ax.axis([0, 40, 0.01, 100000])

Making the y-axis values easier to read by converting the exponential notation to full numbers. This is done using FuncFormatter from matplotlib and setting up a simple for loop

from matplotlib.ticker import FuncFormatter

fig, ax = plt.subplots()
ax.axis([0, 40, 0.01, 100000])
ax.plot(core_data['CPOR'], core_data['CKHG'], 'bo')
ax.set_yscale('log')
ax.grid(True)
ax.set_ylabel('Core Perm (mD)')
ax.set_xlabel('Core Porosity (%)')

#Format the axes so that they show whole numbers
for axis in [ax.yaxis, ax.xaxis]:
    formatter = FuncFormatter(lambda y, _: '{:.16g}'.format(y))
    axis.set_major_formatter(formatter)

Isn't that much better than the previous plot? We can now use this nicer looking plot within a petrophysical report or pass it to other subsurface people within the team.

There are two ways that we can carry out a poro-perm regression on our data:

Using numpy's polyfit function

Applying a regression using the statsmodels library

Before we explore each option, we first have to create a copy of our dataframe and remove the null rows. Carrying out the regression with NaN values can result in errors.

poro_perm = core_data[['CPOR', 'CKHG']].copy()

Once it has been copied, we can then drop the NaNs using dropna(). Including the argument inplace=True tells the method to replace the values in place rather than returning a copy of the dataframe.

poro_perm.dropna(inplace=True)

The simplest option for applying a linear regression through the data is using the polynomial fit function from numpy. This returns an array of co-efficients. As we want a linear fit, we can specify a value of 1 at the end of the function. This tells the function we want a first degree polynomial. Also, as we are dealing with permeability data on a logarithmic scale, we need to take the logarithm of the values using np.log10. Note that we fit to the NaN-free poro_perm dataframe created above:

poro_perm_polyfit = np.polyfit(poro_perm['CPOR'], np.log10(poro_perm['CKHG']), 1)

When we check the value of poro_perm_polyfit, we get back:

array([ 0.17428705, -1.55607816])

The first value is our slope and the second is our y-intercept. polyfit doesn't give us much more information about the regression, such as the co-efficient of determination (R-squared). For this we need to look at another model.
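As a quick aside, nothing stops us from computing R-squared by hand from the polyfit result. Below is a minimal sketch using only the variables defined above:

#Predicted log-permeability from the fitted line
predicted = poro_perm_polyfit[0] * poro_perm['CPOR'] + poro_perm_polyfit[1]
actual = np.log10(poro_perm['CKHG'])

#R-squared = 1 - (residual sum of squares / total sum of squares)
ss_res = ((actual - predicted) ** 2).sum()
ss_tot = ((actual - actual.mean()) ** 2).sum()
print(1 - ss_res / ss_tot)

That said, the statsmodels summary below gives us this statistic, and much more, for free.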
The second option for generating a poro-perm linear regression is to use the Ordinary Least Squares (OLS) method from the statsmodels library.

First we need to import the library and create our data. We will assign our x value as Core Porosity (CPOR) and our y value as the log10 of Core Permeability (CKHG). The y value is the one we are aiming to predict with our model. With the statsmodels OLS, we need to add a constant column to our data, as an intercept is not included by default unless we are using formulas. See here for the documentation.

import statsmodels.api as sm

x = core_data['CPOR']
x = sm.add_constant(x)
y = np.log10(core_data['CKHG'])

We can confirm the values of x by calling upon it; in Jupyter it will return a dataframe with two columns: the added constant (const) and CPOR.

The next step is to build and fit our model. With the OLS method, we can supply an argument for missing values. In this example I have set it to 'drop', which will remove the missing values from the data.

model = sm.OLS(y, x, missing='drop')
results = model.fit()

Once we have fitted the model, we can view a full summary of the regression by calling upon .summary():

results.summary()

This returns a nicely formatted table that includes key statistics such as the R-squared and standard error.

We can also obtain the key parameters, slope and intercept, by calling upon results.params. If we want to access one of the parameters, for example the slope for the CPOR value, we can access it like a list:

results.params[1]

Which returns a value of: 0.174287

We can then piece together the equation we will use to predict our permeability:

Permeability = 10**(0.174287 * CPOR - 1.556078)

where the intercept, results.params[0], matches the polyfit value obtained earlier.

Finally, we can take our equation and apply it to our scatter plot using the line:

ax.semilogy(core_data['CPOR'], 10**(results.params[1] * core_data['CPOR'] + results.params[0]), 'r-')

The whole code for the scatter plot:

from matplotlib.ticker import FuncFormatter

fig, ax = plt.subplots()
ax.axis([0, 30, 0.01, 100000])
ax.semilogy(core_data['CPOR'], core_data['CKHG'], 'bo')
ax.grid(True)
ax.set_ylabel('Core Perm (mD)')
ax.set_xlabel('Core Porosity (%)')
ax.semilogy(core_data['CPOR'], 10**(results.params[1] * core_data['CPOR'] + results.params[0]), 'r-')

#Format the axes so that they show whole numbers
for axis in [ax.yaxis, ax.xaxis]:
    formatter = FuncFormatter(lambda y, _: '{:.16g}'.format(y))
    axis.set_major_formatter(formatter)

Now that we have our equation and we are happy with the results, we can apply this to our log porosity to generate a continuous permeability curve. First, we need to load in the well log data for this well:

well = pd.read_csv('Data/15_9-19.csv', skiprows=[1])

And then apply our derived formula to the PHIT curve (multiplying by 100 to convert the decimal porosity to percent, matching the core data units):

well['PERM'] = 10**(results.params[1] * (well['PHIT']*100) + results.params[0])

When we check the well header using well.head(), we can see our newly created curve at the end of the dataframe.
The final step in our workflow is to plot the PHIT curve and the predicted permeability curve on a log plot alongside the core measurements:

fig, ax = plt.subplots(figsize=(5,10))
ax1 = plt.subplot2grid((1,2), (0,0), rowspan=1, colspan=1)
ax2 = plt.subplot2grid((1,2), (0,1), rowspan=1, colspan=1, sharey=ax1)

# Porosity track
ax1.plot(core_data["CPOR"]/100, core_data['DEPTH'], color="black", marker='.', linewidth=0)
ax1.plot(well['PHIT'], well['DEPTH'], color='blue', linewidth=0.5)
ax1.set_xlabel("Porosity")
ax1.set_xlim(0.5, 0)
ax1.xaxis.label.set_color("black")
ax1.tick_params(axis='x', colors="black")
ax1.spines["top"].set_edgecolor("black")
ax1.set_xticks([0.5, 0.25, 0])

# Permeability track
ax2.plot(core_data["CKHG"], core_data['DEPTH'], color="black", marker='.', linewidth=0)
ax2.plot(well['PERM'], well['DEPTH'], color='blue', linewidth=0.5)
ax2.set_xlabel("Permeability")
ax2.set_xlim(0.1, 100000)
ax2.xaxis.label.set_color("black")
ax2.tick_params(axis='x', colors="black")
ax2.spines["top"].set_edgecolor("black")
ax2.set_xticks([0.01, 1, 10, 100, 10000])
ax2.semilogx()

# Common functions for setting up the plot can be extracted into
# a for loop. This saves repeating code.
for ax in [ax1, ax2]:
    ax.set_ylim(4025, 3825)
    ax.grid(which='major', color='lightgrey', linestyle='-')
    ax.xaxis.set_ticks_position("top")
    ax.xaxis.set_label_position("top")

# Removes the y axis labels on the second track
for ax in [ax2]:
    plt.setp(ax.get_yticklabels(), visible=False)

plt.tight_layout()
fig.subplots_adjust(wspace=0.3)

This generates a simple two-track log plot with our core measurements represented by black dots and our continuous curves by blue lines.

As seen in track 2, our predicted permeability from a simple linear regression tracks the core permeability reasonably well. However, between about 3860 and 3875, our prediction reads lower than the actual core measurements. Also, it becomes harder to visualise the correlation over the lower interval due to the more thinly bedded nature of the geology.

In this walkthrough, we have covered what core porosity and permeability are and how we can predict the latter from the former to generate an equation that can be used to predict a continuous curve. This can subsequently be used in geological models or reservoir simulations.

As noted at the end, there are a few small mismatches. These would benefit from further investigation and potentially further modelling, either by refining the regression or by applying another machine learning model.
HTML/XHTML Standard Fonts
Fonts are platform-specific. If you are using a different OS, the look and feel of any web page will differ accordingly. Here we are giving a list of fonts which are available to various operating systems.

The HTML <FONT> tag is deprecated from version 4.0 onwards, and now all fonts are set by using CSS. Here is the simple syntax for setting the font of the body of a web page.

body { font-family: "new century schoolbook"; }

or

<body style = "font-family:new century schoolbook;" >

You can have more information on Microsoft Fonts at http://www.microsoft.com/typography/fonts.

You can check example fonts here − Microsoft Fonts Examples

Following is the list of fonts supported by Macintosh System 7 and higher versions.

You can check example fonts here − Mac Fonts Examples

Following is the list of fonts supported by most Unix System variants.

You can check example fonts here − Unix Fonts Examples
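Because any given font may be missing on a visitor's platform, a common practice is to list several candidates ending with a generic family, so the browser can fall back gracefully. Below is a minimal sketch of this idea; the particular fonts chosen are only illustrative.

<!DOCTYPE html>
<html>
   <head>
      <style>
         /* The browser tries each font from left to right and uses the
            first one installed; "serif" is the generic fallback family. */
         body { font-family: Georgia, "New Century Schoolbook", "Times New Roman", serif; }
      </style>
   </head>
   <body>
      <p>This paragraph is rendered with the first available font in the stack.</p>
   </body>
</html>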
5 Ways to Keep Your Ubuntu System Clean - GeeksforGeeks
19 Feb, 2020

Linux has its own data management system that cleans your computer under the hood, but sometimes we still need to know how to manually initiate disk cleaning and control processes to enhance the computer's performance. Below are 5 ways to keep the Ubuntu system clean.

1. Uninstalling and Removing Unnecessary Applications: To uninstall an application you can use a simple command.

$ sudo apt remove [application name..]..

Press "Y" and Enter. If you don't want to use the command line, you can use the Ubuntu Software manager. Just click on the remove button and the application will be removed.

2. Removing Unnecessary Packages and Dependencies: After removing certain apps and packages, some data is left behind which needs to be cleaned. For that, just use the command.

$ sudo apt autoremove

3. Removing Old Kernels From System: To list all kernels you can use the command

$ sudo dpkg --list 'linux-image*'

And you can remove any of these kernels with the command

$ sudo apt-get remove linux-image-VERSION

Choose any version you want to remove and press Enter.

4. Cleaning Apt Cache: APT keeps a cache of installed packages in your /var/cache/apt/archives directory, even after those apps have been uninstalled. To check the amount of APT cache on your system, use the command.

$ sudo du -sh /var/cache/apt

To remove this cache use

$ sudo apt-get clean

and this will clean the APT cache.

5. Cleaning Thumbnail Cache from System: To check the size of the thumbnail cache in the system you can use the command.

$ du -sh ~/.cache/thumbnails

To clean this cache you can use the command

$ sudo rm -rf ~/.cache/thumbnails/*
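If you run these steps often, they can be combined into one small script. The following is a sketch of my own, not from the original article, using the same commands as the steps above; review each line before running it on your system.

#!/bin/bash
# clean-ubuntu.sh (hypothetical helper): run the routine cleanup steps in one go
sudo apt autoremove -y               # remove unused packages and dependencies
sudo apt-get clean                   # empty the APT package cache
sudo rm -rf ~/.cache/thumbnails/*    # clear the thumbnail cache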
Sum of prime numbers between a range - JavaScript
We are required to write a JavaScript function that takes in two numbers, say a and b, and returns the sum of all the prime numbers that fall between a and b. We should include a and b if they are prime as well.

Following is the code − the primeBetween() helper collects the primes between the two numbers, and summing them is then a one-line reduce, shown after the output.

const num1 = 45;
const num2 = 345;
const isPrime = n => {
   if (n === 1){
      return false;
   } else if (n === 2){
      return true;
   } else {
      // trial division: check every candidate divisor below n
      for(let x = 2; x < n; x++){
         if(n % x === 0){
            return false;
         }
      }
      return true;
   }
};
const primeBetween = (a, b) => {
   const res = [];
   while(a <= b){
      if(isPrime(a)){
         res.push(a);
      }
      a++;
   }
   return res;
};
console.log(primeBetween(num1, num2));

Following is the output in the console −

[
   47, 53, 59, 61, 67, 71, 73, 79, 83,
   89, 97, 101, 103, 107, 109, 113, 127, 131,
   137, 139, 149, 151, 157, 163, 167, 173, 179,
   181, 191, 193, 197, 199, 211, 223, 227, 229,
   233, 239, 241, 251, 257, 263, 269, 271, 277,
   281, 283, 293, 307, 311, 313, 317, 331, 337
]
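To obtain the sum itself, as the problem statement asks, we can reduce the array returned above. A short sketch building on the same code:

const primes = primeBetween(num1, num2);
// Array.prototype.reduce folds the list of primes into a single total
const sum = primes.reduce((acc, p) => acc + p, 0);
console.log(sum);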
JavaScript example to filter an array depending on multiple checkbox conditions.
Following is the code to filter an array depending on multiple checkbox conditions using JavaScript. Note that filterArr() rebuilds the result from the full array on every click, so unticking a checkbox removes its filter again −

 Live Demo

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Document</title>
<style>
   body {
      font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
   }
   .result, .sample {
      font-size: 18px;
      font-weight: 500;
      color: rebeccapurple;
   }
   .result {
      color: red;
   }
</style>
</head>
<body>
<h1>Filter an array depending on multiple checkbox conditions</h1>
<div class="sample">[22,10,50,30,90,33,80,75,33,99,150,105]</div>
<div class="result"></div>
<br />
<input type="checkbox" class="check" onclick="filterArr()" />Number should be greater than 50<br />
<input type="checkbox" class="check" onclick="filterArr()" />Number should divide by 5<br />
<input type="checkbox" class="check" onclick="filterArr()" />Number should divide by 3<br />
<h3>tick the above checkbox to apply filter to the array above</h3>
<script>
   let resEle = document.querySelector(".result");
   let checkEle = document.querySelectorAll(".check");
   let arr = [22, 10, 50, 30, 90, 33, 80, 75, 33, 99, 150, 105];
   function filterArr() {
      // Start from the full array on every click so that removing a
      // tick also removes the corresponding filter
      let resArr = arr;
      checkEle.forEach((item, index) => {
         if (item.checked && index == 0) {
            resArr = resArr.filter((num) => num > 50);
         } else if (item.checked && index == 1) {
            resArr = resArr.filter((num) => num % 5 == 0);
         } else if (item.checked && index == 2) {
            resArr = resArr.filter((num) => num % 3 == 0);
         }
      });
      resEle.innerHTML = resArr;
   }
</script>
</body>
</html>

The above code will produce the following output −

On ticking some of the checkboxes −
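One design alternative, not in the original code, is to pair each checkbox with a predicate function, so adding a new filter only means adding one entry to an array. A brief sketch reusing the variables above (the names predicates and filterArrScalable are my own):

// Hypothetical variant: index i of this array matches checkbox i
const predicates = [
   (num) => num > 50,
   (num) => num % 5 === 0,
   (num) => num % 3 === 0,
];
function filterArrScalable() {
   let resArr = arr;
   checkEle.forEach((item, index) => {
      if (item.checked) resArr = resArr.filter(predicates[index]);
   });
   resEle.innerHTML = resArr;
}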
isalpha() and isdigit() in C/C++
The function isalpha() is used to check whether a character is an alphabet or not. This function is declared in the "ctype.h" header file. It returns a non-zero integer value if the argument is an alphabet; otherwise, it returns zero.

Here is the syntax of isalpha() in C language,

int isalpha(int value);

Here,

value − This is a single argument of integer type.

Here is an example of isalpha() in C language −

 Live Demo

#include<stdio.h>
#include<ctype.h>

int main() {
   char val1 = 's';
   char val2 = '8';

   if(isalpha(val1))
      printf("The character is an alphabet\n");
   else
      printf("The character is not an alphabet\n");

   if(isalpha(val2))
      printf("The character is an alphabet\n");
   else
      printf("The character is not an alphabet");

   return 0;
}

Here is the output

The character is an alphabet
The character is not an alphabet

The function isdigit() is used to check whether a character is a numeric character or not. This function is declared in the "ctype.h" header file. It returns a non-zero integer value if the argument is a digit; otherwise, it returns zero.

Here is the syntax of isdigit() in C language,

int isdigit(int value);

Here,

value − This is a single argument of integer type.

Here is an example of isdigit() in C language,

 Live Demo

#include<stdio.h>
#include<ctype.h>

int main() {
   char val1 = 's';
   char val2 = '8';

   if(isdigit(val1))
      printf("The character is a digit\n");
   else
      printf("The character is not a digit\n");

   if(isdigit(val2))
      printf("The character is a digit\n");
   else
      printf("The character is not a digit");

   return 0;
}

Here is the output

The character is not a digit
The character is a digit
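Both functions are often used together to classify the characters of a string. Here is a short sketch of that pattern; the sample string is only illustrative.

#include<stdio.h>
#include<ctype.h>

int main() {
   const char *str = "abc123!";
   int letters = 0, digits = 0;

   // Classify each character of the string; the cast to unsigned char
   // keeps the argument within the range the ctype functions expect
   for (int i = 0; str[i] != '\0'; i++) {
      if (isalpha((unsigned char)str[i]))
         letters++;
      else if (isdigit((unsigned char)str[i]))
         digits++;
   }
   printf("letters = %d, digits = %d\n", letters, digits);
   return 0;
}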
A Quicker Way to Download Kaggle Datasets in Google Colab | by Nabanita Roy | Towards Data Science
Kaggle is one of the best practice fields for Data Scientists, and many of us like to use Google Colab to play around with datasets due to the availability of better data processing infrastructure. In this article, I have walked through three simple steps to download any dataset seamlessly from Kaggle with a simple configuration that would

not require you to download/upload kaggle.json again and again

make re-running jupyter notebooks smoother, even on another machine with access to your Google account and drive.

Log in to Kaggle and access your account. Scroll down to the API section:

Click on 'Create New API Token' and download the kaggle.json file which contains your API token.

Make note of the path to this file. Let's call this your/path/to/kaggle.json.

Then open a new notebook in Google Colab and mount your drive by clicking on the icon as shown in the picture below.

This step is important since you can alternatively use code to mount your drive and upload the file from your laptop, but every time you re-run the notebook, you'd have to scroll up and browse for your kaggle.json from your machine.

!pip install -q kaggle
!pip install -q kaggle-cli
!mkdir -p ~/.kaggle
!cp "your/path/to/kaggle.json" ~/.kaggle/
!cat ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json

# For competition datasets
!kaggle competitions download -c dataset_name -p download_to_folder

# For other datasets
!kaggle datasets download -d user/dataset_name -p download_to_folder

Replace:

your/path/to/kaggle.json with your path to kaggle.json on drive. You can explore your drive on the left and copy the path. I have double quotes around it to avoid issues with blank spaces in folder names.

download_to_folder with the folder where you'd like to store the downloaded dataset

dataset_name and/or user/dataset_name as follows:

To download a competition Dataset: You can easily get hold of the dataset_name to use in the URL. For example, if you want to download the Fake News dataset, select just fake-news from the URL:

Also, make sure to have agreed to the competition rules:

Then, your final script would look like -

!pip install -q kaggle
!pip install -q kaggle-cli
!mkdir -p ~/.kaggle
!cp "/content/drive/My Drive/your_path_to_kaggle.json" ~/.kaggle/
!cat ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json

# For competition datasets
!kaggle competitions download -c fake-news -p Dataset

If you do not accept the competition rules then you'd encounter the 403 Forbidden Error.

To download any other Dataset: Replace user_name/dataset_name with the Kaggle username and the dataset name. This can be easily extracted from the URL. For example, if you want to download US Election 2020 Tweets, you could simply copy the part after kaggle.com -

... and plug it into the script.

Therefore your final script would look like -

!pip install -q kaggle
!pip install -q kaggle-cli
!mkdir -p ~/.kaggle
!cp "/content/drive/My Drive/your_path_to_kaggle.json" ~/.kaggle/
!cat ~/.kaggle/kaggle.json
!chmod 600 ~/.kaggle/kaggle.json

# For other datasets
!kaggle datasets download -d manchunhui/us-election-2020-tweets -p Dataset

Finally, to re-run notebooks without having to scroll up, you could comment out the entire script, including the code for unzipping datasets (a short unzip sketch follows below). If you are logged into your Google account and have access to your drive, you can run your code on any machine by directly downloading data without worrying about kaggle.json configs.

There are of course other ways of downloading Kaggle datasets, but this one works the best for me. Hope this helps!

Thanks for visiting!
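As promised, a minimal unzip sketch. The kaggle CLI saves each download as a zip archive, normally named after the competition or dataset slug, so for the election dataset above something like the following should work; adjust the archive name if yours differs.

# Unpack the downloaded archive into the same folder (sketch)
!unzip -q Dataset/us-election-2020-tweets.zip -d Dataset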
My Links: Medium | LinkedIn | GitHub
Understand Kaiming Initialization and Implementation Detail in PyTorch | by Xu LIANG | Towards Data Science
If you create weight implicitly by creating a linear layer, you should set mode='fan_in'.

linear = torch.nn.Linear(node_in, node_out)
init.kaiming_normal_(linear.weight, mode='fan_in')
t = relu(linear(x_valid))

If you create weight explicitly by creating a random matrix, you should set mode='fan_out'.

w1 = torch.randn(node_in, node_out)
init.kaiming_normal_(w1, mode='fan_out')
b1 = torch.randn(node_out)
t = relu(linear(x_valid, w1, b1))

The content is structured as follows.

Weight Initialization Matters!
What is Kaiming initialization?
Why Kaiming initialization works?
Understand fan_in and fan_out mode in Pytorch implementation

Initialization is a process to create weight. In the below code snippet, we create a weight w1 randomly with the size of (784, 50).

torch.randn(*sizes) returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). The shape of the tensor is defined by the variable argument sizes.

And this weight will be updated during the training phase.

# random init
w1 = torch.randn(784, 50)
b1 = torch.randn(50)

def linear(x, w, b):
    return x@w + b

t1 = linear(x_valid, w1, b1)
print(t1.mean(), t1.std())

############# output ##############
tensor(3.5744) tensor(28.4110)

You may wonder why we need to care about initialization if the weight can be updated during the training phase. No matter how we initialize the weight, it will be updated "well" eventually.

But the reality is not so sweet. If we randomly initialize the weight, it will cause two problems, the vanishing gradient problem and the exploding gradient problem.

Vanishing gradient problem means weights vanish to 0, because these weights are multiplied along with the layers in the backpropagation phase. If we initialize weights very small (<1), the gradients tend to get smaller and smaller as we go backward through the hidden layers during backpropagation. Neurons in the earlier layers learn much more slowly than neurons in later layers. This causes minor weight updates.

Exploding gradient problem means weights explode to infinity (NaN), because these weights are multiplied along with the layers in the backpropagation phase. If we initialize weights very large (>1), the gradients tend to get larger and larger as we go backward through the hidden layers during backpropagation. Neurons in the earlier layers update in huge steps, W = W - α * dW, and training can oscillate or diverge.

Kaiming et al. derived a sound initialization method by cautiously modeling the non-linearity of ReLUs, which enables extremely deep models (>30 layers) to converge.

Below is the Kaiming initialization function as implemented in PyTorch: the weight is sampled from a normal distribution with mean 0 and standard deviation std = gain / sqrt(fan_mode), where gain = sqrt(2 / (1 + a**2)). For ReLU (a = 0) this reduces to std = sqrt(2 / fan_mode).

a: the negative slope of the rectifier used after this layer (0 for ReLU by default)

fan_in: the number of input dimensions. If we create a (784, 50) weight, the fan_in is 784. fan_in is used in the feedforward phase. If we set the mode as fan_out, the fan_out is 50. fan_out is used in the backpropagation phase. I will explain the two modes in detail later.

We compare the random initialization and Kaiming initialization to show the effectiveness of Kaiming initialization.
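To make that formula concrete, here is a tiny sketch of my own; the helper name kaiming_std is not from PyTorch:

import math

def kaiming_std(fan, a=0.0):
    # std of the Kaiming normal distribution: gain / sqrt(fan),
    # where gain = sqrt(2 / (1 + a**2)); a = 0 corresponds to ReLU
    gain = math.sqrt(2.0 / (1 + a ** 2))
    return gain / math.sqrt(fan)

print(kaiming_std(784))  # about 0.0505, i.e. sqrt(2/784)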
Random Initialization

# random init
w1 = torch.randn(784, 50)
b1 = torch.randn(50)
w2 = torch.randn(50, 10)
b2 = torch.randn(10)
w3 = torch.randn(10, 1)
b3 = torch.randn(1)

def linear(x, w, b):
    return x@w + b

def relu(x):
    return x.clamp_min(0.)

t1 = relu(linear(x_valid, w1, b1))
t2 = relu(linear(t1, w2, b2))
t3 = relu(linear(t2, w3, b3))
print(t1.mean(), t1.std())
print(t2.mean(), t2.std())
print(t3.mean(), t3.std())

############# output ##############
tensor(13.0542) tensor(17.9457)
tensor(93.5488) tensor(113.1659)
tensor(336.6660) tensor(208.7496)

We initialize the weight with a normal distribution with mean 0 and variance 1. The ideal distribution of the activations after ReLU should have a slightly incremented mean layer by layer and a variance close to 1. But here the distribution changes a lot after only a few layers in the feedforward phase.

Why should the mean increase slightly layer by layer? Because we use ReLU as the activation function. ReLU returns the input value if it is bigger than 0 and returns 0 if the input value is less than 0.

if input < 0:
    return 0
else:
    return input

After ReLU, all negative values become 0, so the mean becomes larger as the layer gets deeper.

Kaiming Initialization

# kaiming init
node_in = 784
node_out = 50

w1 = torch.randn(784, 50) * math.sqrt(2/784)
b1 = torch.randn(50)
w2 = torch.randn(50, 10) * math.sqrt(2/50)
b2 = torch.randn(10)
w3 = torch.randn(10, 1) * math.sqrt(2/10)
b3 = torch.randn(1)

def linear(x, w, b):
    return x@w + b

def relu(x):
    return x.clamp_min(0.)

t1 = relu(linear(x_valid, w1, b1))
t2 = relu(linear(t1, w2, b2))
t3 = relu(linear(t2, w3, b3))
print(t1.mean(), t1.std())
print(t2.mean(), t2.std())
print(t3.mean(), t3.std())

############# output ##############
tensor(0.7418) tensor(1.0053)
tensor(1.3356) tensor(1.4079)
tensor(3.2972) tensor(1.1409)

We initialize the weight with a normal distribution with mean 0 and standard deviation sqrt(2/fan_in). The ideal distribution of the activations after ReLU should have a slightly incremented mean layer by layer and a variance close to 1, and we can see the output is close to what we expected: the mean increases slowly and the std stays close to 1 in the feedforward phase. Such stability will avoid the vanishing gradient problem and the exploding gradient problem in the backpropagation phase. Kaiming initialization shows better stability than random initialization.

nn.init.kaiming_normal_() will return a tensor that has values sampled from mean 0 and this std. There are two ways to do it.

One way is to create weight implicitly by creating a linear layer. We set mode='fan_in' to indicate that we use node_in to calculate the std.

from torch.nn import init

# linear layer implementation
node_in, node_out = 784, 50
layer = torch.nn.Linear(node_in, node_out)
init.kaiming_normal_(layer.weight, mode='fan_in')
t = relu(layer(x_valid))
print(t.mean(), t.std())

############# output ##############
tensor(0.4974, grad_fn=<MeanBackward0>) tensor(0.8027, grad_fn=<StdBackward0>)

Another way is to create weight explicitly by creating a random matrix, where you should set mode='fan_out'.

def linear(x, w, b):
    return x@w + b

# weight matrix implementation
node_in, node_out = 784, 50
w1 = torch.randn(node_in, node_out)
init.kaiming_normal_(w1, mode='fan_out')
b1 = torch.randn(node_out)
t = relu(linear(x_valid, w1, b1))
print(t.mean(), t.std())

############# output ##############
tensor(0.6424) tensor(0.9772)

Two implementation methods are both right. The mean is close to 0.5 and the std is close to 1. But wait a minute, do you find something strange? Why is the mode different?
According to the document, choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backward pass. We can write this as below.

node_in, node_out = 784, 50

# fan_in mode
W = torch.randn(node_in, node_out) * math.sqrt(2 / node_in)

# fan_out mode
W = torch.randn(node_in, node_out) * math.sqrt(2 / node_out)

In the linear layer implementation, we set mode='fan_in'. Yes, this is the feedforward phase, so we should set mode='fan_in'. Nothing wrong. But why do we set the mode as fan_out in the weight matrix implementation?

The reason lies in the source code of nn.init.kaiming_normal_():

def _calculate_fan_in_and_fan_out(tensor):
    dimensions = tensor.dim()
    if dimensions < 2:
        raise ValueError("Fan in and fan out can not be computed for tensor with fewer than 2 dimensions")

    if dimensions == 2:  # Linear
        fan_in = tensor.size(1)
        fan_out = tensor.size(0)
    else:
        num_input_fmaps = tensor.size(1)
        num_output_fmaps = tensor.size(0)
        receptive_field_size = 1
        if tensor.dim() > 2:
            receptive_field_size = tensor[0][0].numel()
        fan_in = num_input_fmaps * receptive_field_size
        fan_out = num_output_fmaps * receptive_field_size

    return fan_in, fan_out

This is how the source code determines fan_in and fan_out. The tensor is our w1 with the size of (784, 50), so fan_in = 50 and fan_out = 784. When we set the mode as fan_out in the weight matrix implementation, init.kaiming_normal_() actually calculates as below.

node_in, node_out = 784, 50
W = torch.randn(node_in, node_out)
init.kaiming_normal_(W, mode='fan_out')

# what init.kaiming_normal_() actually does
# fan_in = 50
# fan_out = 784
W = W * math.sqrt(2 / 784)

Ok, makes sense. But how do we explain using fan_in in the linear layer implementation? When we use linear to create weight implicitly, the weight is transposed implicitly. Here is the source code of torch.nn.functional.linear.

def linear(input, weight, bias=None):
    # type: (Tensor, Tensor, Optional[Tensor]) -> Tensor
    r"""
    Applies a linear transformation to the incoming data: :math:`y = xA^T + b`.

    Shape:
        - Input: :math:`(N, *, in\_features)` where `*` means any number of
          additional dimensions
        - Weight: :math:`(out\_features, in\_features)`
        - Bias: :math:`(out\_features)`
        - Output: :math:`(N, *, out\_features)`
    """
    if input.dim() == 2 and bias is not None:
        # fused op is marginally faster
        ret = torch.addmm(bias, input, weight.t())
    else:
        output = input.matmul(weight.t())
        if bias is not None:
            output += bias
        ret = output
    return ret

The weight is initialized with the size of (out_features, in_features). For example, if we input the size (784, 50), the size of the weight is actually (50, 784).

torch.nn.Linear(784, 50).weight.shape

############# output ##############
torch.Size([50, 784])

That's why linear needs to first transpose the weight and then do the matmul operation.

if input.dim() == 2 and bias is not None:
    # fused op is marginally faster
    ret = torch.addmm(bias, input, weight.t())
else:
    output = input.matmul(weight.t())

Because the weight in the linear layer has the size of (50, 784), init.kaiming_normal_() with mode='fan_in' actually calculates as below.

node_in, node_out = 784, 50
layer = torch.nn.Linear(node_in, node_out)
init.kaiming_normal_(layer.weight, mode='fan_in')

# the size of layer.weight is (50, 784)
# what init.kaiming_normal_() actually does
# fan_in = 784
# fan_out = 50
W = W * math.sqrt(2 / 784)   # with W standing for layer.weight

In this post, I first talked about why initialization matters and what Kaiming initialization is. And I broke down how to use PyTorch to implement it. Hope this post is helpful.
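To double-check this equivalence empirically, we can compare the standard deviations produced by both routes against the theoretical value. A small sketch of my own:

import math
import torch
from torch.nn import init

node_in, node_out = 784, 50

# Route 1: implicit weight inside a linear layer, shape (50, 784)
layer = torch.nn.Linear(node_in, node_out)
init.kaiming_normal_(layer.weight, mode='fan_in')    # uses fan_in = 784

# Route 2: explicit weight matrix, shape (784, 50)
w = torch.randn(node_in, node_out)
init.kaiming_normal_(w, mode='fan_out')              # uses fan_out = 784

# Both stds should be close to sqrt(2/784), about 0.0505
print(layer.weight.std().item(), w.std().item(), math.sqrt(2 / node_in))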
Leave a comment if you have any advice. The full code is in this snippet.

Check out my other posts on Medium with a categorized view!
GitHub: BrambleXu
LinkedIn: Xu Liang
Blog: BrambleXu

Why cautiously initializing deep neural networks matters?
Deep Learning Best Practices (1) — Weight Initialization
Kaiming Initialization paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
A Gentle Introduction to the Rectified Linear Unit (ReLU)
Fast.ai's course Deep Learning for Coders, lesson 8
[ { "code": null, "e": 262, "s": 171, "text": "If you create weight implicitly by creating a linear layer, you should set modle='fan_in'." }, { "code": null, "e": 381, "s": 262, "text": "linear = torch.nn.Linear(node_in, node_out)init.kaiming_normal_(linear.weight, mode=’fan_in’)t = relu(linear(x_valid))" }, { "code": null, "e": 474, "s": 381, "text": "If you create weight explicitly by creating a random matrix, you should set modle='fan_out'." }, { "code": null, "e": 609, "s": 474, "text": "w1 = torch.randn(node_in, node_out)init.kaiming_normal_(w1, mode=’fan_out’)b1 = torch.randn(node_out)t = relu(linear(x_valid, w1, b1))" }, { "code": null, "e": 647, "s": 609, "text": "The content is structured as follows." }, { "code": null, "e": 802, "s": 647, "text": "Weight Initialization Matters!What is Kaiming initialization?Why Kaiming initialization works?Understand fan_in and fan_out mode in Pytorch implementation" }, { "code": null, "e": 833, "s": 802, "text": "Weight Initialization Matters!" }, { "code": null, "e": 865, "s": 833, "text": "What is Kaiming initialization?" }, { "code": null, "e": 899, "s": 865, "text": "Why Kaiming initialization works?" }, { "code": null, "e": 960, "s": 899, "text": "Understand fan_in and fan_out mode in Pytorch implementation" }, { "code": null, "e": 1091, "s": 960, "text": "Initialization is a process to create weight. In the below code snippet, we create a weight w1 randomly with the size of(784, 50)." }, { "code": null, "e": 1324, "s": 1091, "text": "torhc.randn(*sizes) returns a tensor filled with random numbers from a normal distribution with mean 0 and variance 1 (also called the standard normal distribution). The shape of the tensor is defined by the variable argument sizes." }, { "code": null, "e": 1383, "s": 1324, "text": "And this weight will be updated during the training phase." }, { "code": null, "e": 1600, "s": 1383, "text": "# random initw1 = torch.randn(784, 50) b1 = torch.randn(50)def linear(x, w, b): return x@w + bt1 = linear(x_valid, w1, b1)print(t1.mean(), t1.std())############# output ##############tensor(3.5744) tensor(28.4110)" }, { "code": null, "e": 1787, "s": 1600, "text": "You may wonder why need we care about initialization if the weight can be updated during the training phase. No matter how to initialize the weight, it will be updated “well” eventually." }, { "code": null, "e": 1947, "s": 1787, "text": "But the reality is not so sweet. If we random initialize the weight, it will cause two problems, the vanishing gradient problem and exploding gradient problem." }, { "code": null, "e": 2355, "s": 1947, "text": "Vanishing gradient problem means weights vanish to 0. Because these weights are multiplied along with the layers in the backpropagation phase. If we initialize weights very small(<1), the gradients tend to get smaller and smaller as we go backward with hidden layers during backpropagation. Neurons in the earlier layers learn much more slowly than neurons in later layers. This causes minor weight updates." }, { "code": null, "e": 2764, "s": 2355, "text": "Exploding gradient problem means weights explode to infinity(NaN). Because these weights are multiplied along with the layers in the backpropagation phase. If we initialize weights very large(>1), the gradients tend to get larger and larger as we go backward with hidden layers during backpropagation. Neurons in the earlier layers update in huge steps, W = W — ⍺ * dW, and the downward moment will increase." }, { "code": null, "e": 2970, "s": 2764, "text": "Kaiming et al. 
derived a sound initialization method by cautiously modeling non-linearity of ReLUs, which makes extremely deep models (>30 layers) to converge. Below is the Kaiming initialization function." }, { "code": null, "e": 3055, "s": 2970, "text": "a: the negative slope of the rectifier used after this layer (0 for ReLU by default)" }, { "code": null, "e": 3312, "s": 3055, "text": "fan_in: the number of input dimension. If we create a (784, 50), the fan_in is 784. fan_in is used in the feedforward phase. If we set it as fan_out, the fan_out is 50. fan_out is used in the backpropagation phase. I will explain two modes in detail later." }, { "code": null, "e": 3429, "s": 3312, "text": "We compare the random initialization and Kaiming initialization to show the effectiveness of Kaiming initialization." }, { "code": null, "e": 3451, "s": 3429, "text": "Random Initialization" }, { "code": null, "e": 3976, "s": 3451, "text": "# random initw1 = torch.randn(784, 50) b1 = torch.randn(50)w2 = torch.randn(50, 10) b2 = torch.randn(10)w3 = torch.randn(10, 1) b3 = torch.randn(1)def linear(x, w, b): return x@w + bdef relu(x): return x.clamp_min(0.)t1 = relu(linear(x_valid, w1, b1))t2 = relu(linear(t1, w2, b2))t3 = relu(linear(t2, w3, b3))print(t1.mean(), t1.std())print(t2.mean(), t2.std())print(t3.mean(), t3.std())############# output ##############tensor(13.0542) tensor(17.9457)tensor(93.5488) tensor(113.1659)tensor(336.6660) tensor(208.7496)" }, { "code": null, "e": 4257, "s": 3976, "text": "We initialize weight with a normal distribution with mean 0 and variance 1, and the ideal distribution of weight after ReLU should have slightly incremented mean layer by layer and variance close to 1. But the distribution changes a lot after some layers in the feedforward phase." }, { "code": null, "e": 4327, "s": 4257, "text": "Why the mean of weight should be slightly incremented layer by layer?" }, { "code": null, "e": 4501, "s": 4327, "text": "Because we use the ReLU as the activation function. ReLU will return the value provided if input value is bigger than 0 and return value 0 if the input value is less than 0." }, { "code": null, "e": 4548, "s": 4501, "text": "if input < 0: return 0else: return input" }, { "code": null, "e": 4649, "s": 4548, "text": "After ReLU, all negative values become 0. The mean will become larger when the layer becomes deeper." }, { "code": null, "e": 4672, "s": 4649, "text": "Kaiming Initialization" }, { "code": null, "e": 5280, "s": 4672, "text": "# kaiming initnode_in = 784node_out = 50# random initw1 = torch.randn(784, 50) * math.sqrt(2/784)b1 = torch.randn(50)w2 = torch.randn(50, 10) * math.sqrt(2/50)b2 = torch.randn(10)w3 = torch.randn(10, 1) * math.sqrt(2/10)b3 = torch.randn(1)def linear(x, w, b): return x@w + bdef relu(x): return x.clamp_min(0.)t1 = relu(linear(x_valid, w1, b1))t2 = relu(linear(t1, w2, b2))t3 = relu(linear(t2, w3, b3))print(t1.mean(), t1.std())print(t2.mean(), t2.std())print(t3.mean(), t3.std())############# output ##############tensor(0.7418) tensor(1.0053)tensor(1.3356) tensor(1.4079)tensor(3.2972) tensor(1.1409)" }, { "code": null, "e": 5732, "s": 5280, "text": "We initialize weight with a normal distribution with mean 0 and variance std, and the ideal distribution of weight after relu should have slightly incremented mean layer by layer and variance close to 1. We can see the output is close to what we expected. The mean increment slowly and std is close to 1 in the feedforward phase. 
And such stability will avoid the vanishing gradient problem and exploding gradient problem in the backpropagation phase." }, { "code": null, "e": 5806, "s": 5732, "text": "Kaiming initialization shows better stability than random initialization." }, { "code": null, "e": 5934, "s": 5806, "text": "nn.init.kaiming_normal_() will return tensor that has values sampled from mean 0 and variance std. There are two ways to do it." }, { "code": null, "e": 6071, "s": 5934, "text": "One way is to create weight implicitly by creating a linear layer. We set mode='fan_in' to indicate that using node_in calculate the std" }, { "code": null, "e": 6405, "s": 6071, "text": "from torch.nn import init# linear layer implementationnode_in, node_out = 784, 50layer = torch.nn.Linear(node_in, node_out)init.kaiming_normal_(layer.weight, mode='fan_in')t = relu(layer(x_valid))print(t.mean(), t.std())############# output ##############tensor(0.4974, grad_fn=<MeanBackward0>) tensor(0.8027, grad_fn=<StdBackward0>)" }, { "code": null, "e": 6508, "s": 6405, "text": "Another way is to create weight explicitly by creating a random matrix, you should set mode='fan_out'." }, { "code": null, "e": 6826, "s": 6508, "text": "def linear(x, w, b): return x@w + b# weight matrix implementationnode_in, node_out = 784, 50w1 = torch.randn(node_in, node_out)init.kaiming_normal_(w1, mode='fan_out')b1 = torch.randn(node_out)t = relu(linear(x_valid, w1, b1))print(t.mean(), t.std())############# output ##############tensor(0.6424) tensor(0.9772)" }, { "code": null, "e": 6967, "s": 6826, "text": "Two implementation methods are both right. The mean is close to 0.5 and std is close to 1. But wait a minute, do you find something strange?" }, { "code": null, "e": 6994, "s": 6967, "text": "Why the mode is different?" }, { "code": null, "e": 7204, "s": 6994, "text": "According to the document, choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backward pass. We can write as below." }, { "code": null, "e": 7381, "s": 7204, "text": "node_in, node_out = 784, 50# fan_in modeW = torch.randn(node_in, node_out) * math.sqrt(2 / node_in)# fan_out modeW = np.random.randn(node_in, node_out) * math.sqrt(2/ node_out)" }, { "code": null, "e": 7520, "s": 7381, "text": "In the linear layer implementation, we set mode='fan_in'. Yes, this is the feedforward phase, we should set mode='fan_in' . Nothing wrong." }, { "code": null, "e": 7592, "s": 7520, "text": "But why we set the mode as fan_out in the weight matrix implementation?" }, { "code": null, "e": 7655, "s": 7592, "text": "The reason behind the source code of nn.init.kaiming_normal_()" }, { "code": null, "e": 8286, "s": 7655, "text": "def _calculate_fan_in_and_fan_out(tensor): dimensions = tensor.dim() if dimensions < 2: raise ValueError(\"Fan in and fan out can not be computed for tensor with fewer than 2 dimensions\")if dimensions == 2: # Linear fan_in = tensor.size(1) fan_out = tensor.size(0) else: num_input_fmaps = tensor.size(1) num_output_fmaps = tensor.size(0) receptive_field_size = 1 if tensor.dim() > 2: receptive_field_size = tensor[0][0].numel() fan_in = num_input_fmaps * receptive_field_size fan_out = num_output_fmaps * receptive_field_sizereturn fan_in, fan_out" }, { "code": null, "e": 8535, "s": 8286, "text": "This is the source code to get the right mode. The tensor is the w1 with size of (784, 50). So fan_in = 50, fan_out=784. When we set the mode as fan_out in the weight matrix implementation. 
The init.kaiming_normal_() actually calculates like below." }, { "code": null, "e": 8746, "s": 8535, "text": "node_in, node_out = 784, 50W = np.random.randn(node_in, node_out)init.kaiming_normal_(W, mode='fan_out')# what init.kaiming_normal_() actually does # fan_in = 50 # fan_out = 784W = W * torch.sqrt(784 / 2)" }, { "code": null, "e": 8830, "s": 8746, "text": "Ok, make sense. But how to explain using fan_in in the linear layer implementation?" }, { "code": null, "e": 8970, "s": 8830, "text": "When we use linear to create weight implicitly, the weight is transposed implicitly. Here is the source code of torch.nn.functional.linear." }, { "code": null, "e": 9687, "s": 8970, "text": "def linear(input, weight, bias=None): # type: (Tensor, Tensor, Optional[Tensor]) -> Tensor r\"\"\" Applies a linear transformation to the incoming data: :math:`y = xA^T + b`. Shape: - Input: :math:`(N, *, in\\_features)` where `*` means any number of additional dimensions - Weight: :math:`(out\\_features, in\\_features)` - Bias: :math:`(out\\_features)` - Output: :math:`(N, *, out\\_features)` \"\"\" if input.dim() == 2 and bias is not None: # fused op is marginally faster ret = torch.addmm(bias, input, weight.t()) else: output = input.matmul(weight.t()) if bias is not None: output += bias ret = output return ret" }, { "code": null, "e": 9847, "s": 9687, "text": "The weight is initialized with the size of (out_features, in_features). For example, if we input the size (784, 50) , the size of weight is actually (50, 784)." }, { "code": null, "e": 9941, "s": 9847, "text": "torch.nn.Linear(784, 50).weight.shape############# output ##############torch.Size([50, 784])" }, { "code": null, "e": 10028, "s": 9941, "text": "That’s why linear need to first transpose the weight and then do the matmul operation." }, { "code": null, "e": 10213, "s": 10028, "text": " if input.dim() == 2 and bias is not None: # fused op is marginally faster ret = torch.addmm(bias, input, weight.t()) else: output = input.matmul(weight.t())" }, { "code": null, "e": 10334, "s": 10213, "text": "Because the weight in the linear layer has size of (50, 784), the init.kaiming_normal_() actually calculates like below." }, { "code": null, "e": 10599, "s": 10334, "text": "node_in, node_out = 784, 50layer = torch.nn.Linear(node_in, node_out)init.kaiming_normal_(layer.weight, mode='fan_out')# the size of layer.weight is (50, 784)# what init.kaiming_normal_() actually does # fan_in = 784 # fan_out = 50W = W * torch.sqrt(784 / 2)" }, { "code": null, "e": 10817, "s": 10599, "text": "In this post, I first talked about why initialization matters and what is kaiming initialization. And I break down how to use PyTorch to implement it. Hope this post be helpful. Leave a comment if you have any advice." }, { "code": null, "e": 10848, "s": 10817, "text": "The full code in this snippet." }, { "code": null, "e": 10958, "s": 10848, "text": "Check out my other posts on Medium with a categorized view!GitHub: BrambleXuLinkedIn: Xu LiangBlog: BrambleXu" }, { "code": null, "e": 11016, "s": 10958, "text": "Why cautiously initializing deep neural networks matters?" }, { "code": null, "e": 11073, "s": 11016, "text": "Deep Learning Best Practices (1) — Weight Initialization" }, { "code": null, "e": 11195, "s": 11073, "text": "Kaiming Initialization paper: Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification" }, { "code": null, "e": 11253, "s": 11195, "text": "A Gentle Introduction to the Rectified Linear Unit (ReLU)" } ]
Java Variable Declaration
A variable provides us with named storage that our programs can manipulate. Each variable in Java has a specific type, which determines the size and layout of the variable's memory; the range of values that can be stored within that memory; and the set of operations that can be applied to the variable.

You must declare all variables before they can be used. Following is the basic form of a variable declaration:

data type variable [ = value][, variable [ = value] ...] ;

Here, data type is one of Java's data types and variable is the name of the variable. To declare more than one variable of the specified type, you can use a comma-separated list.

Following are valid examples of variable declaration and initialization in Java (each line is an independent example):

int a, b, c;          // Declares three ints, a, b, and c.
int a = 10, b = 10;   // Example of initialization
byte B = 22;          // initializes a byte type variable B.
double pi = 3.14159;  // declares pi and assigns it the value of PI.
char a = 'a';         // the char variable a is initialized with value 'a'
[ { "code": null, "e": 1366, "s": 1062, "text": "A variable provides us with named storage that our programs can manipulate. Each variable in Java has a specific type, which determines the size and layout of the variable's memory; the range of values that can be stored within that memory; and the set of operations that can be applied to the variable." }, { "code": null, "e": 1478, "s": 1366, "text": "You must declare all variables before they can be used. Following is the basic form of a variable declaration -" }, { "code": null, "e": 1537, "s": 1478, "text": "data type variable [ = value][, variable [ = value] ...] ;" }, { "code": null, "e": 1719, "s": 1537, "text": "Here data type is one of Java's data types and the variable is the name of the variable. To declare more than one variable of the specified type, you can use a comma-separated list." }, { "code": null, "e": 1801, "s": 1719, "text": "Following are valid examples of variable declaration and initialization in Java -" }, { "code": null, "e": 2104, "s": 1801, "text": "int a, b, c; // Declares three ints, a, b, and c.\nint a = 10, b = 10; // Example of initialization\nbyte B = 22; // initializes a byte type variable B.\ndouble pi = 3.14159; // declares and assigns a value of PI.\nchar a = 'a'; // the char variable a iis initialized with value 'a'" } ]
Monitoring Stock Performance made easy with R and Shiny | by Peer Christensen | Towards Data Science
Comparing the performance of many stocks in a single visualization can be time consuming. It is particularly tedious if you want to do this over and over again. With the help of R and Shiny, you can easily create and track a stock portfolio to see how individual stocks perform over time — all in one interactive visualization.

In this article, we will go through the process step by step and do the following:

Gather stock closing prices for a given time period
Create a simple user interface allowing the user to select the data to be displayed
Define server functions

Let's first load the R packages that we'll be using in our app. You will need to install them on your system using the install.packages() function.

library(shiny)
library(shinyWidgets)
library(shinythemes)
library(plotly)
library(tidyverse)
library(tidyquant)

We first define our portfolio's ticker symbols and two stock indices as benchmarks. Then, we use the tq_get() function from the tidyquant package to get stock (closing) prices from Yahoo Finance. The to and from arguments are used to specify the desired date range.

tickers <- c("GRVY","SE","PLTR","U","NET","SNOW","MDB")
benchmarks <- c("^NDX","^GSPC") # Nasdaq100 and SP500

prices <- tq_get(tickers,
                 get = "stock.prices",
                 from = today() - months(12),
                 to = today(),
                 complete_cases = F) %>%
  select(symbol, date, close)

bench <- tq_get(benchmarks,
                get = "stock.prices",
                from = today() - months(12),
                to = today()) %>%
  select(symbol, date, close)

The code specifying the user interface will be wrapped in the fluidPage() function and saved in a variable that we call ui. This is demonstrated at the bottom of the article where we put it all together.

Our app will have three main UI components:

A title panel
A sidebar where users can select and filter the data
A main panel for visualizing the data

The title panel is easy. We simply write the following:

titlePanel("My Tech Stock Portfolio")

The sidebar requires a bit more work, since this is where we specify what the user can do. In our app, we need some input components that allow the user to pick and choose between stocks, select the desired time range, and decide whether or not to include a benchmark. Please have a look at the bottom of the article to see how we put the individual components together in our sidebar.

First, a pickerInput lets the user pick between stocks and select/deselect all. All are selected by default using the selected argument. Importantly, we specify the inputId in order to reference the selected stocks in our server logic.

pickerInput(
  inputId = "stocks",
  label = h4("Stocks"),
  choices = c(
    "Gravity"     = tickers[1],
    "Sea Limited" = tickers[2],
    "Palantir"    = tickers[3],
    "Unity"       = tickers[4],
    "Cloudflare"  = tickers[5],
    "Snowflake"   = tickers[6],
    "MongoDB"     = tickers[7]),
  selected = tickers,
  options = list(`actions-box` = TRUE),
  multiple = T
)

Then, radioButtons() provides different time ranges to choose from, as well as whether to include a benchmark index in the visualization.
# Time range
radioButtons(
  inputId = "period",
  label = h4("Period"),
  choices = list("1 month" = 1, "3 months" = 2,
                 "6 months" = 3, "12 months" = 4, "YTD" = 5),
  selected = 4
)

# Benchmark
radioButtons(
  inputId = "benchmark",
  label = h4("Benchmark"),
  choices = list("SP500" = 1, "Nasdaq100" = 2, "None" = 3),
  selected = 3
)

The main panel is where the plot goes. In our case, this is pretty straightforward.

mainPanel(plotlyOutput("plot", height = 800))

Based on the user input, we need to specify how our app should behave. This is all done in the server part of our code. Because we want our app to be responsive and modify the visualization according to user input, we wrap the logic to filter the data inside observeEvent.

For the sake of brevity, not all the input logic is included in the code chunk below. The complete code for this part is provided at the end.

observeEvent(c(input$period, input$stocks, input$benchmark), {

  # filter stock symbols
  prices <- prices %>%
    filter(symbol %in% input$stocks)

  # filter time period (example)
  if (input$period == 1) {
    prices <- prices %>%
      filter(date >= today() - months(1))
  }

  # filter benchmark (example)
  if (input$benchmark == 1) {
    bench <- bench %>%
      filter(symbol == "^GSPC", date >= min(prices$date))
    prices <- rbind(prices, bench)
  }

  # add more server logic here..
})

Finally, we can plot our data combining ggplot2 and plotly. In order to better compare our stocks, we also need to recalculate their price levels to index base 100.

output$plot <- renderPlotly({
  print(
    ggplotly(prices %>%

      # index 100
      group_by(symbol) %>%
      mutate(init_close = if_else(date == min(date), close, NA_real_)) %>%
      mutate(value = round(100 * close / sum(init_close, na.rm = T), 1)) %>%
      ungroup() %>%

      ggplot(aes(date, value, colour = symbol)) +
      geom_line(size = 1, alpha = .9) +
      # uncomment line below to show area under curves
      # geom_area(aes(fill = symbol), position = "identity", alpha = .2) +
      theme_minimal(base_size = 16) +
      theme(axis.title = element_blank(),
            plot.background = element_rect(fill = "black"),
            panel.background = element_rect(fill = "black"),
            panel.grid = element_blank(),
            legend.text = element_text(colour = "white"))
    )
  )
})

That's it! We've walked through pretty much everything we need to create our app. Now, we only need to put the pieces together: the UI components go inside fluidPage() and are saved as ui, the filtering and plotting logic goes inside a server function, and a final call to shinyApp(ui, server) launches our shiny new stock portfolio app.

You can check out the app here. As you can see, I've chosen a dark theme supplied by the shinythemes package.
[ { "code": null, "e": 500, "s": 172, "text": "Comparing the performance of many stocks in a single visualization can be time consuming. It is particularly tedious if you want to do this over and over again. With the help of R and Shiny, you can easily create and track a stock portfolio to see how individual stocks perform over time — all in one interactive visualization." }, { "code": null, "e": 583, "s": 500, "text": "In this article, we will go through the process step by step and do the following:" }, { "code": null, "e": 741, "s": 583, "text": "Gather stock closing prices for a given time periodCreate a simple user interface allowing the user to select the data to be displayedDefine server functions" }, { "code": null, "e": 793, "s": 741, "text": "Gather stock closing prices for a given time period" }, { "code": null, "e": 877, "s": 793, "text": "Create a simple user interface allowing the user to select the data to be displayed" }, { "code": null, "e": 901, "s": 877, "text": "Define server functions" }, { "code": null, "e": 1049, "s": 901, "text": "Let’s first load the R packages that we’ll be using in our app. You will need to install them on your system using the install.packages() function." }, { "code": null, "e": 1156, "s": 1049, "text": "library(shiny)library(shinyWidgets)library(shinythemes)library(plotly)library(tidyverse)library(tidyquant)" }, { "code": null, "e": 1422, "s": 1156, "text": "We first define our portfolio’s ticker symbols and two stock indices as benchmarks. Then, we use the tq_get() function from the tidyquant package to get stock (closing) prices from Yahoo Finance. The to and from arguments are used to specify the desired date range." }, { "code": null, "e": 1910, "s": 1422, "text": "tickers <- c(\"GRVY\",\"SE\",\"PLTR\",\"U\",\"NET\",\"SNOW\",\"MDB\")benchmarks <- c(\"^NDX\",\"^GSPC\") # Nasdaq100 and SP500prices <- tq_get(tickers, get = \"stock.prices\", from = today()-months(12), to = today(), complete_cases = F) %>% select(symbol,date,close)bench <- tq_get(benchmarks, get = \"stock.prices\", from = today()-months(12), to = today()) %>% select(symbol,date,close)" }, { "code": null, "e": 2117, "s": 1910, "text": "The code specifying the user interface will be wrapped in the fluidPage function and saved in a variable that we callui . This is demonstrated at the bottom of the article where we will put it all together." }, { "code": null, "e": 2161, "s": 2117, "text": "Our app will have three main UI components:" }, { "code": null, "e": 2264, "s": 2161, "text": "A title panelA sidebar where users can select and filter the dataA main panel for visualizing the data" }, { "code": null, "e": 2278, "s": 2264, "text": "A title panel" }, { "code": null, "e": 2331, "s": 2278, "text": "A sidebar where users can select and filter the data" }, { "code": null, "e": 2369, "s": 2331, "text": "A main panel for visualizing the data" }, { "code": null, "e": 2418, "s": 2369, "text": "This one is easy. We simple write the following:" }, { "code": null, "e": 2456, "s": 2418, "text": "titlePanel(\"My Tech Stock Portfolio\")" }, { "code": null, "e": 2830, "s": 2456, "text": "This part requires a bit more work, since this is where we specify what the user can do. In our app, we will need some input components that allow the user to pick and choose between stocks, the desired time range and whether or not to include a benchmark. Please have a look at the bottom of the article to see how we put the individual components together in our sidebar." 
}, { "code": null, "e": 3066, "s": 2830, "text": "First, a pickerInput lets the user pick between stocks and select/deselect all. All are selected by default using the selected argument. Importantly, we specify the inputId in order to reference the selected stocks in our server logic." }, { "code": null, "e": 3631, "s": 3066, "text": "pickerInput( inputId = \"stocks\", label = h4(\"Stocks\"), choices = c( \"Gravity\" = tickers[1], \"Sea Limited\" = tickers[2], \"Palantir\" = tickers[3], \"Unity\" = tickers[4], \"Cloudflare\" = tickers[5], \"Snowflake\" = tickers[6], \"MongoDB\" = tickers[7]), selected = tickers, options = list(`actions-box` = TRUE), multiple = T )" }, { "code": null, "e": 3769, "s": 3631, "text": "Then, radioButtons() provides different time ranges to choose from, as well as whether to include a benchmark index in the visualization." }, { "code": null, "e": 4116, "s": 3769, "text": "# Time rangeradioButtons( inputId = \"period\", label = h4(\"Period\"), choices = list(\"1 month\" = 1, \"3 months\" = 2, \"6 months\" = 3, \"12 months\" = 4, \"YTD\" = 5), selected = 4)# BenchmarkradioButtons( inputId = \"benchmark\", label = h4(\"Benchmark\"), choices = list(\"SP500\" = 1, \"Nasdaq100\" = 2,\"None\" = 3), selected = 3)" }, { "code": null, "e": 4190, "s": 4116, "text": "This is where the plot goes. In our case, this is pretty straightforward." }, { "code": null, "e": 4234, "s": 4190, "text": "mainPanel(plotlyOutput(\"plot\", height=800))" }, { "code": null, "e": 4505, "s": 4234, "text": "Based on the user input, we need to specify how our app should behave. This is all done in the server part of our code. Because we want our app to be responsive and modify the visualization according user input, we wrap the logic to filter the data inside observeEvent ." }, { "code": null, "e": 4647, "s": 4505, "text": "For the sake of brevity, not all the input logic is included in the code chunk below. The complete code for this part is provided at the end." }, { "code": null, "e": 5186, "s": 4647, "text": "observeEvent(c(input$period,input$stocks,input$benchmark), { # filter stock symbols prices <- prices %>% filter(symbol %in% input$stocks)# filter time period (example) if (input$period == 1) { prices <- prices %>% filter( date >= today()-months(1)) }# filter benchmark (example) if (input$benchmark == 1) { bench <- bench %>% filter(symbol==\"^GSPC\", date >= min(prices$date)) prices <- rbind(prices,bench) }# add more server logic here..})" }, { "code": null, "e": 5352, "s": 5186, "text": "Finally, we can plot our data combining ggplot2 and plotly . In order to better compare our stocks, we also need to recalculate their price levels to index base 100." }, { "code": null, "e": 6192, "s": 5352, "text": "output$plot <- renderPlotly({ print( ggplotly(prices %>% # index 100 group_by(symbol) %>% mutate(init_close = if_else( date == min(date), close,NA_real_)) %>% mutate( value = round(100 * close / sum(init_close, na.rm=T),1)) %>% ungroup() %>% ggplot(aes(date, value, colour = symbol)) + geom_line(size = 1, alpha = .9) + #uncomment line below to show area under curves #geom_area(aes(fill = symbol),position=\"identity\",alpha=.2) + theme_minimal(base_size=16) + theme(axis.title = element_blank(), plot.background = element_rect(fill = \"black\"), panel.background = element_rect(fill = \"black\"), panel.grid = element_blank(), legend.text = element_text(colour = \"white\")) ) ) })" }, { "code": null, "e": 6203, "s": 6192, "text": "That’s it!" 
}, { "code": null, "e": 6382, "s": 6203, "text": "We’ve walked through pretty much everything we need to create our app. Now, we only need to put the pieces together. Below is the full code for our shiny new stock portfolio app!" }, { "code": null, "e": 6413, "s": 6382, "text": "You can check out the app here" } ]
Downloading Files from Web using Perl - GeeksforGeeks
02 Feb, 2022

Perl is a multi-purpose interpreted language that is often implemented using Perl scripts, which can be saved using the .pl extension and run directly using the terminal or command prompt. It is a stable, cross-platform language that was developed primarily with strong capabilities in text manipulation and in modifying and extracting information from web pages. It is under active development and open source. It finds major use in web development, system administration, and even GUI development due to its capability of working with HTML, XML, and other mark-up languages. It is prominently used along with the Web as it can handle encrypted web data in addition to E-Commerce transactions.

In this article, we will be seeing different approaches to download web pages as well as images using Perl scripts.

In this approach, we write a subroutine in which a URL is passed to the wget command. Backticks capture the command's standard output, so the variable stores the content of the web page in raw HTML form. We then return these contents.

#!/usr/bin/perl

# using the strict pragma
use strict;

# using the warnings pragma
# to generate warnings in case of incorrect
# code
use warnings;

# specifying the Perl version
use 5.010;

# declaring the subroutine
sub getWebPage {

    # variable to store the URL
    my $url = 'http://www.google.com/';

    # backticks capture the standard output of the
    # command, i.e. the contents of the web page
    my $webpage = `wget --output-document=- $url`;

    # returning the contents of the web page
    return $webpage;
}

# printing user friendly message
say "the contents of the downloaded web page : ";

# calling the subroutine and printing the result
say getWebPage();

Output:

the contents of the downloaded web page :
<raw HTML web page>

This approach is exactly the same as above, the only difference being that the command used here is "curl" in place of "wget".

#!/usr/bin/perl

# using the strict pragma
use strict;

# using the warnings pragma to
# generate warnings in case of
# erroneous code
use warnings;

# specifying the Perl version
use 5.010;

# declaring the subroutine
sub getWebPage {

    # variable to store the URL
    my $url = 'http://www.google.com/';

    # variable to store the contents of the
    # downloaded web page
    my $downloadedPage = `curl $url`;

    # returning the contents using the variable
    return $downloadedPage;
}

# displaying a user friendly message
say "the contents of the web page : ";

# calling the subroutine and printing the result
say getWebPage();

Output:

the contents of the downloaded web page :
<raw HTML web page>

LWP::Simple is a module in Perl which provides a get() that takes the URL as a parameter and returns the body of the document. It returns undef if the requested URL cannot be processed by the server.

#!/usr/bin/perl

# using the strict pragma
use strict;

# using the warnings pragma to
# generate warnings in case of
# erroneous code
use warnings;

# specifying the Perl version
use 5.010;

# calling the LWP::Simple module
use LWP::Simple;

# declaring the subroutine
sub getWebPage {

    # variable to store the URL
    my $url = 'http://www.google.com';

    # passing the URL to the get function
    # of the LWP::Simple module
    my $downloadedPage = get $url;

    # printing the contents of the web page
    say $downloadedPage;
}

# displaying a user friendly message
say 'the contents of the web page are : ';

# calling the subroutine
getWebPage();

Output:

the contents of the downloaded web page :
<raw HTML web page>

HTTP::Tiny is a simple HTTP/1.1 client, which means it is used to perform basic HTTP actions such as GET, PUT, DELETE, and HEAD. It is used for performing simple requests without the overhead of a large framework.
First, an HTTP::Tiny client is instantiated using the new method. Next, we send the request by passing the URL to the get method. On a successful response, we get the length and the content of the web page at the specified URL. In the case of an unsuccessful response, we display an appropriate message and the reason for the failure of the connection.

#!/usr/bin/perl

# using the warnings pragma to
# generate warnings in case of
# erroneous code
use warnings;

# specifying the Perl version
use 5.010;

# calling the HTTP::Tiny module
use HTTP::Tiny;

# declaring the subroutine
sub getWebPage {

    # variable to store the URL
    my $url = 'http://www.google.com/';

    # instantiating the HTTP client
    my $httpVariable = HTTP::Tiny->new;

    # storing the response using the get
    # method
    my $response = $httpVariable->get($url);

    # checking if the request was successful
    if ($response->{success}) {

        # displaying the length of the
        # web page content using the
        # length keyword
        say 'the length of the web page : ';
        my $length = length $response->{content};
        say $length;

        # displaying the contents of the webpage
        say 'the contents of the web page are : ';
        my $downloadedPage = $response->{content};
        say $downloadedPage;
    }

    # logic for when the request is
    # unsuccessful
    else {

        # displaying the reason for the failed
        # request
        say "Failed to establish connection : $response->{status} $response->{reason}";
    }
}

# calling the subroutine
getWebPage();

Output:

the length of the web page :
15175
the contents of the web page are :
<html code of the web page>

The approach for downloading multiple web pages using HTTP::Tiny is the same as above. The only modification is that here the URLs of all the web pages are stored in an array, and we loop through the array displaying the length of each downloaded page (the contents can be displayed in the same way).

#!/usr/bin/perl

# using the warnings pragma
# to generate warnings for
# erroneous code
use warnings;

# specifying the Perl version
use 5.010;

# calling the HTTP::Tiny module
use HTTP::Tiny;

# declaring the subroutine
sub getWebPages {

    # instantiating the HTTP client
    my $httpVariable = HTTP::Tiny->new;

    # array of URLs
    my @urls = ('http://www.google.com/',
                'https://www.geeksforgeeks.org/');

    # start of foreach loop to
    # loop through the array of URLs
    foreach my $singleURL (@urls) {

        # displaying user friendly message
        say 'downloading web page...';

        # variable to store the response
        my $response = $httpVariable->get($singleURL);

        # logic for successful connection
        if ($response->{success}) {
            say $singleURL . " downloaded successfully";

            # displaying the length of
            # the web page
            # the contents can be displayed
            # similarly
            say "Length : " . length($response->{content});
        }

        # logic for unsuccessful connection
        else {
            say $singleURL . " could not be downloaded";

            # displaying the reason for the
            # unsuccessful connection
            say "$response->{status} $response->{reason}";
        }
    }
}

# calling the subroutine
getWebPages();

Output:

downloading web page...
http://www.google.com/ downloaded successfully
Length : 15175
downloading web page...
https://www.geeksforgeeks.org/ downloaded successfully
Length : <length of the landing page of GFG>

In this section, we will see two approaches to download images using Perl scripts. In order to get the URL of these images, we first right-click on them. Next, we click on Copy Image Address from the drop-down and paste this as the URL for the image.

In this approach, we use the LWP::Simple module and get the HTTP status code using the getstore function.
In this function, we have to specify the URL of the image to be downloaded and the location to store the downloaded image. Next, we check whether the status code indicates success and display the corresponding message to the user.

#!/usr/bin/perl

# using the strict pragma
use strict;

# using the warnings pragma
# to generate warnings for
# erroneous code
use warnings;

# specifying the Perl version
use 5.010;

# calling the module
use LWP::Simple;

# declaring the subroutine
sub getImage {

    # displaying a user friendly message
    say "Downloading ... ";

    # variable to store the status code
    # first parameter is the URL of the image
    # second parameter is the location
    # of the downloaded image
    my $statusCode = getstore(
        "https://www.geeksforgeeks.org/wp-content/uploads/gfg_200X200-1.png",
        "downloaded_image.png");

    # checking for a successful
    # download
    if ($statusCode == 200) {
        say "Image successfully downloaded.";
    }
    else {
        say "Image download failed.";
    }
}

# calling the subroutine
getImage();

Output:

Downloading...
Image successfully downloaded.

(The downloaded image will be saved at the specified location with the given name. If no location is specified, the image is saved in the current working directory.)

Image::Grab is a simple module meant for downloading the images specified by their URLs. It works with images that might be hidden by some method too. In this approach, we use the Image::Grab module and, after instantiating it, we pass the URL. Next, we call the grab method and save the downloaded image to disk.

#!/usr/bin/perl

# using the strict pragma
use strict;

# using the warnings pragma to
# generate warnings for erroneous
# code
use warnings;

# specifying the Perl version
use 5.010;

# calling the Image::Grab module
use Image::Grab;

# instantiating the module
# and storing it in a variable
my $instantiatedImage = Image::Grab->new;

# declaring the subroutine
sub getImage {

    # specifying the URL
    $instantiatedImage->url(
        'https://www.geeksforgeeks.org/wp-content/uploads/gfg_200X200-1.png');

    # calling grab to grab the image
    $instantiatedImage->grab;

    # creating a file to store
    # the downloaded image
    open(DOWNLOADEDIMAGE, '>downloaded_image1.png')
        || die "downloaded_image1.png: $!";

    # for MSDOS only
    binmode DOWNLOADEDIMAGE;

    # saving the image in the created
    # file
    print DOWNLOADEDIMAGE $instantiatedImage->image;

    # closing the file handle
    close DOWNLOADEDIMAGE;
}

# calling the subroutine
getImage();

Output:

The image is stored with the specified file name.

Downloaded Image: (the image saved as downloaded_image1.png)
[ { "code": null, "e": 25425, "s": 25397, "text": "\n02 Feb, 2022" }, { "code": null, "e": 26126, "s": 25425, "text": "Perl is a multi-purpose interpreted language that is often implemented using Perl scripts that can be saved using the .pl extension and run directly using the terminal or command prompt. It is a stable, cross-platform language that was developed primarily with strong capabilities in terms of text manipulation and modifying, and extracting information from web pages. It is under active development and open source. It finds major use in web development, system administration, and even GUI development due to its capability of working with HTML, XML, and other mark-up languages. It is prominently used along with the Web as it can handle encrypted web data in addition to E-Commerce transactions. " }, { "code": null, "e": 26242, "s": 26126, "text": "In this article, we will be seeing different approaches to download web pages as well as images using Perl scripts." }, { "code": null, "e": 26427, "s": 26242, "text": "In this approach, we write a sub routine where a URL is passed to a system command. The variable stores the content of the web page in the raw HTML form. We then return these contents." }, { "code": null, "e": 26432, "s": 26427, "text": "Perl" }, { "code": "#!usr/bin/perl # using the strict pragmause strict; # using the warnings pragma# to generate warnings in case of incorrect# codeuse warnings; # specifying the Perl version use 5.010; # declaring the sub routinesub getWebPage { # variable to store the URL my $url = 'http://www.google.com/'; # variable to store the contents of the # web page my $webpage = system \"wget --output-document=- $url\"; # returning the contents of the web page return $webpage;} # printing user friendly messagesay \"the contents of the downloaded web page : \"; # calling the sub routinegetWebPage();", "e": 27054, "s": 26432, "text": null }, { "code": null, "e": 27062, "s": 27054, "text": "Output:" }, { "code": null, "e": 27124, "s": 27062, "text": "the contents of the downloaded web page :\n<raw HTML web page>" }, { "code": null, "e": 27258, "s": 27124, "text": "This approach is exactly the same as above, the only difference being that here the system command used is “curl” in place of “wget”." }, { "code": null, "e": 27263, "s": 27258, "text": "Perl" }, { "code": "#!usr/bin/perl # using the strict pragmause strict; # using the warnings pragma to # generate warnings in case of # erroneous codeuse warnings; # specifying the Perl versionuse 5.010; # declaring the sub routinesub getWebPage { # variable to store the URL my $url = 'http://www.google.com/'; # variable to store the contents of the # downloaded web page my $downloadedPage = system \"curl $url\"; # returning the contents using the variable return $downloadedPage;} # displaying a user friendly messagesay \"the contents of the web page : \"; # calling the sub routinegetWebPage();", "e": 27882, "s": 27263, "text": null }, { "code": null, "e": 27890, "s": 27882, "text": "Output:" }, { "code": null, "e": 27952, "s": 27890, "text": "the contents of the downloaded web page :\n<raw HTML web page>" }, { "code": null, "e": 28152, "s": 27952, "text": "LWP::Simple is a module in Perl which provides a get() that takes the URL as a parameter and returns the body of the document. It returns undef if the requested URL cannot be processed by the server." 
}, { "code": null, "e": 28157, "s": 28152, "text": "Perl" }, { "code": "#!usr/bin/perl # using the strict pragmause strict; # using the warnings pragma to# generate warnings in case of # erroneous codesuse warnings; # specifying the Perl versionuse 5.010; # calling the LWP::Simple moduleuse LWP::Simple; # declaring the sub routinesub getWebPage { # variable to store the URL my $url = 'http://www.google.com'; # passing the URL to the get function # of LWP::Simple module my $downloadedPage = get $url; # printing the contents of the web page say $downloadedPage;} # displaying a user friendly messagesay 'the contents of the web page are : '; #calling the sub routinegetWebPage();", "e": 28812, "s": 28157, "text": null }, { "code": null, "e": 28820, "s": 28812, "text": "Output:" }, { "code": null, "e": 28882, "s": 28820, "text": "the contents of the downloaded web page :\n<raw HTML web page>" }, { "code": null, "e": 29453, "s": 28882, "text": "HTTP::Tiny is a simple HTTP/1.1 client which implies it is used to get, put, delete, head (basic HTTP actions). It is used for performing simple requests without the overhead of a large framework. First, an HTTP variable is instantiated using the new operator. Next, we get the code for the request by passing the URL in the get method. On successful code, we get the length and the content of the web page at the address of the specified URL. In the case of an unsuccessful code, we display the appropriate message and mention the reasons for the failure of connection." }, { "code": null, "e": 29458, "s": 29453, "text": "Perl" }, { "code": "#!usr/bin/perl # using the warnings pragma to# generate warnings in case of # erroneous codeuse warnings; # specifying the Perl versionuse 5.010; # calling the HTTP::Tiny moduleuse HTTP::Tiny; # declaring the sub routinesub getWebPage{ # variable to store the URL my $url = 'http://www.google.com/'; # instantiating the HTTP variable my $httpVariable = HTTP::Tiny->new; # storing the response using the get # method my $response = $httpVariable->get($url); # checking if the code returned successful if ($response -> {success}){ # specifying the length of the # web page content using the # length keyword say 'the length of the web page : '; my $length = length $response->{content}; say $length; # displaying the contents of the webpage say 'the contents of the web page are : '; my $downloadedPage = $response->{content}; say $downloadedPage; } # logic for when the code is # unsuccessful else{ # displating the reason for failed # request say \"Failed to establish connection : $response->{status}.$response->{reasons}\"; }} # calling the sub routinegetWebPage();", "e": 30719, "s": 29458, "text": null }, { "code": null, "e": 30727, "s": 30719, "text": "Output:" }, { "code": null, "e": 30826, "s": 30727, "text": "the length of the web page : \n15175\nthe contents of the web page are :\n<html code of the web page>" }, { "code": null, "e": 31086, "s": 30826, "text": "The approach for the download of multiple web pages using HTTP::Tiny is the same as mentioned above. The only modification is that here the URL of all the web pages are stored in an array and we loop through the array displaying the contents of each web page." 
}, { "code": null, "e": 31091, "s": 31086, "text": "Perl" }, { "code": "#!usr/bin/perl # using the warnings pragma# to generate warnings for# erroneous codeuse warnings; # specifying the Perl versionuse 5.010; # calling the HTTP::Tiny moduleuse HTTP::Tiny; # declaring the sub routinesub getWebPages{ # instantiating the HTTP client my $httpVariable = HTTP::Tiny->new; # array of URLs my @urls = ('http://www.google.com/', 'https://www.geeksforgeeks.org/' ); # start of foreach loop to # loop through the array of URLs foreach my $singleURL (@urls){ # displaying user friendly message say 'downloading web page...'; # variable to store the response my $response = $httpVariable-> get($singleURL); # logic for successful connection if ($response->{success}){ say $singleURL. \" downloaded successfully\"; # displaying the length of # the web page # the contents can be displayed # similarly say \"Length : length $response->{content}\"; } # logic for unsuccessful connection else{ say $singleURL. \" could not be downloaded\"; # displaying the reason for # unsuccessful connection say \"$response->{status} $response->{reasons}\"; } }} # calling the sub routinegetWebPages();", "e": 32531, "s": 31091, "text": null }, { "code": null, "e": 32539, "s": 32531, "text": "Output:" }, { "code": null, "e": 32782, "s": 32539, "text": "downloading web page...\ndownloaded successfully\nLength : 15175\n<html content of the landing page of google>\ndownloading web page...\ndownloaded successfully\nLength : <Length of the landing page of GFG>\n<html content of the landing page of GFG>" }, { "code": null, "e": 33033, "s": 32782, "text": "In this section, we will see two approaches to download images using Perl scripts. In order to get the URL of these images, we first right-click on them. Next, we click on Copy Image Address from the drop-down and paste this as the URL for the image." }, { "code": null, "e": 33346, "s": 33033, "text": "In this approach, we use LWP::Simple module and get the HTTP code using getstore function. In this function, we have to specify the URL of the image to be downloaded and the location to store the downloaded image. Next, we check if the code is successful or not and display the corresponding message to the user." }, { "code": null, "e": 33351, "s": 33346, "text": "Perl" }, { "code": "#!usr/bin/perl # using the strict pragmause strict; # using the warnings pragma# to generate warnings for# erroneous codeuse warnings; # specifying the Perl versionuse 5.010; # calling the moduleuse LWP::Simple; # declaring the sub routinesub getImage { # displaying a user friendly message say \"Downloading ... \"; # variable to store the status code # first parameter is the URL of the image # second parameter is the location # of the downloaded image my $statusCode = getstore (\"https://www.geeksforgeeks.org/wp-content/uploads/gfg_200X200-1.png\", \"downloaded_image.png\"); # checking for successful # connection if ($statusCode == 200) { say \"Image successfully downloaded.\"; } else { say \"Image download failed.\"; }} # calling the sub routinegetImage();", "e": 34188, "s": 33351, "text": null }, { "code": null, "e": 34196, "s": 34188, "text": "Output:" }, { "code": null, "e": 34417, "s": 34196, "text": "Downloading...\nImage successfully downloaded.\n(the downloaded image will be saved at the specified location\nwith the given name. If no location is specified then the image\nwould be saved in the current working directory." 
}, { "code": null, "e": 34730, "s": 34417, "text": "Image::Grab is a simple module meant for downloading the images specified by their URLs. It works with images that might be hidden by some method too. In this approach, we use the Image::Grab module and after instantiating it, we pass the URL. Next, we call the grab method and save the downloaded image to disk." }, { "code": null, "e": 34735, "s": 34730, "text": "Perl" }, { "code": "#!usr/bin/perl # using the strict pragmause strict; # using the warnings pragma to# generate warnings for erroneous# codeuse warnings; # specifying the Perl versionuse 5.010; # calling the Image::Grab moduleuse Image::Grab; # instantiating the module# and storing it in a variablemy $instantiatedImage = new Image::Grab; # declaring the sub routinesub getImage { # specifying the URL $instantiatedImage->url ('https://www.geeksforgeeks.org/wp-content/uploads/gfg_200X200-1.png'); # calling grab to grab the image $instantiatedImage->grab; # creating a file to store # the downloaded image open(DOWNLOADEDIMAGE, '>downloaded_image1.png') || die'downloaded_image1.png: $!'; # for MSDOS only binmode DOWNLOADEDIMAGE; # saving the image in the created # file print DOWNLOADEDIMAGE $instantiatedImage->image; # closing the file close instantiatedImage;} # calling the sub routinegetImage();", "e": 35729, "s": 34735, "text": null }, { "code": null, "e": 35737, "s": 35729, "text": "Output:" }, { "code": null, "e": 35787, "s": 35737, "text": "The image is stored with the specified file name." }, { "code": null, "e": 35805, "s": 35787, "text": "Downloaded Image:" }, { "code": null, "e": 35822, "s": 35805, "text": "surinderdawra388" }, { "code": null, "e": 35829, "s": 35822, "text": "Picked" }, { "code": null, "e": 35834, "s": 35829, "text": "Perl" }, { "code": null, "e": 35839, "s": 35834, "text": "Perl" }, { "code": null, "e": 35937, "s": 35839, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 35978, "s": 35937, "text": "Perl Tutorial - Learn Perl With Examples" }, { "code": null, "e": 36005, "s": 35978, "text": "Perl | Inheritance in OOPs" }, { "code": null, "e": 36043, "s": 36005, "text": "Perl | Basic Syntax of a Perl Program" }, { "code": null, "e": 36077, "s": 36043, "text": "Perl | Opening and Reading a File" }, { "code": null, "e": 36096, "s": 36077, "text": "Perl | ne operator" }, { "code": null, "e": 36114, "s": 36096, "text": "Perl | Data Types" }, { "code": null, "e": 36145, "s": 36114, "text": "Perl | Multidimensional Hashes" }, { "code": null, "e": 36159, "s": 36145, "text": "Perl | Hashes" }, { "code": null, "e": 36185, "s": 36159, "text": "Perl | defined() Function" } ]
HTML | aria-label attribute
27 Sep, 2019

The aria-label attribute helps define a string and provides additional information about the structure of the document for users of assistive technology. In most cases, aria-label is used to replace an existing label with more precise information. However, we should be careful while using aria-label as it does not work with all HTML elements.

The aria-label attribute can be used with HTML elements such as:

select
textarea
button
a (when href="#" is in use)
audio and video (when controls="#" is in use)

The aria-label attribute does not always work with HTML elements like span, p, and div. It may work across some browser/assistive technology combinations.

Syntax:

<button aria-label="open" onclick="function()">CLICK</button>

Here, a button will be created with "CLICK" written on it. The aria-label provides a label that precisely describes the button's function to assistive technologies.

Example:

<!DOCTYPE html>
<html>
<head></head>
<body>
    <center>
        <h1 style="color:green">GeeksforGeeks</h1>
        <button value="open">open</button>
        <button aria-label="opens a new window" value="open">
            open
        </button>
    </center>
</body>
</html>

Here, as you can see, an HTML page will open and will contain two buttons side by side that are identical to each other, without any visible difference. Now, if someone is using the ChromeVox extension in Chrome and has their earphones on while pressing Tab, then they will hear the word "open" when the first button is selected, whereas when the second button is selected they will hear the phrase "opens a new window". This is particularly useful for people with bad eyesight who can see the two buttons but can't comprehend the text written on them.

Output: (screenshots of the rendered page before and after clicking the button)
[ { "code": null, "e": 54, "s": 26, "text": "\n27 Sep, 2019" }, { "code": null, "e": 393, "s": 54, "text": "The aria-label helps define a string and provides additional information about the structure of the document for users using assistive technology. In most cases, arial-label is used to replace an existing label with more precise information. However, we should be careful while using aria-label as it does not work with all HTML elements." }, { "code": null, "e": 458, "s": 393, "text": "The aria-label attribute can be used with HTML elements such as:" }, { "code": null, "e": 465, "s": 458, "text": "select" }, { "code": null, "e": 474, "s": 465, "text": "textarea" }, { "code": null, "e": 481, "s": 474, "text": "button" }, { "code": null, "e": 508, "s": 481, "text": "a(when href=”#” is in use)" }, { "code": null, "e": 552, "s": 508, "text": "audio and video(when control=”#” is in use)" }, { "code": null, "e": 709, "s": 552, "text": "The aria-label attribute does not always work with HTML elements like span, p, div. It may work across some of the browse assistive technology combinations." }, { "code": null, "e": 717, "s": 709, "text": "Syntax:" }, { "code": "<button aria-label=\"open\" onclick=\"function()\">CLICK</button>", "e": 779, "s": 717, "text": null }, { "code": null, "e": 945, "s": 779, "text": "Here, a button will be created with “click” written on it. The aria-label – Provides a label that exact mentions its output/function by using assistive technologies." }, { "code": null, "e": 954, "s": 945, "text": "Example:" }, { "code": "<!DOCTYPE html><html> <head></head> <body> <center> <h1 style=\"color:green\">GeeksforGeeks</h1> <button value=\"open\">open</button> <button aria-label=\"opens a new window\" value=\"open\"> open </button> </center></body> </html>", "e": 1226, "s": 954, "text": null }, { "code": null, "e": 1770, "s": 1226, "text": "Here, as you can see an HTML page will open and will contain buttons side by side that are identical to each other without any difference. Now if someone is using a chromevox extension in chrome and have their earphone on while pressing tab, then they will hear the word “open” when the first button is selected, whereas when the second button is selected they will hear the phrase “opens a new window”. This is particularly useful in cases of people with bad eyesight who can see the two buttons but can’t comprehend the text written on them." }, { "code": null, "e": 1805, "s": 1770, "text": "Output:Before clicking the button:" }, { "code": null, "e": 1812, "s": 1805, "text": "After:" }, { "code": null, "e": 1828, "s": 1812, "text": "HTML-Attributes" }, { "code": null, "e": 1835, "s": 1828, "text": "Picked" }, { "code": null, "e": 1840, "s": 1835, "text": "HTML" }, { "code": null, "e": 1857, "s": 1840, "text": "Web Technologies" }, { "code": null, "e": 1862, "s": 1857, "text": "HTML" }, { "code": null, "e": 1960, "s": 1862, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." 
}, { "code": null, "e": 1984, "s": 1960, "text": "REST API (Introduction)" }, { "code": null, "e": 2023, "s": 1984, "text": "Design a Tribute Page using HTML & CSS" }, { "code": null, "e": 2062, "s": 2023, "text": "Build a Survey Form using HTML and CSS" }, { "code": null, "e": 2099, "s": 2062, "text": "Design a web page using HTML and CSS" }, { "code": null, "e": 2119, "s": 2099, "text": "Angular File Upload" }, { "code": null, "e": 2152, "s": 2119, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 2213, "s": 2152, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 2256, "s": 2213, "text": "How to fetch data from an API in ReactJS ?" }, { "code": null, "e": 2328, "s": 2256, "text": "Differences between Functional Components and Class Components in React" } ]
Compiler Design | Syntax Directed Definition
04 Jan, 2022

Prerequisite – Introduction to Syntax Analysis, Syntax Directed Translation

Syntax Directed Definition (SDD) is a kind of abstract specification. It is a generalization of context-free grammar in which each grammar production X --> a is associated with a set of semantic rules of the form s = f(b1, b2, ..., bk), where s is the attribute obtained from function f. An attribute can be a string, a number, a type, or a memory location. Semantic rules are fragments of code which are embedded, usually at the end of a production, and enclosed in curly braces ({ }).

Example:

E --> E1 + T { E.val = E1.val + T.val }

Annotated Parse Tree – The parse tree containing the values of attributes at each node for a given input string is called an annotated or decorated parse tree.

Features –

High-level specification
Hides implementation details
Explicit order of evaluation is not specified

Types of attributes – There are two types of attributes:

1. Synthesized Attributes – These are the attributes which derive their values from their children nodes, i.e. the value of a synthesized attribute at a node is computed from the values of attributes at its children nodes in the parse tree.

Example:

E --> E1 + T { E.val = E1.val + T.val }

Here, E.val derives its value from E1.val and T.val.

Computation of Synthesized Attributes –

Write the SDD using appropriate semantic rules for each production in the given grammar.
The annotated parse tree is generated and attribute values are computed in a bottom-up manner.
The value obtained at the root node is the final output.

Example: Consider the following grammar

S --> E
E --> E1 + T
E --> T
T --> T1 * F
T --> F
F --> digit

The SDD for the above grammar can be written as follows (the original table image is omitted; the rules below are reconstructed from the computation described next):

S --> E        { print(E.val) }
E --> E1 + T   { E.val = E1.val + T.val }
E --> T        { E.val = T.val }
T --> T1 * F   { T.val = T1.val * F.val }
T --> F        { T.val = F.val }
F --> digit    { F.val = digit.lexval }

Let us assume an input string 4 * 5 + 6 for computing synthesized attributes (the annotated parse tree for the input string is omitted here).

For the computation of attributes we start from the leftmost bottom node. The rule F --> digit is used to reduce digit to F, and the value of digit is obtained from the lexical analyzer, which becomes the value of F, i.e. the semantic action F.val = digit.lexval gives F.val = 4. Since T is the parent node of F, we get T.val = 4 from the semantic action T.val = F.val. Then, for the T --> T1 * F production, the corresponding semantic action is T.val = T1.val * F.val, hence T.val = 4 * 5 = 20.

Similarly, the combination of E1.val + T.val becomes E.val, i.e. E.val = E1.val + T.val = 26. Then, the production S --> E is applied to reduce E.val = 26, and the semantic action associated with it prints the result E.val. Hence, the output will be 26.

2. Inherited Attributes – These are the attributes which derive their values from their parent or sibling nodes, i.e. the value of an inherited attribute is computed from the values of parent or sibling nodes.

Example:

A --> BCD { C.in = A.in, C.type = B.type }

Computation of Inherited Attributes –

Construct the SDD using semantic actions.
The annotated parse tree is generated and attribute values are computed in a top-down manner.

Example: Consider the following grammar

S --> T L
T --> int
T --> float
T --> double
L --> L1, id
L --> id

The SDD for the above grammar can be written as follows (again reconstructed from the computation described next):

S --> T L      { L.in = T.type }
T --> int      { T.type = int }
T --> float    { T.type = float }
T --> double   { T.type = double }
L --> L1, id   { L1.in = L.in, Enter_type(id.entry, L.in) }
L --> id       { Enter_type(id.entry, L.in) }

Let us assume an input string int a, c for computing inherited attributes (the annotated parse tree for the input string is omitted here).

The value of the L nodes is obtained from T.type (a sibling), which is basically the lexical value obtained as int, float, or double. The L node then gives the type of the identifiers a and c. The computation of type is done in a top-down manner, i.e. by a preorder traversal. Using the function Enter_type, the type of the identifiers a and c is inserted in the symbol table at the corresponding id.entry.
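To make the bottom-up computation of synthesized attributes concrete, here is a small JavaScript sketch (illustrative only, not from the original article; the parse-tree encoding as nested objects is an assumption) that evaluates the val attribute of the expression grammar above for the input 4 * 5 + 6 by a post-order walk:

// Evaluate the synthesized attribute "val" bottom-up:
// children are evaluated first, then the parent's semantic rule applies.
function evalSynthesized(node) {
  switch (node.rule) {
    case 'F -> digit':  return node.lexval;                        // F.val = digit.lexval
    case 'T -> F':      return evalSynthesized(node.children[0]);  // T.val = F.val
    case 'T -> T1 * F': return evalSynthesized(node.children[0])
                             * evalSynthesized(node.children[1]);  // T.val = T1.val * F.val
    case 'E -> T':      return evalSynthesized(node.children[0]);  // E.val = T.val
    case 'E -> E1 + T': return evalSynthesized(node.children[0])
                             + evalSynthesized(node.children[1]);  // E.val = E1.val + T.val
    case 'S -> E':      return evalSynthesized(node.children[0]);  // print E.val
  }
}

// Hand-built parse tree for the input 4 * 5 + 6
const tree = {
  rule: 'S -> E',
  children: [{
    rule: 'E -> E1 + T',
    children: [
      { rule: 'E -> T', children: [{
          rule: 'T -> T1 * F',
          children: [
            { rule: 'T -> F', children: [{ rule: 'F -> digit', lexval: 4 }] },
            { rule: 'F -> digit', lexval: 5 }
          ]
      }] },
      { rule: 'T -> F', children: [{ rule: 'F -> digit', lexval: 6 }] }
    ]
  }]
};

console.log(evalSynthesized(tree)); // 26, matching the annotated parse tree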
[ { "code": null, "e": 54, "s": 26, "text": "\n04 Jan, 2022" }, { "code": null, "e": 612, "s": 54, "text": "Prerequisite – Introduction to Syntax Analysis, Syntax Directed Translation Syntax Directed Definition (SDD) is a kind of abstract specification. It is generalization of context free grammar in which each grammar production X –> a is associated with it a set of production rules of the form s = f(b1, b2, ......bk) where s is the attribute obtained from function f. The attribute can be a string, number, type or a memory location. Semantic rules are fragments of code which are embedded usually at the end of production and enclosed in curly braces ({ }). " }, { "code": null, "e": 623, "s": 612, "text": "Example: " }, { "code": null, "e": 664, "s": 623, "text": "E --> E1 + T { E.val = E1.val + T.val} " }, { "code": null, "e": 820, "s": 664, "text": "Annotated Parse Tree – The parse tree containing the values of attributes at each node for given input string is called annotated or decorated parse tree. " }, { "code": null, "e": 833, "s": 820, "text": "Features – " }, { "code": null, "e": 858, "s": 833, "text": "High level specification" }, { "code": null, "e": 887, "s": 858, "text": "Hides implementation details" }, { "code": null, "e": 933, "s": 887, "text": "Explicit order of evaluation is not specified" }, { "code": null, "e": 991, "s": 933, "text": "Types of attributes – There are two types of attributes: " }, { "code": null, "e": 1218, "s": 991, "text": "1. Synthesized Attributes – These are those attributes which derive their values from their children nodes i.e. value of synthesized attribute at node is computed from the values of attributes at children nodes in parse tree. " }, { "code": null, "e": 1229, "s": 1218, "text": "Example: " }, { "code": null, "e": 1270, "s": 1229, "text": "E --> E1 + T { E.val = E1.val + T.val} " }, { "code": null, "e": 1326, "s": 1270, "text": "In this, E.val derive its values from E1.val and T.val " }, { "code": null, "e": 1368, "s": 1326, "text": "Computation of Synthesized Attributes – " }, { "code": null, "e": 1453, "s": 1368, "text": "Write the SDD using appropriate semantic rules for each production in given grammar." }, { "code": null, "e": 1546, "s": 1453, "text": "The annotated parse tree is generated and attribute values are computed in bottom up manner." }, { "code": null, "e": 1599, "s": 1546, "text": "The value obtained at root node is the final output." }, { "code": null, "e": 1641, "s": 1599, "text": "Example: Consider the following grammar " }, { "code": null, "e": 1703, "s": 1641, "text": "S --> E\nE --> E1 + T\nE --> T\nT --> T1 * F\nT --> F\nF --> digit" }, { "code": null, "e": 1759, "s": 1703, "text": "The SDD for the above grammar can be written as follow " }, { "code": null, "e": 1887, "s": 1759, "text": "Let us assume an input string 4 * 5 + 6 for computing synthesized attributes. The annotated parse tree for the input string is " }, { "code": null, "e": 2361, "s": 1887, "text": "For computation of attributes we start from leftmost bottom node. The rule F –> digit is used to reduce digit to F and the value of digit is obtained from lexical analyzer which becomes value of F i.e. from semantic action F.val = digit.lexval. Hence, F.val = 4 and since T is parent node of F so, we get T.val = 4 from semantic action T.val = F.val. Then, for T –> T1 * F production, the corresponding semantic action is T.val = T1.val * F.val . 
Hence, T.val = 4 * 5 = 20 " }, { "code": null, "e": 2606, "s": 2361, "text": "Similarly, combination of E1.val + T.val becomes E.val i.e. E.val = E1.val + T.val = 26. Then, the production S –> E is applied to reduce E.val = 26 and semantic action associated with it prints the result E.val . Hence, the output will be 26. " }, { "code": null, "e": 2814, "s": 2606, "text": "2. Inherited Attributes – These are the attributes which derive their values from their parent or sibling nodes i.e. value of inherited attributes are computed by value of parent or sibling nodes. Example: " }, { "code": null, "e": 2860, "s": 2814, "text": "A --> BCD { C.in = A.in, C.type = B.type } " }, { "code": null, "e": 2900, "s": 2860, "text": "Computation of Inherited Attributes – " }, { "code": null, "e": 2942, "s": 2900, "text": "Construct the SDD using semantic actions." }, { "code": null, "e": 3034, "s": 2942, "text": "The annotated parse tree is generated and attribute values are computed in top down manner." }, { "code": null, "e": 3076, "s": 3034, "text": "Example: Consider the following grammar " }, { "code": null, "e": 3143, "s": 3076, "text": "S --> T L\nT --> int\nT --> float\nT --> double\nL --> L1, id\nL --> id" }, { "code": null, "e": 3199, "s": 3143, "text": "The SDD for the above grammar can be written as follow " }, { "code": null, "e": 3324, "s": 3199, "text": "Let us assume an input string int a, c for computing inherited attributes. The annotated parse tree for the input string is " }, { "code": null, "e": 3681, "s": 3324, "text": "The value of L nodes is obtained from T.type (sibling) which is basically lexical value obtained as int, float or double. Then L node gives type of identifiers a and c. The computation of type is done in top down manner or preorder traversal. Using function Enter_type the type of identifiers a and c is inserted in symbol table at corresponding id.entry. " }, { "code": null, "e": 3694, "s": 3681, "text": "simmytarika5" }, { "code": null, "e": 3705, "s": 3694, "text": "19211a1234" }, { "code": null, "e": 3721, "s": 3705, "text": "Compiler Design" }, { "code": null, "e": 3729, "s": 3721, "text": "GATE CS" }, { "code": null, "e": 3748, "s": 3729, "text": "Technical Scripter" }, { "code": null, "e": 3846, "s": 3748, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 3920, "s": 3846, "text": "Must Do Coding Questions for Companies like Amazon, Microsoft, Adobe, ..." }, { "code": null, "e": 3973, "s": 3920, "text": "Must Do Coding Questions for Product Based Companies" }, { "code": null, "e": 4043, "s": 3973, "text": "Free Online Resume Builder By GeeksforGeeks - Create Your Resume Now!" }, { "code": null, "e": 4069, "s": 4043, "text": "Spring Boot - Annotations" }, { "code": null, "e": 4127, "s": 4069, "text": "How to Add External JAR File to an IntelliJ IDEA Project?" }, { "code": null, "e": 4147, "s": 4127, "text": "Layers of OSI Model" }, { "code": null, "e": 4171, "s": 4147, "text": "ACID Properties in DBMS" }, { "code": null, "e": 4184, "s": 4171, "text": "TCP/IP Model" }, { "code": null, "e": 4211, "s": 4184, "text": "Types of Operating Systems" } ]
PHP | Sending mails using mail() function
08 Mar, 2018

PHP is a server-side scripting language that is enriched with the various utilities required. Mailing is one of the server-side utilities that is required in most web servers today. Mailing is used for advertisement, account recovery, subscriptions, etc.

In order to send mails in PHP, one can use the mail() method.

Syntax:

bool mail(to, subject, message, additional_headers, additional_parameters)

Parameters: The function has three required parameters and two optional parameters, as described below:

to: Specifies the email id of the recipient(s). Multiple email ids can be passed using commas.
subject: Specifies the subject of the mail.
message: Specifies the message to be sent.
additional_headers (Optional): An optional parameter that can set multiple header elements such as From (specifies the sender), CC (specifies the CC/Carbon Copy recipients) and BCC (specifies the BCC/Blind Carbon Copy recipients). Note: In order to pass multiple headers, one must separate them with '\r\n'.
additional_parameters (Optional): Another optional parameter, passed as an extension to the additional headers. This can specify a set of flags that are used as the sendmail_path configuration settings.

Return Type: This method returns TRUE if the mail was sent successfully and FALSE on failure.

Examples:

Sending a simple mail in PHP:

<?php
    $to = "[email protected]";
    $sub = "Generic Mail";
    $msg = "Hello Geek! This is a generic email.";

    if (mail($to, $sub, $msg))
        echo "Your Mail is sent successfully.";
    else
        echo "Your Mail is not sent. Try Again.";
?>

Output:

Your Mail is sent successfully.

Sending a mail with additional options:

<?php
    $to = "[email protected]";
    $sub = "Generic Mail";
    $msg = "Hello Geek! This is a generic email.";
    $headers = 'From: [email protected]' . "\r\n" .
               'CC: [email protected]';

    if (mail($to, $sub, $msg, $headers))
        echo "Your Mail is sent successfully.";
    else
        echo "Your Mail is not sent. Try Again.";
?>

Output:

Your Mail is sent successfully.

Summary:

Using the mail() method one can send various types of mails, such as standard and HTML mails.
The mail() method opens the SMTP socket, attempts to send the mail and closes the socket, and is thus a secure option.
The mail() method should not be used for bulk mailing, as it is not very cost-efficient.
The mail() method only checks for parameter or network failure; a success from the mail() method therefore doesn't guarantee that the intended person will receive the mail.
[ { "code": null, "e": 28, "s": 0, "text": "\n08 Mar, 2018" }, { "code": null, "e": 284, "s": 28, "text": "PHP is a server side scripting language that is enriched with various utilities required. Mailing is one of the server side utilities that is required in most of the web servers today. Mailing is used for advertisement, account recovery, subscription etc." }, { "code": null, "e": 346, "s": 284, "text": "In order to send mails in PHP, one can use the mail() method." }, { "code": null, "e": 354, "s": 346, "text": "Syntax:" }, { "code": null, "e": 434, "s": 354, "text": "bool mail(to , subject , message , additional_headers , additional_parameters)\n" }, { "code": null, "e": 534, "s": 434, "text": "Parameters: The function has two required parameters and one optional parameter as described below:" }, { "code": null, "e": 628, "s": 534, "text": "to: Specifies the email id of the recipient(s). Multiple email ids can be passed using commas" }, { "code": null, "e": 672, "s": 628, "text": "subject: Specifies the subject of the mail." }, { "code": null, "e": 715, "s": 672, "text": "message: Specifies the message to be sent." }, { "code": null, "e": 1022, "s": 715, "text": "additional-headers(Optional): This is an optional parameter that can create multiple header elements such as From (Specifies the sender), CC (Specifies the CC/Carbon Copy recipients), BCC (Specifies the BCC/Blind Carbon Copy Recipients. Note: In order to add multiple header parameters one must use ‘\\r\\n’." }, { "code": null, "e": 1242, "s": 1022, "text": "additional-parameters(Optional): This is another optional parameter and can be passed as an extension to the additional headers. This can specify a set of flags that are used as the sendmail_path configuration settings." }, { "code": null, "e": 1332, "s": 1242, "text": "Return Type: This method returns TRUE if mail was sent successfully and FALSE on Failure." }, { "code": null, "e": 1342, "s": 1332, "text": "Examples:" }, { "code": null, "e": 2043, "s": 1342, "text": "Sending a Simple Mail in PHP<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg=\"Hello Geek! This is a generic email.\"; if (mail($to,$sub,$msg)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. Try Again.\";?> Output :Your Mail is sent successfully.\nSending a Mail with Additional Options<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg = \"Hello Geek! This is a generic email.\"; $headers = 'From: [email protected]' . \"\\r\\n\" .'CC: [email protected]'; if(mail($to,$sub,$msg,$headers)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. Try Again.\";?> Output :Your Mail is sent successfully.\n" }, { "code": null, "e": 2346, "s": 2043, "text": "Sending a Simple Mail in PHP<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg=\"Hello Geek! This is a generic email.\"; if (mail($to,$sub,$msg)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. Try Again.\";?> Output :Your Mail is sent successfully.\n" }, { "code": "<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg=\"Hello Geek! This is a generic email.\"; if (mail($to,$sub,$msg)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. 
Try Again.\";?> ", "e": 2581, "s": 2346, "text": null }, { "code": null, "e": 2590, "s": 2581, "text": "Output :" }, { "code": null, "e": 2623, "s": 2590, "text": "Your Mail is sent successfully.\n" }, { "code": null, "e": 3022, "s": 2623, "text": "Sending a Mail with Additional Options<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg = \"Hello Geek! This is a generic email.\"; $headers = 'From: [email protected]' . \"\\r\\n\" .'CC: [email protected]'; if(mail($to,$sub,$msg,$headers)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. Try Again.\";?> Output :Your Mail is sent successfully.\n" }, { "code": "<?php $to = \"[email protected]\"; $sub = \"Generic Mail\"; $msg = \"Hello Geek! This is a generic email.\"; $headers = 'From: [email protected]' . \"\\r\\n\" .'CC: [email protected]'; if(mail($to,$sub,$msg,$headers)) echo \"Your Mail is sent successfully.\"; else echo \"Your Mail is not sent. Try Again.\";?> ", "e": 3343, "s": 3022, "text": null }, { "code": null, "e": 3352, "s": 3343, "text": "Output :" }, { "code": null, "e": 3385, "s": 3352, "text": "Your Mail is sent successfully.\n" }, { "code": null, "e": 3394, "s": 3385, "text": "Summary:" }, { "code": null, "e": 3480, "s": 3394, "text": "Using mail() method one can send various types of mails such as standards, html mail." }, { "code": null, "e": 3591, "s": 3480, "text": "The mail() method opens the SMTP socket, attempts to send the mail, closes the socket thus is a secure option." }, { "code": null, "e": 3675, "s": 3591, "text": "mail() method should not be used for bulk mailing as it is not very cost-efficient." }, { "code": null, "e": 3841, "s": 3675, "text": "The mail() method only checks for parameter or network failure, thus a success in the mail() method doesn’t guarantee that the intended person will receive the mail." }, { "code": null, "e": 3854, "s": 3841, "text": "PHP-function" }, { "code": null, "e": 3858, "s": 3854, "text": "PHP" }, { "code": null, "e": 3875, "s": 3858, "text": "Web Technologies" }, { "code": null, "e": 3879, "s": 3875, "text": "PHP" }, { "code": null, "e": 3977, "s": 3879, "text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here." }, { "code": null, "e": 4027, "s": 3977, "text": "How to Insert Form Data into Database using PHP ?" }, { "code": null, "e": 4067, "s": 4027, "text": "How to convert array to string in PHP ?" }, { "code": null, "e": 4128, "s": 4067, "text": "How to Upload Image into Database and Display it using PHP ?" }, { "code": null, "e": 4178, "s": 4128, "text": "How to check whether an array is empty using PHP?" }, { "code": null, "e": 4223, "s": 4178, "text": "PHP | Converting string to Date and DateTime" }, { "code": null, "e": 4285, "s": 4223, "text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills" }, { "code": null, "e": 4318, "s": 4285, "text": "Installation of Node.js on Linux" }, { "code": null, "e": 4379, "s": 4318, "text": "Difference between var, let and const keywords in JavaScript" }, { "code": null, "e": 4429, "s": 4379, "text": "How to insert spaces/tabs in text using HTML/CSS?" } ]
Image Processing in Java – Creating a Random Pixel Image
14 Nov, 2021

Prerequisites:

Image Processing in Java – Read and Write
Image Processing in Java – Get and Set Pixels
Image Processing in Java – Colored Image to Grayscale Image Conversion
Image Processing in Java – Colored Image to Negative Image Conversion
Image Processing in Java – Colored to Red Green Blue Image Conversion
Image Processing in Java – Colored Image to Sepia Image Conversion

In this article, we will be creating a random pixel image. For creating a random pixel image we don't need any input image: we can create an image file and set its pixel values to randomly generated ones.

A random image is an image in which the pixels are chosen at random, so they can take any color from the desired palette (generally 16 million colors). The resulting images look like multi-colored noise backgrounds.

1. Set the dimensions of the new image file.
2. Create a BufferedImage object to hold the image. This object is used to store an image in RAM.
3. Generate random number values for the alpha, red, green and blue components.
4. Set the randomly generated ARGB (Alpha, Red, Green and Blue) values.
5. Repeat steps 3 and 4 for each pixel of the image.

Java

// Java program to demonstrate
// creation of a random pixel image
import java.io.File;
import java.io.IOException;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;

public class RandomImage
{
    public static void main(String args[]) throws IOException
    {
        // Image file dimensions
        int width = 640, height = 320;

        // Create buffered image object
        BufferedImage img = new BufferedImage(width, height,
                                BufferedImage.TYPE_INT_ARGB);

        // file object
        File f = null;

        // create random values pixel by pixel
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                // generating values less than 256
                int a = (int)(Math.random() * 256);
                int r = (int)(Math.random() * 256);
                int g = (int)(Math.random() * 256);
                int b = (int)(Math.random() * 256);

                // pack the components into one ARGB pixel:
                // alpha occupies bits 24-31, red 16-23,
                // green 8-15 and blue 0-7
                int p = (a << 24) | (r << 16) | (g << 8) | b;

                img.setRGB(x, y, p);
            }
        }

        // write image
        try
        {
            f = new File("C:/Users/hp/Desktop/Image Processing in Java/gfg-logo.png");
            ImageIO.write(img, "png", f);
        }
        catch (IOException e)
        {
            System.out.println("Error: " + e);
        }
    }
}

Note: The code will not run on an online IDE since it writes the image to disk.

This article is contributed by Pratik Agarwal.
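As an aside (not part of the original Java program), the same per-pixel approach ports directly to the browser. The sketch below uses the Canvas API in plain JavaScript; the canvas dimensions simply mirror the Java example:

// Generate a random pixel image in the browser with the Canvas API
const width = 640, height = 320;
const canvas = document.createElement('canvas');
canvas.width = width;
canvas.height = height;
const ctx = canvas.getContext('2d');

// ImageData stores pixels as a flat RGBA byte array, four bytes per pixel
const imgData = ctx.createImageData(width, height);
for (let i = 0; i < imgData.data.length; i += 4) {
  imgData.data[i]     = Math.floor(Math.random() * 256); // red
  imgData.data[i + 1] = Math.floor(Math.random() * 256); // green
  imgData.data[i + 2] = Math.floor(Math.random() * 256); // blue
  imgData.data[i + 3] = Math.floor(Math.random() * 256); // alpha
}

// Paint the pixels and attach the canvas to the page
ctx.putImageData(imgData, 0, 0);
document.body.appendChild(canvas);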
[ { "code": null, "e": 54, "s": 26, "text": "\n14 Nov, 2021" }, { "code": null, "e": 69, "s": 54, "text": "Prerequisites:" }, { "code": null, "e": 111, "s": 69, "text": "Image Processing in Java – Read and Write" }, { "code": null, "e": 157, "s": 111, "text": "Image Processing In Java – Get and Set Pixels" }, { "code": null, "e": 228, "s": 157, "text": "Image Processing in Java – Colored Image to Grayscale Image Conversion" }, { "code": null, "e": 298, "s": 228, "text": "Image Processing in Java – Colored Image to Negative Image Conversion" }, { "code": null, "e": 368, "s": 298, "text": "Image Processing in Java – Colored to Red Green Blue Image Conversion" }, { "code": null, "e": 435, "s": 368, "text": "Image Processing in Java – Colored Image to Sepia Image Conversion" }, { "code": null, "e": 633, "s": 435, "text": "In this article, we will be creating a random pixel image. For creating a random pixel image, we don’t need any input image. We can create an image file and set its pixel values generated randomly." }, { "code": null, "e": 849, "s": 633, "text": "A random image is an image in which the pixels are chosen at random, so they can take any color from the desired palette (generally 16 million colors). The resulting images look like multi-colored noise backgrounds." }, { "code": null, "e": 1175, "s": 849, "text": "Set the dimension of the new image file.Create a BufferedImage object to hold the image. This object is used to store an image in RAM.Generate random number values for alpha, red, green, and blue components.Set the randomly generated ARGB (Alpha, Red, Green, and Blue) values.Repeat steps 3 and 4 for each pixel of the image." }, { "code": null, "e": 1216, "s": 1175, "text": "Set the dimension of the new image file." }, { "code": null, "e": 1311, "s": 1216, "text": "Create a BufferedImage object to hold the image. This object is used to store an image in RAM." }, { "code": null, "e": 1385, "s": 1311, "text": "Generate random number values for alpha, red, green, and blue components." }, { "code": null, "e": 1455, "s": 1385, "text": "Set the randomly generated ARGB (Alpha, Red, Green, and Blue) values." }, { "code": null, "e": 1505, "s": 1455, "text": "Repeat steps 3 and 4 for each pixel of the image." }, { "code": null, "e": 1510, "s": 1505, "text": "Java" }, { "code": "// Java program to demonstrate // creation of random pixel image import java.io.File;import java.io.IOException;import java.awt.image.BufferedImage;import javax.imageio.ImageIO; public class RandomImage{ public static void main(String args[])throws IOException { // Image file dimensions int width = 640, height = 320; // Create buffered image object BufferedImage img = null; img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB); // file object File f = null; // create random values pixel by pixel for (int y = 0; y < height; y++) { for (int x = 0; x < width; x++) { // generating values less than 256 int a = (int)(Math.random()*256); int r = (int)(Math.random()*256); int g = (int)(Math.random()*256); int b = (int)(Math.random()*256); //pixel int p = (a<<24) | (r<<16) | (g<<8) | b; img.setRGB(x, y, p); } } // write image try { f = new File(\"C:/Users/hp/Desktop/Image Processing in Java/gfg-logo.png\"); ImageIO.write(img, \"png\", f); } catch(IOException e) { System.out.println(\"Error: \" + e); } }}", "e": 2871, "s": 1510, "text": null }, { "code": null, "e": 2941, "s": 2871, "text": "Note: Code will not run on online ide since it writes image in drive." 
}, { "code": null, "e": 3365, "s": 2941, "text": "This article is contributed by Pratik Agarwal. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. " }, { "code": null, "e": 3381, "s": 3365, "text": "nishkarshgandhi" }, { "code": null, "e": 3398, "s": 3381, "text": "Image-Processing" }, { "code": null, "e": 3403, "s": 3398, "text": "Java" }, { "code": null, "e": 3408, "s": 3403, "text": "Java" } ]
Lodash _.compact() Function
10 Nov, 2021

Lodash proves to be very useful when working with arrays, strings, objects, etc. It makes math operations and the functional programming paradigm much easier and more concise. The _.compact() function is used to create an array with all falsey values removed in JavaScript.

Syntax:

_.compact(array)

Parameters: This function accepts only a single parameter, as mentioned above and described below:

array: The array to be compacted.

Note: The values false, null, 0, "", undefined, and NaN are falsey.

Return Value: This function returns the array after filtering out the falsey values.

A few examples are given below for a better understanding of the function.

Example 1: Passing a list of both truthy and falsey elements to the _.compact() function.

javascript

// Requiring the lodash library
let lodash = require("lodash");

// Original array to be compacted
let array = [0, 1, false, 2, '', 3];

let newArray = lodash.compact(array);
console.log("Before compact: " + array);

// Printing newArray
console.log("After compact: " + newArray);

Output:

Before compact: 0,1,false,2,,3
After compact: 1,2,3

Example 2: Passing a list containing only falsey values to the _.compact() function.

javascript

// Requiring the lodash library
let lodash = require("lodash");

// Original array to be compacted
let array = [0, false, '', undefined, NaN];

let newArray = lodash.compact(array);
console.log("Before compact: " + array);

// Printing newArray
console.log("After compact: " + newArray);

Output:

Before compact: 0,false,,,NaN
After compact: 

Example 3: Passing a list that mixes falsey elements with truthy strings to the _.compact() function (note that the string 'undefined' is truthy).

javascript

// Requiring the lodash library
let lodash = require("lodash");

// Original array to be compacted
let array = [false, 'HTML', NaN, 'CSS', 'undefined'];

let newArray = lodash.compact(array);
console.log("Before compact: " + array);

// Printing newArray
console.log("After compact: " + newArray);

Output:

Before compact: false,HTML,NaN,CSS,undefined
After compact: HTML,CSS,undefined

Example 4: Passing a list containing modified false values to the _.compact() function.

javascript

// Requiring the lodash library
let lodash = require("lodash");

// Original array to be compacted
let array = [false, true, 'yes', 'no', "no2"];

let newArray = lodash.compact(array);
console.log("Before compact: " + array);

// Printing newArray
console.log("After compact: " + newArray);

Output:

Before compact: false,true,yes,no,no2
After compact: true,yes,no,no2
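For comparison, plain JavaScript can achieve the same effect without Lodash: passing the built-in Boolean function as a predicate to Array.prototype.filter keeps exactly the truthy elements (a rough equivalent, not part of the Lodash API):

// Rough vanilla-JS equivalent of _.compact():
// Boolean(x) is false precisely for the falsey values listed above,
// so filter(Boolean) keeps only the truthy elements.
let array = [0, 1, false, 2, '', 3];
let newArray = array.filter(Boolean);

console.log(newArray); // [ 1, 2, 3 ]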
[ { "code": null, "e": 28, "s": 0, "text": "\n10 Nov, 2021" }, { "code": null, "e": 276, "s": 28, "text": "Lodash proves to be much useful when working with arrays, strings, objects etc. It makes math operations and function paradigm much easier, concise. The _.compact() function is used to creates an array with all falsey values removed in JavaScript." }, { "code": null, "e": 284, "s": 276, "text": "Syntax:" }, { "code": null, "e": 301, "s": 284, "text": "_.compact(array)" }, { "code": null, "e": 399, "s": 301, "text": "Parameters: This function accepts only a single parameter as mentioned above and described below:" }, { "code": null, "e": 438, "s": 399, "text": "array: It is an array to be compacted." }, { "code": null, "e": 506, "s": 438, "text": "Note: The values false, null, 0, “”, undefined, and NaN are falsey." }, { "code": null, "e": 580, "s": 506, "text": "Return Value: This function returns the array after filtering the values." }, { "code": null, "e": 653, "s": 580, "text": "Few examples are given below for a better understanding of the function." }, { "code": null, "e": 744, "s": 653, "text": "Example 1: Passing a list of both the true and the false elements to _.compact() function." }, { "code": null, "e": 755, "s": 744, "text": "javascript" }, { "code": "// Requiring the lodash librarylet lodash = require(\"lodash\"); // Original array to be compactedlet array = [0, 1, false, 2, '', 3]; let newArray = lodash.compact(array);console.log(\"Before compact: \" + array); // Printing newArray console.log(\"After compact: \" + newArray);", "e": 1036, "s": 755, "text": null }, { "code": null, "e": 1044, "s": 1036, "text": "Output:" }, { "code": null, "e": 1131, "s": 1044, "text": "Example 2: Passing a list containing all the false values to the _.compact() function." }, { "code": null, "e": 1142, "s": 1131, "text": "javascript" }, { "code": "// Requiring the lodash librarylet lodash = require(\"lodash\"); // Original array to be compactedlet array = [0, false, '', undefined, NaN]; let newArray = lodash.compact(array);console.log(\"Before compact: \" + array); // Printing newArray console.log(\"After compact: \" + newArray);", "e": 1430, "s": 1142, "text": null }, { "code": null, "e": 1438, "s": 1430, "text": "Output:" }, { "code": null, "e": 1525, "s": 1438, "text": "Example 3: Passing a list which contains a false element in ” to _.compact() function." }, { "code": null, "e": 1536, "s": 1525, "text": "javascript" }, { "code": "// Requiring the lodash librarylet lodash = require(\"lodash\"); // Original array to be compactedlet array = [false, 'HTML', NaN, 'CSS', 'undefined']; let newArray = lodash.compact(array);console.log(\"Before compact: \" + array); // Printing newArray console.log(\"After compact: \" + newArray);", "e": 1856, "s": 1536, "text": null }, { "code": null, "e": 1864, "s": 1856, "text": "Output:" }, { "code": null, "e": 1951, "s": 1864, "text": "Example 4: Passing a list containing modified false values to the _.reduce() function." 
}, { "code": null, "e": 1962, "s": 1951, "text": "javascript" }, { "code": "// Requiring the lodash librarylet lodash = require(\"lodash\"); // Original array to be compactedlet array = [false, true, 'yes', 'no', \"no2\"]; let newArray = lodash.compact(array);console.log(\"Before compact: \" + array); // Printing newArray console.log(\"After compact: \" + newArray);", "e": 2253, "s": 1962, "text": null }, { "code": null, "e": 2261, "s": 2253, "text": "Output:" }, { "code": null, "e": 2280, "s": 2261, "text": "surindertarika1234" }, { "code": null, "e": 2298, "s": 2280, "text": "JavaScript-Lodash" }, { "code": null, "e": 2309, "s": 2298, "text": "JavaScript" }, { "code": null, "e": 2326, "s": 2309, "text": "Web Technologies" } ]