How to close JFrame on the click of a Button in Java

Call frame.dispose() on the click of a button to close a JFrame. First, create a button and frame −
JFrame frame = new JFrame();
JButton button = new JButton("Click to Close!");
Now, close the JFrame on the click of the above button with an ActionListener −
button.addActionListener(e -> {
frame.dispose();
});
The following is an example to close JFrame on the click of a Button −
import java.awt.Color;
import java.awt.Dimension;
import javax.swing.JButton;
import javax.swing.JFrame;
public class SwingDemo {
public static void main(String[] args) {
JFrame frame = new JFrame();
JButton button = new JButton("Click to Close!");
frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frame.setContentPane(button);
button.addActionListener(e -> {
frame.dispose();
});
frame.setPreferredSize(new Dimension(550, 300));
frame.getContentPane().setBackground(Color.ORANGE);
frame.pack();
frame.setVisible(true);
}
}
When you click the “Click to Close!” button, the frame will close.
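As a side note (my addition, not part of the original example): dispose() releases the window’s native resources, while the EXIT_ON_CLOSE setting above makes the JVM exit when the window is closed from the title bar. If the application should keep running after the window closes, DISPOSE_ON_CLOSE can be used instead −

// Only disposes this window; the JVM keeps running while other
// non-daemon threads or displayable windows remain
frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);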
How to write a class inside an interface in Java?

Defining a class within an interface is allowed in Java. If the methods of an interface accept a class as an argument and the class is not used elsewhere, we can define that class inside the interface.
In the following example we have an interface named CarRentalServices, and this interface has two methods that accept an object of the class Car as an argument. Within this interface we have the class Car.
interface CarRentalServices {
void lendCar(Car c);
void collectCar(Car c);
public class Car{
int carId;
String carModel;
int issueDate;
int returnDate;
}
}
public class InterfaceSample implements CarRentalServices {
public void lendCar(Car c) {
System.out.println("Car Issued");
}
public void collectCar(Car c) {
System.out.println("Car Retrieved");
}
public static void main(String args[]){
InterfaceSample obj = new InterfaceSample();
obj.lendCar(new CarRentalServices.Car());
obj.collectCar(new CarRentalServices.Car());
}
}
This produces the following output −

Car Issued
Car Retrieved
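One detail worth noting (my addition): a class declared inside an interface is implicitly public and static, which is why the example above can instantiate it without any enclosing instance −

// Behaves like a static nested class: no CarRentalServices instance is needed
CarRentalServices.Car car = new CarRentalServices.Car();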
What is the difference between a mutable and immutable string in C#?

StringBuilder is the mutable string type in C#. With StringBuilder, you can expand the number of characters in the string. A String cannot be changed once it is created, but a StringBuilder can be expanded in place; it does not create a new object in memory.
Set StringBuilder −
StringBuilder str = new StringBuilder();
Let us see an example to learn how to work with StringBuilder in C# −
using System;
using System.Text;
public class Program {
public static void Main() {
StringBuilder str = new StringBuilder("Web World!!",30);
str.Replace("World", "Arena");
Console.WriteLine(str);
}
}
This produces the following output −

Web Arena!!
The immutable string type in C# is String. A String cannot be changed once it is created; unlike StringBuilder, every modification allocates a new string object in memory.
Set a string −
String str = "tim";
The following is an example in which we are comparing two strings −
using System;
namespace StringApplication {
class StringProg {
static void Main(string[] args) {
string str1 = "Steve";
string str2 = "Ben";
if (String.Compare(str1, str2) == 0) {
Console.WriteLine(str1 + " and " + str2 + " are equal strings.");
} else {
Console.WriteLine(str1 + " and " + str2 + " are not equal strings.");
}
Console.ReadKey();
}
}
}
This produces the following output −

Steve and Ben are not equal strings.
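To make the mutable/immutable difference visible in code, here is a small sketch (my addition, not from the original article): mutating a StringBuilder keeps the same object, while “changing” a string produces a brand-new object −

using System;
using System.Text;

public class MutabilityDemo {
   public static void Main() {
      string a = "Web";
      string b = a;
      a += " Arena"; // allocates a new string object
      Console.WriteLine(Object.ReferenceEquals(a, b)); // False: a now refers to a new object

      StringBuilder sb1 = new StringBuilder("Web");
      StringBuilder sb2 = sb1;
      sb1.Append(" Arena"); // modifies the same buffer in place
      Console.WriteLine(Object.ReferenceEquals(sb1, sb2)); // True: still the same object
   }
}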
Group Anagrams in Python

Suppose we have a set of strings and we have to group the anagrams together. So if the input is ["eat", "tea", "tan", "ate", "nat", "bat"], then the groups are [["ate","eat","tea"],["nat","tan"],["bat"]].
To solve this, we will follow these steps −
Define result as an empty map
for each string i in the input array −
   x := the characters of i, sorted and joined back into a string
   if x is in result, insert i into result[x]
   otherwise, set result[x] := [i]
return the values of result as a list
Let us see the following implementation to get a better understanding −
class Solution:
    def groupAnagrams(self, strs):
        result = {}
        for i in strs:
            x = "".join(sorted(i))
            if x in result:
                result[x].append(i)
            else:
                result[x] = [i]
        return list(result.values())

ob1 = Solution()
print(ob1.groupAnagrams(["eat", "tea", "tan", "ate", "nat", "bat"]))
["eat", "tea", "tan", "ate", "nat", "bat"]
[["ate","eat","tea"],["nat","tan"],["bat"]] | [
Food Nutrition: Learn from Data | Towards Data Science

I love to explore my data and find unexpected patterns in it. As a data scientist, in my personal opinion, we need this curiosity trait to succeed in the field. The way to explore data, though, is not limited to basic techniques such as visualizing the data and computing summary statistics; one way is to apply machine learning.
Machine learning is a technique for exploring data as well, not only for prediction purposes as people often like to promote. This is why I often focus on understanding the concept of the model to know how my data is processed; to know better what happens to our data.
In this article, I want to show what information we can get from summary statistics and from a data mining technique, unsupervised learning, when exploring data. Because one of my hobbies is cooking, I use a dataset from Kaggle with the nutrition values of common foods and products. I explore this data both for learning and to satisfy my own curiosity; my target here is just to see what happens in my data and what kind of information I can get, without any specific aim. Let’s get started.
First, we need to read our data and understand what it looks like. This is an important first step.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('nutrition.csv')
data.head()
data.info()
We actually have 76 columns, which I do not show in full here (it would be a really long list); a sample of the data is shown in the table above.
Most of our data consists of nutrition values (calories, fat, sugar, carbohydrates, etc.) together with the food name. The nutrition value columns use different units of measurement, such as g (gram), mg (milligram) and mcg (microgram). We can also ignore the serving_size feature, as it presents no additional information other than that all the data is based on 100 grams of the food. Some of the columns also contain NaN values, which I believe means the value is equal to 0. Now, let us do some data cleaning.
#Drop the serving_size column
data.drop('serving_size', axis = 1, inplace = True)

#Fill the NaN value with 0
data.fillna(0, inplace = True)
In this case, I want every feature except the name feature to become a numerical column. That means we need to remove all the non-numerical text in the data. I also want to transform all the numerical data except calories to the same unit of measurement (gram). Let’s get into it.
#I would use the Regular Expression module to help me clean the data
import re

#Loop over each non-numerical feature except the name feature
for col in data.drop('name', axis = 1).select_dtypes(exclude = 'number').columns:
    for i in data[col]:
        if i == '0' or i == 0:
            pass
        else:
            point = re.findall('[a-zA-Z]+', i)[0]
            replace = []
            if point == 'mg':
                for j in data[col]:
                    if j == '0' or j == 0:
                        replace.append(float(j))
                    else:
                        replace.append(float(re.sub('[a-zA-Z]', '', j)) / 1000)
            elif point == 'mcg':
                for j in data[col]:
                    if j == '0' or j == 0:
                        replace.append(float(j))
                    else:
                        replace.append(float(re.sub('[a-zA-Z]', '', j)) / 1000000)
            else:
                for j in data[col]:
                    if j == '0' or j == 0:
                        replace.append(float(j))
                    else:
                        replace.append(float(re.sub('[a-zA-Z]', '', j)))
            data[col] = replace
            data.rename({col: col + '(g)'}, axis = 1, inplace = True)
            break
Here is the end result of the data that I will explore further. Note that all the data is now measured in grams except the calories; just for the purpose of learning, I add the unit to each column name so we do not forget about it.
I also create one more feature called food_categories, because on closer examination of the name feature, the first word before the comma is the food category.
data['food_categories'] = data['name'].apply(lambda x: x.split(',')[0])
If we try to visualize the columns one by one, it would be massive and rather repetitive, as it would not give us much information. You could still try it if you want; here is the code.
for i in data.select_dtypes('number').columns:
    sns.distplot(data[i])
    plt.title(i)
    plt.show()
Sometimes, in a case like this where we really just want to explore the data, it is more intuitive to look at the numbers rather than visualize them (I am a more numerical person after all, as I believe visualization can sometimes be misleading).
pd.set_option('display.max_columns', None)
data.agg(['mean', 'median', 'std', 'skew', 'kurtosis'])
Here, for example, I use the .agg method of the DataFrame to gain information about the mean, median, std, skewness and kurtosis of each column. This is where numbers speak more than visualization.
As we know, the mean is the average of the data. Multiple features could have the same mean but differ in how they spread around it, and that spread is signified by the standard deviation (std). There is a rule called the empirical rule by which we can get the probability of where the data spreads via the standard deviation (a quick check on our own data follows the list below). The empirical rule states that:
68% of our data falls under mean±1*std
95% of our data falls under mean±2*std
99.7% of our data falls under mean±3*std
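To make this concrete, here is a minimal sketch (my addition, reusing the data DataFrame from above) that checks those fractions for one column; on skewed data the observed shares deviate from 68/95/99.7:

col = data['calories']
mean, std = col.mean(), col.std()
for k in (1, 2, 3):
    # fraction of rows whose value lies within k standard deviations of the mean
    share = ((col >= mean - k * std) & (col <= mean + k * std)).mean()
    print(f"within mean ± {k}*std: {share:.1%}")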
The empirical rule, or as some call it the 68–95–99.7 rule, is often used to analyze data outliers. The main problem with these statistics is that they are affected by outliers or extreme values, which often cause the data to be skewed. Let me show you with an image what skewed data looks like.
Above is the plot of the total_fat(g) feature. It is skewed right, as the tail is on the right. But how skewed is it? That is the purpose of the skew statistic. Some rules of thumb we can remember about skewness are:
If the skewness is between -0.5 and 0.5, the data are fairly symmetrical
If the skewness is between -1 and -0.5 or between 0.5 and 1, the data are moderately skewed
If the skewness is less than -1 or greater than 1, the data are highly skewed
So we can see that our data above is highly skewed, which is actually true for most of the data you will encounter. Now, what about kurtosis? What does this statistic tell us? Kurtosis is a measure of whether the data are heavy-tailed or light-tailed relative to a normal distribution. The analysis can be summarized as below:
If the kurtosis is close to 0, then a normal distribution is often assumed. These are called mesokurtic distributions.
If the kurtosis is less than 0, then the distribution has lighter tails and is called a platykurtic distribution.
If the kurtosis is greater than 0, then the distribution has heavier tails and is called a leptokurtic distribution.
If we visualize it, it would look like the picture below.
To be precise, what we computed before is called excess kurtosis, where a normal distribution measures as 0. If we talk about plain kurtosis, the normal distribution equals 3; that is why excess kurtosis subtracts 3 from the kurtosis.
It turns out most of our data are skewed. Skewed data are actually really interesting to explore. For example: which foods are considered outliers based on calories?
Since our data is quite skewed, I would not rely on the mean to find the outliers; instead, I will apply the IQR method, which is based on the median.
IQR or Interquartile Range is based on the data position. For example, if we describe the ‘calories’ feature we would get the description below.
data['calories'].describe()
IQR is based on the 25% position, or Q1, and the 75% position, or Q3. We get the IQR value by subtracting Q1 from Q3 (Q3 - Q1). With the IQR method, we can decide which data are considered outliers based on the lower and upper limits, which are (a small worked example follows):
Lower Limit = Q1 - 1.5 * IQR
Upper Limit = Q3 + 1.5 * IQR
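For instance (illustrative numbers, not taken from the dataset): if Q1 = 100 and Q3 = 300, then IQR = 200, so the lower limit is 100 - 1.5 * 200 = -200 and the upper limit is 300 + 1.5 * 200 = 600; any calorie value above 600 would be flagged as an outlier.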
Any data above or below these limits is considered an outlier. Let’s implement this method and see what kind of food is considered an outlier based on calories.
#Specifying the limits
cal_Q1 = data.describe()['calories']['25%']
cal_Q3 = data.describe()['calories']['75%']
cal_IQR = cal_Q3 - cal_Q1

data[(data['calories'] < cal_Q1 - 1.5 * cal_IQR) |
     (data['calories'] > cal_Q3 + 1.5 * cal_IQR)]['food_categories'].value_counts()
It turns out most of the high-calorie foods are in the oil category, which is not surprising.
Above I showed you a way to explore the data numerically; now I want to show you an example of how machine learning can help us explore the data.
Unsupervised learning is a machine learning setting in which we do not have a specific target to learn. One example is clustering analysis, where we feed the model with data and the output is a set of clusters, with the closest data points treated as one cluster.
When we do not have a specific target for exploring data, what I like to do is leave it to machine learning to learn for us. Using unsupervised learning, we can gain a new perspective we did not realize before. Let’s do it by example with my favorite clustering algorithm.
My favorite clustering algorithm is Agglomerative Clustering, which you can read about in detail here. Basically, this analysis starts by assigning every single data point to its own cluster and proceeds by merging the closest clusters until we are left with a single cluster.
Now, before we proceed with the analysis, we need to prepare the data. Clustering analysis depends on the distances between the data points, and those distances are affected by scale; that is why we need to transform all of the features to the same scale. If you remember, every column in our features is already in grams, but the ‘calories’ column is not on the same scale. This is why we still need to transform the data. Often we transform the data to follow the standard distribution, and that is what we will do.
#Importing the transformer
from sklearn.preprocessing import StandardScaler

#Transforming the data; I drop the name feature as we only need the numerical columns
scaler = StandardScaler()
training = pd.DataFrame(scaler.fit_transform(data.drop('name', axis =1)), columns = data.drop('name', axis =1).columns)
This is what we end up with: a dataset with the same scale for every feature. Now, let’s try to cluster the data via Agglomerative Clustering. First, we visualize how the clusters would turn out.
from scipy.cluster.hierarchy import linkage, dendrogram

#Ward is the most common linkage method
Z = linkage(training, method = 'ward')
dendrogram(Z, truncate_mode = 'lastp')
plt.xticks(rotation = 90, fontsize = 10)
plt.ylabel('Distance')
plt.xlabel('Cluster')
plt.title('Agglomerative Clustering')
Above is the tree produced by the Agglomerative Clustering. It only shows the last 30 merging events, because the plot would be packed if we showed them all. As we can see, it seems that the data could be divided into 2 clusters; of course, if you want to be more conservative it could be divided into 3 clusters, as the visualization above suggests that could also be the case. Still, I will keep it at 2 clusters for now.
Let’s get back to our previous data and add the Agglomerative Clustering result to it.
from sklearn.cluster import AgglomerativeClustering

#I specify n_clusters as 2 based on our previous analysis
ach = AgglomerativeClustering(n_clusters = 2)
ach.fit(training)

#Add the resulting labels to the data
data['label'] = ach.labels_
Now, via unsupervised learning, we can actually try to visualize the multidimensional data on two axes. There are several ways to do that, but I will show you a technique called t-SNE.
from sklearn.manifold import TSNE

#t-SNE is based on a stochastic (random) process; I set random_state so the result is reproducible
tsne = TSNE(random_state=0)
tsne_results = tsne.fit_transform(training)
tsne_results = pd.DataFrame(tsne_results, columns=['tsne1', 'tsne2'])

#Visualize the data
tsne_results['label'] = data['label']
sns.scatterplot(data = tsne_results, x = 'tsne1', y = 'tsne2', hue='label')
plt.show()
Now our multidimensional data has been visualized, and clearly the agglomerative clustering method separates our data along a clear line. Well, what exactly makes them separated? That is what we need to analyze. There is no easy way except getting dirty with the numbers again. Of course, visualization helps here too; the code for the distribution of each column is below.
for i in data.select_dtypes('number').columns:
    sns.distplot(data[data['label'] == 0][i], label = 'label 0')
    sns.distplot(data[data['label'] == 1][i], label = 'label 1')
    plt.title(i)
    plt.legend()
    plt.show()
The above loop plots the distribution for each column, but if you prefer numbers like me, we can use the groupby method of the DataFrame object.
data.groupby('label').agg(['mean', 'median', 'std'])
The result looks like the above. I have done some analysis of what makes the clusters separated. Here is my summary:
Label 0 indicates food with less protein, more sugar and carbohydrates, more fiber, less fat and cholesterol, and the calories are spread except around 200 calories.
Label 1 indicates food with more protein, less sugar and carbohydrates, less fiber, more fat and cholesterol, and calories is only spread around 200 calories.
We could also see what kind of food we have from the label above.
#Food label 0
data[data['label'] == 0]['food_categories'].value_counts()
The top 5 food categories with label 0 are beverages, cereals, baby food, soup, and snacks, which is expected for food that does not contain much protein and fat.
#Food label 1
data[data['label'] == 1]['food_categories'].value_counts()
Here in label 1, the top 5 foods are all meat. This is not surprising, considering this label is for food that contains more fat and protein compared to label 0.
Here I just played around with the data to see what patterns I could get from it. I did not have any specific aim except to get an insight into what my data would provide me.
We can see that sometimes numbers tell more than visualization, and that machine learning is not only used for prediction but can also be used for analysis.
How to play YouTube video in my Android Application?

This example demonstrates how to play a YouTube video in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following dependencies in the build.gradle (Module:app) file −
implementation 'com.android.support:recyclerview-v7:28.0.0'
implementation 'com.android.support:cardview-v7:28.0.0'
Step 3 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="16sp"
tools:context=".MainActivity">
<android.support.v7.widget.RecyclerView
android:id="@+id/recyclerView"
android:layout_width="match_parent"
android:layout_height="match_parent">
</android.support.v7.widget.RecyclerView>
</RelativeLayout>
Step 4 − Create a layout resource file (video_view.xml) and add the following code −
<?xml version="1.0" encoding="utf-8"?>
<WebView xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/webView"
android:layout_width="match_parent" android:layout_height="180dp">
</WebView>
Step 5 − Create a Java class youTubeVideos.java and add the following code −
public class youTubeVideos {
String videoUrl;
public youTubeVideos() {
}
public youTubeVideos(String videoUrl) {
this.videoUrl = videoUrl;
}
public String getVideoUrl() {
return videoUrl;
}
public void setVideoUrl(String videoUrl) {
this.videoUrl = videoUrl;
}
}
Step 6 − Create a Java class VideoAdapter.java and add the following code −
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.webkit.WebChromeClient;
import android.webkit.WebView;
import java.util.List;
public class VideoAdapter extends RecyclerView.Adapter<VideoAdapter.VideoViewHolder> {
private List<youTubeVideos> youtubeVideoList;
VideoAdapter(List<youTubeVideos> youtubeVideoList) {
this.youtubeVideoList = youtubeVideoList;
}
@Override
public VideoViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
View view = LayoutInflater.from( parent.getContext()).inflate(R.layout.video_view, parent, false);
return new VideoViewHolder(view);
}
@Override
public void onBindViewHolder(VideoViewHolder holder, int position) {
holder.videoWeb.loadData( youtubeVideoList.get(position).getVideoUrl(), "text/html" , "utf-8");
}
@Override
public int getItemCount() {
return youtubeVideoList.size();
}
class VideoViewHolder extends RecyclerView.ViewHolder{
WebView videoWeb;
VideoViewHolder(View itemView) {
super(itemView);
videoWeb = itemView.findViewById(R.id.webView);
videoWeb.getSettings().setJavaScriptEnabled(true);
videoWeb.setWebChromeClient(new WebChromeClient() {
} );
}
}
}
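A side note (my addition, not part of the original steps): on some WebView versions the embedded player refuses to load via loadData() because the content has no https origin; a commonly used workaround is loadDataWithBaseURL(), for example −

// Hypothetical tweak for onBindViewHolder: give the iframe an https base URL
holder.videoWeb.loadDataWithBaseURL("https://www.youtube.com",
   youtubeVideoList.get(position).getVideoUrl(), "text/html", "utf-8", null);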
Step 7 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import java.util.Vector;
public class MainActivity extends AppCompatActivity {
RecyclerView recyclerView;
Vector<youTubeVideos> youtubeVideos = new Vector<>();
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
recyclerView = findViewById(R.id.recyclerView);
recyclerView.setHasFixedSize(true);
recyclerView.setLayoutManager( new LinearLayoutManager(this));
youtubeVideos.add( new youTubeVideos("<iframe width=\"100%\" height=\"100%\" src=\"https://www" + ".youtube.com/embed/eWEF1Zrmdow\" frameborder=\"0\" allowfullscreen></iframe>") );
youtubeVideos.add( new youTubeVideos("<iframe width=\"100%\" height=\"100%\" src=\"https://www" +".youtube.com/embed/KyJ71G2UxTQ\" frameborder=\"0\" allowfullscreen></iframe>") );
youtubeVideos.add( new youTubeVideos("<iframe width=\"100%\" height=\"100%\" src=\"https://www" +".youtube.com/embed/y8Rr39jKFKU\" frameborder=\"0\" allowfullscreen></iframe>") );
youtubeVideos.add( new youTubeVideos("<iframe width=\"100%\" height=\"100%\" src=\"https://www" +".youtube.com/embed/8Hg1tqIwIfI\" frameborder=\"0\" allowfullscreen></iframe>") );
youtubeVideos.add( new youTubeVideos("<iframe width=\"100%\" height=\"100%\" src=\"https://www" +".youtube.com/embed/uhQ7mh_o_cM\" frameborder=\"0\" allowfullscreen></iframe>") );
VideoAdapter videoAdapter = new VideoAdapter(youtubeVideos);
recyclerView.setAdapter(videoAdapter);
}
}
Step 8 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample">
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
<uses-permission android:name="android.permission.ACCESS_WIFI_STATE" />
<uses-permission android:name="android.permission.INTERNET" />
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
Click here to download the project code.
You never get bored playing with Computer Vision | by Denys Periel | Towards Data Science

In this article, you will learn how to record a screen with a decent frame rate using Python and MSS, and how to use template matching and edge detection with OpenCV. And if you wish, you can make your machine play a game.
I like automation, and once I read a review written by Markus Rene Pae about yet another Python library, PyAutoGUI. The library allows you to drive OS inputs, such as emitting mouse or keyboard events. One of the challenges Markus proposed was to automate the Google Dino game. I was curious: does PyAutoGUI allow me to capture a monitor in realtime, find the Dino and perform a jump when it is needed? I decided to give it a try and not stick to a browser implementation, so it should work regardless of whether the Dino runs in a browser or a standalone app. Later in this article you will find out for which tasks PyAutoGUI works well, and for which it is better to use other techniques.
PyAutoGUI
OpenCV
Python MSS (Multiple Screen Shots)
And of course NumPy, Matplotlib and Jupyter
Computer vision is a pretty hot topic nowadays. It is used in many places where images or videos should be processed for future usage. For example, Face ID: before understanding that it is you, it first tries to detect a face, then processes the picture and asks an ML (Machine Learning) model to classify whether it is you or not (maybe it is someone else who is trying to unlock your phone). One of the most popular CV libraries at the moment is OpenCV. According to the official web site, the library has more than 2500 optimised algorithms, which include a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. OpenCV is written in C++, is available on all platforms, and offers APIs in C++, Python, etc. It accelerates calculations on your GPU if you have one that follows the CUDA or OpenCL standards.
First I implemented a game loop and tried to use only PyAutoGUI for capturing the screen and matching a template (in my case it is a Dino), and technically it worked, but... The screenshots in PyAutoGUI are not intended for realtime capturing, so I got a latency of about ONE second between frames. That is too much, because Dino runs at a speed of more than 400 pixels per second, and by the time my program pressed a "jump" key the game was over. I decided to specify which area to capture each time to mitigate the latency and got a latency of around 0.4 seconds. Better, but still not enough. I understood that I needed something else to perform object detection, and all calculations should happen at least at 30 FPS (Frames Per Second). It means I need to perform my calculations and all side effects within 0.03[3] seconds.
First, what is MSS? According to the docs, MSS is an ultra-fast cross-platform multiple-screenshots module in pure Python using ctypes. The API is easy to use, and it is already integrated with NumPy and OpenCV. Why did I choose MSS? Basically, if you want to capture the whole screen, MSS does it fast, much faster than other libraries. If you need to cast your screen somewhere, I'd go with this library.
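As a quick illustration, here is a minimal sketch of grabbing one frame with MSS and turning it into a NumPy array that OpenCV can consume (the monitor index is an assumption; MSS exposes the list of displays via sct.monitors):

import mss
import numpy as np

with mss.mss() as sct:
    monitor = sct.monitors[1]   # first physical display; index 0 is the union of all
    shot = sct.grab(monitor)    # raw BGRA screenshot
    frame = np.array(shot)      # NumPy array, ready for cv2 functions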
After experimenting with different libraries that provide screenshot functionality, I understood that most of them use the same approach: each time you grab the screen, a "connection" to the screen resources is reestablished. I didn't dig into this part too deeply so far; I can only say that we spend too much time on this reestablishment. Meanwhile, MSS is optimised for any OS. When you grab a screen, it uses the XGetImage method from an already created "connection" to your screen resources. That means you can init an MSS instance with a with statement and run your game loop there, and you get much better performance.
with mss.mss() as sct:
    while True:
        screen = sct.grab(monitor)
        process_image(screen)
        if trigger_to_leave:
            break
Yep, that simple. This way you speed up grabbing a screen hundreds of times. Here I achieved getting screenshots at 100 FPS; I even added a sleep there to reduce redundant calculations. Next we need to process the image, analyse all blocks, and "jump" when it's needed. I split this into two parts:
1. Find a dino on the screen and detect an area with "obstacles".
2. Using the area from the previous step, grab it in a loop, calculate the distances to the obstacles, and calculate the velocity.
Let’s review these steps.
This part is visually represented in a Jupyter Notebook on my GitHub: https://github.com/dperyel/run-dino-run/blob/master/search.ipynb
At this point I widely use OpenCV for image processing, template matching and edge detection. First I eliminate the color channels from the images and use only one channel by transforming the image with cv2.cvtColor(img, cv2.COLOR_BGR2GRAY). Then I need to remove the distinctions between Day and Night.
What can we do for this? Actually, there are many ways to approach it; I decided to use the Canny algorithm to detect edges and use extreme values for the max and min thresholds. It allows me to get pretty much the same picture for both day and night canvases.
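A minimal sketch of this preprocessing step might look as follows (the file name and the threshold values here are illustrative assumptions, not the exact ones from the project):

import cv2

frame = cv2.imread("screen.png")                         # a captured frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)           # drop color channels
edges = cv2.Canny(gray, threshold1=250, threshold2=300)  # aggressive thresholds keep only strong edges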
Of course, if an image had a lot of noise, I'd need to blur it first, but in this particular case it is enough just to find edges, and I'm good to use template matching to find the Dino. The only thing is that the template won't scale during the match, so our Dino template should be taken from the screen where the game will be played. Or you can extend this functionality and perform template matching with template scaling.
By using cv2.matchTemplate I get the location of the match. Initially you get a bunch of locations, because when OpenCV slides the template over the source image it compares the areas and you get a matching value for each position. The matching values represent how precisely the pixels matched. In my case I'm looking for only one Dino, which means I can take the highest value and use the location that is mapped to it.
match_res = cv2.matchTemplate(canny_night, dino_bordered_template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(match_res)
The max_val for me is 0.81 on average, which means my template matched the image at 81%. Good enough to continue.
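From here, a bounding box for the Dino can be derived from the best match location and the template size, for example (a sketch; the variable names follow the snippet above):

h, w = dino_bordered_template.shape[:2]
top_left = max_loc                               # (x, y) of the best match
bottom_right = (max_loc[0] + w, max_loc[1] + h)  # opposite corner of the box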
Knowing the Dino's location, we can highlight the area where the obstacles appear. I select the part where nothing but the obstacles is visible. Now the barriers, like cactuses and birds, need to be grouped.
Making the grouping wasn't too hard, as essentially I have a matrix (image) where each cell has one of two values, 0 or 255, a kind of non-normalised binary matrix. And I need to find the positions of the groups, from the left, where at least one pixel with value 255 exists. To do this I iterate through the canvas along the X-axis with a step that defines the minimum distance between "obstacles". Each group is represented by a tuple with a position from the left and the width of the group. When I find all groups, I trim the findings from the left to know the exact edge of an "obstacle". This is needed for the later part, which simplifies the optical flow. It is also worth mentioning that, because the step value is constant, the complexity of this approach is linear, O(n+k), where n is the width of the canvas and k is the number of "obstacles". And that's not too bad, because I need to do this calculation on each frame and care about performance here. Below you can see a visual representation of how the grouping works.
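In code form, a minimal sketch of this grouping idea could look like this (the function name, the step value and the omitted trimming details are my assumptions, not the exact project code):

import numpy as np

def find_groups(canvas, step=10):
    # canvas is a 2D array of 0/255 edge pixels;
    # returns a list of (left_position, width) tuples
    groups = []
    x = 0
    width = canvas.shape[1]
    while x < width:
        if np.any(canvas[:, x:x + step] == 255):
            start = x
            # extend the group while consecutive windows contain edge pixels
            while x < width and np.any(canvas[:, x:x + step] == 255):
                x += step
            groups.append((start, x - start))
        else:
            x += step
    return groups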
And now I have everything to switch to the next step.
The running script which finds a dino and starts the game loop is located in the next file: https://github.com/dperyel/run-dino-run/blob/master/run.py
Alright, I'd say that the most complicated part was done in the first step. Now we know at least the distance to the first "obstacle" and can use pyautogui.press('space') if the "danger" object is too close. The problem is that the game changes its speed: Dino runs faster and faster. My first idea was to use optical flow, and the Lucas-Kanade algorithm in particular, to compare the previous frame and the current frame. When I get the pixel deviations I can calculate the speed. It would work; the only thing is that I already have everything I need. My "obstacle" groups represent the features I need to track, and I can store the state from the previous frame to find the deviation I need. I always feel relief when I avoid using a complicated algorithm and get the same result by doing a couple of "plusses and minuses" (-:
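A sketch of this idea, estimating the velocity from how far the first group shifted between two frames (the names and the time source are assumptions):

import time

prev_left, prev_time = None, None

def estimate_velocity(groups):
    global prev_left, prev_time
    velocity, now = 0.0, time.time()
    if groups:
        left = groups[0][0]                    # left edge of the nearest obstacle
        if prev_left is not None and left < prev_left:
            # the obstacle moved towards Dino since the previous frame
            velocity = (prev_left - left) / (now - prev_time)  # pixels per second
        prev_left, prev_time = left, now
    return velocity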
Knowing the velocity, it is a matter of math (or time) to find the dependency between the speed and the distance from an "obstacle" at which you need to trigger a "jump". And here is the result.
Computer vision is a great part of many automation processes. As you can see in this small example, you can even make a complete End2End test for a game by letting a machine find the sensitive parts.
It is a good idea to try different libraries to compare performance. In my case MSS is an absolute winner in screen capturing, while PyAutoGUI is still used for the other side effects.
P.S.: All sources live in my GitHub repo: https://github.com/dperyel/run-dino-run The repo uses git LFS to store all binary files. To make the script work, you need to take a screenshot of assets/dino_crop.png from your monitor, or implement template scaling (-; The code might contain bugs, as it was done mostly as a POC. You are welcome to comment or ask questions below.
{
"code": null,
"e": 396,
"s": 172,
"text": "In this article, you will learn how to record a screen with a decent frame rate by using Python and MSS. How to use template matching and edge detection with OpenCV. And if you wish you can make your machine to play a game."
},
{
"code": null,
"e": 1062,
"s": 396,
"text": "I like automation and once I read a review written by Markus Rene Pae about yet another python library PyAutoGUI. The library allows you to manipulate by OS inputs like emitting a mouse or keyboard events. One of the challenges Markus proposed was to automate the Google Dino Game. And I was curious does PyAutoGUI allow me to capture a monitor in realtime, find Dino and perform a jump when it’s needed? I decided to give it a try and do not stick to a browser implementation, so Dino should run regardless is it a browser or standalone app. Later in this article you’ll find out for which tasks PyAutoGUI works well, for which it’s better to use other techniques."
},
{
"code": null,
"e": 1072,
"s": 1062,
"text": "PyAutoGUI"
},
{
"code": null,
"e": 1079,
"s": 1072,
"text": "OpenCV"
},
{
"code": null,
"e": 1114,
"s": 1079,
"text": "Python MSS (Multiple Screen Shots)"
},
{
"code": null,
"e": 1158,
"s": 1114,
"text": "And of course NumPy, Matplotlib and Jupyter"
},
{
"code": null,
"e": 2008,
"s": 1158,
"text": "Computer vision is a pretty hot topic nowadays. It is used in many places, where images or videos should be processed for future usage. For example Face ID: before understanding that it is you, first it tries to detect a face, and then process the picture and ask ML (Machine Learning) model to classify if it is you or not. Maybe it is someone else who is trying to unblock your phone. One of the most popular CV library at the moment is OpenCV. According to the official web site: the library has more than 2500 optimised algorithms, which includes a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. OpenCV is written on C++, available on all platforms, and uses APIs written on C++, Python, etc. It accelerates calculations on your GPU if you have one which follows CUDA or OpenCL standards."
},
{
"code": null,
"e": 2817,
"s": 2008,
"text": "First I implemented a game loop and tried to use only PyAutoGUI for capturing a screen, match a template (in my case it is a Dino) and technically it worked, but... The screenshots in PyAutoGUI are not intended for realtime capturing. So I got a latency about ONE second between frames. It’s too much because Dino runs with a speed of more than 400 pixels per second. And when my program pressed a “jump” key the game was over. I decided to specify which area to capture each time to mitigate the latency and got a latency of around 0.4 seconds. Better but still not enough. I understood that I need something else to perform object detection and all calculations should happen ant least with 30 FPS (Frames Per Second). It means I need to perform my calculations and all side effects within 0.03[3] seconds."
},
{
"code": null,
"e": 3216,
"s": 2817,
"text": "First, what is MSS?According to the docs MSS is an ultra-fast cross-platform multiple screenshots module in pure python using ctypes. The API is easy to use, it is already integrated with Numpy and OpenCV. Why did I choose MSS? Basically if you want to capture the whole screen MSS does it fast, much faster than other libraries. If you need to cast your screen somewhere, I’d go with this library."
},
{
"code": null,
"e": 3841,
"s": 3216,
"text": "After experimenting with different libraries that can provide a screenshot functionality, I understood that most of them use the same approach of doing it. Each time you grab the screen a “connection” to the screen resources reestablishes. I didn’t dig into this part too deep so far, I can only say that we spend too much time on this reestablishment. Meanwhile MSS is optimised for any OS. When you grab a screen it uses XGetImage method from already created “connection” to your screen resources. That means, you can init MSS instance with with statement and make your game loop there and you get much better performance."
},
{
"code": null,
"e": 3967,
"s": 3841,
"text": "with mss.mss() as sct: while True: screen = sct.grab(monitor) process_image(screen) if trigger_to_leave: break"
},
{
"code": null,
"e": 4267,
"s": 3967,
"text": "Yep, that simple. This way you speed up grabbing a screen hundreds of times. Here I achieved getting screenshots with 100 FPS, I even added a sleep there to reduce redundant calculations. Next we need to process the image, analyse all blocks, and “jump” when it’s needed.I split this into two parts:"
},
{
"code": null,
"e": 4456,
"s": 4267,
"text": "Find a dino on the screen and detect an area with “obstacles”Using the area from the previous step, grab it in a loop, calculate the distances to the obstacles, and calculate the velocity."
},
{
"code": null,
"e": 4518,
"s": 4456,
"text": "Find a dino on the screen and detect an area with “obstacles”"
},
{
"code": null,
"e": 4646,
"s": 4518,
"text": "Using the area from the previous step, grab it in a loop, calculate the distances to the obstacles, and calculate the velocity."
},
{
"code": null,
"e": 4672,
"s": 4646,
"text": "Let’s review these steps."
},
{
"code": null,
"e": 4807,
"s": 4672,
"text": "This part is visually represented in a Jupyter Notebook on my GitHub: https://github.com/dperyel/run-dino-run/blob/master/search.ipynb"
},
{
"code": null,
"e": 5102,
"s": 4807,
"text": "At this point I widely use OpenCV for image processing, template matching, edge detection. First I eliminate the color channels from the images and use only 1 channel by transforming the image with cv2.cvtColot(img, cv2.COLOR_BRG2GRAY). Then I need to remove distinctions between Day and Night."
},
{
"code": null,
"e": 5354,
"s": 5102,
"text": "What can we do for this? Actually, there are many ways to approach, I decided to use a Canny algorithm to detect edges and use extreme values for max and min thresholds. It allows me to get pretty much the same picture for both day and night canvases."
},
{
"code": null,
"e": 5775,
"s": 5354,
"text": "Of course if an image has a lot of noise, I’d need to blur it first, but in this particular case it is enough just to find edges. And I’m good to use template matching to find the Dino. The only thing is that template won’t scale during the match. Our Dino template should be taken from the screen where the game will be played. Or you can extend this functionality and perform template matching with a template scaling."
},
{
"code": null,
"e": 6178,
"s": 5775,
"text": "By using cv2.matchTemplate I get the location of the match. Initially you get a bunch of locations because when OpenCV slides the template over the source image it compares the area and you get a matching value. The matching values represent how precise the pixels matched. In my case I’m looking for only 1 Dino, which means I can take the highest value and use a location that is mapped to the value."
},
{
"code": null,
"e": 6316,
"s": 6178,
"text": "match_res = cv2.matchTemplate(canny_night, dino_bordered_template, cv2.TM_CCOEFF_NORMED)_, max_val, _, max_loc = cv2.minMaxLoc(match_res)"
},
{
"code": null,
"e": 6437,
"s": 6316,
"text": "The max_val for me on average is 0.81, which means my template was matched on the image on 81%. Good enough to continue."
},
{
"code": null,
"e": 6642,
"s": 6437,
"text": "Knowing a dino location we can highlight the area where the obstacles appear. I select the part where no other noise but only obstacles are visible. It is needed to group barriers like cactuses and birds."
},
{
"code": null,
"e": 7645,
"s": 6642,
"text": "To make the grouping wasn’t too hard, as essentially, I have a matrix (image) where each cell has one of two values 0 or 255, kind of not normalised binary matrix. And I need to find the positions of the groups from the left where at least one pixel with value 255 exists. To do this I iterate through the canvas by X-axis with a step that defines a minimum distance between the “obstacles”. Each group represents a tuple with a position from the left and width of the group. When I find all groups, I trim the findings from the left to know the exact edge of an “obstacle”. It is needed for the future part which simplifies an optical flow. Also worth to mention that because a step value is constant the complexity of this approach is linear O(n+k) where n is the width of the canvas and k is the amount of the “obstacles”. And that’s not too bad because I need to do this calculation on each frame and care about performance here. Below you can see a visual representation of how the grouping works."
},
{
"code": null,
"e": 7699,
"s": 7645,
"text": "And now I have everything to switch to the next step."
},
{
"code": null,
"e": 7850,
"s": 7699,
"text": "The running script which finds a dino and starts the game loop is located in the next file: https://github.com/dperyel/run-dino-run/blob/master/run.py"
},
{
"code": null,
"e": 8664,
"s": 7850,
"text": "Alright, I’d say that the most complicated part is done in the first part. Now we know at least the distance to the first “obstacle” and can use pyautogui.press('space') if the “danger” object is too close. The problem is that the game changes a speed. Dino runs faster and faster. My first idea was to use optical flow and Lucas-Kanade algorithm in particular to compare the previous frame and current frame. When I get pixel deviations I can calculate the speed. It would work, the only thing is I already have everything I need. My “obstacle” groups represent the features I need to look after and I can store a state from the previous frame to find the deviation I need. I always feel relief when avoiding usage of a complicated algorithm and get the same result by doing a couple of “plusses and minuses” (-:"
},
{
"code": null,
"e": 8832,
"s": 8664,
"text": "By knowing a velocity it is a matter of math (or time) to find a dependency on which distance from an “obstacle” do you need to trigger a “jump”. And here is a result."
},
{
"code": null,
"e": 9034,
"s": 8832,
"text": "Computer vision is a great part of many automation processes. As you can see in this small example you can even make a complete End2End test for a game by letting a machine to find the sensitive parts."
},
{
"code": null,
"e": 9213,
"s": 9034,
"text": "It is a good idea to try different libraries to compare performance. In my case MSS is an absolute winner in screen capturing when PyAutoGUI is still used for other side effects."
}
] |
Restrict keyword in C | Here we will see what the restrict keyword in C is. The restrict keyword was first introduced in the C99 standard. Let us see what this restrict keyword actually does.
The restrict keyword is used in pointer declarations as a type qualifier of the pointer.
This keyword does not add new functionality. Using it, the programmer can inform the compiler about an optimization it can make.
When the restrict keyword is used with a pointer ptr, it tells the compiler that ptr is the only way to access the object pointed to by it, so the compiler will not add any additional checks.
If the programmer uses the restrict keyword and then violates the above condition, the result is undefined behavior.
#include <stdio.h>

// z is restrict-qualified: the compiler may assume that *z is not
// aliased by *x or *y inside this function
void my_function(int* x, int* y, int* restrict z) {
   *x += *z;
   *y += *z;
}

int main(void) {
   int x = 10, y = 20, z = 30;
   my_function(&x, &y, &z);
   printf("%d %d %d", x, y, z);
   return 0;
}
40 50 30 | [
{
"code": null,
"e": 1219,
"s": 1062,
"text": "Here we will see what is the restrict keyword in C. The restrict keyword first introduced in C99 version. Let us see what is actually this restrict keyword."
},
{
"code": null,
"e": 1310,
"s": 1219,
"text": "The restrict keyword is used for pointer declarations as a type quantifier of the pointer."
},
{
"code": null,
"e": 1401,
"s": 1310,
"text": "The restrict keyword is used for pointer declarations as a type quantifier of the pointer."
},
{
"code": null,
"e": 1531,
"s": 1401,
"text": "This keyword does not add new functionalities. Using this the programmer can inform about an optimization that compiler can make."
},
{
"code": null,
"e": 1661,
"s": 1531,
"text": "This keyword does not add new functionalities. Using this the programmer can inform about an optimization that compiler can make."
},
{
"code": null,
"e": 1848,
"s": 1661,
"text": "When the restrict keyword is used with a pointer p, then it tells the compiler, that ptr is only way to access the object pointed by this. So compiler will not add any additional checks."
},
{
"code": null,
"e": 2035,
"s": 1848,
"text": "When the restrict keyword is used with a pointer p, then it tells the compiler, that ptr is only way to access the object pointed by this. So compiler will not add any additional checks."
},
{
"code": null,
"e": 2152,
"s": 2035,
"text": "If the programmer uses restrict keyword then violate the above condition, it will generate some un-defined behavior."
},
{
"code": null,
"e": 2269,
"s": 2152,
"text": "If the programmer uses restrict keyword then violate the above condition, it will generate some un-defined behavior."
},
{
"code": null,
"e": 2474,
"s": 2269,
"text": "#include <stdio.h>\nvoid my_function(int* x, int* y, int* restrict z) {\n *x += *z;\n *y += *z;\n}\nmain(void) {\n int x = 10, y = 20, z = 30;\n my_function(&x, &y, &z);\n printf(\"%d %d %d\", x, y, z);\n}"
},
{
"code": null,
"e": 2483,
"s": 2474,
"text": "40 50 30"
}
] |
Check if a given string is a valid number (Integer or Floating Point) in Java | SET 2 (Regular Expression approach) | 28 Sep, 2020
In Set 1, we discussed a general approach to check whether a string is a valid number or not. In this post, we will discuss a regular expression approach to check for a number.
Examples:
Input : str = "11.5"
Output : true
Input : str = "abc"
Output : false
Input : str = "2e10"
Output : true
Input : 10e5.4
Output : false
Check if a given string is a valid Integer
For integer number : Below is the regular definition for an integer number.
sign -> + | - | epsilon
digit -> 0 | 1 | .... | 9
num -> sign digit digit*
Hence, one of the regular expressions for an integer number is
[+-]?[0-9][0-9]*
// Java program to check whether given string
// is a valid integer number using regex

import java.util.regex.Matcher;
import java.util.regex.Pattern;

class GFG {
    public static void main (String[] args) {
        String input1 = "abc";
        String input2 = "1234";

        // regular expression for an integer number
        String regex = "[+-]?[0-9]+";

        // compiling regex
        Pattern p = Pattern.compile(regex);

        // Creates a matcher that will match input1 against regex
        Matcher m = p.matcher(input1);

        // If match found and equal to input1
        if(m.find() && m.group().equals(input1))
            System.out.println(input1 + " is a valid integer number");
        else
            System.out.println(input1 + " is not a valid integer number");

        // Creates a matcher that will match input2 against regex
        m = p.matcher(input2);

        // If match found and equal to input2
        if(m.find() && m.group().equals(input2))
            System.out.println(input2 + " is a valid integer number");
        else
            System.out.println(input2 + " is not a valid integer number");
    }
}
Output:
abc is not a valid integer number
1234 is a valid integer number
Below are other shorthand regular expressions for an integer number
[+-]?[0-9]+
[+-]?\d\d*
[+-]?\d+
Check if a given string is a valid floating point number
For floating point number : Below is the regular definition for a floating point number.
sign -> + | - | epsilon
digit -> 0 | 1 | .... | 9
digits -> digit digit*
optional_fraction -> . digits | epsilon
optional_exponent -> ((E | e) (+ | - | epsilon) digits) | epsilon
num -> sign digits optional_fraction optional_exponent
Hence, one of the regular expressions for a floating point number is
[+-]?[0-9]+(\.[0-9]+)?([Ee][+-]?[0-9]+)?
// Java program to check whether given string
// is a valid floating point number using regex

import java.util.regex.Matcher;
import java.util.regex.Pattern;

class GFG {
    public static void main (String[] args) {
        String input1 = "10e5.4";
        String input2 = "2e10";

        // regular expression for a floating point number
        String regex = "[+-]?[0-9]+(\\.[0-9]+)?([Ee][+-]?[0-9]+)?";

        // compiling regex
        Pattern p = Pattern.compile(regex);

        // Creates a matcher that will match input1 against regex
        Matcher m = p.matcher(input1);

        // If match found and equal to input1
        if(m.find() && m.group().equals(input1))
            System.out.println(input1 + " is a valid float number");
        else
            System.out.println(input1 + " is not a valid float number");

        // Creates a matcher that will match input2 against regex
        m = p.matcher(input2);

        // If match found and equal to input2
        if(m.find() && m.group().equals(input2))
            System.out.println(input2 + " is a valid float number");
        else
            System.out.println(input2 + " is not a valid float number");
    }
}
Output:
10e5.4 is not a valid float number
2e10 is a valid float number
Below is another shorthand regular expression for a float number
[+-]?\d+(\.\d+)?([Ee][+-]?\d+)?
Related Article : Check if a given string is a valid number (Integer or Floating Point) in Java
This article is contributed by Gaurav Miglani.
{
"code": null,
"e": 52,
"s": 24,
"text": "\n28 Sep, 2020"
},
{
"code": null,
"e": 230,
"s": 52,
"text": "In Set 1, we have discussed general approach to check whether a string is a valid number or not. In this post, we will discuss regular expression approach to check for a number."
},
{
"code": null,
"e": 240,
"s": 230,
"text": "Examples:"
},
{
"code": null,
"e": 378,
"s": 240,
"text": "Input : str = \"11.5\"\nOutput : true\n\nInput : str = \"abc\"\nOutput : false\n\nInput : str = \"2e10\"\nOutput : true\n\nInput : 10e5.4\nOutput : false"
},
{
"code": null,
"e": 421,
"s": 378,
"text": "Check if a given string is a valid Integer"
},
{
"code": null,
"e": 497,
"s": 421,
"text": "For integer number : Below is the regular definition for an integer number."
},
{
"code": null,
"e": 573,
"s": 497,
"text": "sign -> + | - | epsilon\ndigit -> 0 | 1 | .... | 9\nnum -> sign digit digit*\n"
},
{
"code": null,
"e": 634,
"s": 573,
"text": "Hence one of the regular expression for an integer number is"
},
{
"code": null,
"e": 652,
"s": 634,
"text": "[+-]?[0-9][0-9]*\n"
},
{
"code": "// Java program to check whether given string// is a valid integer number using regex import java.util.regex.Matcher;import java.util.regex.Pattern; class GFG { public static void main (String[] args) { String input1 = \"abc\"; String input2 = \"1234\"; // regular expression for an integer number String regex = \"[+-]?[0-9]+\"; // compiling regex Pattern p = Pattern.compile(regex); // Creates a matcher that will match input1 against regex Matcher m = p.matcher(input1); // If match found and equal to input1 if(m.find() && m.group().equals(input1)) System.out.println(input1 + \" is a valid integer number\"); else System.out.println(input1 + \" is not a valid integer number\"); // Creates a matcher that will match input2 against regex m = p.matcher(input2); // If match found and equal to input2 if(m.find() && m.group().equals(input2)) System.out.println(input2 + \" is a valid integer number\"); else System.out.println(input2 + \" is not a valid integer number\"); }}",
"e": 1851,
"s": 652,
"text": null
},
{
"code": null,
"e": 1859,
"s": 1851,
"text": "Output:"
},
{
"code": null,
"e": 1925,
"s": 1859,
"text": "abc is not a valid integer number\n1234 is a valid integer number\n"
},
{
"code": null,
"e": 1994,
"s": 1925,
"text": "Below are other short-hands regular expression for an integer number"
},
{
"code": null,
"e": 2027,
"s": 1994,
"text": "[+-]?[0-9]+\n[+-]?\\d\\d*\n[+-]?\\d+\n"
},
{
"code": null,
"e": 2084,
"s": 2027,
"text": "Check if a given string is a valid floating point number"
},
{
"code": null,
"e": 2173,
"s": 2084,
"text": "For floating point number : Below is the regular definition for a floating point number."
},
{
"code": null,
"e": 2408,
"s": 2173,
"text": "sign -> + | - | epsilon\ndigit -> 0 | 1 | .... | 9\ndigits -> digit digit*\noptional_fraction -> . digits | epsilon\noptional_exponent -> ((E | e) (+ | - | epsilon) digits) | epsilon\nnum -> sign digits optional_fraction optional_exponent\n"
},
{
"code": null,
"e": 2469,
"s": 2408,
"text": "Hence one of the regular expression for a floating number is"
},
{
"code": null,
"e": 2511,
"s": 2469,
"text": "[+-]?[0-9]+(\\.[0-9]+)?([Ee][+-]?[0-9]+)?\n"
},
{
"code": "//Java program to check whether given string// is a valid floating point number using regex import java.util.regex.Matcher;import java.util.regex.Pattern; class GFG { public static void main (String[] args) { String input1 = \"10e5.4\"; String input2 = \"2e10\"; // regular expression for a floating point number String regex = \"[+-]?[0-9]+(\\\\.[0-9]+)?([Ee][+-]?[0-9]+)?\"; // compiling regex Pattern p = Pattern.compile(regex); // Creates a matcher that will match input1 against regex Matcher m = p.matcher(input1); // If match found and equal to input1 if(m.find() && m.group().equals(input1)) System.out.println(input1 + \" is a valid float number\"); else System.out.println(input1 + \" is not a valid float number\"); // Creates a matcher that will match input2 against regex m = p.matcher(input2); // If match found and equal to input2 if(m.find() && m.group().equals(input2)) System.out.println(input2 + \" is a valid float number\"); else System.out.println(input2 + \" is not a valid float number\"); }}",
"e": 3747,
"s": 2511,
"text": null
},
{
"code": null,
"e": 3755,
"s": 3747,
"text": "Output:"
},
{
"code": null,
"e": 3820,
"s": 3755,
"text": "10e5.4 is not a valid float number\n2e10 is a valid float number\n"
},
{
"code": null,
"e": 3884,
"s": 3820,
"text": "Below is other short-hand regular expression for a float number"
},
{
"code": null,
"e": 3917,
"s": 3884,
"text": "[+-]?\\d+(\\.\\d+)?([Ee][+-]?\\d+)?\n"
},
{
"code": null,
"e": 4013,
"s": 3917,
"text": "Related Article : Check if a given string is a valid number (Integer or Floating Point) in Java"
},
{
"code": null,
"e": 4315,
"s": 4013,
"text": "This article is contributed by Gaurav Miglani. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 4440,
"s": 4315,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above."
},
{
"code": null,
"e": 4447,
"s": 4440,
"text": "ansh21"
},
{
"code": null,
"e": 4460,
"s": 4447,
"text": "Java-Strings"
},
{
"code": null,
"e": 4468,
"s": 4460,
"text": "Strings"
},
{
"code": null,
"e": 4481,
"s": 4468,
"text": "Java-Strings"
},
{
"code": null,
"e": 4489,
"s": 4481,
"text": "Strings"
},
{
"code": null,
"e": 4587,
"s": 4489,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4662,
"s": 4587,
"text": "Check for Balanced Brackets in an expression (well-formedness) using Stack"
},
{
"code": null,
"e": 4719,
"s": 4662,
"text": "Python program to check if a string is palindrome or not"
},
{
"code": null,
"e": 4755,
"s": 4719,
"text": "KMP Algorithm for Pattern Searching"
},
{
"code": null,
"e": 4793,
"s": 4755,
"text": "Longest Palindromic Substring | Set 1"
},
{
"code": null,
"e": 4854,
"s": 4793,
"text": "Length of the longest substring without repeating characters"
},
{
"code": null,
"e": 4890,
"s": 4854,
"text": "Convert string to char array in C++"
},
{
"code": null,
"e": 4935,
"s": 4890,
"text": "Top 50 String Coding Problems for Interviews"
},
{
"code": null,
"e": 4987,
"s": 4935,
"text": "Check whether two strings are anagram of each other"
},
{
"code": null,
"e": 5051,
"s": 4987,
"text": "What is Data Structure: Types, Classifications and Applications"
}
] |
Working with the pycricbuzz library in Python | 01 Nov, 2020
Pycricbuzz is a Python library that can be used to get live scores, commentary and full scorecards for recent and live matches.
In case you want to know how the library was developed, you can watch the video: https://youtu.be/OQqYbC1BKxw
Installation: Run the following pip command in the terminal.
pip install pycricbuzz
First of all, we need to create an object of Cricbuzz() for further operations.
Python3
# importing the modules
from pycricbuzz import Cricbuzz

# creating a Cricbuzz object
c = Cricbuzz()
We use the matches() method to fetch all the live, upcoming and recently finished matches. Each match has an id associated with it.
Python3
# importing the modules
from pycricbuzz import Cricbuzz

# creating a Cricbuzz object
c = Cricbuzz()

# displaying all the matches
print(c.matches())
Output:
The output by default is quite difficult to read. We can use the json module to make the output more human-readable.
Python3
# importing the modules
from pycricbuzz import Cricbuzz
import json

# creating a Cricbuzz object
c = Cricbuzz()

# displaying all the matches
print(json.dumps(c.matches(), indent = 4))
Output:
We can use the matchinfo() method to get the information of a particular match. We have to pass the match id to this method.
Python3
# importing the modules
from pycricbuzz import Cricbuzz
import json

# creating a Cricbuzz object
c = Cricbuzz()

# displaying the match information
print(json.dumps(c.matchinfo('30560'), indent = 4))
Output:
We can fetch the score of a live match by using the livescore() method. Use it only for live matches.
Python3
# importing the modules
from pycricbuzz import Cricbuzz
import json

# creating a Cricbuzz object
c = Cricbuzz()

# displaying the match score
print(json.dumps(c.livescore('30505'), indent = 4))
Output:
We can get the scorecard of a match by using the scorecard() method. Pass the match id of the target match to this method.
Python3
# importing the modules
from pycricbuzz import Cricbuzz
import json

# creating a Cricbuzz object
c = Cricbuzz()

# displaying the match scorecard
print(json.dumps(c.scorecard('30505'), indent = 4))
Output:
We can get the commentary of a particular match by using the commentary() method. Pass the match id to this method.
Python3
# importing the modules
from pycricbuzz import Cricbuzz
import json

# creating a Cricbuzz object
c = Cricbuzz()

# displaying the match commentary
print(json.dumps(c.commentary('30505'), indent = 4))
Output:
{
"code": null,
"e": 28,
"s": 0,
"text": "\n01 Nov, 2020"
},
{
"code": null,
"e": 155,
"s": 28,
"text": "Pycricbuzz is a python library that can be used to get live scores, commentary and full scorecard for recent and live matches."
},
{
"code": null,
"e": 266,
"s": 155,
"text": " In case you want to know how the library was developed, you can watch the video: https://youtu.be/OQqYbC1BKxw"
},
{
"code": null,
"e": 327,
"s": 266,
"text": "Installation: Run the following pip command in the terminal."
},
{
"code": null,
"e": 351,
"s": 327,
"text": "pip install pycricbuzz\n"
},
{
"code": null,
"e": 431,
"s": 351,
"text": "First of all, we need to create an object of Cricbuzz() for further operations."
},
{
"code": null,
"e": 439,
"s": 431,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzz # creating a Cricbuzz objectc = Cricbuzz()",
"e": 538,
"s": 439,
"text": null
},
{
"code": null,
"e": 670,
"s": 538,
"text": "We use the matches() method to fetch all the live, upcoming and recently finished matches. Each match has an id associated with it."
},
{
"code": null,
"e": 678,
"s": 670,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzz # creating a Cricbuzz objectc = Cricbuzz() # displaying all the matchesprint(c.matches())",
"e": 825,
"s": 678,
"text": null
},
{
"code": null,
"e": 833,
"s": 825,
"text": "Output:"
},
{
"code": null,
"e": 947,
"s": 833,
"text": "The output by default is quite difficult to read. We can use JSON to make the output matches more human-readable."
},
{
"code": null,
"e": 955,
"s": 947,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzzimport json # creating a Cricbuzz objectc = Cricbuzz() # displaying all the matchesprint(json.dumps(c.matches(), indent = 4))",
"e": 1137,
"s": 955,
"text": null
},
{
"code": null,
"e": 1145,
"s": 1137,
"text": "Output:"
},
{
"code": null,
"e": 1270,
"s": 1145,
"text": "We can use the matchinfo() method to get the information of a particular match. We have to pass the match id in this method."
},
{
"code": null,
"e": 1278,
"s": 1270,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzzimport json # creating a Cricbuzz objectc = Cricbuzz() # displaying the match informationprint(json.dumps(c.matchinfo('30560'), indent = 4))",
"e": 1475,
"s": 1278,
"text": null
},
{
"code": null,
"e": 1483,
"s": 1475,
"text": "Output:"
},
{
"code": null,
"e": 1585,
"s": 1483,
"text": "We can fetch the score of a live match by using the livescore() method. Use it only for live matches."
},
{
"code": null,
"e": 1593,
"s": 1585,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzzimport json # creating a Cricbuzz objectc = Cricbuzz() # displaying the match scoreprint(json.dumps(c.livescore('30505'), indent = 4))",
"e": 1784,
"s": 1593,
"text": null
},
{
"code": null,
"e": 1792,
"s": 1784,
"text": "Output:"
},
{
"code": null,
"e": 1915,
"s": 1792,
"text": "We can get the scorecard of a match by using the scorecard() method. Pass the match id of the target match in this method."
},
{
"code": null,
"e": 1923,
"s": 1915,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzzimport json # creating a Cricbuzz objectc = Cricbuzz() # displaying the match scoreprint(json.dumps(c.scorecard('30505'), indent = 4))",
"e": 2114,
"s": 1923,
"text": null
},
{
"code": null,
"e": 2122,
"s": 2114,
"text": "Output:"
},
{
"code": null,
"e": 2238,
"s": 2122,
"text": "We can get the commentary of a particular match by using the commentary() method. Pass the match id in this method."
},
{
"code": null,
"e": 2246,
"s": 2238,
"text": "Python3"
},
{
"code": "# importing the modulesfrom pycricbuzz import Cricbuzzimport json # creating a Cricbuzz objectc = Cricbuzz() # displaying the match commentaryprint(json.dumps(c.commentary('30505'), indent = 4))",
"e": 2443,
"s": 2246,
"text": null
},
{
"code": null,
"e": 2451,
"s": 2443,
"text": "Output:"
},
{
"code": null,
"e": 2466,
"s": 2451,
"text": "python-modules"
},
{
"code": null,
"e": 2473,
"s": 2466,
"text": "Python"
},
{
"code": null,
"e": 2571,
"s": 2473,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2603,
"s": 2571,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 2630,
"s": 2603,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 2651,
"s": 2630,
"text": "Python OOPs Concepts"
},
{
"code": null,
"e": 2674,
"s": 2651,
"text": "Introduction To PYTHON"
},
{
"code": null,
"e": 2730,
"s": 2674,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 2761,
"s": 2730,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 2803,
"s": 2761,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 2845,
"s": 2803,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 2884,
"s": 2845,
"text": "Python | Get unique values from a list"
}
] |
Transport Layer Security (TLS) | 14 Feb, 2022
Transport Layer Security (TLS) is designed to provide security at the transport layer. TLS was derived from a security protocol called Secure Sockets Layer (SSL). TLS ensures that no third party may eavesdrop on or tamper with any message.
There are several benefits of TLS:
Encryption: TLS/SSL can help to secure transmitted data using encryption.
Interoperability: TLS/SSL works with most web browsers, including Microsoft Internet Explorer and on most operating systems and web servers.
Algorithm flexibility: TLS/SSL provides options for the authentication mechanisms, encryption algorithms and hashing algorithms that are used during the secure session.
Ease of Deployment: Many applications use TLS/SSL transparently on a Windows Server 2003 operating system.
Ease of Use: Because we implement TLS/SSL beneath the application layer, most of its operations are completely invisible to the client.
Working of TLS: The client connects to the server (using TCP); the client can be anything. The client then sends a number of specifications:
Version of SSL/TLS it supports.
Which cipher suites and compression methods it wants to use.
The server checks what the highest SSL/TLS version is that is supported by them both, picks a cipher suite from one of the client's options (if it supports one), and optionally picks a compression method. After this the basic setup is done, and the server provides its certificate. This certificate must be trusted either by the client itself or by a party that the client trusts. Having verified the certificate and being certain this server really is who he claims to be (and not a man in the middle), a key is exchanged. This can be a public key, a "PreMasterSecret", or simply nothing, depending upon the cipher suite.
Both the server and client can now compute the key for symmetric encryption. The handshake is finished and the two hosts can communicate securely. To close the connection, a close_notify alert is used; if the connection is torn down by simply finishing the TCP connection, both sides will know it was improperly terminated. The connection cannot be compromised by this though, merely interrupted.
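To see the negotiated parameters in practice, here is a minimal sketch of a TLS client using Python's standard ssl module (the host name is an illustrative assumption):

import socket
import ssl

# a default context picks a safe protocol version and cipher suites
context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket performs the TLS handshake described above,
    # verifying the server certificate against trusted CAs
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())  # e.g. 'TLSv1.3'
        print(tls.cipher())   # negotiated cipher suite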
{
"code": null,
"e": 54,
"s": 26,
"text": "\n14 Feb, 2022"
},
{
"code": null,
"e": 295,
"s": 54,
"text": "Transport Layer Securities (TLS) are designed to provide security at the transport layer. TLS was derived from a security protocol called Secure Socket Layer (SSL). TLS ensures that no third party may eavesdrop or tampers with any message. "
},
{
"code": null,
"e": 332,
"s": 295,
"text": "There are several benefits of TLS: "
},
{
"code": null,
"e": 406,
"s": 332,
"text": "Encryption: TLS/SSL can help to secure transmitted data using encryption."
},
{
"code": null,
"e": 547,
"s": 406,
"text": "Interoperability: TLS/SSL works with most web browsers, including Microsoft Internet Explorer and on most operating systems and web servers."
},
{
"code": null,
"e": 713,
"s": 547,
"text": "Algorithm flexibility: TLS/SSL provides operations for authentication mechanism, encryption algorithms and hashing algorithm that are used during the secure session."
},
{
"code": null,
"e": 815,
"s": 713,
"text": "Ease of Deployment: Many applications TLS/SSL temporarily on a windows server 2003 operating systems."
},
{
"code": null,
"e": 949,
"s": 815,
"text": "Ease of Use: Because we implement TLS/SSL beneath the application layer, most of its operations are completely invisible to client. "
},
{
"code": null,
"e": 1080,
"s": 949,
"text": "Working of TLS: The client connect to server (using TCP), the client will be something. The client sends number of specification: "
},
{
"code": null,
"e": 1158,
"s": 1080,
"text": "Version of SSL/TLS.which cipher suites, compression method it wants to use. "
},
{
"code": null,
"e": 1178,
"s": 1158,
"text": "Version of SSL/TLS."
},
{
"code": null,
"e": 1237,
"s": 1178,
"text": "which cipher suites, compression method it wants to use. "
},
{
"code": null,
"e": 1843,
"s": 1237,
"text": "The server checks what the highest SSL/TLS version is that is supported by them both, picks a cipher suite from one of the clients option (if it supports one) and optionally picks a compression method. After this the basic setup is done, the server provides its certificate. This certificate must be trusted either by the client itself or a party that the client trusts. Having verified the certificate and being certain this server really is who he claims to be (and not a man in the middle), a key is exchanged. This can be a public key, “PreMasterSecret” or simply nothing depending upon cipher suite. "
},
{
"code": null,
"e": 2179,
"s": 1843,
"text": "Both the server and client can now compute the key for symmetric encryption. The handshake is finished and the two hosts can communicate securely. To close a connection by finishing. TCP connection both sides will know the connection was improperly terminated. The connection cannot be compromised by this through, merely interrupted. "
},
{
"code": null,
"e": 2190,
"s": 2179,
"text": "bhrkaviani"
},
{
"code": null,
"e": 2208,
"s": 2190,
"text": "prathyushtaneti11"
},
{
"code": null,
"e": 2226,
"s": 2208,
"text": "Computer Networks"
},
{
"code": null,
"e": 2244,
"s": 2226,
"text": "Computer Networks"
}
] |
Convert List to Array in Java | Difficulty Level: Easy
The List interface provides a way to store an ordered collection. It is a child interface of Collection. It is an ordered collection of objects in which duplicate values can be stored. Since List preserves the insertion order, it allows positional access and insertion of elements. Now, here we are given a List, be it any LinkedList or ArrayList of strings, and our motive is to convert this list to an array of strings in Java using different methods.
Methods:
1. Using get() method
2. Using toArray() method
3. Using Stream introduced in Java 8
Method 1: Using get() method
We can use the list's get() method in a loop to read all elements one by one and insert them into an array.
Return Type: The element at the specified index in the list.
Syntax:
public E get(int index)
Example:
Java
// Java program to Convert a List to an Array
// Using get() method in a loop

// Importing required classes
import java.io.*;
import java.util.LinkedList;
import java.util.List;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating a LinkedList of string type by
        // declaring object of List
        List<String> list = new LinkedList<String>();

        // Adding custom element to LinkedList
        // using add() method
        list.add("Geeks");
        list.add("for");
        list.add("Geeks");
        list.add("Practice");

        // Storing it inside array of strings
        String[] arr = new String[list.size()];

        // Converting ArrayList to Array
        // using get() method
        for (int i = 0; i < list.size(); i++)
            arr[i] = list.get(i);

        // Printing elements of array on console
        for (String x : arr)
            System.out.print(x + " ");
    }
}
Geeks for Geeks Practice
Method 2: Using toArray() method
Example:
Java
// Java Program to Convert a List to an array
// using toArray() Within a loop

// Importing utility classes
import java.util.*;

// Main class
public class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an empty LinkedList of string type
        // by declaring object of List
        List<String> list = new LinkedList<String>();

        // Adding elements to above LinkedList
        // using add() method
        list.add("Geeks");
        list.add("for");
        list.add("Geeks");
        list.add("Practice");

        // Converting List to array
        // using toArray() method
        String[] arr = list.toArray(new String[0]);

        // Printing elements of array
        // using for-each loop
        for (String x : arr)
            System.out.print(x + " ");
    }
}
Geeks for Geeks Practice
Method 3: Using Stream introduced in Java 8
Example:
Java
// Java Program to Demonstrate conversion of List to Array
// Using stream

// Importing utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an empty LinkedList of string type
        List<String> list = new LinkedList<String>();

        // Adding elements to above LinkedList
        // using add() method
        list.add("Geeks");
        list.add("for");
        list.add("Geeks");
        list.add("Practice");

        // Storing size of List
        int n = list.size();

        // Converting List to array using a stream and
        // the String[]::new array-constructor reference
        String[] arr = list.stream().toArray(String[]::new);

        // Printing elements of array
        // using enhanced for loop
        for (String x : arr)
            System.out.print(x + " ");
    }
}
Geeks for Geeks Practice
Tip: We can convert the array back to a list via the Arrays.asList() method.
Related Articles:
ArrayList to Array Conversion in Java
Set to Array in Java
{
"code": null,
"e": 24,
"s": 0,
"text": "Difficulty Level :\nEasy"
},
{
"code": null,
"e": 473,
"s": 24,
"text": "The List interface provides a way to store the ordered collection. It is a child interface of Collection. It is an ordered collection of objects in which duplicate values can be stored. Since List preserves the insertion order, it allows positional access and insertion of elements. Now here we are given a List be it any LinkedList or ArrayList of strings, our motive s to convert this list to an array of strings in java using different methods. "
},
{
"code": null,
"e": 482,
"s": 473,
"text": "Methods:"
},
{
"code": null,
"e": 556,
"s": 482,
"text": "Using get() methodUsing toArray() methodUsing Stream introduced in Java 8"
},
{
"code": null,
"e": 575,
"s": 556,
"text": "Using get() method"
},
{
"code": null,
"e": 598,
"s": 575,
"text": "Using toArray() method"
},
{
"code": null,
"e": 632,
"s": 598,
"text": "Using Stream introduced in Java 8"
},
{
"code": null,
"e": 661,
"s": 632,
"text": "Method 1: Using get() method"
},
{
"code": null,
"e": 756,
"s": 661,
"text": "We can use the below list method to get all elements one by one and insert them into an array."
},
{
"code": null,
"e": 817,
"s": 756,
"text": "Return Type: The element at the specified index in the list."
},
{
"code": null,
"e": 826,
"s": 817,
"text": "Syntax: "
},
{
"code": null,
"e": 850,
"s": 826,
"text": "public E get(int index)"
},
{
"code": null,
"e": 859,
"s": 850,
"text": "Example:"
},
{
"code": null,
"e": 864,
"s": 859,
"text": "Java"
},
{
"code": "// Java program to Convert a List to an Array// Using get() method in a loop // Importing required classesimport java.io.*;import java.util.LinkedList;import java.util.List; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating a LinkedList of string type by // declaring object of List List<String> list = new LinkedList<String>(); // Adding custom element to LinkedList // using add() method list.add(\"Geeks\"); list.add(\"for\"); list.add(\"Geeks\"); list.add(\"Practice\"); // Storing it inside array of strings String[] arr = new String[list.size()]; // Converting ArrayList to Array // using get() method for (int i = 0; i < list.size(); i++) arr[i] = list.get(i); // Printing elements of array on console for (String x : arr) System.out.print(x + \" \"); }}",
"e": 1818,
"s": 864,
"text": null
},
{
"code": null,
"e": 1843,
"s": 1818,
"text": "Geeks for Geeks Practice"
},
{
"code": null,
"e": 1878,
"s": 1845,
"text": "Method 2: Using toArray() method"
},
{
"code": null,
"e": 1887,
"s": 1878,
"text": "Example:"
},
{
"code": null,
"e": 1892,
"s": 1887,
"text": "Java"
},
{
"code": "// Java Program to Convert a List to an array// using toArray() Within a loop // Importing utility classesimport java.util.*; // Main classpublic class GFG { // Main driver method public static void main(String[] args) { // Creating an empty LinkedList of string type // by declaring object of List List<String> list = new LinkedList<String>(); // Adding elements to above LinkedList // using add() method list.add(\"Geeks\"); list.add(\"for\"); list.add(\"Geeks\"); list.add(\"Practice\"); // Converting List to array // using toArray() method String[] arr = list.toArray(new String[0]); // Printing elements of array // using for-each loop for (String x : arr) System.out.print(x + \" \"); }}",
"e": 2710,
"s": 1892,
"text": null
},
{
"code": null,
"e": 2735,
"s": 2710,
"text": "Geeks for Geeks Practice"
},
{
"code": null,
"e": 2780,
"s": 2737,
"text": "Method 3: Using Stream introduced in Java8"
},
{
"code": null,
"e": 2789,
"s": 2780,
"text": "Example:"
},
{
"code": null,
"e": 2794,
"s": 2789,
"text": "Java"
},
{
"code": "// Java Program to Demonstrate conversion of List to Array// Using stream // Importing utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating an empty LinkedList of string type List<String> list = new LinkedList<String>(); // Adding elements to above LinkedList // using add() method list.add(\"Geeks\"); list.add(\"for\"); list.add(\"Geeks\"); list.add(\"Practice\"); // Storing size of List int n = list.size(); // Converting List to array via scope resolution // operator using streams String[] arr = list.stream().toArray(String[] ::new); // Printing elements of array // using enhanced for loop for (String x : arr) System.out.print(x + \" \"); }}",
"e": 3669,
"s": 2794,
"text": null
},
{
"code": null,
"e": 3694,
"s": 3669,
"text": "Geeks for Geeks Practice"
},
{
"code": null,
"e": 3766,
"s": 3696,
"text": "Tip: We can convert the array back to the list via asList() method. "
},
{
"code": null,
"e": 3786,
"s": 3766,
"text": "Related Articles: "
},
{
"code": null,
"e": 3824,
"s": 3786,
"text": "ArrayList to Array Conversion in Java"
},
{
"code": null,
"e": 3845,
"s": 3824,
"text": "Set to Array in Java"
},
{
"code": null,
"e": 3859,
"s": 3845,
"text": "solankimayank"
},
{
"code": null,
"e": 3877,
"s": 3859,
"text": "gulshankumarar231"
},
{
"code": null,
"e": 3897,
"s": 3877,
"text": "Java-Array-Programs"
},
{
"code": null,
"e": 3914,
"s": 3897,
"text": "Java-Collections"
},
{
"code": null,
"e": 3924,
"s": 3914,
"text": "java-list"
},
{
"code": null,
"e": 3943,
"s": 3924,
"text": "Java-List-Programs"
},
{
"code": null,
"e": 3948,
"s": 3943,
"text": "Java"
},
{
"code": null,
"e": 3953,
"s": 3948,
"text": "Java"
},
{
"code": null,
"e": 3970,
"s": 3953,
"text": "Java-Collections"
}
] |
PyQt5 – Flames Calculator | 03 Jun, 2022
In this article we will see how we can create a flames calculator using PyQt5. This flames calculator assesses and predicts the outcome of a relationship based on an algorithm applied to two given names. FLAMES is a popular game named after the acronym: Friends, Lovers, Affectionate, Marriage, Enemies, Siblings. This game does not accurately predict whether or not an individual is right for you, but it can be fun to play with your friends. Below is how the flames calculator will look:
GUI implementation steps:
1. Create a label that says enter player 1 name and set color and geometry to it.
2. Add a QLineEdit widget in front of the first name label to get the first name.
3. Similarly, create another label that says enter player 2 name and set color and geometry to it.
4. Add a QLineEdit widget in front of this label to get the second name.
5. Create a label to show the result; set its border and geometry and change its font.
6. Create a push button at the bottom which says get result.

Back-end implementation steps:
1. Add an action to the push button.
2. Inside the push button action, get both player names.
3. Remove the spacing in between the names.
4. Call the get-result method that returns the result.
5. Inside the get-result method, call the remove-letter method that removes the common characters with their respective common occurrences.
6. Get the count of the characters that are left, and take the FLAMES letters as ["F", "L", "A", "M", "E", "S"].
7. Start removing letters using the count we got. The letter that survives the process is the result; return it.
8. Set the result on the label using the setText method.
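To make the counting idea easier to follow before it is wired into the GUI below, here is a minimal, GUI-free sketch of the elimination logic described in the back-end steps. The two names are placeholder examples, and the common-character removal is condensed into a single pass rather than the helper function used in the full program.

# Minimal sketch of the FLAMES counting logic (no GUI).
# The names below are placeholders for illustration.
name1 = list("aman")
name2 = list("riya")

# strike out characters common to both names
for ch in name1[:]:
    if ch in name2:
        name1.remove(ch)
        name2.remove(ch)

count = len(name1) + len(name2)
result = ["Friends", "Love", "Affection", "Marriage", "Enemy", "Siblings"]

# count off 'count' letters in a circle and remove the
# letter the count lands on, until one letter survives
while len(result) > 1:
    split_index = (count % len(result) - 1)
    if split_index >= 0:
        result = result[split_index + 1:] + result[:split_index]
    else:
        result = result[:len(result) - 1]

print(result[0])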
Below is the implementation
Python3
# importing libraries
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui
from PyQt5.QtGui import *
from PyQt5.QtCore import *
import sys


class Window(QMainWindow):

    def __init__(self):
        super().__init__()

        # setting title
        self.setWindowTitle("Python ")

        # setting geometry
        self.setGeometry(100, 100, 320, 400)

        # calling method
        self.UiComponents()

        # showing all the widgets
        self.show()

    # method for components
    def UiComponents(self):

        # creating label to tell user enter first name
        name1_label = QLabel("Enter Player 1 Name : ", self)

        # setting border and color to the label
        name1_label.setStyleSheet("border : 1px solid black ; background : lightgrey;")

        # setting geometry
        name1_label.setGeometry(10, 20, 140, 40)

        # creating label to tell user enter second name
        name2_label = QLabel("Enter Player 2 Name : ", self)

        # setting border and color to the label
        name2_label.setStyleSheet("border : 1px solid black ; background : lightgrey;")

        # setting geometry
        name2_label.setGeometry(10, 70, 140, 40)

        # creating a line edit to get the first name
        self.name1 = QLineEdit(self)

        # setting geometry
        self.name1.setGeometry(160, 20, 150, 40)

        # creating a line edit to get the second name
        self.name2 = QLineEdit(self)

        # setting geometry
        self.name2.setGeometry(160, 70, 150, 40)

        # creating a label to show result
        self.output = QLabel("Find Relationship Status", self)

        # setting geometry to the output label
        self.output.setGeometry(20, 160, 280, 60)

        # setting border and background color to it
        self.output.setStyleSheet("border : 2px solid black; background : white;")

        # setting alignment to output
        self.output.setAlignment(Qt.AlignCenter)

        # setting font to the output
        self.output.setFont(QFont('Times', 11))

        # creating push button to get result
        self.push = QPushButton("Get Result", self)

        # setting geometry to the button
        self.push.setGeometry(80, 260, 140, 50)

        # adding action to the push button
        self.push.clicked.connect(self.do_action)

    # action called by the push button
    def do_action(self):

        # getting names
        name1 = self.name1.text()
        name2 = self.name2.text()

        # removing spacing from the names
        # (str.replace returns a new string, so assign it back)
        name1 = name1.replace(" ", "")
        name2 = name2.replace(" ", "")

        # function for removing common characters
        # with their respective occurrences
        def remove_match_char(list1, list2):

            for i in range(len(list1)):
                for j in range(len(list2)):

                    # if a common character is found
                    # then remove that character
                    # and return the concatenated
                    # list with a True flag
                    if list1[i] == list2[j]:
                        c = list1[i]

                        # remove character from the list
                        list1.remove(c)
                        list2.remove(c)

                        # concatenation of two list elements with *
                        # * acts as a border mark here
                        list3 = list1 + ["*"] + list2

                        # return the concatenated list with True flag
                        return [list3, True]

            # no common characters found;
            # return the concatenated list with False flag
            list3 = list1 + ["*"] + list2
            return [list3, False]

        # method to find the result
        def find_relation(p1_list, p2_list):

            # taking a flag as True initially
            proceed = True

            # keep calling the remove_match_char function
            # until no common characters are found, i.e.
            # keep looping while the proceed flag is True
            while proceed:

                # function calling and store return value
                ret_list = remove_match_char(p1_list, p2_list)

                # take out concatenated list from return list
                con_list = ret_list[0]

                # take out flag value from return list
                proceed = ret_list[1]

                # find the index of "*" / border mark
                star_index = con_list.index("*")

                # list slicing performed:
                # all characters before * stored in p1_list
                p1_list = con_list[: star_index]

                # all characters after * stored in p2_list
                p2_list = con_list[star_index + 1:]

            # count total remaining characters
            count = len(p1_list) + len(p2_list)

            # list of FLAMES acronym
            result = ["Friends", "Love", "Affection",
                      "Marriage", "Enemy", "Siblings"]

            # keep looping until only one item
            # is remaining in the result list
            while len(result) > 1:

                # store that index value from
                # where we have to perform slicing.
                split_index = (count % len(result) - 1)

                # this step is done for performing
                # anticlock-wise circular fashion counting.
                if split_index >= 0:

                    # list slicing
                    right = result[split_index + 1:]
                    left = result[: split_index]

                    # list concatenation
                    result = right + left
                else:
                    result = result[: len(result) - 1]

            # return the final result
            return result[0]

        # calling find relation method
        result = find_relation(list(name1), list(name2))

        # setting text to the output label
        self.output.setText("Relationship : " + result)


# create pyqt5 app
App = QApplication(sys.argv)

# create the instance of our Window
window = Window()

# start the app
sys.exit(App.exec())
Output :
surinderdawra388
nikhatkhan11
PyQt-exercise
Python-gui
Python-PyQt
Python
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n03 Jun, 2022"
},
{
"code": null,
"e": 517,
"s": 28,
"text": "In this article we will see how we can create a flames calculator using PyQt5. This flames calculator assesses and predicts the outcome of a relationship based on an algorithm of two given names.FLAMES is a popular game named after the acronym: Friends, Lovers, Affectionate, Marriage, Enemies, Sibling. This game does not accurately predict whether or not an individual is right for you, but it can be fun to play this with your friends.Below is how the flames calculator will look like "
},
{
"code": null,
"e": 1655,
"s": 519,
"text": "GUI implementation steps : 1. Create a label that says enter player 1 name and set color and geometry to it 2. Add QLineEdit widget in front of the first name label to get the first name 3. Similarly create another label that says enter player 2 name and set color and geometry to it 4. Add QLineEdit widget in front of this label to get the second name 5. Create a label to show the result and set its border, geometry and change its font. 6. Create a push button at the bottom which says get result.Back end implementation steps : 1. Add action to the push button 2. Inside the push button action get both player names 3. Remove the spacing in between the names 4. Call the get result method that returns the result 5. Inside the get result method call the remove letter method that removes the common characters with their respective common occurrences. 6. Then get the count of characters that are left and take FLAMES letters as [“F”, “L”, “A”, “M”, “E”, “S”] 7. Start removing letter using the count we got. The letter which last the process is the result, return the result 8. Set the result to the label using setText method. "
},
{
"code": null,
"e": 1685,
"s": 1655,
"text": "Below is the implementation "
},
{
"code": null,
"e": 1693,
"s": 1685,
"text": "Python3"
},
{
"code": "# importing librariesfrom PyQt5.QtWidgets import *from PyQt5 import QtCore, QtGuifrom PyQt5.QtGui import *from PyQt5.QtCore import *import sys class Window(QMainWindow): def __init__(self): super().__init__() # setting title self.setWindowTitle(\"Python \") # setting geometry self.setGeometry(100, 100, 320, 400) # calling method self.UiComponents() # showing all the widgets self.show() # method for components def UiComponents(self): # creating label to tell user enter first name name1_label = QLabel(\"Enter Player 1 Name : \", self) # setting border and color to the label name1_label.setStyleSheet(\"border : 1px solid black ; background : lightgrey;\") # setting geometry name1_label.setGeometry(10, 20, 140, 40) # creating label to tell user enter second name name2_label = QLabel(\"Enter Player 2 Name : \", self) # setting border and color to the label name2_label.setStyleSheet(\"border : 1px solid black ; background : lightgrey;\") # setting geometry name2_label.setGeometry(10, 70, 140, 40) # creating a line edit to get the first name self.name1 = QLineEdit(self) # setting geometry self.name1.setGeometry(160, 20, 150, 40) # creating a line edit to get the second name self.name2 = QLineEdit(self) # setting geometry self.name2.setGeometry(160, 70, 150, 40) # creating a label to show result self.output = QLabel(\"Find Relationship Status\", self) # setting geometry to the output label self.output.setGeometry(20, 160, 280, 60) # setting border and background color to it self.output.setStyleSheet(\"border : 2px solid black; background : white;\") # setting alignment to output self.output.setAlignment(Qt.AlignCenter) # setting font to the output self.output.setFont(QFont('Times', 11)) # creating push button to get result self.push = QPushButton(\"Get Result\", self) # setting geometry tot he button self.push.setGeometry(80, 260, 140, 50) # adding action to the push button self.push.clicked.connect(self.do_action) # action called by the push button def do_action(self): # getting names name1 = self.name1.text() name2 = self.name2.text() # removing spacing form the name name1.replace(\" \", \"\") name2.replace(\" \", \"\") # function for removing common characters # with their respective occurrences def remove_match_char(list1, list2): for i in range(len(list1)): for j in range(len(list2)): # if common character is found # then remove that character # and return list of concatenated # list with True Flag if list1[i] == list2[j]: c = list1[i] # remove character from the list list1.remove(c) list2.remove(c) # concatenation of two list elements with * # * is act as border mark here list3 = list1 + [\"*\"] + list2 # return the concatenated list with True flag return [list3, True] # no common characters is found # return the concatenated list with False flag list3 = list1 + [\"*\"] + list2 return [list3, False] # method to find the result def find_relation(p1_list, p2_list): # taking a flag as True initially proceed = True # keep calling remove_match_char function # until common characters is found or # keep looping until proceed flag is True while proceed: # function calling and store return value ret_list = remove_match_char(p1_list, p2_list) # take out concatenated list from return list con_list = ret_list[0] # take out flag value from return list proceed = ret_list[1] # find the index of \"*\" / border mark star_index = con_list.index(\"*\") # list slicing perform # all characters before * store in p1_list p1_list = con_list[: star_index] # all characters after * store in p2_list p2_list = 
con_list[star_index + 1:] # count total remaining characters count = len(p1_list) + len(p2_list) # list of FLAMES acronym result = [\"Friends\", \"Love\", \"Affection\", \"Marriage\", \"Enemy\", \"Siblings\"] # keep looping until only one item # is not remaining in the result list while len(result) > 1: # store that index value from # where we have to perform slicing. split_index = (count % len(result) - 1) # this steps is done for performing # anticlock-wise circular fashion counting. if split_index >= 0: # list slicing right = result[split_index + 1:] left = result[: split_index] # list concatenation result = right + left else: result = result[: len(result) - 1] # print final result return result[0] # calling find relation method result = find_relation(list(name1), list(name2)) # setting text to the output label self.output.setText(\"Relationship : \" + result) # create pyqt5 appApp = QApplication(sys.argv) # create the instance of our Windowwindow = Window() # start the appsys.exit(App.exec())",
"e": 7784,
"s": 1693,
"text": null
},
{
"code": null,
"e": 7795,
"s": 7784,
"text": "Output : "
},
{
"code": null,
"e": 7814,
"s": 7797,
"text": "surinderdawra388"
},
{
"code": null,
"e": 7827,
"s": 7814,
"text": "nikhatkhan11"
},
{
"code": null,
"e": 7841,
"s": 7827,
"text": "PyQt-exercise"
},
{
"code": null,
"e": 7852,
"s": 7841,
"text": "Python-gui"
},
{
"code": null,
"e": 7864,
"s": 7852,
"text": "Python-PyQt"
},
{
"code": null,
"e": 7871,
"s": 7864,
"text": "Python"
}
] |
Extract URLs present in a given string | 08 Feb, 2021
Given a string S, the task is to find and extract all the URLs from the string. If no URL is present in the string, then print “-1”.
Examples:
Input: S = “Welcome to https://www.geeksforgeeks.org Computer Science Portal”Output: https://www.geeksforgeeks.orgExplanation:The given string contains the URL ‘https://www.geeksforgeeks.org’.
Input: S = “Welcome to https://write.geeksforgeeks.org portal of https://www.geeksforgeeks.org Computer Science Portal”Output:https://write.geeksforgeeks.org https://www.geeksforgeeks.orgExplanation:The given string contains two URLs ‘https://write.geeksforgeeks.org’ and ‘https://www.geeksforgeeks.org’.
Approach: The idea is to use Regular Expression to solve this problem. Follow the steps below to solve the given problem:
Create a regular expression to extract all the URLs from the string as mentioned below:
regex = “\\b((?:https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:, .;]*[-a-zA-Z0-9+&@#/%=~_|])”
Create an ArrayList in Java and compile the regular expression using Pattern.compile().
Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher().
Find the substring from the first index of match result to the last index of the match result and add this substring into the list.
After completing the above steps, if the list is found to be empty, then print “-1” as there is no URL present in the string S. Otherwise, print all the string stored in the list.
Below is the implementation of the above approach:
Java
// Java program for the above approach

import java.util.*;
import java.util.regex.*;

class GFG {

    // Function to extract all the URLs
    // from the string
    public static void extractURL(String str)
    {
        // Creating an empty ArrayList
        List<String> list = new ArrayList<>();

        // Regular Expression to extract
        // URLs from the string
        String regex = "\\b((?:https?|ftp|file):"
                       + "//[-a-zA-Z0-9+&@#/%?="
                       + "~_|!:, .;]*[-a-zA-Z0-9+"
                       + "&@#/%=~_|])";

        // Compile the Regular Expression
        Pattern p = Pattern.compile(
            regex, Pattern.CASE_INSENSITIVE);

        // Match the string against
        // the regular expression
        Matcher m = p.matcher(str);

        // Find the next subsequence of
        // the input sequence that
        // matches the pattern
        while (m.find()) {

            // Take the substring from the
            // first index of the match result
            // to the last index of the match
            // result and add it to the list
            list.add(str.substring(
                m.start(0), m.end(0)));
        }

        // If there is no URL present
        if (list.size() == 0) {
            System.out.println("-1");
            return;
        }

        // Print all the URLs stored
        for (String url : list) {
            System.out.println(url);
        }
    }

    // Driver Code
    public static void main(String args[])
    {
        // Given String str
        String str
            = "Welcome to https://www.geeksforgeeks"
              + ".org Computer Science Portal";

        // Function Call
        extractURL(str);
    }
}
https://www.geeksforgeeks.org
Time Complexity: O(N)Auxiliary Space: O(1)
regular-expression
Technical Scripter 2020
Pattern Searching
Strings
Technical Scripter
Strings
Pattern Searching
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n08 Feb, 2021"
},
{
"code": null,
"e": 161,
"s": 28,
"text": "Given a string S, the task is to find and extract all the URLs from the string. If no URL is present in the string, then print “-1”."
},
{
"code": null,
"e": 171,
"s": 161,
"text": "Examples:"
},
{
"code": null,
"e": 364,
"s": 171,
"text": "Input: S = “Welcome to https://www.geeksforgeeks.org Computer Science Portal”Output: https://www.geeksforgeeks.orgExplanation:The given string contains the URL ‘https://www.geeksforgeeks.org’."
},
{
"code": null,
"e": 669,
"s": 364,
"text": "Input: S = “Welcome to https://write.geeksforgeeks.org portal of https://www.geeksforgeeks.org Computer Science Portal”Output:https://write.geeksforgeeks.org https://www.geeksforgeeks.orgExplanation:The given string contains two URLs ‘https://write.geeksforgeeks.org’ and ‘https://www.geeksforgeeks.org’."
},
{
"code": null,
"e": 791,
"s": 669,
"text": "Approach: The idea is to use Regular Expression to solve this problem. Follow the steps below to solve the given problem:"
},
{
"code": null,
"e": 879,
"s": 791,
"text": "Create a regular expression to extract all the URLs from the string as mentioned below:"
},
{
"code": null,
"e": 969,
"s": 879,
"text": "regex = “\\\\b((?:https?|ftp|file)://[-a-zA-Z0-9+&@#/%?=~_|!:, .;]*[-a-zA-Z0-9+&@#/%=~_|])”"
},
{
"code": null,
"e": 1057,
"s": 969,
"text": "Create an ArrayList in Java and compile the regular expression using Pattern.compile()."
},
{
"code": null,
"e": 1163,
"s": 1057,
"text": "Match the given string with the regular expression. In Java, this can be done by using Pattern.matcher()."
},
{
"code": null,
"e": 1295,
"s": 1163,
"text": "Find the substring from the first index of match result to the last index of the match result and add this substring into the list."
},
{
"code": null,
"e": 1475,
"s": 1295,
"text": "After completing the above steps, if the list is found to be empty, then print “-1” as there is no URL present in the string S. Otherwise, print all the string stored in the list."
},
{
"code": null,
"e": 1526,
"s": 1475,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 1531,
"s": 1526,
"text": "Java"
},
{
"code": "// Java program for the above approach import java.util.*;import java.util.regex.*;class GFG { // Function to extract all the URL // from the string public static void extractURL( String str) { // Creating an empty ArrayList List<String> list = new ArrayList<>(); // Regular Expression to extract // URL from the string String regex = \"\\\\b((?:https?|ftp|file):\" + \"//[-a-zA-Z0-9+&@#/%?=\" + \"~_|!:, .;]*[-a-zA-Z0-9+\" + \"&@#/%=~_|])\"; // Compile the Regular Expression Pattern p = Pattern.compile( regex, Pattern.CASE_INSENSITIVE); // Find the match between string // and the regular expression Matcher m = p.matcher(str); // Find the next subsequence of // the input subsequence that // find the pattern while (m.find()) { // Find the substring from the // first index of match result // to the last index of match // result and add in the list list.add(str.substring( m.start(0), m.end(0))); } // IF there no URL present if (list.size() == 0) { System.out.println(\"-1\"); return; } // Print all the URLs stored for (String url : list) { System.out.println(url); } } // Driver Code public static void main(String args[]) { // Given String str String str = \"Welcome to https:// www.geeksforgeeks\" + \".org Computer Science Portal\"; // Function Call extractURL(str); }}",
"e": 3240,
"s": 1531,
"text": null
},
{
"code": null,
"e": 3271,
"s": 3240,
"text": "https://www.geeksforgeeks.org\n"
},
{
"code": null,
"e": 3314,
"s": 3271,
"text": "Time Complexity: O(N)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 3333,
"s": 3314,
"text": "regular-expression"
},
{
"code": null,
"e": 3357,
"s": 3333,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 3375,
"s": 3357,
"text": "Pattern Searching"
},
{
"code": null,
"e": 3383,
"s": 3375,
"text": "Strings"
},
{
"code": null,
"e": 3402,
"s": 3383,
"text": "Technical Scripter"
},
{
"code": null,
"e": 3410,
"s": 3402,
"text": "Strings"
},
{
"code": null,
"e": 3428,
"s": 3410,
"text": "Pattern Searching"
},
{
"code": null,
"e": 3526,
"s": 3428,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3575,
"s": 3526,
"text": "Find all occurrences of a given word in a matrix"
},
{
"code": null,
"e": 3658,
"s": 3575,
"text": "Reverse the substrings of the given String according to the given Array of indices"
},
{
"code": null,
"e": 3724,
"s": 3658,
"text": "Check if the given string is shuffled substring of another string"
},
{
"code": null,
"e": 3799,
"s": 3724,
"text": "How to validate GUID (Globally Unique Identifier) using Regular Expression"
},
{
"code": null,
"e": 3868,
"s": 3799,
"text": "How to check Aadhaar number is valid or not using Regular Expression"
},
{
"code": null,
"e": 3914,
"s": 3868,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 3939,
"s": 3914,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 3999,
"s": 3939,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 4014,
"s": 3999,
"text": "C++ Data Types"
}
] |
Create a string with multiple spaces in JavaScript | 10 May, 2019
We have a string with extra spaces, and if we display it in the browser, the extra spaces will not be rendered. Adding a number of spaces to the string can be done in the following ways.
substr(): This method gets a part of a string, starting at the character at the defined position, and returns the specified number of characters.
Syntax:
string.substr(start, length)
parameters:
start: This parameter is required. It specifies the position from where to start the extraction. The first index starts at 0.
If start parameter is positive and greater than, or equal, to the length of the provided string, this method will return an empty string.
If start parameter is negative, this method uses it as an index from the end of string.
If start parameter is negative or greater than the length of string, start is taken as 0.
length: This parameter is optional. It specifies the number of characters to extract. If not used, it extracts the rest of the string from the start position.
Return Value: Returns a new string that contains the extracted part of the text. If length is either 0 or negative, it returns an empty string.
Example 1: This example adds spaces to the string using the &nbsp; HTML entity (a non-breaking space, which the browser does not collapse).
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | Create a string with multiple spaces.
    </title>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 16px;">
    </p>
    <button onclick="gfg_Run()">
        Add spaces
    </button>
    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>
    <script>
        var el_up = document.getElementById("GFG_UP");
        var el_down = document.getElementById("GFG_DOWN");
        var string = 'A Computer Science Portal';
        el_up.innerHTML = string;

        function gfg_Run() {
            // '&nbsp;' entities keep the browser from
            // collapsing the run of spaces
            el_down.innerHTML = string.substr(0, 2)
                + '&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; '
                + string.substr(2);
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
Example 2: This example adds spaces to the string using \xa0 (the NO-BREAK SPACE character).
<!DOCTYPE html>
<html>

<head>
    <title>
        JavaScript | Create a string with multiple spaces.
    </title>
</head>

<body style="text-align:center;" id="body">
    <h1 style="color:green;">
        GeeksForGeeks
    </h1>
    <p id="GFG_UP" style="font-size: 16px;">
    </p>
    <button onclick="gfg_Run()">
        Add spaces
    </button>
    <p id="GFG_DOWN" style="color:green; font-size: 20px; font-weight: bold;">
    </p>
    <script>
        var el_up = document.getElementById("GFG_UP");
        var el_down = document.getElementById("GFG_DOWN");
        var string = 'A Computer Science Portal';
        el_up.innerHTML = string;

        function gfg_Run() {
            el_down.innerHTML = string.substr(0, 18)
                + '\xa0\xa0\xa0\xa0\xa0\xa0\xa0 '
                + string.substr(18);
        }
    </script>
</body>

</html>
Output:
Before clicking on the button:
After clicking on the button:
javascript-string
JavaScript
Web Technologies
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n10 May, 2019"
},
{
"code": null,
"e": 225,
"s": 28,
"text": "We have a string with extra-spaces and if we want to display it in the browser then extra spaces will not be displayed. Adding the number of spaces to the string can be done in the following ways."
},
{
"code": null,
"e": 368,
"s": 225,
"text": "substr():This method gets a part of a string, starts at the character at the defined position, and returns the specified number of characters."
},
{
"code": null,
"e": 376,
"s": 368,
"text": "Syntax:"
},
{
"code": null,
"e": 406,
"s": 376,
"text": "string.substr(start, length)\n"
},
{
"code": null,
"e": 418,
"s": 406,
"text": "parameters:"
},
{
"code": null,
"e": 852,
"s": 418,
"text": "start:This parameter is required. It specifies the position from where to start the extraction. First index starts at 0.If start parameter is positive and greater than, or equal, to the length of the provided string, this method will return an empty string.If start parameter is negative, this method uses it as an index from the end of string.If start parameter is negative or greater than the length of string, start is taken as 0."
},
{
"code": null,
"e": 990,
"s": 852,
"text": "If start parameter is positive and greater than, or equal, to the length of the provided string, this method will return an empty string."
},
{
"code": null,
"e": 1078,
"s": 990,
"text": "If start parameter is negative, this method uses it as an index from the end of string."
},
{
"code": null,
"e": 1168,
"s": 1078,
"text": "If start parameter is negative or greater than the length of string, start is taken as 0."
},
{
"code": null,
"e": 1296,
"s": 1168,
"text": "length:This parameter is optional. It specifies the number of characters to extract. If not used, it extracts the whole string."
},
{
"code": null,
"e": 1446,
"s": 1296,
"text": "Return Value: Returns a new string, Which contains the extracted part of the text. If length is either 0 or negative, It will return an empty string."
},
{
"code": null,
"e": 1501,
"s": 1446,
"text": "Example-1:This example adds spaces to the string by ."
},
{
"code": "<!DOCTYPE html><html> <head> <title> JavaScript | Create a string with multiple spaces. </title></head> <body style=\"text-align:center;\" id=\"body\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\" style=\"font-size: 16px;\"> </p> <button onclick=\"gfg_Run()\"> Add spaces </button> <p id=\"GFG_DOWN\" style=\"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var el_up = document.getElementById(\"GFG_UP\"); var el_down = document.getElementById(\"GFG_DOWN\"); var string = 'A Computer Science Portal'; el_up.innerHTML = string; function gfg_Run() { el_down.innerHTML = string.substr(0, 2) + ' ' + string.substr(2); } </script></body> </html>",
"e": 2363,
"s": 1501,
"text": null
},
{
"code": null,
"e": 2371,
"s": 2363,
"text": "Output:"
},
{
"code": null,
"e": 2402,
"s": 2371,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 2432,
"s": 2402,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 2519,
"s": 2432,
"text": "Example-2: This example adds spaces to the string by \\xa0(it’s a NO-BREAK SPACE char)."
},
{
"code": "<!DOCTYPE html><html> <head> <title> JavaScript | Create a string with multiple spaces. </title></head> <body style=\"text-align:center;\" id=\"body\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_UP\" style=\"font-size: 16px;\"> </p> <button onclick=\"gfg_Run()\"> Add spaces </button> <p id=\"GFG_DOWN\" style=\"color:green; font-size: 20px; font-weight: bold;\"> </p> <script> var el_up = document.getElementById(\"GFG_UP\"); var el_down = document.getElementById(\"GFG_DOWN\"); var string = 'A Computer Science Portal'; el_up.innerHTML = string; function gfg_Run() { el_down.innerHTML = string.substr(0, 18) + '\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0\\xa0 ' + string.substr(18); } </script></body> </html>",
"e": 3399,
"s": 2519,
"text": null
},
{
"code": null,
"e": 3407,
"s": 3399,
"text": "Output:"
},
{
"code": null,
"e": 3438,
"s": 3407,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 3468,
"s": 3438,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 3486,
"s": 3468,
"text": "javascript-string"
},
{
"code": null,
"e": 3497,
"s": 3486,
"text": "JavaScript"
},
{
"code": null,
"e": 3514,
"s": 3497,
"text": "Web Technologies"
}
] |
Horizontal CalendarView in Android | 05 May, 2021
If we are making an application that provides services such as booking flights, movie tickets, or the like, we generally have to implement a calendar in it. We have to align the calendar in such a way that it looks good and takes up less space in the mobile app. Many apps prefer to use a horizontal Calendar View in their UI, which looks better than the normal calendar. In this article, we will take a look at implementing a similar calendar in our app.
We will be building a simple application in which we create a horizontal calendar and display the dates of the month in it. A sample GIF is given below to get an idea of what we are going to do in this article. Note that we are going to implement this project using the Java language.
Step 1: Create a New Project
To create a new project in Android Studio, please refer to How to Create/Start a New Project in Android Studio. Note that you should select Java as the programming language.
Step 2: Add dependency
Navigate to the Gradle Scripts > build.gradle(Module:app) and add the below dependency in the dependencies section.
implementation ‘devs.mulham.horizontalcalendar:horizontalcalendar:1.3.4’
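For reference, the line sits inside the dependencies block of the module-level build.gradle, roughly as below (the comment stands in for the module's other dependencies):

dependencies {
    // ... other module dependencies ...
    implementation 'devs.mulham.horizontalcalendar:horizontalcalendar:1.3.4'
}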
Step 3: Working with the activity_main.xml file
Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file.
XML
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <!--on below line we are creating our calendar view.
        selectorColor is used as an indicator for the selected date,
        textColorNormal gives the text color of unselected dates,
        textColorSelected gives the text color of the selected date-->
    <devs.mulham.horizontalcalendar.HorizontalCalendarView
        android:id="@+id/calendarView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:selectorColor="@android:color/holo_red_dark"
        app:textColorNormal="@color/purple_200"
        app:textColorSelected="@color/purple_200" />

</RelativeLayout>
Step 4: Working with the MainActivity.java file
Go to the MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand the code in more detail.
Java
import android.os.Bundle;
import android.util.Log;

import androidx.appcompat.app.AppCompatActivity;

import java.util.Calendar;

import devs.mulham.horizontalcalendar.HorizontalCalendar;
import devs.mulham.horizontalcalendar.utils.HorizontalCalendarListener;

public class MainActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        /* starts before 1 month from now */
        Calendar startDate = Calendar.getInstance();
        startDate.add(Calendar.MONTH, -1);

        /* ends after 1 month from now */
        Calendar endDate = Calendar.getInstance();
        endDate.add(Calendar.MONTH, 1);

        // on below line we are setting up our horizontal calendar view
        // and passing the id of our calendar view to it.
        HorizontalCalendar horizontalCalendar = new HorizontalCalendar.Builder(this, R.id.calendarView)
                // on below line we are adding a range
                // as start date and end date to our calendar.
                .range(startDate, endDate)
                // on below line we are providing a number of dates
                // which will be visible on the screen at a time.
                .datesNumberOnScreen(5)
                // at last we are calling the build method
                // to build our horizontal calendar view.
                .build();

        // on below line we are setting a calendar listener on our calendar view.
        horizontalCalendar.setCalendarListener(new HorizontalCalendarListener() {
            @Override
            public void onDateSelected(Calendar date, int position) {
                // on below line we are printing the selected
                // date in the logcat.
                Log.e("TAG", "CURRENT DATE IS " + date);
            }
        });
    }
}
Now run your app and see the output of the app.
Output:
Android-Date-time
Android
Java
Java
Android
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n05 May, 2021"
},
{
"code": null,
"e": 520,
"s": 28,
"text": "If we are making an application that provides services such as booking flights, movie tickets, or others we generally have to implement a calendar in our application. We have to align the calendar in such a way so that it will look better and will take less amount of space in the mobile app. Most of the apps prefer to use Horizontal Calendar View inside their which looks better than the normal calendar. In this article, we will take a look at implementing a similar calendar in our app. "
},
{
"code": null,
"e": 913,
"s": 520,
"text": "We will be building a simple application in which we will be simply creating a horizontal calendar and we will be displaying the whole dates of the month in it. We will be displaying these dates in the horizontal Calendar View. A sample GIF is given below to get an idea about what we are going to do in this article. Note that we are going to implement this project using the Java language. "
},
{
"code": null,
"e": 942,
"s": 913,
"text": "Step 1: Create a New Project"
},
{
"code": null,
"e": 1104,
"s": 942,
"text": "To create a new project in Android Studio please refer to How to Create/Start a New Project in Android Studio. Note that select Java as the programming language."
},
{
"code": null,
"e": 1127,
"s": 1104,
"text": "Step 2: Add dependency"
},
{
"code": null,
"e": 1246,
"s": 1127,
"text": "Navigate to the Gradle Scripts > build.gradle(Module:app) and add the below dependency in the dependencies section. "
},
{
"code": null,
"e": 1319,
"s": 1246,
"text": "implementation ‘devs.mulham.horizontalcalendar:horizontalcalendar:1.3.4’"
},
{
"code": null,
"e": 1367,
"s": 1319,
"text": "Step 3: Working with the activity_main.xml file"
},
{
"code": null,
"e": 1510,
"s": 1367,
"text": "Navigate to the app > res > layout > activity_main.xml and add the below code to that file. Below is the code for the activity_main.xml file. "
},
{
"code": null,
"e": 1514,
"s": 1510,
"text": "XML"
},
{
"code": "<?xml version=\"1.0\" encoding=\"utf-8\"?><RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\" xmlns:app=\"http://schemas.android.com/apk/res-auto\" xmlns:tools=\"http://schemas.android.com/tools\" android:layout_width=\"match_parent\" android:layout_height=\"match_parent\" tools:context=\".MainActivity\"> <!--on below line we are creating our calendar view selector color is use as a indicator for selected date text color normal is use to give text color to unselected date text color selected is use to give text color to selected date--> <devs.mulham.horizontalcalendar.HorizontalCalendarView android:id=\"@+id/calendarView\" android:layout_width=\"match_parent\" android:layout_height=\"wrap_content\" app:selectorColor=\"@android:color/holo_red_dark\" app:textColorNormal=\"@color/purple_200\" app:textColorSelected=\"@color/purple_200\" /> </RelativeLayout>",
"e": 2468,
"s": 1514,
"text": null
},
{
"code": null,
"e": 2516,
"s": 2468,
"text": "Step 4: Working with the MainActivity.java file"
},
{
"code": null,
"e": 2706,
"s": 2516,
"text": "Go to the MainActivity.java file and refer to the following code. Below is the code for the MainActivity.java file. Comments are added inside the code to understand the code in more detail."
},
{
"code": null,
"e": 2711,
"s": 2706,
"text": "Java"
},
{
"code": "import android.os.Bundle;import android.util.Log; import androidx.appcompat.app.AppCompatActivity; import java.util.Calendar; import devs.mulham.horizontalcalendar.HorizontalCalendar;import devs.mulham.horizontalcalendar.utils.HorizontalCalendarListener; public class MainActivity extends AppCompatActivity { @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); /* starts before 1 month from now */ Calendar startDate = Calendar.getInstance(); startDate.add(Calendar.MONTH, -1); /* ends after 1 month from now */ Calendar endDate = Calendar.getInstance(); endDate.add(Calendar.MONTH, 1); // on below line we are setting up our horizontal calendar view and passing id our calendar view to it. HorizontalCalendar horizontalCalendar = new HorizontalCalendar.Builder(this, R.id.calendarView) // on below line we are adding a range // as start date and end date to our calendar. .range(startDate, endDate) // on below line we are providing a number of dates // which will be visible on the screen at a time. .datesNumberOnScreen(5) // at last we are calling a build method // to build our horizontal recycler view. .build(); // on below line we are setting calendar listener to our calendar view. horizontalCalendar.setCalendarListener(new HorizontalCalendarListener() { @Override public void onDateSelected(Calendar date, int position) { // on below line we are printing date // in the logcat which is selected. Log.e(\"TAG\", \"CURRENT DATE IS \" + date); } }); }}",
"e": 4607,
"s": 2711,
"text": null
},
{
"code": null,
"e": 4656,
"s": 4607,
"text": "Now run your app and see the output of the app. "
},
{
"code": null,
"e": 4664,
"s": 4656,
"text": "Output:"
},
{
"code": null,
"e": 4682,
"s": 4664,
"text": "Android-Date-time"
},
{
"code": null,
"e": 4690,
"s": 4682,
"text": "Android"
},
{
"code": null,
"e": 4695,
"s": 4690,
"text": "Java"
},
{
"code": null,
"e": 4700,
"s": 4695,
"text": "Java"
},
{
"code": null,
"e": 4708,
"s": 4700,
"text": "Android"
}
] |
Python – Inverse Dictionary Values List | 14 Oct, 2020
Given a dictionary with lists as values, invert it, i.e., map the elements in the lists to keys and create new value lists.
Input : test_dict = {1: [2, 3], 2: [3], 3: [1]}Output : {2: [1], 3: [1, 2], 1: [3]}Explanation : List elements mapped with their keys.
Input : test_dict = {1: [2, 3, 4]}Output : {2: [1], 3: [1], 4: [1]}Explanation : List elements mapped with their keys.
Method : Using defaultdict() + loop
This is one way in which this task can be performed. In this, we initialize the result as a defaultdict of lists, iterate using a loop to assign each value its keys, and thereby build the inverted dictionary's value lists.
Python3
# Python3 code to demonstrate working of
# Inverse Dictionary Values List
# Using defaultdict() + loop
from collections import defaultdict

# initializing dictionary
test_dict = {1: [2, 3], 2: [3], 3: [1], 4: [2, 1]}

# printing original dictionary
print("The original dictionary is : " + str(test_dict))

# initializing result with empty lists as values
res = defaultdict(list)

# using loop to perform reverse mapping
for keys, vals in test_dict.items():
    for val in vals:
        res[val].append(keys)

# printing result
print("The required result : " + str(dict(res)))
The original dictionary is : {1: [2, 3], 2: [3], 3: [1], 4: [2, 1]}
The required result : {2: [1, 4], 3: [1, 2], 1: [3, 4]}
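As a side note, the same inversion can be written without defaultdict by using dict.setdefault(); the sketch below is an illustrative alternative, not part of the original method:

# inverse the dictionary using a plain dict and setdefault()
test_dict = {1: [2, 3], 2: [3], 3: [1], 4: [2, 1]}

res = {}
for key, vals in test_dict.items():
    for val in vals:
        # setdefault creates the list on first use
        res.setdefault(val, []).append(key)

print(res)  # {2: [1, 4], 3: [1, 2], 1: [3, 4]}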
Python dictionary-programs
Python list-programs
Python
Python Programs
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n14 Oct, 2020"
},
{
"code": null,
"e": 136,
"s": 28,
"text": "Given a Dictionary as value lists, inverse it, i.e map elements in list to keys and create new values list."
},
{
"code": null,
"e": 271,
"s": 136,
"text": "Input : test_dict = {1: [2, 3], 2: [3], 3: [1]}Output : {2: [1], 3: [1, 2], 1: [3]}Explanation : List elements mapped with their keys."
},
{
"code": null,
"e": 390,
"s": 271,
"text": "Input : test_dict = {1: [2, 3, 4]}Output : {2: [1], 3: [1], 4: [1]}Explanation : List elements mapped with their keys."
},
{
"code": null,
"e": 426,
"s": 390,
"text": "Method : Using defaultdict() + loop"
},
{
"code": null,
"e": 634,
"s": 426,
"text": "This is a way in which this task can be performed. In this, we initialize result keys with dictionary list, and iterate using loop to assign each value its keys, and reform the result dictionary values list."
},
{
"code": null,
"e": 642,
"s": 634,
"text": "Python3"
},
{
"code": "# Python3 code to demonstrate working of # Inverse Dictionary Values List# Using from collections import defaultdict # initializing dictionarytest_dict = {1: [2, 3], 2: [3], 3: [1], 4: [2, 1]} # printing original dictionaryprint(\"The original dictionary is : \" + str(test_dict)) # initializing empty list as Valuesres = defaultdict(list) # using loop to perform reverse mappingfor keys, vals in test_dict.items(): for val in vals: res[val].append(keys) # printing result print(\"The required result : \" + str(dict(res))) ",
"e": 1243,
"s": 642,
"text": null
},
{
"code": null,
"e": 1368,
"s": 1243,
"text": "The original dictionary is : {1: [2, 3], 2: [3], 3: [1], 4: [2, 1]}\nThe required result : {2: [1, 4], 3: [1, 2], 1: [3, 4]}\n"
},
{
"code": null,
"e": 1395,
"s": 1368,
"text": "Python dictionary-programs"
},
{
"code": null,
"e": 1416,
"s": 1395,
"text": "Python list-programs"
},
{
"code": null,
"e": 1423,
"s": 1416,
"text": "Python"
},
{
"code": null,
"e": 1439,
"s": 1423,
"text": "Python Programs"
}
] |
Complete Graph using Networkx in Python | 29 Apr, 2021
A complete graph, also called a full graph, is a graph that has n vertices in which the degree of each vertex is n-1. In other words, each vertex is connected to every other vertex.
Example: Complete graph with 6 vertices:
C_G6
Properties of Complete Graph:
The degree of each vertex is n-1.
The total number of edges is n(n-1)/2 (see the check after this list).
All possible edges in a simple graph exist in a complete graph.
It is a cyclic graph.
The maximum distance between any pair of nodes is 1.
The chromatic number is n as every node is connected to every other node.
Its complement is an empty graph.
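A couple of these properties can be sanity-checked directly with networkx; the snippet below is a small illustrative check (assuming networkx is installed), not part of the original article:

import networkx

n = 6
G = networkx.complete_graph(n)

# every vertex has degree n-1
print(all(deg == n - 1 for _, deg in G.degree()))  # True

# the total number of edges is n(n-1)/2
print(G.number_of_edges() == n * (n - 1) // 2)  # True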
We will use the networkx module for realizing a Complete graph. It comes with an inbuilt function networkx.complete_graph() and can be illustrated using the networkx.draw() method. This module in Python is used for visualizing and analyzing different kinds of graphs.
Syntax: networkx.complete_graph(n)
Parameters:
N: Number of nodes in complete graph.
Returns a networkx complete graph object.
Nodes are indexed from zero to n-1.
The draw() method is used to realize (render) the graph by passing the graph object:
networkx.draw(G, node_size, node_color)
Parameters:
G: It refers to the complete graph object
node_size: It refers to the size of nodes.
node_color: It refers to color of the nodes.
Approach:
We will import the required module networkx.
Then we will create a graph object using networkx.complete_graph(n).
Where n specifies n number of nodes.
For realizing graph, we will use networkx.draw(G, node_color = ’green’, node_size=1500)
The node_color and node_size arguments specify the color and size of graph nodes.
Example 1:
Python3
# import required module
import networkx

# create object
G = networkx.complete_graph(6)

# illustrate graph
networkx.draw(G, node_color = 'green',
              node_size = 1500)
Output:
Output
The above program outputs a complete graph with 6 nodes, as we passed 6 as an argument to the complete_graph function.
Example 2:
Python3
# import required module
import networkx

# create object
G = networkx.complete_graph(10)

# illustrate graph
networkx.draw(G, node_color = 'green',
              node_size = 1500)
Output:
simmytarika5
Python Networx-module
Graph
Python
Graph
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Apr, 2021"
},
{
"code": null,
"e": 210,
"s": 28,
"text": "A complete graph also called a Full Graph it is a graph that has n vertices where the degree of each vertex is n-1. In other words, each vertex is connected with every other vertex."
},
{
"code": null,
"e": 248,
"s": 210,
"text": "Example: Complete Graph with 6 edges:"
},
{
"code": null,
"e": 253,
"s": 248,
"text": "C_G6"
},
{
"code": null,
"e": 283,
"s": 253,
"text": "Properties of Complete Graph:"
},
{
"code": null,
"e": 317,
"s": 283,
"text": "The degree of each vertex is n-1."
},
{
"code": null,
"e": 356,
"s": 317,
"text": "The total number of edges is n(n-1)/2."
},
{
"code": null,
"e": 420,
"s": 356,
"text": "All possible edges in a simple graph exist in a complete graph."
},
{
"code": null,
"e": 442,
"s": 420,
"text": "It is a cyclic graph."
},
{
"code": null,
"e": 495,
"s": 442,
"text": "The maximum distance between any pair of nodes is 1."
},
{
"code": null,
"e": 569,
"s": 495,
"text": "The chromatic number is n as every node is connected to every other node."
},
{
"code": null,
"e": 603,
"s": 569,
"text": "Its complement is an empty graph."
},
{
"code": null,
"e": 871,
"s": 603,
"text": "We will use the networkx module for realizing a Complete graph. It comes with an inbuilt function networkx.complete_graph() and can be illustrated using the networkx.draw() method. This module in Python is used for visualizing and analyzing different kinds of graphs."
},
{
"code": null,
"e": 906,
"s": 871,
"text": "Syntax: networkx.complete_graph(n)"
},
{
"code": null,
"e": 918,
"s": 906,
"text": "Parameters:"
},
{
"code": null,
"e": 956,
"s": 918,
"text": "N: Number of nodes in complete graph."
},
{
"code": null,
"e": 999,
"s": 956,
"text": "Returns an networkx graph complete object."
},
{
"code": null,
"e": 1035,
"s": 999,
"text": "Nodes are indexed from zero to n-1."
},
{
"code": null,
"e": 1086,
"s": 1035,
"text": "Used to realize the graph by passing graph object."
},
{
"code": null,
"e": 1126,
"s": 1086,
"text": "networkx.draw(G, node_size, node_color)"
},
{
"code": null,
"e": 1138,
"s": 1126,
"text": "Parameters:"
},
{
"code": null,
"e": 1180,
"s": 1138,
"text": "G: It refers to the complete graph object"
},
{
"code": null,
"e": 1223,
"s": 1180,
"text": "node_size: It refers to the size of nodes."
},
{
"code": null,
"e": 1268,
"s": 1223,
"text": "node_color: It refers to color of the nodes."
},
{
"code": null,
"e": 1278,
"s": 1268,
"text": "Approach:"
},
{
"code": null,
"e": 1323,
"s": 1278,
"text": "We will import the required module networkx."
},
{
"code": null,
"e": 1392,
"s": 1323,
"text": "Then we will create a graph object using networkx.complete_graph(n)."
},
{
"code": null,
"e": 1429,
"s": 1392,
"text": "Where n specifies n number of nodes."
},
{
"code": null,
"e": 1517,
"s": 1429,
"text": "For realizing graph, we will use networkx.draw(G, node_color = ’green’, node_size=1500)"
},
{
"code": null,
"e": 1599,
"s": 1517,
"text": "The node_color and node_size arguments specify the color and size of graph nodes."
},
{
"code": null,
"e": 1610,
"s": 1599,
"text": "Example 1:"
},
{
"code": null,
"e": 1618,
"s": 1610,
"text": "Python3"
},
{
"code": "# import required moduleimport networkx # create objectG = networkx.complete_graph(6) # illustrate graphnetworkx.draw(G, node_color = 'green', node_size = 1500)",
"e": 1792,
"s": 1618,
"text": null
},
{
"code": null,
"e": 1800,
"s": 1792,
"text": "Output:"
},
{
"code": null,
"e": 1807,
"s": 1800,
"text": "Output"
},
{
"code": null,
"e": 1947,
"s": 1807,
"text": "The output of the above program gives a complete graph with 6 nodes as output as we passed 6 as an argument to the complete_graph function."
},
{
"code": null,
"e": 1958,
"s": 1947,
"text": "Example 2:"
},
{
"code": null,
"e": 1966,
"s": 1958,
"text": "Python3"
},
{
"code": "# import required moduleimport networkx # create objectG = networkx.complete_graph(10) # illustrate graphnetworkx.draw(G, node_color = 'green', node_size = 1500)",
"e": 2141,
"s": 1966,
"text": null
},
{
"code": null,
"e": 2149,
"s": 2141,
"text": "Output:"
},
{
"code": null,
"e": 2162,
"s": 2149,
"text": "simmytarika5"
},
{
"code": null,
"e": 2184,
"s": 2162,
"text": "Python Networx-module"
},
{
"code": null,
"e": 2190,
"s": 2184,
"text": "Graph"
},
{
"code": null,
"e": 2197,
"s": 2190,
"text": "Python"
},
{
"code": null,
"e": 2203,
"s": 2197,
"text": "Graph"
}
] |
How to put the y-axis in logarithmic scale with Matplotlib ? | 17 Dec, 2020
Axes in all plots created using Matplotlib are linear by default; the yscale() method of the matplotlib.pyplot library can be used to change the y-axis scale to logarithmic.
The yscale() method takes a single value as a parameter, which is the type of scale to apply; to convert the y-axis to a logarithmic scale, we pass the "log" keyword or the matplotlib.scale.LogScale class to the yscale method.
Syntax : matplotlib.pyplot.yscale(value, **kwargs)
Parameters:
Value = { “linear”, “log”, “symlog”, “logit”, ... }
**kwargs = Different keyword arguments are accepted, depending on the scale (matplotlib.scale.LinearScale, LogScale, SymmetricalLogScale, LogitScale)
Returns : Converts the y-axes to the given scale type. (Here we use the “log” scale type)
Linear Scale Example :
Python3
import matplotlib.pyplot as plt

data = [10**i for i in range(4)]
plt.plot(data)
Output:
Linear Scale
Logarithmic Scale Example :
Python3
import matplotlib.pyplot as plt

data = [10**i for i in range(4)]

# convert y-axis to Logarithmic scale
plt.yscale("log")

plt.plot(data)
Output:
Logarithmic y-axis
Picked
Python-matplotlib
Python
| [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n17 Dec, 2020"
},
{
"code": null,
"e": 217,
"s": 54,
"text": "Axes’ in all plots using Matplotlib are linear by default, yscale() method of the matplotlib.pyplot library can be used to change the y-axis scale to logarithmic."
},
{
"code": null,
"e": 445,
"s": 217,
"text": "The method yscale() takes a single value as a parameter which is the type of conversion of the scale, to convert y-axes to logarithmic scale we pass the “log” keyword or the matplotlib.scale.LogScale class to the yscale method."
},
{
"code": null,
"e": 497,
"s": 445,
"text": "Syntax : matplotlib.pyplot.yscale(value, **kwargs)"
},
{
"code": null,
"e": 509,
"s": 497,
"text": "Parameters:"
},
{
"code": null,
"e": 561,
"s": 509,
"text": "Value = { “linear”, “log”, “symlog”, “logit”, ... }"
},
{
"code": null,
"e": 711,
"s": 561,
"text": "**kwargs = Different keyword arguments are accepted, depending on the scale (matplotlib.scale.LinearScale, LogScale, SymmetricalLogScale, LogitScale)"
},
{
"code": null,
"e": 801,
"s": 711,
"text": "Returns : Converts the y-axes to the given scale type. (Here we use the “log” scale type)"
},
{
"code": null,
"e": 825,
"s": 801,
"text": "Linear Scale Example : "
},
{
"code": null,
"e": 833,
"s": 825,
"text": "Python3"
},
{
"code": "import matplotlib.pyplot as plt data = [10**i for i in range(4)]plt.plot(data)",
"e": 913,
"s": 833,
"text": null
},
{
"code": null,
"e": 922,
"s": 913,
"text": "Output: "
},
{
"code": null,
"e": 935,
"s": 922,
"text": "Linear Scale"
},
{
"code": null,
"e": 964,
"s": 935,
"text": "Logarithmic Scale Example : "
},
{
"code": null,
"e": 972,
"s": 964,
"text": "Python3"
},
{
"code": "import matplotlib.pyplot as plt data = [10**i for i in range(4)] # convert y-axis to Logarithmic scaleplt.yscale(\"log\") plt.plot(data)",
"e": 1110,
"s": 972,
"text": null
},
{
"code": null,
"e": 1119,
"s": 1110,
"text": "Output: "
},
{
"code": null,
"e": 1138,
"s": 1119,
"text": "Logarithmic y-axis"
},
{
"code": null,
"e": 1145,
"s": 1138,
"text": "Picked"
},
{
"code": null,
"e": 1163,
"s": 1145,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 1170,
"s": 1163,
"text": "Python"
}
] |
How to Generate JVM Heap Memory Dump? | 23 Aug, 2021
A Java heap dump is a snapshot of all Java objects that are present in the JVM (Java Virtual Machine) at a certain point in time. The JVM allocates memory in the heap for objects, which are class instances or arrays. When the objects are no longer needed or are no longer referenced, the Garbage Collector runs and reclaims the memory space occupied by them.
The heap dump is in binary format, and it has a .hprof extension. It can be opened and analyzed using applications like JVisualVM and Eclipse MAT (Memory Analyzer Tool). We generate a Java heap dump to identify issues like memory leaks and to optimize memory usage in our application.
Methods:
There are different ways of generating a Java heap dump. The JDK ships with various tools for generating heap dumps; these tools are located in the bin folder under the JDK home directory. Let us discuss how to generate a JVM heap dump, which is as follows:
Using jmap command
Using jcmd command on terminal
Using the JVisualVM tool
Identifying HeapDumpOnOutOfMemory
Using JMX Console
Using HotSpotDiagnosticMBean by writing a program
Method 1: Using jmap command
jmap is a command which you run inside the bin folder of your JDK home directory. It gives statistics about memory usage. The structure is as follows:
Example
jmap -dump:[live],format=b,file=<file-path> <pid>
live: This parameter is optional. If set, only objects that have
active references are included in the dump.
format=b: Means the heap dump file is in binary format. It is not necessary
to set this parameter.
file=<file-path>: Indicates where the heap dump file will be generated.
<pid>: Process id of the Java process.
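For instance, assuming a (hypothetical) Java process id of 12345 and a placeholder output path, the command would look like:

jmap -dump:live,format=b,file=/tmp/heapdump.hprof 12345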
Now in order to get the process id of a running java process, one can use one of the below options as defined:
1.1
jps
We type this command from a Unix terminal or Windows command prompt where JDK is installed. It gives the process IDs of the running Java processes.
jps command
1.2
ps -eaf | grep java
This gives the process IDs of all running Java processes. It works only on a Unix terminal.
ps -eaf | grep java
1.3 Using the Task Manager application on Windows operating systems.
Method 2: Using jcmd command on terminal
This command sends a request to the JVM to generate a heap dump. One of its parameters is GC.heap_dump. It is as shown below:
jcmd <pid> GC.heap_dump <file-path>
<pid> - Process id of java process
<file-path> - Path where the heap dump is to be generated
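As a concrete example, again with a hypothetical process id of 12345 and a placeholder output path:

jcmd 12345 GC.heap_dump /tmp/heapdump.hprof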
jcmd
Method 3: Using the JVisualVM tool
This is a tool that is packaged within JDK. It helps to monitor and troubleshoot java applications. It has a Graphical User Interface that is simple and intuitive. We type jvisualvm in the start menu, or we go to the bin directory of JDK home directory through command prompt or terminal window in Unix and type jvisualvm
It launches a Java Visual VM application. On the left side, it shows the currently running java process. Right-click the process ID whose heap dump you wish to generate. When we click on heap dump, it generates heap dump for the selected process. Under Basic info, it shows the file path where the heap dump is generated.
Method 4: Identifying HeapDumpOnOutOfMemory
It is ideal to capture heap dumps when an application experiences a java.lang.OutOfMemoryError. Heap dumps help identify the live objects sitting in memory and the percentage of memory they occupy.
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<file-path>
When this option is set while running your Java application, the JVM takes a snapshot of the heap when it encounters an OutOfMemoryError.
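For example, an application could be launched as below; the jar name and dump path are placeholders, not values from the original article:

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps/app.hprof -jar myapp.jar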
Method 5: Using JMX Console
There is a HotSpotDiagnostic MBean which has a heap dump operation. We use JMX clients like JConsole to invoke the MBean operation. Through JConsole, we can connect either to a local Java process or to a remote process by specifying the host, port number, and username/password. Once we connect to the process, the JConsole application opens with multiple tabs. The Overview tab shows heap memory usage, threads, classes, and CPU usage.
Jconsole – new connection
Jconsole – MBeans tab
Method 6: Using HotSpotDiagnosticMBean by writing a program
We use the HotSpotDiagnosticMBean to invoke the heap dump operation. We get the MBean object from the platform MBean server. In the below example, we have used reflection to invoke the dumpHeap() method of the MBean object.
Example
Java
// Java Program Invoking the dumpHeap() Method of the HotSpotDiagnostic MBean

// Importing required classes
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.lang.reflect.Method;
import javax.management.MBeanServer;

// Main class
class GFG {

    // Object name of the HotSpotDiagnostic MBean
    private static final String HOTSPOT_BEAN
        = "com.sun.management:type=HotSpotDiagnostic";

    // Private member variable of this class
    private static volatile Object hotSpotMBean;

    // Method 1
    // Invoke this method when a heap dump is to be generated
    // @param fileName - name of the heap dump file
    // @param live - indicates if only live objects are to
    //               be included in the heap dump
    static void generateHeapDump(String fileName, boolean live)
    {
        initHotspotMBean();

        // Try block to check if any exceptions occur
        try {
            Class clazz = Class.forName(
                "com.sun.management.HotSpotDiagnosticMXBean");
            Method m = clazz.getMethod(
                "dumpHeap", String.class, boolean.class);
            m.invoke(hotSpotMBean, fileName, live);
        }
        // Catch block handling runtime exceptions
        catch (RuntimeException re) {
            throw re;
        }
        catch (Exception exp) {
            throw new RuntimeException(exp);
        }
    }

    // Method 2
    // Lazily initialize the MBean reference (double-checked locking)
    private static void initHotspotMBean()
    {
        if (hotSpotMBean == null) {
            synchronized (GFG.class) {
                if (hotSpotMBean == null) {
                    hotSpotMBean = getHotSpotMbean();
                }
            }
        }
    }

    // Method 3
    // To get the HotSpot bean from the platform MBean server
    private static Object getHotSpotMbean()
    {
        Object hotspotBean = null;

        // Try block to check for exceptions
        try {
            // Loading using the .forName() method
            Class clazz = Class.forName(
                "com.sun.management.HotSpotDiagnosticMXBean");
            MBeanServer mbeanServer
                = ManagementFactory.getPlatformMBeanServer();
            hotspotBean = ManagementFactory.newPlatformMXBeanProxy(
                mbeanServer, HOTSPOT_BEAN, clazz);
            return hotspotBean;
        }
        // Catch block 1
        // Handling exceptions if the class is not found
        catch (ClassNotFoundException e) {
            // Print the exception along with the line number
            // using the printStackTrace() method
            e.printStackTrace();
        }
        // Catch block 2
        // Handling basic I/O exceptions
        catch (IOException e) {
            e.printStackTrace();
        }
        return hotspotBean;
    }

    // Method 4
    // Main driver method
    public static void main(String[] args)
    {
        // Default output file
        String fileName = "/home/suchitra/Desktop/suchitra/projects/java-concurrency-examples/JavaHeapDumpGenerator/src/heap1.hprof";

        // Flag variable set to true
        boolean live = true;

        // Optional command-line overrides
        switch (args.length) {
        case 2:
            live = args[1].equals("true");
        case 1:
            fileName = args[0];
        }

        // Calling Method 1 to generate the heap dump
        generateHeapDump(fileName, live);
    }
}
Note:
We run this application by passing command-line arguments for the file path where the heap dump is to be generated and the live parameter, which can be set to true or false. When this Java code is run, it generates a heap1.hprof file in the src folder. This heap dump can be analyzed using MAT (Memory Analyzer Tool), which can be installed as an Eclipse plugin from the Marketplace.
Heap Dump Analysis using MAT
Now, lastly, let us analyze the heap dump file with the help of JVisualVM.
Once a heap dump file is generated, we use tools like JVisualVM to analyze the file. When you open a heap dump, Java VisualVM displays the Summary view by default. The Summary view displays the running environment where the heap dump was taken and other system properties.
In JVisualVM, we go to File -> Load and select the folder location where the '.hprof' file was generated, as illustrated below.
Analysis of heap dump file – Summary tab
Analysis of heap dump – Classes tab | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n23 Aug, 2021"
},
{
"code": null,
"e": 397,
"s": 28,
"text": "Java Heap dump is a snapshot of all java objects that are present in the JVM(Java Virtual Machine) at a certain point in time. The JVM allocates memory for objects which are class instances or arrays in the heap memory. When the objects are no longer needed or are no more referenced, the Garbage Collector runs and reclaims the memory space occupied by these objects."
},
{
"code": null,
"e": 684,
"s": 397,
"text": "The heap dump is in binary format, and it has .hprof extension. It can be opened and analyzed using applications like JVisualVM and Eclipse MAT(Memory Analyzer Tool). We generate java memory heap dump to identify issues like memory leaks and to optimize memory usage in our application."
},
{
"code": null,
"e": 693,
"s": 684,
"text": "Methods:"
},
{
"code": null,
"e": 946,
"s": 693,
"text": "There are different ways of generating a java memory heap dump. JDK comes up with various tools for generating heap dump. These tools are located in bin folder under JDK home directory.. Let us discuss how to generate JVM Heap Dump which is as follows:"
},
{
"code": null,
"e": 1118,
"s": 946,
"text": "Using jmap commandUsing jcmd command on terminalUsing the JVisualVM toolIdentifying HeapDumpOnOutOfMemoryUsing JMX ConsoleUsing HotSpotDiagnosticMBean by writing a program"
},
{
"code": null,
"e": 1137,
"s": 1118,
"text": "Using jmap command"
},
{
"code": null,
"e": 1168,
"s": 1137,
"text": "Using jcmd command on terminal"
},
{
"code": null,
"e": 1193,
"s": 1168,
"text": "Using the JVisualVM tool"
},
{
"code": null,
"e": 1227,
"s": 1193,
"text": "Identifying HeapDumpOnOutOfMemory"
},
{
"code": null,
"e": 1245,
"s": 1227,
"text": "Using JMX Console"
},
{
"code": null,
"e": 1295,
"s": 1245,
"text": "Using HotSpotDiagnosticMBean by writing a program"
},
{
"code": null,
"e": 1323,
"s": 1295,
"text": "Method 1: Using map command"
},
{
"code": null,
"e": 1474,
"s": 1323,
"text": "jmap is a command which you run inside the bin folder of your JDK home directory. It gives statistics about memory usage. The structure is as follows:"
},
{
"code": null,
"e": 1482,
"s": 1474,
"text": "Example"
},
{
"code": null,
"e": 1858,
"s": 1482,
"text": "jmap -dump:[live],format=b,file=<file-path> <pid>\n\nlive:- This parameter is optional. If set, it prints all those objects that \nhave active references.\n\nformat = b , which means the heap dump file is in binary format. It is not necessary \nto set this parameter.\n\nfile =<file-path> indicates where the heap dump file will be generated.\n\n<pid> :- process id of the java process"
},
{
"code": null,
"e": 1969,
"s": 1858,
"text": "Now in order to get the process id of a running java process, one can use one of the below options as defined:"
},
{
"code": null,
"e": 1974,
"s": 1969,
"text": "1.1 "
},
{
"code": null,
"e": 1978,
"s": 1974,
"text": "jps"
},
{
"code": null,
"e": 2122,
"s": 1978,
"text": "We type this command from a Unix terminal or Windows Command prompt where JDK is installed. It gives the process ID of the running java process"
},
{
"code": null,
"e": 2134,
"s": 2122,
"text": "jps command"
},
{
"code": null,
"e": 2139,
"s": 2134,
"text": "1.2 "
},
{
"code": null,
"e": 2158,
"s": 2139,
"text": "ps -eaf| grep java"
},
{
"code": null,
"e": 2248,
"s": 2158,
"text": "This gives the process ID of all running java processes. It works only on a Unix Terminal"
},
{
"code": null,
"e": 2268,
"s": 2248,
"text": "ps -eaf | grep java"
},
{
"code": null,
"e": 2333,
"s": 2268,
"text": "1.3 Using task manager application in windows operating systems."
},
{
"code": null,
"e": 2374,
"s": 2333,
"text": "Method 2: Using jcmd command on terminal"
},
{
"code": null,
"e": 2500,
"s": 2374,
"text": "This command sends a request to the JVM to generate a heap dump. One of its parameters is GC.heap_dump. It is as shown below:"
},
{
"code": null,
"e": 2629,
"s": 2500,
"text": "jcmd <pid> GC.heap_dump <file-path>\n<pid> - Process id of java process\n<file-path> - Path where the heap dump is to be generated"
},
{
"code": null,
"e": 2634,
"s": 2629,
"text": "jcmd"
},
{
"code": null,
"e": 2670,
"s": 2634,
"text": "Method 3: Using the JVisualVM tool "
},
{
"code": null,
"e": 2994,
"s": 2670,
"text": "This is a tool that is packaged within JDK. It helps to monitor and troubleshoot java applications. It has a Graphical User Interface that is simple and intuitive. We type jvisualvm in the start menu, or we go to the bin directory of JDK home directory through command prompt or terminal window in Unix and type jvisualvm"
},
{
"code": null,
"e": 3316,
"s": 2994,
"text": "It launches a Java Visual VM application. On the left side, it shows the currently running java process. Right-click the process ID whose heap dump you wish to generate. When we click on heap dump, it generates heap dump for the selected process. Under Basic info, it shows the file path where the heap dump is generated."
},
{
"code": null,
"e": 3360,
"s": 3316,
"text": "Method 4: Identifying HeapDumpOnOutOfMemory"
},
{
"code": null,
"e": 3557,
"s": 3360,
"text": "It is ideal to capture heap dumps when an application experiences java.lang.OutOfMemoryError. Heap dumps help identify live objects sitting in the memory and the percentage of memory it occupies. "
},
{
"code": null,
"e": 3618,
"s": 3557,
"text": "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<file-path>"
},
{
"code": null,
"e": 3764,
"s": 3618,
"text": "While running your java application when this system property is set, JVM takes a snapshot of the heap dump when it encounters OutOfMemoryError. "
},
{
"code": null,
"e": 3792,
"s": 3764,
"text": "Method 5: Using JMX Console"
},
{
"code": null,
"e": 4223,
"s": 3792,
"text": "There is a Hotspot Diagnostic MBean which has a heap dump operation. We use jmx clients like jconsole to invoke the MBean operation. Through JConsole, we can either connect to a local java process or a remote process by specifying the host and port number and username/password. Once we connect to the process id, jconsole applications open with multiple tabs. The Overview tab shows Heap Memory usage, threads, classes, CPU usage"
},
{
"code": null,
"e": 4249,
"s": 4223,
"text": "Jconsole – new connection"
},
{
"code": null,
"e": 4271,
"s": 4249,
"text": "Jconsole – MBeans tab"
},
{
"code": null,
"e": 4332,
"s": 4271,
"text": "Method 6: Using HotSpotDiagnosticMBean by writing a program "
},
{
"code": null,
"e": 4553,
"s": 4332,
"text": "We use HotSpotDiagnosticMBean to invoke the heap dump operation. We get the MBean object from the MBean platform server. In the below example, we have used reflection to invoke the heapDump() method of the MBean object."
},
{
"code": null,
"e": 4561,
"s": 4553,
"text": "Example"
},
{
"code": null,
"e": 4566,
"s": 4561,
"text": "Java"
},
{
"code": "// Java Program Invoking heapDump() Method of MBean Object // Importing input output classesimport java.io.*; // Main classclass GFG { // Custom string passed as input private static final String HOTSPOT_BEAN = \"com.sun.management:type=HotSpotDiagnostic\"; // Private member variable of this class private static volatile Object hotSpotMBean; // Invoke this method when heap dump is to be generated // @param fileName - name of the heap dump file // @param live - indicates if only live objects are to // be included in the heap dump // Method 1 // To generate heap dumps static void generateHeapDump(String fileName, boolean live) { initHotspotMBean(); // Try block to check if any exceptions occurs try { Class clazz = Class.forName( \"com.sun.management.HotSpotDiagnosticMXBean\"); Method m = clazz.getMethod( \"dumpHeap\", String.class, boolean.class); m.invoke(hotSpotMBean, fileName, live); } // Catch block handling runtime exceptions catch (RuntimeException re) { throw re; } catch (Exception exp) { throw new RuntimeException(exp); } } // Method 2 private static void initHotspotMBean() { if (hotSpotMBean == null) { synchronized (JavaHeapDump.class) { if (hotSpotMBean == null) { hotSpotMBean = getHotSpotMbean(); } } } } // Method 3 // To get the HOtSpotBean from the MBean server private static Object getHotSpotMbean() { Object hotspotBean = null; // Try block tocheck for exceptions try { // Loading using .forName() method Class clazz = Class.forName( \"com.sun.management.HotSpotDiagnosticMXBean\"); MBeanServer mbeanServer = ManagementFactory .getPlatformMBeanServer(); hotspotBean = ManagementFactory.newPlatformMXBeanProxy( mbeanServer, HOTSPOT_BEAN, clazz); return hotspotBean; } // Catch block 1 // Handling exceptions if class not found catch (ClassNotFoundException e) { // Printthe exception along with line number // using printStackTrace() method e.printStackTrace(); } // Catch block 2 // Handling basic I/O exceptions catch (IOException e) { // Printthe exception along with line number // using printStackTrace() method e.printStackTrace(); } return hotspotBean; } // Method 4 // Main driver method public static void main(String[] args) { // File taken as an input String fileName = \"/home/suchitra/Desktop/suchitra/projects/java-concurrency-examples/JavaHeapDumpGenerator/src/heap1.hprof\"; // Flag variable set to true boolean live = true; // Switch case switch (args.length) { case 2: live = args[1].equals(\"true\"); case 1: fileName = args[0]; } // Calling Method 1 in main() method to // generate heap dumps generateHeapDump(fileName, live); }}",
"e": 7942,
"s": 4566,
"text": null
},
{
"code": null,
"e": 7951,
"s": 7945,
"text": "Note:"
},
{
"code": null,
"e": 8318,
"s": 7951,
"text": "We run this application by passing command-line arguments for file path where the heap dump is to be generated and live parameter which can be set as true or false. When this java code is run, it generates a heap1.hprof file in src folder. This heap dump can be analyzed using MAT(Memory Analyzer Tool). This can be installed as a plugin in Eclipse from Marketplace."
},
{
"code": null,
"e": 8347,
"s": 8318,
"text": "Heap Dump Analysis using MAT"
},
{
"code": null,
"e": 8416,
"s": 8347,
"text": "Now lastly let us analyze the heap dump file with help of JVisualVM."
},
{
"code": null,
"e": 8689,
"s": 8416,
"text": "Once a heap dump file is generated, we use tools like JVisualVM to analyze the file. When you open a heap dump, Java VisualVM displays the Summary view by default. The Summary view displays the running environment where the heap dump was taken and other system properties."
},
{
"code": null,
"e": 8868,
"s": 8689,
"text": "In JvisualVM, we go to File -> Load and select the folder location where the ‘.hprof file’ is generated which is pictorially aided below to get a fair understanding for the same."
},
{
"code": null,
"e": 8909,
"s": 8868,
"text": "Analysis of heap dump file – Summary tab"
},
{
"code": null,
"e": 8945,
"s": 8909,
"text": "Analysis of heap dump – Classes tab"
},
{
"code": null,
"e": 8962,
"s": 8947,
"text": "varshagumber28"
},
{
"code": null,
"e": 8969,
"s": 8962,
"text": "Picked"
},
{
"code": null,
"e": 8974,
"s": 8969,
"text": "Java"
},
{
"code": null,
"e": 8979,
"s": 8974,
"text": "Java"
},
{
"code": null,
"e": 9077,
"s": 8979,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 9092,
"s": 9077,
"text": "Stream In Java"
},
{
"code": null,
"e": 9113,
"s": 9092,
"text": "Introduction to Java"
},
{
"code": null,
"e": 9134,
"s": 9113,
"text": "Constructors in Java"
},
{
"code": null,
"e": 9153,
"s": 9134,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 9170,
"s": 9153,
"text": "Generics in Java"
},
{
"code": null,
"e": 9200,
"s": 9170,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 9226,
"s": 9200,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 9242,
"s": 9226,
"text": "Strings in Java"
},
{
"code": null,
"e": 9279,
"s": 9242,
"text": "Differences between JDK, JRE and JVM"
}
] |
Convert Java String to Short in Java | Use valueOf() method to convert a String in Java to Short.
Let us take a string.
The following is an example −
String myStr = "5";
Now create a Short object using the valueOf() method. The argument should be the string we declared above.
The following is an example.
Short myShort = Short.valueOf(myStr);
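Note that valueOf() throws a NumberFormatException when the string is not a parseable short (non-numeric, or outside the range -32768..32767). A minimal sketch of guarding against that (the input string here is hypothetical):

try {
   // "hello" is not a number, so valueOf() throws
   Short bad = Short.valueOf("hello");
} catch (NumberFormatException e) {
   System.out.println("Not a valid short: " + e.getMessage());
}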
Let us now see the entire example to learn how to convert a string in Java to Short.
Live Demo
public class Demo {
public static void main(String []args) {
String myStr = "5";
System.out.println("String: "+myStr);
Short myShort = Short.valueOf(myStr);
System.out.println("Short: "+myShort);
}
}
String: 5
Short: 5 | [
{
"code": null,
"e": 1246,
"s": 1187,
"text": "Use valueOf() method to convert a String in Java to Short."
},
{
"code": null,
"e": 1268,
"s": 1246,
"text": "Let us take a string."
},
{
"code": null,
"e": 1298,
"s": 1268,
"text": "The following is an example −"
},
{
"code": null,
"e": 1318,
"s": 1298,
"text": "String myStr = \"5\";"
},
{
"code": null,
"e": 1423,
"s": 1318,
"text": "Now take Short object and use the valueOf() method. The argument should be the string we declared above."
},
{
"code": null,
"e": 1452,
"s": 1423,
"text": "The following is an example."
},
{
"code": null,
"e": 1490,
"s": 1452,
"text": "Short myShort = Short.valueOf(myStr);"
},
{
"code": null,
"e": 1575,
"s": 1490,
"text": "Let us now see the entire example to learn how to convert a string in Java to Short."
},
{
"code": null,
"e": 1586,
"s": 1575,
"text": " Live Demo"
},
{
"code": null,
"e": 1816,
"s": 1586,
"text": "public class Demo {\n public static void main(String []args) {\n String myStr = \"5\";\n System.out.println(\"String: \"+myStr);\n Short myShort = Short.valueOf(myStr);\n System.out.println(\"Short: \"+myShort);\n }\n}"
},
{
"code": null,
"e": 1835,
"s": 1816,
"text": "String: 5\nShort: 5"
}
] |
Friend class and function in C++ | A friend function of a class is defined outside that class' scope but it has the right to access all private and protected members of the class. Even though the prototypes for friend functions appear in the class definition, friends are not member functions.
A friend can be a function, function template, or member function, or a class or class template, in which case the entire class and all of its members are friends.
To declare a function as a friend of a class, precede the function prototype in the class definition with keyword friend as follows −
class Box {
   double width;

public:
   double length;
   friend void printWidth( Box box );
   void setWidth( double wid );
};
To declare all member functions of class ClassTwo as friends of class ClassOne, place the following declaration in the definition of class ClassOne −
friend class ClassTwo;
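The full example below demonstrates a friend function; as a complementary sketch (the class names and members here are illustrative, not from the original), a friend class declaration works like this −

#include <iostream>
using namespace std;

class ClassOne {
   int secret = 42; // private by default

   // Every member function of ClassTwo may access ClassOne's internals
   friend class ClassTwo;
};

class ClassTwo {
   public:
      void reveal(const ClassOne& c) {
         // Allowed only because ClassTwo is a friend of ClassOne
         cout << "secret = " << c.secret << endl;
      }
};

int main() {
   ClassOne one;
   ClassTwo two;
   two.reveal(one); // prints: secret = 42
   return 0;
}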
#include <iostream>
using namespace std;
class Box {
double width;
public:
friend void printWidth( Box box );
void setWidth( double wid );
};
// Member function definition
void Box::setWidth( double wid ) {
width = wid;
}
// Note: printWidth() is not a member function of any class.
void printWidth( Box box ) {
/* Because printWidth() is a friend of Box, it can
directly access any member of this class */
cout << "Width of box : " << box.width <<endl;
}
// Main function for the program
int main() {
Box box;
// set box width without member function
box.setWidth(10.0);
   // Use friend function to print the width.
printWidth( box );
return 0;
}
This will give the output −
Width of box : 10
Even though the function was not a member of the class, it could directly access the member variables of that class. This can be very useful in certain situations. | [
{
"code": null,
"e": 1446,
"s": 1187,
"text": "A friend function of a class is defined outside that class' scope but it has the right to access all private and protected members of the class. Even though the prototypes for friend functions appear in the class definition, friends are not member functions."
},
{
"code": null,
"e": 1610,
"s": 1446,
"text": "A friend can be a function, function template, or member function, or a class or class template, in which case the entire class and all of its members are friends."
},
{
"code": null,
"e": 1744,
"s": 1610,
"text": "To declare a function as a friend of a class, precede the function prototype in the class definition with keyword friend as follows −"
},
{
"code": null,
"e": 1870,
"s": 1744,
"text": "class Box {\ndouble width;\n\npublic:\n double length;\n friend void printWidth( Box box );\n void setWidth( double wid );\n};"
},
{
"code": null,
"e": 2020,
"s": 1870,
"text": "To declare all member functions of class ClassTwo as friends of class ClassOne, place the following declaration in the definition of class ClassOne −"
},
{
"code": null,
"e": 2043,
"s": 2020,
"text": "friend class ClassTwo;"
},
{
"code": null,
"e": 2739,
"s": 2043,
"text": "#include <iostream>\nusing namespace std;\n\nclass Box {\n double width;\n\n public:\n friend void printWidth( Box box );\n void setWidth( double wid );\n};\n\n// Member function definition\nvoid Box::setWidth( double wid ) {\n width = wid;\n}\n\n// Note: printWidth() is not a member function of any class.\nvoid printWidth( Box box ) {\n /* Because printWidth() is a friend of Box, it can\n directly access any member of this class */\n cout << \"Width of box : \" << box.width <<endl;\n}\n\n// Main function for the program\nint main() {\n Box box;\n\n // set box width without member function\n box.setWidth(10.0);\n\n // Use friend function to print the wdith.\n printWidth( box );\n\n return 0;\n}"
},
{
"code": null,
"e": 2767,
"s": 2739,
"text": "This will give the output −"
},
{
"code": null,
"e": 2784,
"s": 2767,
"text": "Width of box: 10"
},
{
"code": null,
"e": 2948,
"s": 2784,
"text": "Even though the function was not a member of the class, it could directly access the member variables of that class. This can be very useful in certain situations."
}
] |
Python Program for Longest Palindromic Subsequence | DP-12 | 21 Feb, 2022
Given a sequence, find the length of the longest palindromic subsequence in it.
As another example, if the given sequence is "BBABCBCAB", then the output should be 7, as "BABCBAB" is the longest palindromic subsequence in it. "BBBBB" and "BBCBB" are also palindromic subsequences of the given sequence, but not the longest ones. 1) Optimal Substructure: Let X[0..n-1] be the input sequence of length n and L(0, n-1) be the length of the longest palindromic subsequence of X[0..n-1]. If the last and first characters of X are the same, then L(0, n-1) = L(1, n-2) + 2. Otherwise, L(0, n-1) = MAX(L(1, n-1), L(0, n-2)). A general recursive solution with all cases handled follows, and after it the Dynamic Programming solution.
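The recursive solution referenced above is not reproduced in this snapshot; the following is a minimal sketch of it (correct but exponential-time, since overlapping subproblems are recomputed):

# Naive recursive sketch of the LPS recurrence above
def lps_recursive(seq, i, j):
    # Base case 1: one character is a palindrome of length 1
    if i == j:
        return 1
    # Base case 2: exactly two characters, and both match
    if seq[i] == seq[j] and i + 1 == j:
        return 2
    # First and last characters match: count both
    if seq[i] == seq[j]:
        return lps_recursive(seq, i + 1, j - 1) + 2
    # Otherwise drop one end and take the better result
    return max(lps_recursive(seq, i, j - 1),
               lps_recursive(seq, i + 1, j))

seq = "GEEKS FOR GEEKS"
print(lps_recursive(seq, 0, len(seq) - 1))  # 7

The tabulated Dynamic Programming solution below computes each L(i, j) exactly once.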
Python3
# A Dynamic Programming based Python
# program for the LPS problem. Returns the length
# of the longest palindromic subsequence in seq
def lps(str):
    n = len(str)

    # Create a table to store results of subproblems
    L = [[0 for x in range(n)] for x in range(n)]

    # Strings of length 1 are palindromes of length 1
    for i in range(n):
        L[i][i] = 1

    # Build the table. Note that the lower
    # diagonal values of the table are
    # useless and not filled in the process.
    # The values are filled in a manner similar
    # to the Matrix Chain Multiplication DP solution. (See
    # https://www.geeksforgeeks.org/dynamic-programming-set-8-matrix-chain-multiplication/)
    # cl is the length of the substring
    for cl in range(2, n + 1):
        for i in range(n - cl + 1):
            j = i + cl - 1
            if str[i] == str[j] and cl == 2:
                L[i][j] = 2
            elif str[i] == str[j]:
                L[i][j] = L[i + 1][j - 1] + 2
            else:
                L[i][j] = max(L[i][j - 1], L[i + 1][j])

    return L[0][n - 1]

# Driver program to test above functions
seq = "GEEKS FOR GEEKS"
n = len(seq)
print("The length of the LPS is " + str(lps(seq)))

# This code is contributed by Bhavya Jain
The length of the LPS is 7
Time Complexity: O(n^2)
Please refer complete article on Longest Palindromic Subsequence | DP-12 for more details! | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n21 Feb, 2022"
},
{
"code": null,
"e": 109,
"s": 28,
"text": "Given a sequence, find the length of the longest palindromic subsequence in it. "
},
{
"code": null,
"e": 729,
"s": 109,
"text": "As another example, if the given sequence is “BBABCBCAB”, then the output should be 7 as “BABCBAB” is the longest palindromic subsequence in it. “BBBBB” and “BBCBB” are also palindromic subsequences of the given sequence, but not the longest ones. 1) Optimal Substructure: Let X[0..n-1] be the input sequence of length n and L(0, n-1) be the length of the longest palindromic subsequence of X[0..n-1]. If last and first characters of X are same, then L(0, n-1) = L(1, n-2) + 2. Else L(0, n-1) = MAX (L(1, n-1), L(0, n-2)). Following is a general recursive solution with all cases handled. Dynamic Programming Solution "
},
{
"code": null,
"e": 737,
"s": 729,
"text": "Python3"
},
{
"code": "# A Dynamic Programming based Python # program for LPS problem Returns the length# of the longest palindromic subsequence in seqdef lps(str): n = len(str) # Create a table to store results of subproblems L = [[0 for x in range(n)] for x in range(n)] # Strings of length 1 are palindrome of length 1 for i in range(n): L[i][i] = 1 # Build the table. Note that the lower # diagonal values of table are # useless and not filled in the process. # The values are filled in a # manner similar to Matrix Chain # Multiplication DP solution (See # https://www.geeksforgeeks.org/ # dynamic-programming-set-8-matrix-chain-multiplication/ # cl is length of substring for cl in range(2, n + 1): for i in range(n-cl + 1): j = i + cl-1 if str[i] == str[j] and cl == 2: L[i][j] = 2 elif str[i] == str[j]: L[i][j] = L[i + 1][j-1] + 2 else: L[i][j] = max(L[i][j-1], L[i + 1][j]); return L[0][n-1] # Driver program to test above functionsseq = \"GEEKS FOR GEEKS\"n = len(seq)print(\"The length of the LPS is \" + str(lps(seq))) # This code is contributed by Bhavya Jain",
"e": 1948,
"s": 737,
"text": null
},
{
"code": null,
"e": 1975,
"s": 1948,
"text": "The length of the LPS is 7"
},
{
"code": null,
"e": 2002,
"s": 1977,
"text": "Time Complexity :- O(n2)"
},
{
"code": null,
"e": 2094,
"s": 2002,
"text": "Please refer complete article on Longest Palindromic Subsequence | DP-12 for more details! "
},
{
"code": null,
"e": 2110,
"s": 2094,
"text": "amartyaghoshgfg"
},
{
"code": null,
"e": 2123,
"s": 2110,
"text": "simmytarika5"
},
{
"code": null,
"e": 2139,
"s": 2123,
"text": "Python Programs"
},
{
"code": null,
"e": 2237,
"s": 2139,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2301,
"s": 2237,
"text": "Python program to interchange first and last elements in a list"
},
{
"code": null,
"e": 2340,
"s": 2301,
"text": "Appending to list in Python dictionary"
},
{
"code": null,
"e": 2414,
"s": 2340,
"text": "Differences and Applications of List, Tuple, Set and Dictionary in Python"
},
{
"code": null,
"e": 2457,
"s": 2414,
"text": "Appending a dictionary to a list in Python"
},
{
"code": null,
"e": 2498,
"s": 2457,
"text": "Python Program to check Armstrong Number"
},
{
"code": null,
"e": 2583,
"s": 2498,
"text": "Python | Difference between two dates (in minutes) using datetime.timedelta() method"
},
{
"code": null,
"e": 2620,
"s": 2583,
"text": "Python | Remove spaces from a string"
},
{
"code": null,
"e": 2650,
"s": 2620,
"text": "Python Program for Merge Sort"
},
{
"code": null,
"e": 2682,
"s": 2650,
"text": "Python - Convert JSON to string"
}
] |
MySQL | DEFAULT() Function | 25 Mar, 2019
The DEFAULT() function returns the default value for table column.
The DEFAULT value of a column is the value used when no value is specified by the user.
In order to use this function, there should be a DEFAULT value assigned to the column. Otherwise, it will generate an error.
Syntax:
DEFAULT ( column_name)
column_name: Name of column whose default value is written.
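For instance, a default can be assigned when a table is created — a hypothetical sketch, since the article does not show the actual definition of the result table used below:

CREATE TABLE result (
    lowest_marks  INT,
    highest_marks INT,
    grade         VARCHAR(15) DEFAULT 'FAIL'
);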
Example: Consider two relations student and result–
Structure of table “student”-
Structure of table “result”-
Data in the tables-
Select * from student;
Select * from result;
Problem Description: We have to find result of all the students-
Query:
Select sid, sname, subject, marks,
IF ( grade is NULL, DEFAULT ( grade ), grade )
AS grade FROM student LEFT JOIN result
ON marks > lowest_marks
AND marks < = highest_marks;
Output:
Explanation: Here, the DEFAULT() function is used to return the default grade, i.e. "FAIL". This default value is used where a student's marks don't match the joining condition; those students' results are shown as FAIL.
Note: The DEFAULT() function with a SELECT statement will return the default value for all rows. That means, instead of getting a single default value for the column, we will get a list of default values for that column.
For example, for the above table result, the output of this query is-
Select default ( grade) from result;
Output: | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n25 Mar, 2019"
},
{
"code": null,
"e": 95,
"s": 28,
"text": "The DEFAULT() function returns the default value for table column."
},
{
"code": null,
"e": 187,
"s": 95,
"text": "DEFAULT value of a column is a value used in the case, there is no value specified by user."
},
{
"code": null,
"e": 310,
"s": 187,
"text": "In order, to use this function there should be a DEFAULT value assign to the column. Otherwise, it will generate an error."
},
{
"code": null,
"e": 318,
"s": 310,
"text": "Syntax:"
},
{
"code": null,
"e": 403,
"s": 318,
"text": "DEFAULT ( column_name)\n\ncolumn_name: Name of column whose default value is written.\n"
},
{
"code": null,
"e": 455,
"s": 403,
"text": "Example: Consider two relations student and result–"
},
{
"code": null,
"e": 485,
"s": 455,
"text": "Structure of table “student”-"
},
{
"code": null,
"e": 514,
"s": 485,
"text": "Structure of table “result”-"
},
{
"code": null,
"e": 534,
"s": 514,
"text": "Data in the tables-"
},
{
"code": null,
"e": 558,
"s": 534,
"text": "Select * from student;\n"
},
{
"code": null,
"e": 581,
"s": 558,
"text": "Select * from result;\n"
},
{
"code": null,
"e": 646,
"s": 581,
"text": "Problem Description: We have to find result of all the students-"
},
{
"code": null,
"e": 653,
"s": 646,
"text": "Query:"
},
{
"code": null,
"e": 841,
"s": 653,
"text": "Select sid, sname, subject, marks, \n IF ( grade is NULL, DEFAULT ( grade ), grade )\nAS grade FROM student LEFT JOIN result \n ON marks > lowest_marks \nAND marks < = highest_marks;\n"
},
{
"code": null,
"e": 849,
"s": 841,
"text": "Output:"
},
{
"code": null,
"e": 1073,
"s": 849,
"text": "Explanation: Here, default() function is use to return default grade i.e “FAIL”. This default value is used in place where student marks doesn’t match according to joining condition. Those students results is shown as FAIL."
},
{
"code": null,
"e": 1283,
"s": 1073,
"text": "Note: The default function with select statement will return default value for all rows. That means, instead of getting a single default value of the column, we will get list of default values for that column."
},
{
"code": null,
"e": 1339,
"s": 1283,
"text": "For example for above table result, Output of query is-"
},
{
"code": null,
"e": 1377,
"s": 1339,
"text": "Select default ( grade) from result;\n"
},
{
"code": null,
"e": 1385,
"s": 1377,
"text": "Output:"
},
{
"code": null,
"e": 1391,
"s": 1385,
"text": "mysql"
},
{
"code": null,
"e": 1396,
"s": 1391,
"text": "DBMS"
},
{
"code": null,
"e": 1400,
"s": 1396,
"text": "SQL"
},
{
"code": null,
"e": 1405,
"s": 1400,
"text": "DBMS"
},
{
"code": null,
"e": 1409,
"s": 1405,
"text": "SQL"
},
{
"code": null,
"e": 1507,
"s": 1409,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1560,
"s": 1507,
"text": "Difference between Clustered and Non-clustered index"
},
{
"code": null,
"e": 1618,
"s": 1560,
"text": "Introduction of DBMS (Database Management System) | Set 1"
},
{
"code": null,
"e": 1641,
"s": 1618,
"text": "Introduction of B-Tree"
},
{
"code": null,
"e": 1665,
"s": 1641,
"text": "SQL Interview Questions"
},
{
"code": null,
"e": 1677,
"s": 1665,
"text": "SQL | Views"
},
{
"code": null,
"e": 1719,
"s": 1677,
"text": "SQL | DDL, DQL, DML, DCL and TCL Commands"
},
{
"code": null,
"e": 1763,
"s": 1719,
"text": "How to find Nth highest salary from a table"
},
{
"code": null,
"e": 1784,
"s": 1763,
"text": "SQL | ALTER (RENAME)"
},
{
"code": null,
"e": 1850,
"s": 1784,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
}
] |
Array Subset of another array | Practice | GeeksforGeeks | Given two arrays: a1[0..n-1] of size n and a2[0..m-1] of size m. Task is to check whether a2[] is a subset of a1[] or not. Both the arrays can be sorted or unsorted.
Example 1:
Input:
a1[] = {11, 1, 13, 21, 3, 7}
a2[] = {11, 3, 7, 1}
Output:
Yes
Explanation:
a2[] is a subset of a1[]
Example 2:
Input:
a1[] = {1, 2, 3, 4, 5, 6}
a2[] = {1, 2, 4}
Output:
Yes
Explanation:
a2[] is a subset of a1[]
Example 3:
Input:
a1[] = {10, 5, 2, 23, 19}
a2[] = {19, 5, 3}
Output:
No
Explanation:
a2[] is not a subset of a1[]
Your Task:
You don't need to read input or print anything. Your task is to complete the function isSubset(), which takes the arrays a1[] and a2[] and their sizes n and m as inputs, and returns "Yes" if a2[] is a subset of a1[] and "No" otherwise.
Expected Time Complexity: O(n)
Expected Auxiliary Space: O(n)
Constraints:
1 <= n,m <= 105
1 <= a1[i], a2[j] <= 105
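One way to meet the expected O(n) time and auxiliary-space bounds is to hash every element of a1 and probe the set for each element of a2 — a sketch in C++, not the official editorial solution:

// Sketch: subset check with an unordered_set
#include <unordered_set>
#include <string>
using namespace std;

string isSubset(int a1[], int a2[], int n, int m) {
    unordered_set<int> seen(a1, a1 + n);       // O(n) build
    for (int i = 0; i < m; i++)
        if (seen.find(a2[i]) == seen.end())    // a2[i] absent from a1
            return "No";
    return "Yes";
}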
0
kvkhairkarin 8 hours
#Your Code here
s1 = set(a1)
s2 = set(a2)
if s2.intersection(s1) == s2:
    return "Yes"
else:
    return "No"
0
ayckerayush3 hours ago
check this out (c++)
sort(a1,a1+n);
sort(a2,a2+m);
int indexA1=0;
int indexA2=0;
if(m>n){
return "No";
}
for(int i=0;i<m;i++){
for(int j=indexA1;j<n;j++){
if(a1[j]==a2[i]){
indexA1=j+1;
break;
}
else if(a1[j]>a2[i] || j>=n-1){
return "No";
}
}
}
return "Yes";
0
surabhichoubey551 day ago
string isSubset(int a1[], int a2[], int n, int m) {
    // make an unordered_set which will store all array1 elements
    unordered_set<int> s1;
    // insert array1 elements
    for (int i = 0; i < n; i++) {
        s1.insert(a1[i]);
    }
    // now traverse array2; if an element was never seen in array1, return "No"
    for (int i = 0; i < m; i++) {
        if (s1.find(a2[i]) == s1.end()) {
            return "No";
        }
    }
    // else return "Yes"
    return "Yes";
}
0
aamir255y2 days ago
string isSubset(int a1[], int a2[], int n, int m) {
unordered_map<int, int> map_a1;
for(int i = 0; i < n; ++i) map_a1[ a1[i] ] = 1;
for(int i = 0; i < m; ++i) {
if( map_a1[ a2[i] ] != 1 )
return "No";
}
return "Yes";
}
0
abhishekjune20162 days ago
map<int, int> mp;
for (int i = 0; i < n; i++) {
    mp[a1[i]]++;
}
// for (auto i : mp)
//     cout << i.first << " " << i.second << endl;
map<int, int>::iterator it;
int count = 0;
for (int i = 0; i < m; i++) {
    it = mp.find(a2[i]);
    if (it != mp.end()) {
        count++;
    }
}
// cout << count << endl;
if (count == m) {
    return "Yes";
} else {
    return "No";
}
0
arobindosuklabaidya4 days ago
STL Solution !!!!!
string isSubset(int a1[], int a2[], int n, int m) {
    unordered_map<int, int> mp;
    for (int i = 0; i < n; i++) {
        mp[a1[i]]++;
    }
    for (int i = 0; i < m; i++) {
        if (mp[a2[i]] == 0) {
            return "No";
        }
    }
    return "Yes";
}
0
sankeerthsirikonda5 days ago
#python
def isSubset(a1, a2, n, m):
    for i in a2:
        if i not in a1:
            return 'No'
    return 'Yes'
+1
hayatunisha15 days ago
//We will store the intersection of a1 and a2 into ans
//if all the elements of ans are same as a2 return yes else return no
string isSubset(int a1[], int a2[], int n, int m) {
    int ans[m];
    int i, j, k;
    i = j = k = 0;
    sort(a1, a1 + n);
    sort(a2, a2 + m);
    while (i < n && j < m) {
        if (a1[i] < a2[j]) { i++; }
        else if (a1[i] > a2[j]) { j++; }
        else if (a1[i] == a2[j]) { ans[k++] = a1[i++]; j++; }
    }
    for (int i = 0; i < m; i++) {
        if (ans[i] != a2[i]) { return "No"; }
    }
    return "Yes";
}
+1
imjunior4716 days ago
HashSet<Long> set = new HashSet<>();
for (long ele : a1) set.add(ele);
for (long ele : a2) {
    if (!set.contains(ele)) return "No";
}
return "Yes"; | [
{
"code": null,
"e": 407,
"s": 238,
"text": "Given two arrays: a1[0..n-1] of size n and a2[0..m-1] of size m. Task is to check whether a2[] is a subset of a1[] or not. Both the arrays can be sorted or unsorted. \n "
},
{
"code": null,
"e": 418,
"s": 407,
"text": "Example 1:"
},
{
"code": null,
"e": 525,
"s": 418,
"text": "Input:\na1[] = {11, 1, 13, 21, 3, 7}\na2[] = {11, 3, 7, 1}\nOutput:\nYes\nExplanation:\na2[] is a subset of a1[]"
},
{
"code": null,
"e": 537,
"s": 525,
"text": "\nExample 2:"
},
{
"code": null,
"e": 637,
"s": 537,
"text": "Input:\na1[] = {1, 2, 3, 4, 5, 6}\na2[] = {1, 2, 4}\nOutput:\nYes\nExplanation:\na2[] is a subset of a1[]"
},
{
"code": null,
"e": 649,
"s": 637,
"text": "\nExample 3:"
},
{
"code": null,
"e": 753,
"s": 649,
"text": "Input:\na1[] = {10, 5, 2, 23, 19}\na2[] = {19, 5, 3}\nOutput:\nNo\nExplanation:\na2[] is not a subset of a1[]"
},
{
"code": null,
"e": 1017,
"s": 755,
"text": "Your Task: \nYou don't need to read input or print anything. Your task is to complete the function isSubset() which takes the array a1[], a2[], its size n and m as inputs and return \"Yes\" if arr2 is subset of arr1 else return \"No\" if arr2 is not subset of arr1."
},
{
"code": null,
"e": 1142,
"s": 1019,
"text": "Expected Time Complexity: O(n)\nExpected Auxiliary Space: O(n)\n\n\nConstraints:\n1 <= n,m <= 105\n1 <= a1[i], a2[j] <= 105\n\n\n\n "
},
{
"code": null,
"e": 1146,
"s": 1144,
"text": "0"
},
{
"code": null,
"e": 1167,
"s": 1146,
"text": "kvkhairkarin 8 hours"
},
{
"code": null,
"e": 1184,
"s": 1167,
"text": "#Your Code here "
},
{
"code": null,
"e": 1289,
"s": 1184,
"text": "s1 = set(a1) s2 = set(a2) if s2.intersection(s1) == s2: return \"Yes\" else: return \"No\""
},
{
"code": null,
"e": 1291,
"s": 1289,
"text": "0"
},
{
"code": null,
"e": 1314,
"s": 1291,
"text": "ayckerayush3 hours ago"
},
{
"code": null,
"e": 1336,
"s": 1314,
"text": " check this out (c++)"
},
{
"code": null,
"e": 1727,
"s": 1336,
"text": "sort(a1,a1+n);\n sort(a2,a2+m);\n int indexA1=0;\n int indexA2=0;\n if(m>n){\n return \"No\";\n }\n for(int i=0;i<m;i++){\n for(int j=indexA1;j<n;j++){\n if(a1[j]==a2[i]){\n indexA1=j+1;\n break;\n }\n else if(a1[j]>a2[i] || j>=n-1){\n return \"No\";\n }\n }\n }\n return \"Yes\";"
},
{
"code": null,
"e": 1729,
"s": 1727,
"text": "0"
},
{
"code": null,
"e": 1755,
"s": 1729,
"text": "surabhichoubey551 day ago"
},
{
"code": null,
"e": 2269,
"s": 1755,
"text": "string isSubset(int a1[], int a2[], int n, int m) { //make unordered_set which will store all array1 elements unordered_set<int>s1; //array1 elements insert for(int i = 0;i<n;i++) { s1.insert(a1[i]); } //now treverse array2 for(int i = 0;i<m;i++) { if(s1.find(a2[i]) == s1.end()) { return \"No\"; } } //also while treverseing cheack if array element is not seen in array1 return false return \"Yes\"; //else return true or yes }"
},
{
"code": null,
"e": 2271,
"s": 2269,
"text": "0"
},
{
"code": null,
"e": 2291,
"s": 2271,
"text": "aamir255y2 days ago"
},
{
"code": null,
"e": 2560,
"s": 2291,
"text": "string isSubset(int a1[], int a2[], int n, int m) {\n unordered_map<int, int> map_a1;\n \n for(int i = 0; i < n; ++i) map_a1[ a1[i] ] = 1;\n \n for(int i = 0; i < m; ++i) {\n if( map_a1[ a2[i] ] != 1 )\n return \"No\";\n }\n return \"Yes\";\n}"
},
{
"code": null,
"e": 2562,
"s": 2560,
"text": "0"
},
{
"code": null,
"e": 2589,
"s": 2562,
"text": "abhishekjune20162 days ago"
},
{
"code": null,
"e": 3008,
"s": 2589,
"text": "map<int,int>mp; for(int i=0;i<n;i++) { mp[a1[i]]++; } // for (auto i : mp) // cout << i.first << \" \" << i.second // << endl; map<int, int>::iterator it ; int count=0; for(int i=0;i<m;i++) { it=mp.find(a2[i]); if(it!=mp.end()) { count++; } } // cout<<count<<endl; if(count==m) { return \"Yes\"; } else { return \"No\"; }"
},
{
"code": null,
"e": 3010,
"s": 3008,
"text": "0"
},
{
"code": null,
"e": 3027,
"s": 3010,
"text": "abhishekjune2016"
},
{
"code": null,
"e": 3053,
"s": 3027,
"text": "This comment was deleted."
},
{
"code": null,
"e": 3055,
"s": 3053,
"text": "0"
},
{
"code": null,
"e": 3085,
"s": 3055,
"text": "arobindosuklabaidya4 days ago"
},
{
"code": null,
"e": 3104,
"s": 3085,
"text": "STL Solution !!!!!"
},
{
"code": null,
"e": 3343,
"s": 3106,
"text": "string isSubset(int a1[], int a2[], int n, int m) { unordered_map<int,int> mp; for(int i=0;i<n;i++){ mp[a1[i]]++; } for(int i=0;i<m;i++){ if(mp[a2[i]]==0){ return \"No\"; } } return \"Yes\"; }"
},
{
"code": null,
"e": 3345,
"s": 3343,
"text": "0"
},
{
"code": null,
"e": 3374,
"s": 3345,
"text": "sankeerthsirikonda5 days ago"
},
{
"code": null,
"e": 3382,
"s": 3374,
"text": "#python"
},
{
"code": null,
"e": 3501,
"s": 3382,
"text": "def isSubset( a1, a2, n, m): for i in a2: if i not in a1: return 'No' break return 'Yes'"
},
{
"code": null,
"e": 3504,
"s": 3501,
"text": "+1"
},
{
"code": null,
"e": 3527,
"s": 3504,
"text": "hayatunisha15 days ago"
},
{
"code": null,
"e": 3582,
"s": 3527,
"text": "//We will store the intersection of a1 and a2 into ans"
},
{
"code": null,
"e": 3652,
"s": 3582,
"text": "//if all the elements of ans are same as a2 return yes else return no"
},
{
"code": null,
"e": 4006,
"s": 3652,
"text": "string isSubset(int a1[], int a2[], int n, int m) { int ans[m]; int i,j,k; i=j=k=0; sort(a1,a1+n); sort(a2,a2+m); while(i<n&&j<m){ if(a1[i]<a2[j]){i++;} else if(a1[i]>a2[j]){j++;} else if(a1[i]==a2[j]){ans[k++]=a1[i++]; j++;} } for(int i = 0;i<m;i++){ if(ans[i]!=a2[i]){return \"No\";} } return \"Yes\";}"
},
{
"code": null,
"e": 4009,
"s": 4006,
"text": "+1"
},
{
"code": null,
"e": 4031,
"s": 4009,
"text": "imjunior4716 days ago"
},
{
"code": null,
"e": 4215,
"s": 4031,
"text": " HashSet<Long> set = new HashSet<>(); for(long ele : a1) set.add(ele); for(long ele : a2){ if(!set.contains(ele)) return \"No\"; } return \"Yes\";"
},
{
"code": null,
"e": 4361,
"s": 4215,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4397,
"s": 4361,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4407,
"s": 4397,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4417,
"s": 4407,
"text": "\nContest\n"
},
{
"code": null,
"e": 4480,
"s": 4417,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 4665,
"s": 4480,
"text": "Avoid using static/global variables in your code as your code is tested \n against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 4949,
"s": 4665,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code.\n On submission, your code is tested against multiple test cases consisting of all\n possible corner cases and stress constraints."
},
{
"code": null,
"e": 5095,
"s": 4949,
"text": "You can access the hints to get an idea about what is expected of you as well as\n the final solution code."
},
{
"code": null,
"e": 5172,
"s": 5095,
"text": "You can view the solutions submitted by other users from the submission tab."
},
{
"code": null,
"e": 5213,
"s": 5172,
"text": "Make sure you are not using ad-blockers."
},
{
"code": null,
"e": 5241,
"s": 5213,
"text": "Disable browser extensions."
},
{
"code": null,
"e": 5312,
"s": 5241,
"text": "We recommend using latest version of your browser for best experience."
},
{
"code": null,
"e": 5499,
"s": 5312,
"text": "Avoid using static/global variables in coding problems as your code is tested \n against multiple test cases and these tend to retain their previous values."
}
] |
Python dictionary type() Method | The Python built-in function type() returns the type of the passed variable (it is not actually a method of the dictionary object). If the passed variable is a dictionary, it returns the dictionary type.
Following is the syntax for type() method −
type(dict)
dict − This is the dictionary.
This method returns the type of the passed variable.
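When the goal is only to test whether a variable is a dictionary, the usual Python idiom is isinstance() rather than comparing type() results; a quick sketch (written in the same Python 2 style as the example below):

#!/usr/bin/python

d = {'Name': 'Zara', 'Age': 7}
print isinstance(d, dict)   # True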
The following example shows the usage of type() method.
#!/usr/bin/python
dict = {'Name': 'Zara', 'Age': 7};
print "Variable Type : %s" % type (dict)
When we run the above program, it produces the following result (under Python 2; Python 3 would print <class 'dict'>) −
Variable Type : <type 'dict'> | [
{
"code": null,
"e": 2391,
"s": 2244,
"text": "Python dictionary method type() returns the type of the passed variable. If passed variable is dictionary then it would return a dictionary type."
},
{
"code": null,
"e": 2435,
"s": 2391,
"text": "Following is the syntax for type() method −"
},
{
"code": null,
"e": 2446,
"s": 2435,
"text": "type(dict)"
},
{
"code": null,
"e": 2477,
"s": 2446,
"text": "dict − This is the dictionary."
},
{
"code": null,
"e": 2508,
"s": 2477,
"text": "dict − This is the dictionary."
},
{
"code": null,
"e": 2561,
"s": 2508,
"text": "This method returns the type of the passed variable."
},
{
"code": null,
"e": 2617,
"s": 2561,
"text": "The following example shows the usage of type() method."
},
{
"code": null,
"e": 2713,
"s": 2617,
"text": "#!/usr/bin/python\n\ndict = {'Name': 'Zara', 'Age': 7};\nprint \"Variable Type : %s\" % type (dict)"
},
{
"code": null,
"e": 2771,
"s": 2713,
"text": "When we run above program, it produces following result −"
},
{
"code": null,
"e": 2802,
"s": 2771,
"text": "Variable Type : <type 'dict'>\n"
},
{
"code": null,
"e": 2839,
"s": 2802,
"text": "\n 187 Lectures \n 17.5 hours \n"
},
{
"code": null,
"e": 2855,
"s": 2839,
"text": " Malhar Lathkar"
},
{
"code": null,
"e": 2888,
"s": 2855,
"text": "\n 55 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 2907,
"s": 2888,
"text": " Arnab Chakraborty"
},
{
"code": null,
"e": 2942,
"s": 2907,
"text": "\n 136 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 2964,
"s": 2942,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 2998,
"s": 2964,
"text": "\n 75 Lectures \n 13 hours \n"
},
{
"code": null,
"e": 3026,
"s": 2998,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 3061,
"s": 3026,
"text": "\n 70 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 3075,
"s": 3061,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3108,
"s": 3075,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3125,
"s": 3108,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 3132,
"s": 3125,
"text": " Print"
},
{
"code": null,
"e": 3143,
"s": 3132,
"text": " Add Notes"
}
] |
CASE WHEN in PostgreSQL? | If you are a programmer, you may be very familiar with IF-ELSE statements. The equivalent in PostgreSQL is CASE WHEN.
Let’s understand with an example. If you have table marks containing percentage marks of a student, and you want to find out whether the students have passed or failed. An example table is given below.
Say the passing marks are 40. Now, if the student has scored above 40 marks, we want to print ‘PASS’ against that student’s name, otherwise ‘FAIL’. This is how you can do it −
SELECT name, CASE WHEN perc_marks >= 40 THEN 'PASS' ELSE 'FAIL' END
status from marks
The output will be −
Remember, the END at the end of the CASE WHEN expression is important. You can add multiple WHEN statements. Suppose you want to say that those who scored above 80 marks have the status ‘DISTINCTION’, between 40 and 80, they have status ‘PASS’ and below 40, they have the status ‘FAIL’, you can do that as follows −
SELECT name, CASE
WHEN perc_marks >= 80 THEN 'DISTINCTION'
WHEN perc_marks >= 40 and perc_marks < 80 THEN 'PASS'
ELSE 'FAIL' END status from marks
The output will be − | [
{
"code": null,
"e": 1180,
"s": 1062,
"text": "If you are a programmer, you may be very familiar with IF-ELSE statements. The equivalent in PostgreSQL is CASE WHEN."
},
{
"code": null,
"e": 1382,
"s": 1180,
"text": "Let’s understand with an example. If you have table marks containing percentage marks of a student, and you want to find out whether the students have passed or failed. An example table is given below."
},
{
"code": null,
"e": 1558,
"s": 1382,
"text": "Say the passing marks are 40. Now, if the student has scored above 40 marks, we want to print ‘PASS’ against that student’s name, otherwise ‘FAIL’. This is how you can do it −"
},
{
"code": null,
"e": 1644,
"s": 1558,
"text": "SELECT name, CASE WHEN perc_marks >= 40 THEN 'PASS' ELSE 'FAIL' END\nstatus from marks"
},
{
"code": null,
"e": 1665,
"s": 1644,
"text": "The output will be −"
},
{
"code": null,
"e": 1981,
"s": 1665,
"text": "Remember, the END at the end of the CASE WHEN expression is important. You can add multiple WHEN statements. Suppose you want to say that those who scored above 80 marks have the status ‘DISTINCTION’, between 40 and 80, they have status ‘PASS’ and below 40, they have the status ‘FAIL’, you can do that as follows −"
},
{
"code": null,
"e": 2128,
"s": 1981,
"text": "SELECT name, CASE\nWHEN perc_marks >= 80 THEN 'DISTINCTION'\nWHEN perc_marks >= 40 and perc_marks < 80 THEN 'PASS'\nELSE 'FAIL' END status from marks"
},
{
"code": null,
"e": 2149,
"s": 2128,
"text": "The output will be −"
}
] |
Bypassing Pandas Memory Limitations - GeeksforGeeks | 30 Apr, 2021
Pandas is a Python library used for analyzing and manipulating data sets, but one of its major drawbacks is memory limitation when working with large datasets: since Pandas DataFrames (a two-dimensional data structure) are kept in memory, there is a limit to how much data can be processed at a time.
Dataset in use: train_dataset
Processing large amounts of data in Pandas requires one of the below approaches:
pandas.read_csv() has a parameter called chunksize which is used to load data in chunks. The parameter chunksize is the number of rows read at a time in a file by Pandas. It returns an iterator TextFileReader which needs to be iterated to get the data.
Syntax:
pd.read_csv(‘file_name’, chunksize= size_of_chunk)
Example:
Python3
import pandas as pd

data = pd.read_csv('train_dataset.csv', chunksize=100)

for x in data:
    print(x.shape[0])
Output:
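Each chunk returned by the iterator is itself a DataFrame, so it can be filtered or aggregated piecewise and the results combined — a sketch (the column name 'Age' is borrowed from the dtype example later in this article):

import pandas as pd

chunks = pd.read_csv('train_dataset.csv', chunksize=100)

# Keep only the qualifying rows of each chunk, then stitch them together
filtered = pd.concat(chunk[chunk['Age'] > 30] for chunk in chunks)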
Large datasets have many columns/features but only some of them are actually used. So to save more time for data manipulation and computation, load only useful columns.
Syntax:
dataframe = dataframe[[‘column_1’, ‘column_2’, ‘column_3’, ‘column_4’, ‘column_5’]]
Example :
Python3
import pandas as pd

data = pd.read_csv('train_dataset.csv')

data = data[['Gender', 'Age', 'openness', 'neuroticism',
             'conscientiousness', 'agreeableness', 'extraversion']]

display(data)
Output :
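If the useful columns are known up front, the unused ones never need to be loaded at all — read_csv's usecols parameter does the subsetting while parsing (the column names here are taken from the example above):

import pandas as pd

data = pd.read_csv('train_dataset.csv',
                   usecols=['Gender', 'Age', 'openness'])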
By default, pandas assigns int64 range(which is the largest available dtype) for all numeric values. But if the values in the numeric column are less than int64 range, then lesser capacity dtypes can be used to prevent extra memory allocation as larger dtypes use more memory.
Syntax:
dataframe =pd.read_csv(‘file_name’,dtype={‘col_1’:‘dtype_value’,‘col_2’:‘dtype_value’})
Example :
Python3
import pandas as pd

data = pd.read_csv('train_dataset.csv', dtype={'Age': 'int32'})

print(data.info())
Output :
Pandas Dataframe can be converted to Sparse Dataframe which means that any data matching a specific value is omitted in the representation. The sparse DataFrame allows for more efficient storage.
Syntax:
dataframe = dataframe.astype(pd.SparseDtype("dtype", fill_value))

Note: the older dataframe.to_sparse(fill_value=None, kind='block') API was deprecated and then removed in pandas 1.0, which is why the example below uses astype() with pd.SparseDtype instead.
Since there are no null values in the above dataset, let's create a dataframe with some null values and convert it to a sparse dataframe.
Example :
Python3
import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(10000, 4))
df.iloc[:9998] = np.nan

sdf = df.astype(pd.SparseDtype("float", np.nan))

sdf.head()

sdf.dtypes
Output:
While cleaning/pre-processing data, many temporary data frames and objects are created; these should be deleted after use so that less memory stays allocated. The del keyword is the primary way to delete objects in Python.
Syntax:
del object_name
Example :
Python3
import pandas as pd

data = pd.read_csv('train_dataset.csv')

del data | [
Mealy Machine for 1's Complement

07 Jul, 2021
After converting a number to its binary form, replace every 1 in the number with 0 and every 0 with 1; the resulting number is the 1's complement of that binary number.
Input-1 : 101010
Output-1 : 010101

Input-2 : 1110100
Output-2 : 0001011
Approach :

1. First, make an initial state.
2. Convert each 0 to 1 and each 1 to 0, moving to the next possible state.
3. After performing step 2 on each digit of the binary number, reach a final state to complete the process.
Design Mealy Machine :

Step-1: Take an initial state, say q1. If the input symbol is 0, output 1, remain in the same state and check the next input symbol.

Step-2: If state q1 reads input symbol 1, output 0 and remain in the same state.

Step-3: Since the machine stays in the same state after converting every symbol, this state also serves as the final state.
Example : Suppose the string is 10001 and we parse it from left to right. Every 0 is replaced by 1 and every 1 is replaced by 0, so the output is 01110.
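Because the machine has a single state that simply emits the opposite of each input symbol, it translates directly into code. A minimal Python sketch (the function name ones_complement is mine, not part of the machine definition):

def ones_complement(binary_string):
    # Single-state Mealy machine: on input '0' emit '1',
    # on input '1' emit '0', always remaining in state q1.
    output = []
    for symbol in binary_string:
        output.append('1' if symbol == '0' else '0')
    return ''.join(output)

print(ones_complement('10001'))   # 01110
print(ones_complement('101010'))  # 010101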
The Influence Of Data Scaling On Machine Learning Algorithms

by K.A. | Towards Data Science

Scaling is the act of data preprocessing.
Preprocessing data involves transforming and scaling the data, up or down, before it is used in further steps. Quite often attributes are not expressed in the same standards, scales or measures, to such an extent that their statistics yield distorted data modeling results. For instance, the K-Means Clustering algorithm is not scale invariant; it computes the distance between two points using the Euclidean distance. To refresh one's memory, the Euclidean distance is the non-negative, straight-line distance between two points in space. So, if one of the attributes has a broad range of values, the computed distance will be skewed by this attribute (i.e. an attribute with smaller values will contribute very little). For example, if one of the attributes is measured in centimeters and one then decides to convert the measure to millimeters (i.e. multiplying the values by 10), the resulting Euclidean distance can be significantly affected. To have each attribute contribute approximately proportionately to the final computed distance, the range of that attribute should be normalized.
Normalization has many meanings; in its simplest case, it refers to rescaling an attribute so that gross differences in scale no longer distort comparisons against other attributes.
Attempting to use an analysis technique like Principal Components Regression (PCR) requires all the attributes to be on the same scale. The attributes may have high variance, which would influence the PCR model. Another reason for scaling attributes is computational efficiency; in the case of gradient descent, the function converges much more quickly with normalization than without it.
There are several normalization methods among which the common ones are the Z-score and Min-Max.
Some statistical learning techniques (i.e. linear regression), where scaling the attributes has no effect, may benefit from another preprocessing technique like codifying nominal-valued attributes to some fixed numerical values. For example, arbitrarily giving a gender attribute the value '1' for female and '0' for male. The motivation for that is to allow the attribute to be incorporated into a regression model. Be sure to document the meaning of the codes somewhere.
Choosing the best preprocessing technique — Z-Score or Min-Max?
The short answer is both, depending on the application. Each method has its practical use. The Z-score of an observation is defined as the number of standard deviations it falls above or below the mean; in other words, it expresses distance from the mean in units of spread. As mentioned earlier, the clustering data modeling technique needs normalization, in the sense that it requires computing the Euclidean distance. The Z-score is well suited for, and essential to, comparing similarities between attributes based on a distance measure. The same applies to Principal Components Regression (PCR); in it we are interested in the components that maximize the variance. On the other hand, we have the min-max technique, which transforms the data attributes to a fixed range, typically between 0 and 1. The min-max method takes the functional form y = (x - min(x)) / (max(x) - min(x)), where x is a vector. Example uses are in image processing and neural network algorithms (NNA), because large integer inputs like [0,255] in an NNA can disrupt or slow down the learning process. Min-max normalization changes the range of pixel intensity values of an image [0,255] in an 8-bit RGB color space to the range 0-1 for easy computation.
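To make the two methods concrete, here is a minimal Python sketch of both transformations (the article's own examples below are in R; numpy and the toy vector are my own):

import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])

# Z-score: centre on the mean, scale by the standard deviation
# (np.std uses the population version; R's scale() uses the sample version).
z = (x - x.mean()) / x.std()

# Min-max: rescale to the fixed range [0, 1].
mm = (x - x.min()) / (x.max() - x.min())

print(z)   # mean ~0, standard deviation ~1
print(mm)  # values between 0 and 1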
Learning Data Preprocessing intuitively
Perhaps applying normalization methods on a dataset may shed light on what happens to it; we could visualize the data points transformation to explain it more intuitively as well. So, let’s begin by loading a dataset from UCI Machine Learning Databases. This is a wine dataset that features three classes of wines identified as (1,2,3) in the first column. The data came from an analysis which determined the quantities of 13 constituents found in each of the three types of wines.
df <- read.csv("wine.data", header = F)
wine <- df[1:3]
colnames(wine) <- c('type', 'alcohol', 'malic acid')
wine$type <- as.factor(wine$type)
The wine data was read as a CSV file with no header using read.csv. Appropriate header names were given using the function colnames(). The wine types were also transformed into a factor using as.factor(). These steps are not necessary for normalization but are good general practice.
We selected three attributes including the wine classes, and the two constituents labeled alcohol and malic acid which are measured in different scales. The former constituent is measured by Percent/Volume while the latter is by G/L. If we were to use the two attributes in a clustering algorithm, it would be clear to us that a method of normalization (scaling) is needed. We will first apply the Z-score normalization followed by the min-max method on the wine dataset.
var(wine[,-1])
std.wine <- as.data.frame(scale(wine[,-1]))  # normalize using the Z-score method
var(std.wine)                                # display the variance after the Z-score application
mean(std.wine[,1]) #display the mean of the first attribute
mean(std.wine[,2]) #display the mean of the second attribute
We can see that alcohol and malic acid are standardized, with variance 1 and mean 0.
Note that the reported means are on the order of 10 to the power of -16 and -17 (e-16, e-17), i.e. numbers effectively equal to zero.
Next, we create a min-max function that transforms the data points to a range of values between 0 and 1.
min_max_wine <- as.data.frame(sapply(wine[,-1], function(x) {
  return((x - min(x, na.rm = F)) / (max(x, na.rm = F) - min(x, na.rm = F)))
}))
Plotting all the three different scales of the wine data points as below:
plot(std.wine$alcohol, std.wine$`malic acid`, col = "dark red",
     xlim = c(-5, 20), ylim = c(-2, 7), xlab = 'Alcohol', ylab = 'Malic Acid',
     grid(lwd = 1, nx = 10, ny = 10))
par(new = T)
plot(min_max_wine$alcohol, min_max_wine$`malic acid`, col = "dark blue",
     xlim = c(-5, 20), ylim = c(-2, 7), xlab = '', ylab = '', axes = F)
par(new = T)
plot(wine$alcohol, wine$`malic acid`, col = "dark green",
     xlim = c(-5, 20), ylim = c(-2, 7), xlab = '', ylab = '', axes = F)
legend(-6, 7.5, c("std.wine", "min_max_wine", "input scale"), cex = 0.75,
       bty = "n", fill = c("dark red", "dark blue", "dark green"))
As you could see, there are three data point sets; in the green set the measurement is in the original volume-percent, while the standardized attributes are in red where the data is centered around mean zero and variance of one, and the normalized min-max attributes range between 0–1.
The three sets may seem different in shape; however, if you were to zoom in to each set using its new scale you would notice that, regardless of the overall shape size, the points are still located exactly in the same place relative to each other. These normalization methods have preserved the integrity of the data via scaling.
How to handle the exception using UncaughtExceptionHandler in Java?

The UncaughtExceptionHandler is an interface nested inside the Thread class. When the main thread is about to terminate due to an uncaught exception, the Java Virtual Machine will invoke the thread's UncaughtExceptionHandler for a chance to perform some error handling, like logging the exception to a file or uploading the log to a server, before the thread gets killed. We can set a default exception handler which will be called for all unhandled exceptions. It was introduced in Java 5.
This handler can be set by using the below static method of the java.lang.Thread class.
public static void setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler ueh)
We have to provide an implementation of the interface Thread.UncaughtExceptionHandler, which has only one method.
@FunctionalInterface
public interface UncaughtExceptionHandler {
void uncaughtException(Thread t, Throwable e);
}
public class UncaughtExceptionHandlerTest {
   public static void main(String[] args) throws Exception {
      // Register the default handler before any exception can be thrown.
      Thread.setDefaultUncaughtExceptionHandler(new MyHandler());
      throw new Exception("Test Exception");
   }
   private static final class MyHandler implements Thread.UncaughtExceptionHandler {
      @Override
      public void uncaughtException(Thread t, Throwable e) {
         // Invoked by the JVM with the terminating thread and the uncaught exception.
         System.out.println("The Exception Caught: " + e);
      }
   }
}
The Exception Caught: java.lang.Exception: Test Exception
How to find the confusion matrix for linear discriminant analysis in R?

To find the confusion matrix for linear discriminant analysis in R, we can follow the below steps −
First of all, create a data frame.
Create new features using linear discriminant analysis.
Find the confusion matrix for linear discriminant analysis using the table and predict functions.
Let's create a data frame as shown below −
Group<-sample(c("I","II","III","IV"),25,replace=TRUE)
Score1<-sample(1:10,25,replace=TRUE)
Score2<-sample(1:10,25,replace=TRUE)
Score3<-sample(1:10,25,replace=TRUE)
Score4<-sample(1:10,25,replace=TRUE)
df<-data.frame(Group,Score1,Score2,Score3,Score4)
df
On executing, the above script generates the below output(this output will vary on your system due to randomization) −
Group Score1 Score2 Score3 Score4
1 IV 7 5 2 5
2 III 5 3 2 4
3 III 8 9 4 7
4 IV 6 1 1 5
5 III 8 4 6 8
6 IV 9 2 1 7
7 I 3 2 3 2
8 IV 5 8 3 3
9 II 7 4 4 1
10 IV 5 4 1 10
11 II 3 1 2 4
12 III 3 2 1 7
13 IV 1 4 7 6
14 III 10 8 9 2
15 II 3 7 8 1
16 I 9 2 3 1
17 III 2 7 3 2
18 IV 7 7 1 7
19 IV 2 6 1 3
20 I 4 10 6 1
21 I 1 6 4 4
22 I 6 3 6 2
23 III 6 6 3 5
24 I 2 3 10 10
25 II 4 4 2 5
Use the lda function of the MASS package to find the new features for the data in data frame df −
Group<-sample(c("I","II","III","IV"),25,replace=TRUE)
Score1<-sample(1:10,25,replace=TRUE)
Score2<-sample(1:10,25,replace=TRUE)
Score3<-sample(1:10,25,replace=TRUE)
Score4<-sample(1:10,25,replace=TRUE)
df<-data.frame(Group,Score1,Score2,Score3,Score4)
library(MASS)
LDA_df=lda(Group~.,data=df)
LDA_df
Call:
lda(Group ~ ., data = df)
Prior probabilities of groups:
I II III IV
0.24 0.16 0.28 0.32
Group means:
Score1 Score2 Score3 Score4
I 4.166667 4.333333 5.333333 3.333333
II 4.250000 4.000000 4.000000 2.750000
III 6.000000 5.571429 4.000000 5.000000
IV 5.250000 4.625000 2.125000 5.750000
Coefficients of linear discriminants:
LD1 LD2 LD3
Score1 0.1358158 0.18645755 -0.18790651
Score2 0.2598956 0.15492088 -0.07433529
Score3 -0.3052648 0.25571648 0.14567716
Score4 0.3117567 0.08656138 0.25216169
Proportion of trace:
LD1 LD2 LD3
0.8681 0.1161 0.0159
Create the confusion matrix for the linear discriminant analysis performed above by using the table and predict functions as shown below −
Group<-sample(c("I","II","III","IV"),25,replace=TRUE)
Score1<-sample(1:10,25,replace=TRUE)
Score2<-sample(1:10,25,replace=TRUE)
Score3<-sample(1:10,25,replace=TRUE)
Score4<-sample(1:10,25,replace=TRUE)
df<-data.frame(Group,Score1,Score2,Score3,Score4)
library(MASS)
LDA_df=lda(Group~.,data=df)
table(predict(LDA_df,type="class")$class,df$Group)
I II III IV
I 3 2 1 1
II 2 1 0 0
III 1 0 3 1
IV 0 1 3 6
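For readers working in Python rather than R, an equivalent workflow can be sketched with scikit-learn. This is only an illustration under stated assumptions: random data stands in for the wine scores, and sklearn's LDA replaces MASS::lda.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.integers(1, 11, size=(25, 4))         # four score columns
y = rng.choice(["I", "II", "III", "IV"], 25)  # group labels

lda = LinearDiscriminantAnalysis().fit(X, y)
pred = lda.predict(X)

# Rows/columns are ordered I, II, III, IV.
print(confusion_matrix(y, pred, labels=["I", "II", "III", "IV"]))

In both the R table above and this sketch, the diagonal entries are the correctly classified observations; in the R output, for instance, 3 + 1 + 3 + 6 = 13 of the 25 observations fall on the diagonal.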
Making recommendations using association rules (R Programming)

by Sheenal Srivastava | Towards Data Science

Retailers typically have a wealth of customer transaction data which consists of the type of items purchased by a customer, their value and the date they were purchased. Unless the retailer has a loyalty rewards system, it may not have demographic information on its customers such as height, age, gender and address. Thus, in order to make suggestions on what a customer might want to buy in the future, i.e. which products to recommend to a customer, this has to be based on their purchase history and on information about the purchase history of other customers.
In collaborative filtering, recommendations are made to customers based on finding similarities between the purchase history of customers. So, if Customers A and B both purchase Product A, but customer B also purchases Product B, then it is likely that customer A may also be interested in Product B. This is a very simple example and there are various algorithms that can be used to find out how similar customers are in order to make recommendations.
One such algorithm is k-nearest neighbour where the objective is to find k customers that are most similar to the target customer. It involves choosing a k and a similarity metric (with Euclidean distance being most common). The basis of this algorithm is that points that are closest in space to each other are also likely to be most similar to each other.
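To illustrate, here is a minimal Python sketch of the nearest-neighbour idea (the toy purchase vectors are invented for the example):

import numpy as np

# Rows = customers, columns = products (1 = purchased).
customers = np.array([
    [1, 0, 1, 0],   # target customer
    [1, 1, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])

target = customers[0]
# Euclidean distance from the target to every other customer.
dist = np.linalg.norm(customers[1:] - target, axis=1)

k = 2
nearest = np.argsort(dist)[:k] + 1  # +1 offsets the slice back to row indices
print(nearest, dist)                # rows 1 and 3 are the closest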
Another technique is to use basket analysis or association rules. In this method, the aim is to find out which items are bought together (put in the same basket) and the frequency of these purchases. The output of this algorithm is a series of if-then rules, i.e. if a customer buys a candle, then they are also likely to buy matches. Association rules can assist retailers with the following:
Modifying store layout where associated items are stocked together;
Sending emails to customers with recommendations on products to purchase based on their previous purchase (i.e. we noticed you bought a candle, perhaps these matches may interest you?); and
Insights into customer behaviour
Let’s now apply association rules to a dummy dataset
A dataset of 2,178,282 observations/rows and 16 variables/features was provided.
The first thing I did with this dataset was quickly check for any missing values or NAs as per follows. As shown below, no missing values were found.
Now the variables were all either read in as numeric or string variables. In order to meaningfully interpret categorical variables, they need to be changed to factors. As such, the following changes were made.
retail <- retail %>%
  mutate(MerchCategoryName = as.factor(MerchCategoryName)) %>%
  mutate(CategoryName = as.factor(CategoryName)) %>%
  mutate(SubCategoryName = as.factor(SubCategoryName)) %>%
  mutate(StoreState = as.factor(StoreState)) %>%
  mutate(OrderType = as.factor(OrderType)) %>%
  mutate(BasketID = as.numeric(BasketID)) %>%
  mutate(MerchCategoryCode = as.numeric(MerchCategoryCode)) %>%
  mutate(CategoryCode = as.numeric(CategoryCode)) %>%
  mutate(SubCategoryCode = as.numeric(SubCategoryCode)) %>%
  mutate(ProductName = as.factor(ProductName))
Then, all the numeric variables were summarised into their five-point summary (min, median, max, std dev., and mean) to identify any outliers within the data. By running this summary, it was found that the features MerchCategoryCode, CategoryCode, and SubCategoryCode contained a large number of NAs. Upon further inspection, it was found that the majority of these code values contained digits; however, the ones that had been converted to NAs contained characters such as “Freight” or the letter “C”. As these codes are not related to customer purchases, these observations were removed.
Negative gross sales and negative quantity indicate either erroneous values or customer returns. This may be interesting information; however, it is not related to our objective of analysis and as such these observations were omitted.
It is always a good idea to explore the data to see if you can see any trends or patterns within the dataset. Later on, you can use an algorithm/machine learning model to validate these trends.
The graph below shows me that the highest number of transactions come from Victoria followed by Queensland. If a retailer wants to know where to increase sales then this plot may be useful as the number of sales are proportionately low in all other states.
The below plot shows us that most gross sales values fall around $0-$40 (the median is $37.60).
We can also see this plot by state as below. However, the transactions from Victoria and Queensland seem to cover up information for other states. Boxplots may be better for visualisation.
The below boxplots (though hard to see due to the scale being extended by the outliers) show that most sales across all states are close to the overall median. There is an abnormally high outlier for NT and a couple for VIC. For our purpose, since we are only interested in understanding which products customers buy together in order to make recommendations, we do not need to deal with these outliers.
Now that we have had a look at sales by state. Let’s try and get a better understanding of the products purchased by customers.
The plot below is coloured based on the frequency of purchases per item. Lighter shades of blue indicate higher frequencies.
Some key takeaways are:
No sales for team sports in ACT, NSW, SA, and WA — could be due to these products not being stocked there or perhaps they need to be marketed better
No sales for ski products in ACT, NSW, SA, and WA. I find this quite shocking as NSW and ACT are quite close to some major ski resorts like Thredbo. It is weird that there are ski product sales in QLD which experiences a warm climate throughout the year. Either these products have been mislabelled or they were not stocked in NSW and ACT.
Paint and panel sales in WA only.
Bike sales in VIC only.
Camping and apparel recorded highest sales in VIC, followed by Gas, Fuel and BBQing.
Due to the distribution of sales by product and state, it appears that any association rules we come up with will mainly be based on sales from VIC and QLD. Furthermore, as not all products were stocked/sold in all states, it is expected that the association rules will be limited to a very few number of products. However, since I have already embarked on this mode of analysis, let’s continue to see what we get.
We have two years' worth of data, 2016 and 2017. So, I decided to compare the gross number of sales for the two years.
Despite the higher number of transactions in 2016 (2.5 times more than 2017), mean gross sales were higher for 2017 than 2016. This seems quite counter-intuitive. So, I decided to dive into this deeper by looking at monthly sales.
Year   # of Transactions   Mean Gross Sales ($)
2016   1,481,922           69.0
2017   593,315             86.0
In 2016, the highest number of sales were recorded for January and March with steep declines in September to November and then an increase in December. However, transactions continued to decline in 2017 with an increase in December (Xmas season).
Deduction: As highest number of sales are for Camping, apparel and BBQ & Gas, it makes sense that sales for these products is high during the holiday season
Recommendation to the retailer: May want to explore whether stores have sufficient stock for these products in Dec-Jan as they are the most popular.
Deduction: Despite the steady decline in the number of transactions, mean gross sales continue to increase month on month, peaking in Dec 2017. This indicates that fewer customers made purchases, but the purchases they made were of greater value.
Recommendation: What can the retailer do to ensure a steady stream of purchases throughout the year, rather than an increasing trend with the maximum number of purchases at the end of the year, given that the retailer is still paying overhead costs and employee salaries, amongst other costs, to run its stores?
Let’s go back to our objective.
Aim: To determine which products are customers likely to buy together in order to make recommendations for products
I used the arules package and the read.transactions function to convert the dataset into a transaction object. A summary of this object gives the following output
## transactions as itemMatrix in sparse format with
## 1019952 rows (elements/itemsets/transactions) and
## 21209 columns (items) and a density of 9.531951e-05
##
## most frequent items:
##   GAS BOTTLE REFILL 9KG*         GAS BOTTLE REFILL 4KG*
##                    30628                          11724
##   6 PACK BUTANE - WILD COUNTRY   SNAP HOOK ALUMINIUM GRIPWELL
##                     9209                           7086
##   PEG TENT GALV 225X6.3MM P04G   (Other)
##                     6948                        1996372
##
## element (itemset/transaction) length distribution:
## sizes
##      1      2      3      4      5      6      7      8      9     10
## 546138 234643 109888  55319  30185  16656   9878   6018   3716   2332
##     11     12     13     14     15     16     17     18     19     20
##   1611    993    751    490    353    237    157    140     99     88
##     21     22     23     24     25     26     27     28     29     30
##     53     48     28     31     20     13     12     15      8      1
##     31     32     33     34     35     36     37     38     39     40
##      4      2      4      3      4      1      4      2      1      4
##     43     46
##      1      1
##
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##   1.000   1.000   1.000   2.022   2.000  46.000
##
## includes extended item information - examples:
##   labels
## 1     10
## 2     11
## 3  11/12
Based on the output above, we can conclude the following.
There are 1019952 collections (baskets) of items and 21209 items.
Density measures the percentage of non-zero cells in a sparse matrix. It is the total number of items that are purchased divided by the possible number of items in that matrix. You can calculate how many items were purchased by using density: 1,019,952 x 21,209 x 0.0000953 ≈ 2,061,545.
Element (itemset/transaction) length distribution: This tells you how many transactions there are for 1-itemsets, for 2-itemsets and so on. The first row gives the number of items and the second row gives the number of transactions.
The majority of baskets (87%) consist of between 1 and 3 items.
Minimum number of items in a basket = 1 and maximum = 46 (only one basket)
Most popular items are gas bottle, gas bottle refill, gripwell, and peg tent.
We can look at this information graphically via absolute frequency and relative frequency plots.
Both plots are in descending order of frequency of purchase. The absolute frequency plot tells us that the highest number of sales are for gas related products. The relative frequency plot shows how the sales of the products that are close to each other in the bar chart are related to each other (i.e. relative). Thus, a recommendation that one can make to the retailer is to stock these products together in the store or send customers an EDM making recommendations for products that are related in the plot and have not yet been purchased by the customer.
The next step is to generate rules for our transaction object. The output is as follows.
## Apriori
##
## Parameter specification:
##  confidence minval smax arem  aval originalSupport maxtime support minlen
##         0.5    0.1    1 none FALSE            TRUE       5   0.001      1
##  maxlen target  ext
##      10  rules FALSE
##
## Algorithmic control:
##  filter tree heap memopt load sort verbose
##     0.1 TRUE TRUE  FALSE TRUE    2    TRUE
##
## Absolute minimum support count: 1019
##
## set item appearances ...[0 item(s)] done [0.00s].
## set transactions ...[21209 item(s), 1019952 transaction(s)] done [2.52s].
## sorting and recoding items ... [317 item(s)] done [0.04s].
## creating transaction tree ... done [0.84s].
## checking subsets of size 1 2 done [0.04s].
## writing ... [7 rule(s)] done [0.00s].
## creating S4 object ... done [0.25s].
The above output shows us that 7 rules were generated.
Details of these rules are shown below.
## set of 7 rules
##
## rule length distribution (lhs + rhs): sizes
##  2
##  7
##
##  Min. 1st Qu. Median Mean 3rd Qu. Max.
##     2       2      2    2       2    2
##
## summary of quality measures:
##     support           confidence          lift             count
##  Min.   :0.001128   Min.   :0.5458   Min.   : 26.30   Min.   :1150
##  1st Qu.:0.001464   1st Qu.:0.6395   1st Qu.: 80.36   1st Qu.:1493
##  Median :0.001650   Median :0.6634   Median :154.58   Median :1683
##  Mean   :0.001652   Mean   :0.6759   Mean   :154.48   Mean   :1685
##  3rd Qu.:0.001668   3rd Qu.:0.7265   3rd Qu.:245.30   3rd Qu.:1701
##  Max.   :0.002524   Max.   :0.7898   Max.   :249.14   Max.   :2574
##
## mining info:
##  data ntransactions support confidence
##    tr       1019952   0.001        0.5
Now each of these rules has support, confidence, and lift values.
Let’s start with support, which is the proportion of transactions, out of all transactions used to generate the rules (i.e. 1,019,952), that contain the two items together (e.g. 1150/1019952 = 0.0011, or 0.11%), where count is the number of transactions that contain the two items.
Confidence is the proportion of transactions where two items are bought together out of all transactions where one of the items is purchased. As these are apriori rules, the probability of buying item B is conditioned on the purchase of item A.
Mathematically, this looks like the following:
Confidence(A=>B) = P(A∩B) / P(A) = frequency(A,B) / frequency(A)
In the results above, confidence values range from 54% to 79%.
The probability of customers buying these items together ranges from 54% to 79% confidence, and buying item A has a positive effect on buying item B (as the lift values are all greater than 1).
Note: When I ran the algorithm, I experimented with higher support and confidence values, since the more transactions in the dataset where two items are bought together, the higher the confidence. However, when I ran the algorithm with 80% or more confidence, I obtained zero rules.
This was expected due to the sparsity in data for frequent items where 1-item baskets are most common and the majority of purchased items related to camping or gas products.
Thus, the algorithm was run with the following parameters.
association.rules <- apriori(tr, parameter = list(supp=0.001, conf=0.5,maxlen=10))
Lift indicates how strongly two items are associated with each other. A lift value greater than 1 indicates that buying item A makes a purchase of item B more likely. Mathematically, lift is calculated as follows.
Lift(A=>B) = Supp(A,B) / (Supp(A) * Supp(B))
All our rules have lift values well above 1, indicating that buying item A is likely to lead to a purchase of item B.
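As a concrete illustration of these three measures, here is a minimal Python sketch (the toy baskets are invented, not the retailer's data) that computes support, confidence and lift for the rule {A} => {B}:

baskets = [
    {"A", "B"}, {"A", "B"}, {"A", "B"}, {"A"}, {"C"},
]
n = len(baskets)

freq_a = sum("A" in b for b in baskets)
freq_b = sum("B" in b for b in baskets)
freq_ab = sum({"A", "B"} <= b for b in baskets)

support = freq_ab / n                            # P(A and B)  -> 0.6
confidence = freq_ab / freq_a                    # P(B | A)    -> 0.75
lift = support / ((freq_a / n) * (freq_b / n))   #             -> 1.25

print(support, confidence, lift)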
Let’s now inspect the rules.
##     lhs                                       rhs                             support     confidence lift      count
## [1] {GAS BOTTLE 9KG POL CODE 2 DC}         => {GAS BOTTLE REFILL 9KG*}        0.001650078 0.7897701   26.30036 1683
## [2] {WEBER BABY Q (Q1000) ROASTING TRIVET} => {WEBER BABY Q CONVECTION TRAY}  0.001127504 0.6526674  241.45428 1150
## [3] {GAS BOTTLE 2KG CODE 4 DC}             => {GAS BOTTLE REFILL 2KG*}        0.001344181 0.7308102  154.58137 1371
## [4] {GAS BOTTLE 4KG POL CODE 2 DC}         => {GAS BOTTLE REFILL 4KG*}        0.001583408 0.7222719   62.83544 1615
## [5] {YTH L J PP THERMAL OE}                => {YTH LS TOP PP THERMAL OE}      0.001667726 0.6634165  249.13587 1701
## [6] {YTH LS TOP PP THERMAL OE}             => {YTH L J PP THERMAL OE}         0.001667726 0.6262887  249.13587 1701
## [7] {UNI L J PP THERMAL OE}                => {UNI L S TOP PP THERMAL OE}     0.002523648 0.5458015   97.88840 2574
Interpretation of the first rule is as follows:
If a customer buys the 9kg gas bottle, there is a 79% chance that customer will also buy its refill. This is identified for 1,683 transactions in the dataset.
Now, let’s look at these plots visually.
All rules have a confidence value greater than 0.5 with lift ranging from 26 to 249.
The Parallel coordinates plot for the seven rules shows how the purchase of one product influences the purchase of another product. RHS is the item we propose the customer buy. For LHS, 2 is the most recent addition to the basket and 1 is the item that the customer previously purchased.
Looking at the first arrow we can see that if a customer has Weber Baby (Q1000) roasting trivet in their basket, then they are likely to purchase weber babgy q convection tray.
The below plots would be more useful if we could visualize more than 2-itemset baskets.
You have now learnt how to make recommendations to customers based on which items are most frequently purchased together based on apriori rules. However, some important things to note about this analysis.
The most popular/frequent items have confounded the analysis to some extent: it appears that we can only make recommendations with confidence for seven association rules. This is due to the uneven distribution of item frequencies across baskets.
Customer segmentation may be another approach for this dataset where customers are grouped by spend (SalesGross), product type (i.e. CategoryCode), StateStore, and time of sale (i.e. Month/Year). However, it would be useful to have more features on customers to do this effectively.
Reference: https://www.datacamp.com/community/tutorials/market-basket-analysis-r
Code and dataset: https://github.com/shedoesdatascience/basketanalysis
"text": "I used the arules package and the read.transactions function to convert the dataset into a transaction object. A summary of this object gives the following output"
},
{
"code": null,
"e": 9854,
"s": 8418,
"text": "## transactions as itemMatrix in sparse format with## 1019952 rows (elements/itemsets/transactions) and## 21209 columns (items) and a density of 9.531951e-05 ## ## most frequent items:## GAS BOTTLE REFILL 9KG* GAS BOTTLE REFILL 4KG* ## 30628 11724 ## 6 PACK BUTANE - WILD COUNTRY SNAP HOOK ALUMINIUM GRIPWELL ## 9209 7086 ## PEG TENT GALV 225X6.3MM P04G (Other) ## 6948 1996372 ## ## element (itemset/transaction) length distribution:## sizes## 1 2 3 4 5 6 7 8 9 10 ## 546138 234643 109888 55319 30185 16656 9878 6018 3716 2332 ## 11 12 13 14 15 16 17 18 19 20 ## 1611 993 751 490 353 237 157 140 99 88 ## 21 22 23 24 25 26 27 28 29 30 ## 53 48 28 31 20 13 12 15 8 1 ## 31 32 33 34 35 36 37 38 39 40 ## 4 2 4 3 4 1 4 2 1 4 ## 43 46 ## 1 1 ## ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 1.000 1.000 1.000 2.022 2.000 46.000 ## ## includes extended item information - examples:## labels## 1 10## 2 11## 3 11/12"
},
{
"code": null,
"e": 9912,
"s": 9854,
"text": "Based on the output above, we can conclude the following."
},
{
"code": null,
"e": 9978,
"s": 9912,
"text": "There are 1019952 collections (baskets) of items and 21209 items."
},
{
"code": null,
"e": 10255,
"s": 9978,
"text": "Density measures the percentage of non-zero cells in a sparse matrix. It is the total number of items that are purchased divided by the possible number of items in that matrix. You can calculate how many items were purchased by using density: 1019952212090.0000953 = 2,061,545"
},
{
"code": null,
"e": 10508,
"s": 10255,
"text": "Element (itemset/transaction) length distribution: This tells you you how many transactions are there for 1-itemset, for 2-itemset and so on. The first row is telling you the number of items and the second row is telling you the number of transactions."
},
{
"code": null,
"e": 10567,
"s": 10508,
"text": "Majority of baskets (87%) consist of between 1 to 3 items."
},
{
"code": null,
"e": 10642,
"s": 10567,
"text": "Minimum number of items in a basket = 1 and maximum = 46 (only one basket)"
},
{
"code": null,
"e": 10720,
"s": 10642,
"text": "Most popular items are gas bottle, gas bottle refill, gripwell, and peg tent."
},
{
"code": null,
"e": 10817,
"s": 10720,
"text": "We can look at this information graphically via absolute frequency and relative frequency plots."
},
{
"code": null,
"e": 11376,
"s": 10817,
"text": "Both plots are in descending order of frequency of purchase. The absolute frequency plot tells us that the highest number of sales are for gas related products. The relative frequency plot shows how the sales of the products that are close to each other in the bar chart are related to each other (i.e. relative). Thus, a recommendation that one can make to the retailer is to stock these products together in the store or send customers an EDM making recommendations for products that are related in the plot and have not yet been purchased by the customer."
},
{
"code": null,
"e": 11471,
"s": 11376,
"text": "The next step to do is to generate rules for our transaction object. The output is as follows."
},
{
"code": null,
"e": 12232,
"s": 11471,
"text": "## Apriori## ## Parameter specification:## confidence minval smax arem aval originalSupport maxtime support minlen## 0.5 0.1 1 none FALSE TRUE 5 0.001 1## maxlen target ext## 10 rules FALSE## ## Algorithmic control:## filter tree heap memopt load sort verbose## 0.1 TRUE TRUE FALSE TRUE 2 TRUE## ## Absolute minimum support count: 1019 ## ## set item appearances ...[0 item(s)] done [0.00s].## set transactions ...[21209 item(s), 1019952 transaction(s)] done [2.52s].## sorting and recoding items ... [317 item(s)] done [0.04s].## creating transaction tree ... done [0.84s].## checking subsets of size 1 2 done [0.04s].## writing ... [7 rule(s)] done [0.00s].## creating S4 object ... done [0.25s]."
},
{
"code": null,
"e": 12287,
"s": 12232,
"text": "The above output shows us that 7 rules were generated."
},
{
"code": null,
"e": 12327,
"s": 12287,
"text": "Details of these rules are shown below."
},
{
"code": null,
"e": 13139,
"s": 12327,
"text": "## set of 7 rules## ## rule length distribution (lhs + rhs):sizes## 2 ## 7 ## ## Min. 1st Qu. Median Mean 3rd Qu. Max. ## 2 2 2 2 2 2 ## ## summary of quality measures:## support confidence lift count ## Min. :0.001128 Min. :0.5458 Min. : 26.30 Min. :1150 ## 1st Qu.:0.001464 1st Qu.:0.6395 1st Qu.: 80.36 1st Qu.:1493 ## Median :0.001650 Median :0.6634 Median :154.58 Median :1683 ## Mean :0.001652 Mean :0.6759 Mean :154.48 Mean :1685 ## 3rd Qu.:0.001668 3rd Qu.:0.7265 3rd Qu.:245.30 3rd Qu.:1701 ## Max. :0.002524 Max. :0.7898 Max. :249.14 Max. :2574 ## ## mining info:## data ntransactions support confidence## tr 1019952 0.001 0.5"
},
{
"code": null,
"e": 13206,
"s": 13139,
"text": "Now each of these rules have support, confidence, and lift values."
},
{
"code": null,
"e": 13483,
"s": 13206,
"text": "Let’s start with support which is the proportion of transactions out of all transactions used to generate the rules (i.e. 1,019,952) that contain the two items together (i.e. 1190/1019952 = 0.0011 or 0.11%, where count is the number of transactions that contain the two items."
},
{
"code": null,
"e": 13721,
"s": 13483,
"text": "Confidence is the proportion of transactions where two items are bought together out of all transactions where one of the item is purchased. As these are apriori rules, the probability of buying item B is based on the purchase of item A."
},
{
"code": null,
"e": 13768,
"s": 13721,
"text": "Mathematically, this looks like the following:"
},
{
"code": null,
"e": 13833,
"s": 13768,
"text": "Confidence(A=>B) = P(A∩B) / P(A) = frequency(A,B) / frequency(A)"
},
{
"code": null,
"e": 13896,
"s": 13833,
"text": "In the results above, confidence values range from 54% to 79%."
},
{
"code": null,
"e": 14084,
"s": 13896,
"text": "Probability of customers buying items together with confidence ranges from 54% to 79%, where buying item A has a positive effect on buying item B (as lift values are all greater than 1) ."
},
{
"code": null,
"e": 14394,
"s": 14084,
"text": "Note: When I ran the algorithm, I experimented with higher support and confidence values as if there is a greater number of transactions within the dataset where two items are bought together then the higher the confidence. However, when I ran the algorithm with 80% or more confidence, I obtained zero rules."
},
{
"code": null,
"e": 14568,
"s": 14394,
"text": "This was expected due to the sparsity in data for frequent items where 1-item baskets are most common and the majority of purchased items related to camping or gas products."
},
{
"code": null,
"e": 14627,
"s": 14568,
"text": "Thus, the algorithm was run with the following parameters."
},
{
"code": null,
"e": 14710,
"s": 14627,
"text": "association.rules <- apriori(tr, parameter = list(supp=0.001, conf=0.5,maxlen=10))"
},
{
"code": null,
"e": 14912,
"s": 14710,
"text": "Lift indicates how two items are correlated to each other. A positive lift value indicates that buying item A is likely to result in a purchase of item B. Mathematically, lift is calculated as follows."
},
{
"code": null,
"e": 14956,
"s": 14912,
"text": "Lift(A=>B) = Support / (Supp(A) * Supp(B) )"
},
{
"code": null,
"e": 15069,
"s": 14956,
"text": "All our rules have positive lift values indicating that buying item A is likely to lead to a purchase of item B."
},
{
"code": null,
"e": 15098,
"s": 15069,
"text": "Let’s now inspect the rules."
},
{
"code": null,
"e": 15964,
"s": 15098,
"text": "lhs rhs support confidence lift count## [1] {GAS BOTTLE 9KG POL CODE 2 DC} => {GAS BOTTLE REFILL 9KG*} 0.001650078 0.7897701 26.30036 1683## [2] {WEBER BABY Q (Q1000) ROASTING TRIVET} => {WEBER BABY Q CONVECTION TRAY} 0.001127504 0.6526674 241.45428 1150## [3] {GAS BOTTLE 2KG CODE 4 DC} => {GAS BOTTLE REFILL 2KG*} 0.001344181 0.7308102 154.58137 1371## [4] {GAS BOTTLE 4KG POL CODE 2 DC} => {GAS BOTTLE REFILL 4KG*} 0.001583408 0.7222719 62.83544 1615## [5] {YTH L J PP THERMAL OE} => {YTH LS TOP PP THERMAL OE} 0.001667726 0.6634165 249.13587 1701## [6] {YTH LS TOP PP THERMAL OE} => {YTH L J PP THERMAL OE} 0.001667726 0.6262887 249.13587 1701## [7] {UNI L J PP THERMAL OE} => {UNI L S TOP PP THERMAL OE} 0.002523648 0.5458015 97.88840 2574"
},
{
"code": null,
"e": 16012,
"s": 15964,
"text": "Interpretation of the first rule is as follows:"
},
{
"code": null,
"e": 16171,
"s": 16012,
"text": "If a customer buys the 9kg gas bottle, there is a 79% chance that customer will also buy its refill. This is identified for 1,683 transactions in the dataset."
},
{
"code": null,
"e": 16212,
"s": 16171,
"text": "Now, let’s look at these plots visually."
},
{
"code": null,
"e": 16297,
"s": 16212,
"text": "All rules have a confidence value greater than 0.5 with lift ranging from 26 to 249."
},
{
"code": null,
"e": 16585,
"s": 16297,
"text": "The Parallel coordinates plot for the seven rules shows how the purchase of one product influences the purchase of another product. RHS is the item we propose the customer buy. For LHS, 2 is the most recent addition to the basket and 1 is the item that the customer previously purchased."
},
{
"code": null,
"e": 16762,
"s": 16585,
"text": "Looking at the first arrow we can see that if a customer has Weber Baby (Q1000) roasting trivet in their basket, then they are likely to purchase weber babgy q convection tray."
},
{
"code": null,
"e": 16850,
"s": 16762,
"text": "The below plots would be more useful if we could visualize more than 2-itemset baskets."
},
{
"code": null,
"e": 17055,
"s": 16850,
"text": "You have now learnt how to make recommendations to customers based on which items are most frequently purchased together based on apriori rules. However, some important things to note about this analysis."
},
{
"code": null,
"e": 17338,
"s": 17055,
"text": "The most popular/frequent items have confounded the analysis to some extent where it appears that we can only make recommendations with respect to only seven association rules with confidence. This is due to the uneven distribution of the number of items by frequency in the basket."
},
{
"code": null,
"e": 17621,
"s": 17338,
"text": "Customer segmentation may be another approach for this dataset where customers are grouped by spend (SalesGross), product type (i.e. CategoryCode), StateStore, and time of sale (i.e. Month/Year). However, it would be useful to have more features on customers to do this effectively."
},
{
"code": null,
"e": 17702,
"s": 17621,
"text": "Reference: https://www.datacamp.com/community/tutorials/market-basket-analysis-r"
}
] |
Arduino - Stepper Motor | A Stepper Motor or a step motor is a brushless, synchronous motor, which divides a full rotation into a number of steps. Unlike a brushless DC motor, which rotates continuously when a fixed DC voltage is applied to it, a step motor rotates in discrete step angles.
Stepper motors are therefore manufactured with steps per revolution of 12, 24, 72, 144, 180, and 200, resulting in stepping angles of 30, 15, 5, 2.5, 2, and 1.8 degrees per step. The stepper motor can be controlled with or without feedback.
Imagine a motor on an RC airplane. The motor spins very fast in one direction or another. You can vary the speed with the amount of power given to the motor, but you cannot tell the propeller to stop at a specific position.
Now imagine a printer. There are lots of moving parts inside a printer, including motors. One such motor acts as the paper feed, spinning rollers that move the piece of paper as ink is being printed on it. This motor needs to be able to move the paper an exact distance to be able to print the next line of text or the next line of an image.
There is another motor attached to a threaded rod that moves the print head back and forth. Again, that threaded rod needs to be moved an exact amount to print one letter after another. This is where the stepper motors come in handy.
A regular DC motor spins in only one direction, whereas a stepper motor can spin in precise increments.
Stepper motors can turn an exact amount of degrees (or steps) as desired. This gives you total control over the motor, allowing you to move it to an exact location and hold that position. It does so by powering the coils inside the motor for very short periods of time. The disadvantage is that you have to power the motor all the time to keep it in the position that you desire.
All you need to know for now is that, to move a stepper motor, you tell it to move a certain number of steps in one direction or the other, and tell it the speed at which to step in that direction. There are numerous varieties of stepper motors. The methods described here can be used to infer how to use other motors and drivers which are not mentioned in this tutorial. However, it is always recommended that you consult the datasheets and guides of the motors and drivers specific to the models you have.
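As a quick illustration of the step-and-angle arithmetic, here is a minimal sketch; the 200 steps-per-revolution figure and the stepsForDegrees helper are assumptions for illustration only, not part of the wiring example that follows.
// Sketch only: convert a desired angle into whole motor steps.
// Assumes a motor rated at 200 steps per revolution (1.8 degrees per step).
const int STEPS_PER_REV = 200;

long stepsForDegrees(float degrees) {
   // One full revolution (360 degrees) equals STEPS_PER_REV steps,
   // so scale the requested angle accordingly.
   return (long)(degrees / 360.0 * STEPS_PER_REV);
}

// Example: myStepper.step(stepsForDegrees(90)); // quarter turn clockwise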
You will need the following components −
1 × Arduino UNO board
1 × small bipolar stepper Motor as shown in the image given below
1 × LM298 driving IC
Follow the circuit diagram and make the connections as shown in the image given below.
Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking New.
/* Stepper Motor Control */
#include <Stepper.h>
const int stepsPerRevolution = 90;
// change this to fit the number of steps per revolution
// for your motor
// initialize the stepper library on pins 8 through 11:
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11);
void setup() {
// set the speed at 5 rpm:
myStepper.setSpeed(5);
// initialize the serial port:
Serial.begin(9600);
}
void loop() {
// step one revolution in one direction:
Serial.println("clockwise");
myStepper.step(stepsPerRevolution);
delay(500);
// step one revolution in the other direction:
Serial.println("counterclockwise");
myStepper.step(-stepsPerRevolution);
delay(500);
}
This program drives a unipolar or bipolar stepper motor. The motor is attached to digital pins 8 - 11 of Arduino.
The motor will take one revolution in one direction, then one revolution in the other direction. | [
{
"code": null,
"e": 3135,
"s": 2870,
"text": "A Stepper Motor or a step motor is a brushless, synchronous motor, which divides a full rotation into a number of steps. Unlike a brushless DC motor, which rotates continuously when a fixed DC voltage is applied to it, a step motor rotates in discrete step angles."
},
{
"code": null,
"e": 3380,
"s": 3135,
"text": "The Stepper Motors therefore are manufactured with steps per revolution of 12, 24, 72, 144, 180, and 200, resulting in stepping angles of 30, 15, 5, 2.5, 2, and 1.8 degrees per step. The stepper motor can be controlled with or without feedback."
},
{
"code": null,
"e": 3604,
"s": 3380,
"text": "Imagine a motor on an RC airplane. The motor spins very fast in one direction or another. You can vary the speed with the amount of power given to the motor, but you cannot tell the propeller to stop at a specific position."
},
{
"code": null,
"e": 3946,
"s": 3604,
"text": "Now imagine a printer. There are lots of moving parts inside a printer, including motors. One such motor acts as the paper feed, spinning rollers that move the piece of paper as ink is being printed on it. This motor needs to be able to move the paper an exact distance to be able to print the next line of text or the next line of an image."
},
{
"code": null,
"e": 4180,
"s": 3946,
"text": "There is another motor attached to a threaded rod that moves the print head back and forth. Again, that threaded rod needs to be moved an exact amount to print one letter after another. This is where the stepper motors come in handy."
},
{
"code": null,
"e": 4279,
"s": 4180,
"text": "A regular DC motor spins in only direction whereas a Stepper motor can spin in precise increments."
},
{
"code": null,
"e": 4659,
"s": 4279,
"text": "Stepper motors can turn an exact amount of degrees (or steps) as desired. This gives you total control over the motor, allowing you to move it to an exact location and hold that position. It does so by powering the coils inside the motor for very short periods of time. The disadvantage is that you have to power the motor all the time to keep it in the position that you desire."
},
{
"code": null,
"e": 5167,
"s": 4659,
"text": "All you need to know for now is that, to move a stepper motor, you tell it to move a certain number of steps in one direction or the other, and tell it the speed at which to step in that direction. There are numerous varieties of stepper motors. The methods described here can be used to infer how to use other motors and drivers which are not mentioned in this tutorial. However, it is always recommended that you consult the datasheets and guides of the motors and drivers specific to the models you have."
},
{
"code": null,
"e": 5208,
"s": 5167,
"text": "You will need the following components −"
},
{
"code": null,
"e": 5230,
"s": 5208,
"text": "1 × Arduino UNO board"
},
{
"code": null,
"e": 5296,
"s": 5230,
"text": "1 × small bipolar stepper Motor as shown in the image given below"
},
{
"code": null,
"e": 5317,
"s": 5296,
"text": "1 × LM298 driving IC"
},
{
"code": null,
"e": 5404,
"s": 5317,
"text": "Follow the circuit diagram and make the connections as shown in the image given below."
},
{
"code": null,
"e": 5550,
"s": 5404,
"text": "Open the Arduino IDE software on your computer. Coding in the Arduino language will control your circuit. Open a new sketch File by clicking New."
},
{
"code": null,
"e": 6242,
"s": 5550,
"text": "/* Stepper Motor Control */\n\n#include <Stepper.h>\nconst int stepsPerRevolution = 90;\n// change this to fit the number of steps per revolution\n// for your motor\n// initialize the stepper library on pins 8 through 11:\nStepper myStepper(stepsPerRevolution, 8, 9, 10, 11);\n\nvoid setup() {\n // set the speed at 60 rpm:\n myStepper.setSpeed(5);\n // initialize the serial port:\n Serial.begin(9600);\n}\n\nvoid loop() {\n // step one revolution in one direction:\n Serial.println(\"clockwise\");\n myStepper.step(stepsPerRevolution);\n delay(500);\n // step one revolution in the other direction:\n Serial.println(\"counterclockwise\");\n myStepper.step(-stepsPerRevolution);\n delay(500);\n}"
},
{
"code": null,
"e": 6356,
"s": 6242,
"text": "This program drives a unipolar or bipolar stepper motor. The motor is attached to digital pins 8 - 11 of Arduino."
},
{
"code": null,
"e": 6453,
"s": 6356,
"text": "The motor will take one revolution in one direction, then one revolution in the other direction."
},
{
"code": null,
"e": 6488,
"s": 6453,
"text": "\n 65 Lectures \n 6.5 hours \n"
},
{
"code": null,
"e": 6499,
"s": 6488,
"text": " Amit Rana"
},
{
"code": null,
"e": 6532,
"s": 6499,
"text": "\n 43 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 6543,
"s": 6532,
"text": " Amit Rana"
},
{
"code": null,
"e": 6576,
"s": 6543,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 6589,
"s": 6576,
"text": " Ashraf Said"
},
{
"code": null,
"e": 6624,
"s": 6589,
"text": "\n 19 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6637,
"s": 6624,
"text": " Ashraf Said"
},
{
"code": null,
"e": 6669,
"s": 6637,
"text": "\n 11 Lectures \n 47 mins\n"
},
{
"code": null,
"e": 6682,
"s": 6669,
"text": " Ashraf Said"
},
{
"code": null,
"e": 6713,
"s": 6682,
"text": "\n 9 Lectures \n 41 mins\n"
},
{
"code": null,
"e": 6726,
"s": 6713,
"text": " Ashraf Said"
},
{
"code": null,
"e": 6733,
"s": 6726,
"text": " Print"
},
{
"code": null,
"e": 6744,
"s": 6733,
"text": " Add Notes"
}
] |
Getting the Most out of scikit-learn Pipelines | by Jessica Miles | Towards Data Science | Pipelines are extremely useful and versatile objects in the scikit-learn package. They can be nested and combined with other sklearn objects to create repeatable and easily customizable data transformation and modeling workflows.
One of the most useful things you can do with a Pipeline is to chain data transformation steps together with an estimator (model) at the end. You can then pass this composite estimator to a GridSearchCV object and search over parameters for transformation as well as model hyper-parameters in one shot. But it takes a bit of practice to learn how to construct these objects, as well as how to get and set properties in the different levels.
This post will use an NLP classification example to demonstrate how to combine a ColumnTransformer with a Pipeline and GridSearchCV. We’ll cover some specific techniques and tips, such as how to:
Use a ColumnTransformer as a Pipeline step to transform different types of data columns in different ways
Build a composite pipeline that can be used to grid search parameters for both data transformation AND modeling at the same time
Specify complex sets of grid search parameters to be applied together (such as binary text vectorization OR Tf-Idf normalization, but not both )
Bypass or skip a step in the Pipeline using passthrough
Use set_params() to set individual parameters in the Pipeline on the fly to test them outside GridSearchCV
Retrieve feature names from the depths of the best estimator, for interpretation
I’ll use example data that I put together for a previous project, consisting of article text and metadata for a set of articles that The New York Times posted to Facebook.
The data has already been cleaned, but still needs to be transformed prior to modeling. This is a good example for us, because there are several different types of features that will need to be transformed in different ways:
article_text: Text to be tokenized and vectorized
topics: A column containing a list of applicable topics for each article, to be converted into individual features per topic
The rest are categorical features that we will need to one-hot encode
The target is already binarized here, with 0 indicating reader had low engagement with the Facebook post, and 1 indicating high engagement. The goal of this example classification problem will be to predict engagement level.
To prepare, we’ll assign the different sets of column names to variables for flexibility, and separate our data into train and test sets.
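A minimal sketch of that setup (the column-group values come from the get_params() output shown later in this post; the DataFrame name, target column, and train_test_split settings are assumptions):

from sklearn.model_selection import train_test_split

# column groups referenced by the transformers below
text_col = 'article_text'  # free text to vectorize
topic_col = 'topics'       # pre-tokenized lists of topics
cat_cols = ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend']

X = df.drop(columns='engagement')  # assumption: 'engagement' is the target column
y = df['engagement']
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)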
First, we’re going to create a ColumnTransformer to transform the data for modeling. We’ll use ColumnTransformer for this instead of a Pipeline because it allows us to specify different transformation steps for different columns, but results in a single matrix of features.
As a reminder (or introduction, if you haven’t used them before) regular Pipelines take a list of tuples as input, where the first value in each tuple is the step name, and the second value is the estimator object.
pipe = Pipeline([ ('vect', CountVectorizer()), ('clf', LogisticRegression())])
ColumnTransformers are built similarly to Pipelines, except you include a third value in each tuple representing the columns to be transformed in that step. Since our data is in a DataFrame, we can pass strings and lists of strings representing the DataFrame column names. If your data is stored in an array, you can pass a column index or array of column indices.
I’ve already stored the names of the different categories of columns as variables, so they can be dynamically passed to the ColumnTransformer like so:
cols_trans = ColumnTransformer([
    ('txt', TfidfVectorizer(), text_col),
    ('txt_kw', CountVectorizer(), topic_col),
    ('ohe', OneHotEncoder(drop='first'), cat_cols)],
    remainder='drop')
Pipeline steps are executed serially, where the output from the first step is passed to the second step, and so on. ColumnTransformers are different in that each step is executed separately, and the transformed features are concatenated at the end. This saves us from having to do the concatenation ourselves, and will also make it easy to get the full list of feature names when we’re ready to interpret our best model.
By default, any columns you pass into the ColumnTransformer that aren’t specified to be transformed will be dropped (remainder='drop'). If you have columns that you want to include but do not need to be transformed, specify remainder='passthrough'. We’ll see the passthrough parameter again later, as it can be used in other contexts in sklearn to skip or bypass a processing step.
Here’s the full code for this step in our example workflow, with explanation below:
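Here is a sketch of this step consistent with the explanation below; the no_analyzer function name matches the one visible in the get_params() output later in the post, but its body is my reconstruction:

from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.preprocessing import OneHotEncoder

def no_analyzer(doc):
    # topics arrive as ready-made lists of tokens; return them unchanged
    return doc

cols_trans = ColumnTransformer([
    ('txt', TfidfVectorizer(), text_col),
    ('txt_kw', CountVectorizer(analyzer=no_analyzer), topic_col),
    ('ohe', OneHotEncoder(drop='first'), cat_cols)],
    remainder='drop')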
text_col will be transformed using a TfidfVectorizer, which will tokenize the text of each document and create vectors to form a document-term matrix. We will be able to specify many different options for this transformation that will affect final model performance, such as stop words to remove, size of n-grams to generate, maximum number of features to include, and how to normalize the token counts in the document-term matrix. However, since we’re going to specify different options for those parameters to be tried in our grid search, we’ll just create a vanilla instance of the transformer for now.
topic_col is already a list of topics in a single column, from which we want to create a binary document-term matrix. We’ll use CountVectorizer, and give it a custom analyzer that doesn’t do anything. The default analyzer usually performs preprocessing, tokenizing, and n-grams generation and outputs a list of tokens, but since we already have a list of tokens, we’ll just pass them through as-is, and CountVectorizer will return a document-term matrix of the existing topics without tokenizing them further.
cat_cols consists of our categorical columns, which we will one-hot encode using OneHotEncoder. The only parameter we’ll specify is to drop the first category in each column, since we’ll be using a regression model.
Tip: Both TfidfVectorizer and CountVectorizer expect a 1-D array, so the column name needs to be passed to ColumnTransformer as a string and not as a list, even if the list has only a single entry. If you give either of these transformers a list, you will get an error referring to incompatible row dimensions. Most other sklearn transformers expect a 2-D array (such as OneHotEncoder), so even if you’re only transforming a single column, you need to pass a list.
Great, so now we have cols_trans, a ColumnTransformer object that will output a single feature matrix with our transformed data.
Next, we’ll create a Pipeline where cols_trans is the first step, and a Logistic Regression classifier is the second step.
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression

pipe = Pipeline([
    ('trans', cols_trans),
    ('clf', LogisticRegression(max_iter=300, class_weight='balanced'))])
If we called pipe.fit(X_train, y_train), we would be transforming our X_train data and fitting the Logistic Regression model to it in a single step. Note that we can simply use fit(), and don’t need to do anything special to specify we want to both fit AND transform the data in the first step; the pipeline will know what to do.
Once you start nesting Pipelines and other objects, you want to refresh yourself on how the steps will be executed. One way to do this is to set sklearn’s display parameter to 'diagram' to show an HTML representation when you call display() on the pipeline object itself. The HTML will be interactive in a Jupyter Notebook, and you can click on each step to expand it and see its current parameters.
from sklearn import set_config

set_config(display='diagram')

# with display='diagram', simply use display() to see the diagram
display(pipe)

# if desired, set display back to the default
set_config(display='text')
We will be able to pass our pipe object to a GridSearchCV to search parameters for both the transformation and the classifier model at the same time. GridSearchCV will want a dictionary of search parameters to try, where the keys are the pipeline steps/parameter names, and the values are lists of the parameters to be searched over.
With a ColumnTransformer nested in a Pipeline like we have, it can be tricky to get the keys of this dictionary just right, since they’re named after the label of each step, with dunders __ as separators. The easiest way to get a list of all available options is to use pipe.get_params().
You should see something like this:
{'memory': None, 'steps': [('trans', ColumnTransformer(transformers=[('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])])), ('clf', LogisticRegression(class_weight='balanced', max_iter=300))], 'verbose': False, 'trans': ColumnTransformer(transformers=[('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])]), 'clf': LogisticRegression(class_weight='balanced', max_iter=300), 'trans__n_jobs': None, 'trans__remainder': 'drop', 'trans__sparse_threshold': 0.3, 'trans__transformer_weights': None, 'trans__transformers': [('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])], 'trans__verbose': False, 'trans__txt': TfidfVectorizer(), 'trans__txt_kw': CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'trans__ohe': OneHotEncoder(drop='first'), 'trans__txt__analyzer': 'word', 'trans__txt__binary': False, 'trans__txt__decode_error': 'strict', 'trans__txt__dtype': numpy.float64, 'trans__txt__encoding': 'utf-8', 'trans__txt__input': 'content', 'trans__txt__lowercase': True, 'trans__txt__max_df': 1.0, 'trans__txt__max_features': None, 'trans__txt__min_df': 1, 'trans__txt__ngram_range': (1, 1), 'trans__txt__norm': 'l2', 'trans__txt__preprocessor': None, 'trans__txt__smooth_idf': True, 'trans__txt__stop_words': None, 'trans__txt__strip_accents': None, 'trans__txt__sublinear_tf': False, 'trans__txt__token_pattern': '(?u)\\b\\w\\w+\\b', 'trans__txt__tokenizer': None, 'trans__txt__use_idf': True, 'trans__txt__vocabulary': None, 'trans__txt_kw__analyzer': <function __main__.no_analyzer(doc)>, 'trans__txt_kw__binary': False, 'trans__txt_kw__decode_error': 'strict', 'trans__txt_kw__dtype': numpy.int64, 'trans__txt_kw__encoding': 'utf-8', 'trans__txt_kw__input': 'content', 'trans__txt_kw__lowercase': True, 'trans__txt_kw__max_df': 1.0, 'trans__txt_kw__max_features': None, 'trans__txt_kw__min_df': 1, 'trans__txt_kw__ngram_range': (1, 1), 'trans__txt_kw__preprocessor': None, 'trans__txt_kw__stop_words': None, 'trans__txt_kw__strip_accents': None, 'trans__txt_kw__token_pattern': '(?u)\\b\\w\\w+\\b', 'trans__txt_kw__tokenizer': None, 'trans__txt_kw__vocabulary': None, 'trans__ohe__categories': 'auto', 'trans__ohe__drop': 'first', 'trans__ohe__dtype': numpy.float64, 'trans__ohe__handle_unknown': 'error', 'trans__ohe__sparse': True, 'clf__C': 1.0, 'clf__class_weight': 'balanced', 'clf__dual': False, 'clf__fit_intercept': True, 'clf__intercept_scaling': 1, 'clf__l1_ratio': None, 'clf__max_iter': 300, 'clf__multi_class': 'auto', 'clf__n_jobs': None, 'clf__penalty': 'l2', 'clf__random_state': None, 'clf__solver': 'lbfgs', 'clf__tol': 0.0001, 'clf__verbose': 0, 'clf__warm_start': False}
Scroll down to the bottom of the output, and you’ll see the parameters for each step listed in the exact format you’ll need to pass to GridSearchCV. Only the current value for each parameter will be listed, so you may need to review the documentation for each estimator to see what other values are supported.
Let’s say I decide I want to search through the following parameters for text vectorization and my Logistic Regression model:
grid_params = {
    'trans__txt__binary': [True, False],
    'trans__txt__use_idf': [True, False],
    'trans__txt__max_features': [None, 100000, 10000],
    'trans__txt__ngram_range': [(1, 1), (1, 2), (1, 3)],
    'trans__txt__stop_words': [None, nltk_stopwords],
    'clf__C': [1.0, 0.1, 0.01],
    'clf__fit_intercept': [True, False],
    'clf__penalty': ['l2', 'l1'],
    'clf__solver': ['lbfgs', 'saga']
}
But what if I want to use 'trans__txt__binary': True only with 'trans__txt__use_idf': False, so that I really get a binary 0 or 1 output? And I want to try a regular Term Frequency by itself, as well as TF + IDF, but in that case I only want binary to be False?
If I run the search with the parameters as written above, GridSearchCV will try every combination, even ones that might not make sense.
It turns out, GridSearchCV will also accept a list of dictionaries of parameters, and will be smart enough to try only the unique combinations across all the dicts in the list. We need to put the common parameters in both, params for binary in one dictionary, and the params for regular token count in the other.
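A sketch of how such a list can be assembled; the dictionary-unpacking pattern, the trimmed classifier parameters, and the GridSearchCV settings (cv, scoring, n_jobs) are my assumptions rather than fixed requirements:

from sklearn.model_selection import GridSearchCV

# parameters shared by both candidate grids
common = {
    'trans__txt__max_features': [None, 100000, 10000],
    'trans__txt__ngram_range': [(1, 1), (1, 2), (1, 3)],
    'trans__txt__stop_words': [None, nltk_stopwords],
    'clf__C': [1.0, 0.1, 0.01],
    'clf__fit_intercept': [True, False],
}

grid_params = [
    # binary 0/1 counts: IDF re-weighting must stay off
    {**common, 'trans__txt__binary': [True], 'trans__txt__use_idf': [False]},
    # plain term frequency and TF-IDF: binary stays off
    {**common, 'trans__txt__binary': [False], 'trans__txt__use_idf': [False, True]},
]

gs = GridSearchCV(pipe, grid_params, cv=5, scoring='f1', n_jobs=-1)
gs.fit(X_train, y_train)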
Note that this may not be necessary if you only have a few parameters you’re searching, but it really becomes important when you want to try many, like we do here. Each combination will take a certain amount of computational power and time, so we don’t want to run any unnecessary combinations.
Many of the actual parameters will have a None value you can include if you want to test results without using that option. However, this won’t work if you have a workflow where you want to bypass or skip an entire step in the Pipeline.
An example would be if you have continuous data and want to evaluate Linear Regression model performance using both a MinMaxScaler and a StandardScaler to see which works better. You could add each as a separate step in your Pipeline, and use a version of the technique above to create a list of grid parameters to try either MinMax or Standard Scaler, but not both at once. You can give 'passthrough' as the parameter value to the named pipeline step to bypass it, so that the other scaler is the only one being used.
Note that if you simply want to try applying and not applying a given step (instead of substituting one step for another, as we have above) you can include an instance of the transformer object itself in the parameter list along with 'passthrough'. For example, in this example from the sklearn documentation, if you have a dimensionality reduction step labeled 'pca' for Principal Component Analysis, you could include 'pca': ['passthrough', PCA(5), PCA(10)] in your grid search params to test using no PCA, using PCA with 5 components, and using PCA with 10 components.
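A minimal sketch of the scaler substitution described above; the step labels and the toy regression pipeline are assumptions:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.linear_model import LinearRegression

scaler_pipe = Pipeline([
    ('minmax', MinMaxScaler()),
    ('standard', StandardScaler()),
    ('reg', LinearRegression()),
])

# each candidate grid bypasses one scaler, so only one is active at a time
scaler_params = [
    {'minmax': ['passthrough']},
    {'standard': ['passthrough']},
]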
OK, so assuming our grid search completed, the gs object will automatically have been fitted with the best estimator, which we can access using gs.best_estimator_ (don’t forget that underscore at the end; it’s necessary!).
Since we gave gs our pipe object as the estimator, the fitted best estimator is a copy of pipe, with the best parameters applied, and already fitted on the current X_train. We can do things like:
Call gs.best_estimator_.get_params() to get the parameters of the best-performing pipeline
Export gs.best_estimator_ to file using pickle or joblib to back up the best pipeline complete with its parameters and current fit on X_train (see the sketch after this list)
Call gs.best_estimator_.predict(X_test) to get predictions on unseen test data
Use set_params() in case we want to test tweaking any individual parameters individually
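Below is a sketch of the export-and-predict flow from the list above (the file name is hypothetical):

import joblib

# back up the fitted best pipeline, parameters and all
joblib.dump(gs.best_estimator_, 'best_pipeline.joblib')

# later: restore it and score unseen data
best_pipe = joblib.load('best_pipeline.joblib')
preds = best_pipe.predict(X_test)

As for the last item in the list, set_params() lets us tweak individual parameters outside the grid search.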
For instance, here we tested max_features of [10000, 5000, 2000] in our grid search, and 2000 performed the best. Since it’s the lowest, perhaps I want to evaluate performance with only 1000 to see if going even lower would actually be better.
I could manually set a parameter value to something else using set_params() and a similar syntax to what we used when specifying the grid search params:
gs.best_estimator_.set_params(**{'trans__txt__max_features': 1000})
Note that I pass the parameter as a dictionary, and could include multiple parameters if needed. I also need the ** before the dictionary to unpack it so set_params() will accept it.
Let’s assume that max_features of 1000 didn’t help, so I’ve set it back to 2000, evaluated performance on unseen test data, and am now ready to do some interpretation of the model coefficients.
To do that, I’ll need to get the final transformed feature names after we tokenized the article text, split up the topics, and one-hot encoded the categorical features. We can get that pretty easily, but we’ll have use the correct syntax to reach down through the nested levels.
First, let’s use the HTML display trick to remind ourselves what the steps are and what they’re labeled:
Remember that although we named the ColumnTransformer object cols_trans, we labeled it just trans in the final pipeline we fed to gs. So trans is the first label we’ll need to deal with here.
Although the transformation steps are shown in parallel here, their outputs are actually concatenated in a fixed order. If we want our full list of feature names to be accurate, we’ll need to get the feature names from each transformer in the order they were originally applied, then concatenate them. In the diagram, the correct order of that layer can be read from left to right.
We’ll use the pipeline’s named_steps attribute to access the transformation layer, and then call get_feature_names() on each of its named_transformers_ in that layer, in order.
Note that for the ohe transformer, I passed cat_cols to get_feature_names(), which is the list of original column names that were transformed in this step. Passing the original column names here uses them as the column prefixes; otherwise they will be generic.
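A sketch of that retrieval, using the step and transformer labels from our pipeline (note that get_feature_names() has since been renamed get_feature_names_out() in newer scikit-learn releases):

trans = gs.best_estimator_.named_steps['trans']

txt_names = trans.named_transformers_['txt'].get_feature_names()
kw_names = trans.named_transformers_['txt_kw'].get_feature_names()
ohe_names = trans.named_transformers_['ohe'].get_feature_names(cat_cols)

# concatenate in the same left-to-right order the ColumnTransformer applied them
feature_names = list(txt_names) + list(kw_names) + list(ohe_names)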
Hopefully this has been a useful example of how to construct a nested Pipeline to handle both transformation and modeling. Although it takes a little extra work to build, doing so can be very beneficial if you want to try out different transformation and model parameters in your grid search.
Feedback and questions welcome! | [
{
"code": null,
"e": 402,
"s": 172,
"text": "Pipelines are extremely useful and versatile objects in the scikit-learn package. They can be nested and combined with other sklearn objects to create repeatable and easily customizable data transformation and modeling workflows."
},
{
"code": null,
"e": 843,
"s": 402,
"text": "One of the most useful things you can do with a Pipeline is to chain data transformation steps together with an estimator (model) at the end. You can then pass this composite estimator to a GridSearchCV object and search over parameters for transformation as well as model hyper-parameters in one shot. But it takes a bit of practice to learn how to construct these objects, as well as how to get and set properties in the different levels."
},
{
"code": null,
"e": 1039,
"s": 843,
"text": "This post will use an NLP classification example to demonstrate how to combine a ColumnTransformer with a Pipeline and GridSearchCV. We’ll cover some specific techniques and tips, such as how to:"
},
{
"code": null,
"e": 1145,
"s": 1039,
"text": "Use a ColumnTransformer as a Pipeline step to transform different types of data columns in different ways"
},
{
"code": null,
"e": 1274,
"s": 1145,
"text": "Build a composite pipeline that can be used to grid search parameters for both data transformation AND modeling at the same time"
},
{
"code": null,
"e": 1419,
"s": 1274,
"text": "Specify complex sets of grid search parameters to be applied together (such as binary text vectorization OR Tf-Idf normalization, but not both )"
},
{
"code": null,
"e": 1475,
"s": 1419,
"text": "Bypass or skip a step in the Pipeline using passthrough"
},
{
"code": null,
"e": 1582,
"s": 1475,
"text": "Use set_params() to set individual parameters in the Pipeline on the fly to test them outside GridSearchCV"
},
{
"code": null,
"e": 1663,
"s": 1582,
"text": "Retrieve feature names from the depths of the best estimator, for interpretation"
},
{
"code": null,
"e": 1835,
"s": 1663,
"text": "I’ll use example data that I put together for a previous project, consisting of article text and metadata for a set of articles that The New York Times posted to Facebook."
},
{
"code": null,
"e": 2060,
"s": 1835,
"text": "The data has already been cleaned, but still needs to be transformed prior to modeling. This is a good example for us, because there are several different types of features that will need to be transformed in different ways:"
},
{
"code": null,
"e": 2110,
"s": 2060,
"text": "article_text: Text to be tokenized and vectorized"
},
{
"code": null,
"e": 2235,
"s": 2110,
"text": "topics: A column containing a list of applicable topics for each article, to be converted into individual features per topic"
},
{
"code": null,
"e": 2305,
"s": 2235,
"text": "The rest are categorical features that we will need to one-hot encode"
},
{
"code": null,
"e": 2530,
"s": 2305,
"text": "The target is already binarized here, with 0 indicating reader had low engagement with the Facebook post, and 1 indicating high engagement. The goal of this example classification problem will be to predict engagement level."
},
{
"code": null,
"e": 2668,
"s": 2530,
"text": "To prepare, we’ll assign the different sets of column names to variables for flexibility, and separate our data into train and test sets."
},
{
"code": null,
"e": 2942,
"s": 2668,
"text": "First, we’re going to create a ColumnTransformer to transform the data for modeling. We’ll use ColumnTransformer for this instead of a Pipeline because it allows us to specify different transformation steps for different columns, but results in a single matrix of features."
},
{
"code": null,
"e": 3157,
"s": 2942,
"text": "As a reminder (or introduction, if you haven’t used them before) regular Pipelines take a list of tuples as input, where the first value in each tuple is the step name, and the second value is the estimator object."
},
{
"code": null,
"e": 3242,
"s": 3157,
"text": "pipe = Pipeline([ ('vect', CountVectorizer()), ('clf', LogisticRegression())])"
},
{
"code": null,
"e": 3607,
"s": 3242,
"text": "ColumnTransformers are built similarly to Pipelines, except you include a third value in each tuple representing the columns to be transformed in that step. Since our data is in a DataFrame, we can pass strings and lists of strings representing the DataFrame column names. If your data is stored in an array, you can pass a column index or array of column indices."
},
{
"code": null,
"e": 3758,
"s": 3607,
"text": "I’ve already stored the names of the different categories of columns as variables, so they can be dynamically passed to the ColumnTransformer like so:"
},
{
"code": null,
"e": 3951,
"s": 3758,
"text": "cols_trans = ColumnTransformer([ ('txt', TfidfVectorizer(), text_col), ('txt_kw', CountVectorizer(), topic_col), ('ohe', OneHotEncoder(drop='first'), cat_cols), remainder='drop'])"
},
{
"code": null,
"e": 4372,
"s": 3951,
"text": "Pipeline steps are executed serially, where the output from the first step is passed to the second step, and so on. ColumnTransformers are different in that each step is executed separately, and the transformed features are concatenated at the end. This saves us from having to do the concatenation ourselves, and will also make it easy to get the full list of feature names when we’re ready to interpret our best model."
},
{
"code": null,
"e": 4751,
"s": 4372,
"text": "By default, any columns you pass into the ColumnTransformer that aren’t specified to be transformed will be dropped (remainder='drop'). If you have columns that you want to include but do not need to be transformed, specify remainder='passthrough'. We’ll see the passthrough parameter again later, as it can used in other contexts in sklearn to skip or bypass a processing step."
},
{
"code": null,
"e": 4835,
"s": 4751,
"text": "Here’s the full code for this step in our example workflow, with explanation below:"
},
{
"code": null,
"e": 5441,
"s": 4835,
"text": "text_col will be transformed using a TfidfVectorizer, which will tokenize the text of each document and create vectors to form a document-term matrix. We will be able to specify many different options for this transformation that will affect final model performance, such as stop words to remove, size of n-grams to generate, maximum number of features to include, and how to normalize the token counts in the document-term matrix. However, since we’re going to specify different options for those parameters to be tried in our grid search, we’ll just create a vanilla instance of the transformer for now."
},
{
"code": null,
"e": 5951,
"s": 5441,
"text": "topic_col is already a list of topics in a single column, from which we want to create a binary document-term matrix. We’ll use CountVectorizer, and give it a custom analyzer that doesn’t do anything. The default analyzer usually performs preprocessing, tokenizing, and n-grams generation and outputs a list of tokens, but since we already have a list of tokens, we’ll just pass them through as-is, and CountVectorizer will return a document-term matrix of the existing topics without tokenizing them further."
},
{
"code": null,
"e": 6167,
"s": 5951,
"text": "cat_cols consists of our categorical columns, which we will one-hot encode using OneHotEncoder. The only parameter we’ll specify is to drop the first category in each column, since we’ll be using a regression model."
},
{
"code": null,
"e": 6632,
"s": 6167,
"text": "Tip: Both TfidfVectorizer and CountVectorizer expect a 1-D array, so the column name needs to be passed to ColumnTransformer as a string and not as a list, even if the list has only a single entry. If you give either of these transformers a list, you will get an error referring to incompatible row dimensions. Most other sklearn transformers expect a 2-D array (such as OneHotEncoder), so even if you’re only transforming a single column, you need to pass a list."
},
{
"code": null,
"e": 6761,
"s": 6632,
"text": "Great, so now we have cols_trans, a ColumnTransformer object that will output a single feature matrix with our transformed data."
},
{
"code": null,
"e": 6884,
"s": 6761,
"text": "Next, we’ll create a Pipeline where cols_trans is the first step, and a Logistic Regression classifier is the second step."
},
{
"code": null,
"e": 7088,
"s": 6884,
"text": "from sklearn.pipeline import Pipelinefrom sklearn.linear_model import LogisticRegressionpipe = Pipeline([ ('trans', cols_trans), ('clf', LogisticRegression(max_iter=300, class_weight='balanced'))])"
},
{
"code": null,
"e": 7418,
"s": 7088,
"text": "If we called pipe.fit(X_train, y_train), we would be transforming our X_train data and fitting the Logistic Regression model to it in a single step. Note that we can simply use fit(), and don’t need to do anything special to specify we want to both fit AND transform the data in the first step; the pipeline will know what to do."
},
{
"code": null,
"e": 7818,
"s": 7418,
"text": "Once you start nesting Pipelines and other objects, you want to refresh yourself on how the steps will be executed. One way to do this is to set sklearn’s display parameter to 'diagram' to show an HTML representation when you call display() on the pipeline object itself. The HTML will be interactive in a Jupyter Notebook, and you can click on each step to expand it and see its current parameters."
},
{
"code": null,
"e": 8027,
"s": 7818,
"text": "from sklearn import set_configset_config(display='diagram')# with display='diagram', simply use display() to see the diagramdisplay(pipe)# if desired, set display back to the defaultset_config(display='text')"
},
{
"code": null,
"e": 8361,
"s": 8027,
"text": "We will be able to pass our pipe object to a GridSearchCV to search parameters for both the transformation and the classifier model at the same time. GridSearchCV will want a dictionary of search parameters to try, where the keys are the pipeline steps/parameter names, and the values are lists of the parameters to be searched over."
},
{
"code": null,
"e": 8650,
"s": 8361,
"text": "With a ColumnTransformer nested in a Pipeline like we have, it can be tricky to get the keys of this dictionary just right, since they’re named after the label of each step, with dunders __ as separators. The easiest way to get a list of all available options is to use pipe.get_params()."
},
{
"code": null,
"e": 8686,
"s": 8650,
"text": "You should see something like this:"
},
{
"code": null,
"e": 12276,
"s": 8686,
"text": "{'memory': None, 'steps': [('trans', ColumnTransformer(transformers=[('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])])), ('clf', LogisticRegression(class_weight='balanced', max_iter=300))], 'verbose': False, 'trans': ColumnTransformer(transformers=[('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])]), 'clf': LogisticRegression(class_weight='balanced', max_iter=300), 'trans__n_jobs': None, 'trans__remainder': 'drop', 'trans__sparse_threshold': 0.3, 'trans__transformer_weights': None, 'trans__transformers': [('txt', TfidfVectorizer(), 'article_text'), ('txt_kw', CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'topics'), ('ohe', OneHotEncoder(drop='first'), ['section_name', 'word_count_cat', 'is_multimedia', 'on_weekend'])], 'trans__verbose': False, 'trans__txt': TfidfVectorizer(), 'trans__txt_kw': CountVectorizer(analyzer=<function no_analyzer at 0x7fbc5f4bac10>), 'trans__ohe': OneHotEncoder(drop='first'), 'trans__txt__analyzer': 'word', 'trans__txt__binary': False, 'trans__txt__decode_error': 'strict', 'trans__txt__dtype': numpy.float64, 'trans__txt__encoding': 'utf-8', 'trans__txt__input': 'content', 'trans__txt__lowercase': True, 'trans__txt__max_df': 1.0, 'trans__txt__max_features': None, 'trans__txt__min_df': 1, 'trans__txt__ngram_range': (1, 1), 'trans__txt__norm': 'l2', 'trans__txt__preprocessor': None, 'trans__txt__smooth_idf': True, 'trans__txt__stop_words': None, 'trans__txt__strip_accents': None, 'trans__txt__sublinear_tf': False, 'trans__txt__token_pattern': '(?u)\\\\b\\\\w\\\\w+\\\\b', 'trans__txt__tokenizer': None, 'trans__txt__use_idf': True, 'trans__txt__vocabulary': None, 'trans__txt_kw__analyzer': <function __main__.no_analyzer(doc)>, 'trans__txt_kw__binary': False, 'trans__txt_kw__decode_error': 'strict', 'trans__txt_kw__dtype': numpy.int64, 'trans__txt_kw__encoding': 'utf-8', 'trans__txt_kw__input': 'content', 'trans__txt_kw__lowercase': True, 'trans__txt_kw__max_df': 1.0, 'trans__txt_kw__max_features': None, 'trans__txt_kw__min_df': 1, 'trans__txt_kw__ngram_range': (1, 1), 'trans__txt_kw__preprocessor': None, 'trans__txt_kw__stop_words': None, 'trans__txt_kw__strip_accents': None, 'trans__txt_kw__token_pattern': '(?u)\\\\b\\\\w\\\\w+\\\\b', 'trans__txt_kw__tokenizer': None, 'trans__txt_kw__vocabulary': None, 'trans__ohe__categories': 'auto', 'trans__ohe__drop': 'first', 'trans__ohe__dtype': numpy.float64, 'trans__ohe__handle_unknown': 'error', 'trans__ohe__sparse': True, 'clf__C': 1.0, 'clf__class_weight': 'balanced', 'clf__dual': False, 'clf__fit_intercept': True, 'clf__intercept_scaling': 1, 'clf__l1_ratio': None, 'clf__max_iter': 300, 'clf__multi_class': 'auto', 'clf__n_jobs': None, 'clf__penalty': 'l2', 'clf__random_state': None, 'clf__solver': 'lbfgs', 'clf__tol': 0.0001, 'clf__verbose': 0, 'clf__warm_start': False}"
},
{
"code": null,
"e": 12586,
"s": 12276,
"text": "Scroll down to the bottom of the output, and you’ll see the parameters for each step listed in the exact format you’ll need to pass to GridSearchCV. Only the current value for each parameter will be listed, so you may need to review the documentation for each estimator to see what other values are supported."
},
{
"code": null,
"e": 12710,
"s": 12586,
"text": "Let’s say I decide want to search through the following parameters for text vectorization and my Logistic Regression model:"
},
{
"code": null,
"e": 13110,
"s": 12710,
"text": "grid_params = { 'trans__txt__binary': [True, False], 'trans__txt__use_idf': [True, False], 'trans__txt__max_features': [None, 100000, 10000], 'trans__txt__ngram_range': [(1, 1), (1, 2), (1, 3)], 'trans__txt__stop_words': [None, nltk_stopwords], 'clf__C': [1.0, 0.1, 0.01], 'clf__fit_intercept': [True, False], 'clf__penalty': ['l2', 'l1'], 'clf__solver': ['lbfgs','saga']}"
},
{
"code": null,
"e": 13373,
"s": 13110,
"text": "But what if I want to use 'trans__txt__binary': True only with 'trans__txt__use_idf': False , so that I really get a binary 0 or 1 output? And I want to try a regular Term Frequency by itself, as well as TF + IDF, but in that case I only want binary to be False?"
},
{
"code": null,
"e": 13509,
"s": 13373,
"text": "If I run the search with the parameters as written above, GridSearchCV will try every combination, even ones that might not make sense."
},
{
"code": null,
"e": 13822,
"s": 13509,
"text": "It turns out, GridSearchCV will also accept a list of dictionaries of parameters, and will be smart enough to try only the unique combinations across all the dicts in the list. We need to put the common parameters in both, params for binary in one dictionary, and the params for regular token count in the other."
},
{
"code": null,
"e": 14117,
"s": 13822,
"text": "Note that this may not be necessary if you only have a few parameters you’re searching, but it really becomes important when you want to try many, like we do here. Each combination will take a certain amount of computational power and time, so we don’t want to run any unnecessary combinations."
},
{
"code": null,
"e": 14354,
"s": 14117,
"text": "Many of the actual parameters will have a None value you can include if you want to test results without using that option. However, this won’t work if you have a workflow where you want to bypass or skip an entire step in the Pipeline."
},
{
"code": null,
"e": 14873,
"s": 14354,
"text": "An example would be if you have continuous data and want to evaluate Linear Regression model performance using both a MinMaxScaler and a StandardScaler to see which works better. You could add each as a separate step in your Pipeline, and use a version of the technique above to create a list of gris parameters to try either MinMax or Standard Scaler, but not both at once. You can give 'passthrough' as the parameter value to the named pipeline step to bypass it, so that the other scaler is the only one being used."
},
{
"code": null,
"e": 15445,
"s": 14873,
"text": "Note that if you simply want to try applying and not applying a given step (instead of substituting one step for another, as we have above) you can include an instance of the transformer object itself in the parameter list along with 'passthrough'. For example, in this example from the sklearn documentation, if you have a dimensionality reduction step labeled 'pca' for Principal Component Analysis, you could include 'pca': ['passthrough', PCA(5), PCA(10)] in your grid search params to test using no PCA, using PCA with 5 components, and using PCA with 10 components."
},
{
"code": null,
"e": 15668,
"s": 15445,
"text": "OK, so assuming our grid search completed, the gs object will automatically have been fitted with the best estimator, which we can access using gs.best_estimator_ (don’t forget that underscore at the end; it’s necessary!)."
},
{
"code": null,
"e": 15864,
"s": 15668,
"text": "Since we gave gs our pipe object as the estimator, the fitted best estimator is a copy of pipe, with the best parameters applied, and already fitted on the current X_train. We can do things like:"
},
{
"code": null,
"e": 15955,
"s": 15864,
"text": "Call gs.best_estimator_.get_params() to get the parameters of the best-performing pipeline"
},
{
"code": null,
"e": 16097,
"s": 15955,
"text": "Export gs.best_estimator_ to file using pickle or joblib to back up the best pipeline complete with its parameters and current fit on X_train"
},
{
"code": null,
"e": 16176,
"s": 16097,
"text": "Call gs.best_estimator_.predict(X_test) to get predictions on unseen test data"
},
{
"code": null,
"e": 16265,
"s": 16176,
"text": "Use set_params() in case we want to test tweaking any individual parameters individually"
},
{
"code": null,
"e": 16509,
"s": 16265,
"text": "For instance, here we tested max_features of [10000, 5000, 2000] in our grid search, and 2000 performed the best. Since it’s the lowest, perhaps I want to evaluate performance with only 1000 to see if going even lower would actually be better."
},
{
"code": null,
"e": 16662,
"s": 16509,
"text": "I could manually set a parameter value to something else using set_params() and a similar syntax to what we used when specifying the grid search params:"
},
{
"code": null,
"e": 16730,
"s": 16662,
"text": "gs.best_estimator_.set_params(**{'trans__txt__max_features': 1000})"
},
{
"code": null,
"e": 16913,
"s": 16730,
"text": "Note that I pass the parameter as a dictionary, and could include multiple parameters if needed. I also need the ** before the dictionary to unpack it so set_params() will accept it."
},
{
"code": null,
"e": 17107,
"s": 16913,
"text": "Let’s assume that max_features of 1000 didn’t help, so I’ve set it back to 2000, evaluated performance on unseen test data, and am now ready to do some interpretation of the model coefficients."
},
{
"code": null,
"e": 17386,
"s": 17107,
"text": "To do that, I’ll need to get the final transformed feature names after we tokenized the article text, split up the topics, and one-hot encoded the categorical features. We can get that pretty easily, but we’ll have use the correct syntax to reach down through the nested levels."
},
{
"code": null,
"e": 17491,
"s": 17386,
"text": "First, let’s use the HTML display trick to remind ourselves what the steps are and what they’re labeled:"
},
{
"code": null,
"e": 17683,
"s": 17491,
"text": "Remember that although we named the ColumnTransformer object cols_trans, we labeled it just trans in the final pipeline we fed to gs. So trans is the first label we’ll need to deal with here."
},
{
"code": null,
"e": 18084,
"s": 17683,
"text": "Although the transformation steps are shown in parallel here, they’re actually done in an order in terms of how they’re concatenated. If we want our full list of feature names list to be accurate, we’ll need to get the feature names from each transformer in the order they were originally applied, then concatenate them. In the diagram, the correct order of that layer can be read from left to right."
},
{
"code": null,
"e": 18267,
"s": 18084,
"text": "We’ll use the named_steps() property to access the transformation layer, and then get the get_feature_names() property from each of the named_transformers_() in that layer, in order."
},
{
"code": null,
"e": 18528,
"s": 18267,
"text": "Note that for the ohe transformer, I passed cat_cols to get_feature_names() which is the list of original column names that were transformed in this step. Passing the original column names here using them as the column prefixes; otherwise they will be generic."
},
{
"code": null,
"e": 18821,
"s": 18528,
"text": "Hopefully this has been a useful example of how to construct a nested Pipeline to handle both transformation and modeling. Although it takes a little extra work to build, doing so can be very beneficial if you want to try out different transformation and model parameters in your grid search."
}
] |
Angular Material - Fab Toolbars | The md-fab-toolbar, an Angular directive, is used to show a toolbar of elements or buttons for quick access to common actions.
The following table lists out the parameters and description of the different attributes of md-fab-toolbar.
md-direction
This determines from which direction the toolbar items will appear relative to the trigger element. Supports left and right directions.
md-open
Programmatically control whether or not the toolbar is visible.
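These two attributes are bound directly on the directive's element. A condensed sketch of just those bindings, taken from the complete working page further below:

<md-fab-toolbar md-direction = "left" md-open = "ctrl.isOpen">
   <!-- md-fab-trigger and md-fab-actions elements go here -->
</md-fab-toolbar>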
The following example shows the use of the md-fab-toolbar directive and of the toolbar attributes described above.
am_fabtoolbar.htm
<html lang = "en">
<head>
<link rel = "stylesheet"
href = "https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.css">
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-animate.min.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-aria.min.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-messages.min.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.js"></script>
<link rel = "stylesheet" href = "https://fonts.googleapis.com/icon?family=Material+Icons">
<script language = "javascript">
angular
.module('firstApplication', ['ngMaterial'])
.controller('toolbarController', toolbarController);
         function toolbarController () {
            // the template uses "controller as ctrl", so state lives on
            // the controller instance (this), not on $scope
            this.isOpen = false;
            this.count = 0;
            this.selectedDirection = 'left';
         }
</script>
</head>
<body ng-app = "firstApplication">
<div id = "toolbarContainer" ng-controller = "toolbarController as ctrl" ng-cloak>
<md-fab-toolbar md-open = "ctrl.isOpen" md-direction = "{{ctrl.selectedDirection}}"
count = "ctrl.count">
<md-fab-trigger class = "align-with-text">
<md-button aria-label = "menu" class = "md-fab md-primary">
<md-icon class = "material-icons">menu</md-icon>
</md-button>
</md-fab-trigger>
<md-toolbar>
<md-fab-actions class = "md-toolbar-tools">
<md-button aria-label = "Add" class = "md-fab md-raised md-mini
md-accent">
<md-icon class = "material-icons" aria-label = "Add">add</md-icon>
</md-button>
<md-button aria-label = "Insert Link" class = "md-fab md-raised
md-mini md-accent">
<md-icon class = "material-icons" aria-label = "Insert Link">
insert_link</md-icon>
</md-button>
<md-button aria-label = "Edit" class = "md-fab md-raised md-mini
md-accent">
<md-icon class = "material-icons" aria-label = "Edit">
mode_edit</md-icon>
</md-button>
</md-fab-actions>
</md-toolbar>
</md-fab-toolbar>
<md-content class = "md-padding" layout = "column">
<div layout = "row" layout-align = "space-around">
<div layout = "column">
<b>Open/Closed</b>
<md-radio-group ng-model = "ctrl.isOpen">
<md-radio-button ng-value = "true">Open</md-radio-button>
<md-radio-button ng-value = "false">Closed</md-radio-button>
</md-radio-group>
</div>
<div layout = "column">
<b>Direction</b>
<md-radio-group ng-model = "ctrl.selectedDirection">
<md-radio-button ng-value = "'left'">Left</md-radio-button>
<md-radio-button ng-value = "'right'">Right</md-radio-button>
</md-radio-group>
</div>
</div>
</md-content>
</div>
</body>
</html>
Verify the result.
| [
{
"code": null,
"e": 2317,
"s": 2190,
"text": "The md-fab-toolbar, an Angular directive, is used to show a toolbar of elements or buttons for quick access to common actions."
},
{
"code": null,
"e": 2425,
"s": 2317,
"text": "The following table lists out the parameters and description of the different attributes of md-fab-toolbar."
},
{
"code": null,
"e": 2440,
"s": 2425,
"text": "* md-direction"
},
{
"code": null,
"e": 2576,
"s": 2440,
"text": "This determines from which direction the toolbar items will appear relative to the trigger element. Supports left and right directions."
},
{
"code": null,
"e": 2584,
"s": 2576,
"text": "md-open"
},
{
"code": null,
"e": 2648,
"s": 2584,
"text": "Programmatically control whether or not the toolbar is visible."
},
{
"code": null,
"e": 2742,
"s": 2648,
"text": "The following example shows the use of md-fab-toolbar directive and also the uses of toolbar."
},
{
"code": null,
"e": 2760,
"s": 2742,
"text": "am_fabtoolbar.htm"
},
{
"code": null,
"e": 6496,
"s": 2760,
"text": "<html lang = \"en\">\n <head>\n <link rel = \"stylesheet\"\n href = \"https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.css\">\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-animate.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-aria.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular-messages.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angular_material/1.0.0/angular-material.min.js\"></script>\n <link rel = \"stylesheet\" href = \"https://fonts.googleapis.com/icon?family=Material+Icons\">\n \n <script language = \"javascript\">\n angular\n .module('firstApplication', ['ngMaterial'])\n .controller('toolbarController', toolbarController);\n\n function toolbarController ($scope) {\n $scope.isOpen = false;\n $scope.count = 0;\n $scope.selectedDirection = 'left'; \n } \n </script> \n </head>\n \n <body ng-app = \"firstApplication\"> \n <div id = \"toolbarContainer\" ng-controller = \"toolbarController as ctrl\" ng-cloak>\n <md-fab-toolbar md-open = \"ctrl.isOpen\" md-direction = \"{{ctrl.selectedDirection}}\"\n count = \"ctrl.count\">\n <md-fab-trigger class = \"align-with-text\">\n <md-button aria-label = \"menu\" class = \"md-fab md-primary\">\n <md-icon class = \"material-icons\">menu</md-icon>\n </md-button>\n </md-fab-trigger>\n \n <md-toolbar>\n <md-fab-actions class = \"md-toolbar-tools\">\n \n <md-button aria-label = \"Add\" class = \"md-fab md-raised md-mini\n md-accent\">\n <md-icon class = \"material-icons\" aria-label = \"Add\">add</md-icon>\n </md-button>\n \n <md-button aria-label = \"Insert Link\" class = \"md-fab md-raised\n md-mini md-accent\">\n <md-icon class = \"material-icons\" aria-label = \"Insert Link\">\n insert_link</md-icon>\n </md-button>\n \n <md-button aria-label = \"Edit\" class = \"md-fab md-raised md-mini\n md-accent\">\n <md-icon class = \"material-icons\" aria-label = \"Edit\">\n mode_edit</md-icon>\n </md-button>\n \n </md-fab-actions>\n </md-toolbar>\n </md-fab-toolbar>\n \n <md-content class = \"md-padding\" layout = \"column\">\n <div layout = \"row\" layout-align = \"space-around\">\n <div layout = \"column\">\n <b>Open/Closed</b>\n <md-radio-group ng-model = \"ctrl.isOpen\">\n <md-radio-button ng-value = \"true\">Open</md-radio-button>\n <md-radio-button ng-value = \"false\">Closed</md-radio-button>\n </md-radio-group>\n </div>\n \n <div layout = \"column\">\n <b>Direction</b>\n <md-radio-group ng-model = \"ctrl.selectedDirection\">\n <md-radio-button ng-value = \"'left'\">Left</md-radio-button>\n <md-radio-button ng-value = \"'right'\">Right</md-radio-button>\n </md-radio-group>\n </div>\n \n </div>\n </md-content>\n </div>\n </body>\n</html>"
},
{
"code": null,
"e": 6515,
"s": 6496,
"text": "Verify the result."
   }
] |
MongoDB query for Partial Object in an array | Let us first create a collection with documents −
> db.queryForPartialObjectDemo.insertOne({_id:new ObjectId(), "StudentDetails": [{"StudentId":1, "StudentName":"Chris"}]});
{
"acknowledged" : true,
"insertedId" : ObjectId("5cdfcf55bf3115999ed51206")
}
> db.queryForPartialObjectDemo.insertOne({_id:new ObjectId(), "StudentDetails": [{"StudentId":2, "StudentName":"David"}]});
{
"acknowledged" : true,
"insertedId" : ObjectId("5cdfcf55bf3115999ed51207")
}
Following is the query to display all documents from a collection with the help of find() method −
> db.queryForPartialObjectDemo.find().pretty();
This will produce the following output −
{
"_id" : ObjectId("5cdfcf55bf3115999ed51206"),
"StudentDetails" : [
{
"StudentId" : 1,
"StudentName" : "Chris"
}
]
}
{
"_id" : ObjectId("5cdfcf55bf3115999ed51207"),
"StudentDetails" : [
{
"StudentId" : 2,
"StudentName" : "David"
}
]
}
Following is the query for a partial object in an array with MongoDB −
> db.queryForPartialObjectDemo.find({StudentDetails: {StudentId: 1, "StudentName" : "Chris"}});
This will produce the following output −
{ "_id" : ObjectId("5cdfcf55bf3115999ed51206"), "StudentDetails" : [ { "StudentId" : 1, "StudentName" : "Chris" } ] }
Following is the query for partial object in an array with dot notation −
> db.queryForPartialObjectDemo.find({"StudentDetails.StudentName":"Chris"});
This will produce the following output −
{ "_id" : ObjectId("5cdfcf55bf3115999ed51206"), "StudentDetails" : [ { "StudentId" : 1, "StudentName" : "Chris" } ] } | [
{
"code": null,
"e": 1112,
"s": 1062,
"text": "Let us first create a collection with documents −"
},
{
"code": null,
"e": 1530,
"s": 1112,
"text": "> db.queryForPartialObjectDemo.insertOne({_id:new ObjectId(), \"StudentDetails\": [{\"StudentId\":1, \"StudentName\":\"Chris\"}]});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5cdfcf55bf3115999ed51206\")\n}\n> db.queryForPartialObjectDemo.insertOne({_id:new ObjectId(), \"StudentDetails\": [{\"StudentId\":2, \"StudentName\":\"David\"}]});\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5cdfcf55bf3115999ed51207\")\n}"
},
{
"code": null,
"e": 1629,
"s": 1530,
"text": "Following is the query to display all documents from a collection with the help of find() method −"
},
{
"code": null,
"e": 1677,
"s": 1629,
"text": "> db.queryForPartialObjectDemo.find().pretty();"
},
{
"code": null,
"e": 1718,
"s": 1677,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2032,
"s": 1718,
"text": "{\n \"_id\" : ObjectId(\"5cdfcf55bf3115999ed51206\"),\n \"StudentDetails\" : [\n {\n \"StudentId\" : 1,\n \"StudentName\" : \"Chris\"\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"5cdfcf55bf3115999ed51207\"),\n \"StudentDetails\" : [\n {\n \"StudentId\" : 2,\n \"StudentName\" : \"David\"\n }\n ]\n}"
},
{
"code": null,
"e": 2098,
"s": 2032,
"text": "Following is the query for partial object in array with MongoDB −"
},
{
"code": null,
"e": 2194,
"s": 2098,
"text": "> db.queryForPartialObjectDemo.find({StudentDetails: {StudentId: 1, \"StudentName\" : \"Chris\"}});"
},
{
"code": null,
"e": 2235,
"s": 2194,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2353,
"s": 2235,
"text": "{ \"_id\" : ObjectId(\"5cdfcf55bf3115999ed51206\"), \"StudentDetails\" : [ { \"StudentId\" : 1, \"StudentName\" : \"Chris\" } ] }"
},
{
"code": null,
"e": 2427,
"s": 2353,
"text": "Following is the query for partial object in an array with dot notation −"
},
{
"code": null,
"e": 2504,
"s": 2427,
"text": "> db.queryForPartialObjectDemo.find({\"StudentDetails.StudentName\":\"Chris\"});"
},
{
"code": null,
"e": 2545,
"s": 2504,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2663,
"s": 2545,
"text": "{ \"_id\" : ObjectId(\"5cdfcf55bf3115999ed51206\"), \"StudentDetails\" : [ { \"StudentId\" : 1, \"StudentName\" : \"Chris\" } ] }"
}
] |
Creating a QR Code of a link in React JS | In this article, we will see how to create a QR code of a link in
React JS. A QR code is a two-dimensional barcode that is readable on
smartphones. You must have seen QR codes on websites that you can
scan which redirects you to a page. For example, to access WhatsApp
from your laptop, you can go to "web.whatsapp.com" and then open
WhatsApp on your phone and scan the given QR code.
First create a React project −
npx create-react-app tutorialpurpose
Go to the project directory −
cd tutorialpurpose
Install the qrcode.react package −
npm i --save qrcode.react
This library is going to help us in generating QR codes and add
dependencies to do so.
Now insert the following lines of code in App.js −
import QRCode from "qrcode.react";
export default function App() {
return (
<div style={{ marginTop: 200, display: "flex",flexDirection: "row" }}>
<div>
<QRCode
value="https://www.tutorialspoint.com/"style={{ marginRight: 50 }}/>
<p>Tutorialspoint </p>
</div>
<div>
<QRCode value="https://www.google.com/" style={{marginRight: 50 }} />
<p>google</p>
</div>
<div>
<QRCode value="https://github.com/" style={{marginRight: 50 }} />
<p>github</p>
</div>
<div>
<QRCode value="https://www.instagram.com/" style={{ marginRight: 50 }}/>
<p>instagram</p>
</div>
<div>
<QRCode value="https://discord.com/" style={{marginRight: 50 }} />
<p>discord</p>
</div>
</div>
);
}
The code takes a link, processes it, and then renders a QR code for
that link.
Here we first imported our QRCode component, which takes one required prop called "value": the link for which you want to generate a QR code.
You can also apply inline styles to it; here they are used only for positioning and spacing.
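Besides "value" and "style", qrcode.react exposes a few props of its own. The prop names below are taken from the library's documented API, but treat this as a sketch and check the version you installed:

// size is the rendered width/height in pixels (default 128),
// bgColor/fgColor set the background and module colors, and
// level picks the error-correction level: "L", "M", "Q" or "H"
<QRCode
   value="https://www.tutorialspoint.com/"
   size={160}
   bgColor="#ffffff"
   fgColor="#000000"
   level="M"
/>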
We have taken the links of the following 5 websites and generated their QR codes −
https://www.tutorialspoint.com/
https://www.tutorialspoint.com/
https://www.google.com/
https://www.google.com/
https://github.com/
https://github.com/
https://www.instagram.com/
https://www.instagram.com/
https://discord.com/
https://discord.com/
On execution, it will produce the following output −
Scan any of the codes with your mobile phone and it will prompt you with the link to open that page in a browser | [
{
"code": null,
"e": 1447,
"s": 1062,
"text": "In this article, we will see how to create a QR code of a link in\nReact JS. A QR code is a two-dimensional barcode that is readable on\nsmartphones. You must have seen QR codes on websites that you can\nscan which redirects you to a page. For example, to access WhatsApp\nfrom your laptop, you can go to \"web.whatsapp.com\" and then open\nWhatsApp on your phone and scan the given QR code."
},
{
"code": null,
"e": 1478,
"s": 1447,
"text": "First create a React project −"
},
{
"code": null,
"e": 1515,
"s": 1478,
"text": "npx create-react-app tutorialpurpose"
},
{
"code": null,
"e": 1545,
"s": 1515,
"text": "Go to the project directory −"
},
{
"code": null,
"e": 1564,
"s": 1545,
"text": "cd tutorialpurpose"
},
{
"code": null,
"e": 1599,
"s": 1564,
"text": "Install the qrcode.react package −"
},
{
"code": null,
"e": 1625,
"s": 1599,
"text": "npm i --save qrcode.react"
},
{
"code": null,
"e": 1712,
"s": 1625,
"text": "This library is going to help us in generating QR codes and add\ndependencies to do so."
},
{
"code": null,
"e": 1763,
"s": 1712,
"text": "Now insert the following lines of code in App.js −"
},
{
"code": null,
"e": 2629,
"s": 1763,
"text": "import QRCode from \"qrcode.react\";\nexport default function App() {\n return (\n <div style={{ marginTop: 200, display: \"flex\",flexDirection: \"row\" }}>\n <div>\n <QRCode\n value=\"https://www.tutorialspoint.com/\"style={{ marginRight: 50 }}/>\n <p>Tutorialspoint </p>\n </div>\n <div>\n <QRCode value=\"https://www.google.com/\" style={{marginRight: 50 }} />\n <p>google</p>\n </div>\n <div>\n <QRCode value=\"https://github.com/\" style={{marginRight: 50 }} />\n <p>github</p>\n </div>\n <div>\n <QRCode value=\"https://www.instagram.com/\" style={{ marginRight: 50 }}/>\n <p>instagram</p>\n </div>\n <div>\n <QRCode value=\"https://discord.com/\" style={{marginRight: 50 }} />\n <p>discord</p>\n </div>\n </div>\n );\n}"
},
{
"code": null,
"e": 2708,
"s": 2629,
"text": "The code takes a link, processes it, and then renders a QR code for\nthat link."
},
{
"code": null,
"e": 2848,
"s": 2708,
"text": "Here we first imported our QRCode object which takes one parameter called \"value\" which takes the link of which you want to make a QR code."
},
{
"code": null,
"e": 2911,
"s": 2848,
"text": "You can also apply styles on it only for positioning and size."
},
{
"code": null,
"e": 2994,
"s": 2911,
"text": "We have taken the links of the following 5 websites and generated their QR codes −"
},
{
"code": null,
"e": 3026,
"s": 2994,
"text": "https://www.tutorialspoint.com/"
},
{
"code": null,
"e": 3058,
"s": 3026,
"text": "https://www.tutorialspoint.com/"
},
{
"code": null,
"e": 3082,
"s": 3058,
"text": "https://www.google.com/"
},
{
"code": null,
"e": 3106,
"s": 3082,
"text": "https://www.google.com/"
},
{
"code": null,
"e": 3126,
"s": 3106,
"text": "https://github.com/"
},
{
"code": null,
"e": 3146,
"s": 3126,
"text": "https://github.com/"
},
{
"code": null,
"e": 3173,
"s": 3146,
"text": "https://www.instagram.com/"
},
{
"code": null,
"e": 3200,
"s": 3173,
"text": "https://www.instagram.com/"
},
{
"code": null,
"e": 3221,
"s": 3200,
"text": "https://discord.com/"
},
{
"code": null,
"e": 3242,
"s": 3221,
"text": "https://discord.com/"
},
{
"code": null,
"e": 3295,
"s": 3242,
"text": "On execution, it will produce the following output −"
},
{
"code": null,
"e": 3408,
"s": 3295,
"text": "Scan any of the codes with your mobile phone and it will prompt you with the link to open that page in a browser"
}
] |
How to create frequency table of a string vector in R? | To create a frequency table of a string vector, we just need to use the table function. For example, if we have a vector x that contains 100 randomly sampled values of the first five English letters, then the frequency table of x can be created by using table(x). This will generate a table along with the name of the vector.
Live Demo
> x1<-sample(letters[1:4],20,replace=TRUE)
> x1
[1] "d" "d" "a" "c" "a" "a" "c" "a" "d" "c" "a" "d" "d" "b" "c" "a" "b" "c" "d"
[20] "b"
> table(x1)
x1
a b c d
6 3 5 6
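If relative frequencies are wanted instead of raw counts, the result of table can be passed on to prop.table; the proportions sum to 1. A quick sketch (output omitted because sample draws randomly):

> x<-sample(letters[1:4],20,replace=TRUE)
> prop.table(table(x))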
Live Demo
> x2<-sample(letters[1:26],120,replace=TRUE)
> x2
[1] "w" "j" "p" "y" "r" "m" "y" "r" "z" "v" "p" "d" "s" "x" "j" "t" "d" "e"
[19] "l" "m" "f" "p" "u" "a" "d" "y" "y" "k" "n" "i" "m" "g" "s" "e" "n" "a"
[37] "w" "a" "s" "w" "n" "f" "f" "n" "s" "q" "y" "a" "h" "d" "q" "h" "g" "f"
[55] "z" "e" "a" "v" "f" "a" "w" "o" "u" "c" "l" "h" "z" "o" "e" "w" "x" "t"
[73] "y" "f" "q" "e" "d" "c" "l" "s" "x" "i" "i" "q" "p" "o" "v" "k" "b" "w"
[91] "s" "k" "s" "l" "f" "t" "j" "u" "j" "s" "p" "w" "w" "o" "d" "x" "l" "h"
[109] "u" "d" "p" "l" "h" "s" "n" "a" "o" "k" "y" "m"
> table(x2)
x2
a b c d e f g h i j k l m n o p q r s t u v w x y z
7 1 2 7 5 7 2 5 3 4 4 6 4 5 5 6 4 2 9 3 4 3 8 4 7 3
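For a longer vector like this one it is often handy to order the table by frequency. sort works directly on the table object; a sketch (the counts will differ from run to run):

> sort(table(x2), decreasing=TRUE)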
Live Demo
> x3<-sample(LETTERS[1:26],150,replace=TRUE)
> x3
[1] "U" "M" "A" "J" "Q" "L" "L" "Y" "A" "Q" "P" "R" "M" "T" "U" "H" "V" "H"
[19] "I" "H" "G" "D" "H" "V" "X" "K" "R" "H" "Y" "I" "L" "K" "O" "W" "Z" "K"
[37] "Q" "R" "D" "E" "I" "A" "E" "U" "C" "N" "S" "R" "O" "A" "Y" "I" "E" "E"
[55] "D" "F" "A" "G" "S" "Y" "B" "X" "H" "J" "O" "L" "A" "K" "D" "U" "N" "K"
[73] "S" "Y" "V" "V" "X" "Q" "M" "S" "G" "L" "I" "Y" "C" "T" "N" "I" "E" "E"
[91] "X" "G" "B" "Q" "D" "C" "G" "R" "P" "A" "Z" "A" "Z" "Z" "X" "G" "D" "G"
[109] "T" "T" "W" "I" "N" "H" "E" "P" "M" "U" "Q" "U" "P" "S" "N" "Z" "G" "P"
[127] "N" "I" "C" "H" "S" "U" "Q" "Q" "S" "T" "D" "D" "I" "S" "S" "V" "R" "Q"
[145] "V" "C" "O" "B" "E" "X"
> table(x3)
x3
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
8 3 5 8 8 1 8 8 9 2 5 5 4 6 4 5 9 6 9 5 7 6 2 6 6 5
Live Demo
> x4<-sample(c("India","USA","UK","Turkey"),100,replace=TRUE)
> x4
[1] "Turkey" "India" "India" "USA" "India" "Turkey" "India" "USA"
[9] "India" "USA" "USA" "UK" "India" "UK" "USA" "India"
[17] "Turkey" "Turkey" "USA" "India" "Turkey" "USA" "USA" "UK"
[25] "USA" "Turkey" "India" "Turkey" "India" "UK" "UK" "UK"
[33] "UK" "USA" "USA" "UK" "Turkey" "USA" "India" "UK"
[41] "India" "India" "India" "Turkey" "USA" "India" "USA" "USA"
[49] "India" "UK" "India" "UK" "USA" "Turkey" "Turkey" "USA"
[57] "Turkey" "Turkey" "India" "Turkey" "Turkey" "UK" "Turkey" "USA"
[65] "Turkey" "India" "Turkey" "USA" "USA" "UK" "UK" "Turkey"
[73] "USA" "India" "Turkey" "Turkey" "India" "Turkey" "USA" "UK"
[81] "India" "Turkey" "USA" "UK" "UK" "UK" "USA" "USA"
[89] "India" "India" "Turkey" "Turkey" "USA" "Turkey" "Turkey" "UK"
[97] "USA" "UK" "UK" "UK"
> table(x4)
x4
India Turkey UK USA
24 27 22 27
Live Demo
> x5<-sample(c("Europe","Asia","North America","Africa","South America","Antartica","Oceania"),100,replace=TRUE)
> x5
[1] "Asia" "Oceania" "Antartica" "Antartica"
[5] "South America" "Oceania" "Africa" "Africa"
[9] "Europe" "Africa" "Africa" "North America"
[13] "North America" "Oceania" "Antartica" "Asia"
[17] "Antartica" "Europe" "Asia" "North America"
[21] "South America" "Europe" "Oceania" "Oceania"
[25] "Asia" "Oceania" "South America" "Antartica"
[29] "Europe" "Europe" "Oceania" "Africa"
[33] "Asia" "Africa" "South America" "Antartica"
[37] "North America" "South America" "South America" "South America"
[41] "Asia" "South America" "Europe" "South America"
[45] "Asia" "North America" "Africa" "North America"
[49] "North America" "South America" "North America" "Antartica"
[53] "Asia" "South America" "North America" "South America"
[57] "Antartica" "South America" "Asia" "North America"
[61] "North America" "Oceania" "North America" "Asia"
[65] "Europe" "Oceania" "Africa" "Antartica"
[69] "Antartica" "Antartica" "North America" "Asia"
[73] "Antartica" "Antartica" "South America" "Europe"
[77] "Africa" "North America" "Asia" "Africa"
[81] "Oceania" "Africa" "Europe" "Asia"
[85] "North America" "Africa" "Europe" "Oceania"
[89] "Asia" "Europe" "Europe" "Africa"
[93] "Africa" "Antartica" "Antartica" "Africa"
[97] "South America" "Oceania" "South America" "North America"
> table(x5)
x5
Africa Antartica Asia Europe North America
15 15 14 12 16
Oceania South America
12 16 | [
{
"code": null,
"e": 1378,
"s": 1062,
"text": "To create a frequency table of a string vector, we just need to use table function. For example, if we have a vector x that contains randomly sampled 100 values of first five English alphabets then the table of vector x can be created by using table(x). This will generate a table along with the name of the vector."
},
{
"code": null,
"e": 1388,
"s": 1378,
"text": "Live Demo"
},
{
"code": null,
"e": 1436,
"s": 1388,
"text": "> x1<-sample(letters[1:4],20,replace=TRUE)\n> x1"
},
{
"code": null,
"e": 1525,
"s": 1436,
"text": "[1] \"d\" \"d\" \"a\" \"c\" \"a\" \"a\" \"c\" \"a\" \"d\" \"c\" \"a\" \"d\" \"d\" \"b\" \"c\" \"a\" \"b\" \"c\" \"d\"\n[20] \"b\""
},
{
"code": null,
"e": 1537,
"s": 1525,
"text": "> table(x1)"
},
{
"code": null,
"e": 1556,
"s": 1537,
"text": "x1\na b c d\n6 3 5 6"
},
{
"code": null,
"e": 1566,
"s": 1556,
"text": "Live Demo"
},
{
"code": null,
"e": 1616,
"s": 1566,
"text": "> x2<-sample(letters[1:26],120,replace=TRUE)\n> x2"
},
{
"code": null,
"e": 2131,
"s": 1616,
"text": "[1] \"w\" \"j\" \"p\" \"y\" \"r\" \"m\" \"y\" \"r\" \"z\" \"v\" \"p\" \"d\" \"s\" \"x\" \"j\" \"t\" \"d\" \"e\"\n[19] \"l\" \"m\" \"f\" \"p\" \"u\" \"a\" \"d\" \"y\" \"y\" \"k\" \"n\" \"i\" \"m\" \"g\" \"s\" \"e\" \"n\" \"a\"\n[37] \"w\" \"a\" \"s\" \"w\" \"n\" \"f\" \"f\" \"n\" \"s\" \"q\" \"y\" \"a\" \"h\" \"d\" \"q\" \"h\" \"g\" \"f\"\n[55] \"z\" \"e\" \"a\" \"v\" \"f\" \"a\" \"w\" \"o\" \"u\" \"c\" \"l\" \"h\" \"z\" \"o\" \"e\" \"w\" \"x\" \"t\"\n[73] \"y\" \"f\" \"q\" \"e\" \"d\" \"c\" \"l\" \"s\" \"x\" \"i\" \"i\" \"q\" \"p\" \"o\" \"v\" \"k\" \"b\" \"w\"\n[91] \"s\" \"k\" \"s\" \"l\" \"f\" \"t\" \"j\" \"u\" \"j\" \"s\" \"p\" \"w\" \"w\" \"o\" \"d\" \"x\" \"l\" \"h\"\n[109] \"u\" \"d\" \"p\" \"l\" \"h\" \"s\" \"n\" \"a\" \"o\" \"k\" \"y\" \"m\""
},
{
"code": null,
"e": 2143,
"s": 2131,
"text": "> table(x2)"
},
{
"code": null,
"e": 2250,
"s": 2143,
"text": "x2\na b c d e f g h i j k l m n o p q r s t u v w x y z\n7 1 2 7 5 7 2 5 3 4 4 6 4 5 5 6 4 2 9 3 4 3 8 4 7 3"
},
{
"code": null,
"e": 2260,
"s": 2250,
"text": "Live Demo"
},
{
"code": null,
"e": 2310,
"s": 2260,
"text": "> x3<-sample(LETTERS[1:26],150,replace=TRUE)\n> x3"
},
{
"code": null,
"e": 2957,
"s": 2310,
"text": "[1] \"U\" \"M\" \"A\" \"J\" \"Q\" \"L\" \"L\" \"Y\" \"A\" \"Q\" \"P\" \"R\" \"M\" \"T\" \"U\" \"H\" \"V\" \"H\"\n[19] \"I\" \"H\" \"G\" \"D\" \"H\" \"V\" \"X\" \"K\" \"R\" \"H\" \"Y\" \"I\" \"L\" \"K\" \"O\" \"W\" \"Z\" \"K\"\n[37] \"Q\" \"R\" \"D\" \"E\" \"I\" \"A\" \"E\" \"U\" \"C\" \"N\" \"S\" \"R\" \"O\" \"A\" \"Y\" \"I\" \"E\" \"E\"\n[55] \"D\" \"F\" \"A\" \"G\" \"S\" \"Y\" \"B\" \"X\" \"H\" \"J\" \"O\" \"L\" \"A\" \"K\" \"D\" \"U\" \"N\" \"K\"\n[73] \"S\" \"Y\" \"V\" \"V\" \"X\" \"Q\" \"M\" \"S\" \"G\" \"L\" \"I\" \"Y\" \"C\" \"T\" \"N\" \"I\" \"E\" \"E\"\n[91] \"X\" \"G\" \"B\" \"Q\" \"D\" \"C\" \"G\" \"R\" \"P\" \"A\" \"Z\" \"A\" \"Z\" \"Z\" \"X\" \"G\" \"D\" \"G\"\n[109] \"T\" \"T\" \"W\" \"I\" \"N\" \"H\" \"E\" \"P\" \"M\" \"U\" \"Q\" \"U\" \"P\" \"S\" \"N\" \"Z\" \"G\" \"P\"\n[127] \"N\" \"I\" \"C\" \"H\" \"S\" \"U\" \"Q\" \"Q\" \"S\" \"T\" \"D\" \"D\" \"I\" \"S\" \"S\" \"V\" \"R\" \"Q\"\n[145] \"V\" \"C\" \"O\" \"B\" \"E\" \"X\""
},
{
"code": null,
"e": 2969,
"s": 2957,
"text": "> table(x3)"
},
{
"code": null,
"e": 3076,
"s": 2969,
"text": "x3\nA B C D E F G H I J K L M N O P Q R S T U V W X Y Z\n8 3 5 8 8 1 8 8 9 2 5 5 4 6 4 5 9 6 9 5 7 6 2 6 6 5"
},
{
"code": null,
"e": 3086,
"s": 3076,
"text": "Live Demo"
},
{
"code": null,
"e": 3153,
"s": 3086,
"text": "> x4<-sample(c(\"India\",\"USA\",\"UK\",\"Turkey\"),100,replace=TRUE)\n> x4"
},
{
"code": null,
"e": 3923,
"s": 3153,
"text": "[1] \"Turkey\" \"India\" \"India\" \"USA\" \"India\" \"Turkey\" \"India\" \"USA\"\n[9] \"India\" \"USA\" \"USA\" \"UK\" \"India\" \"UK\" \"USA\" \"India\"\n[17] \"Turkey\" \"Turkey\" \"USA\" \"India\" \"Turkey\" \"USA\" \"USA\" \"UK\"\n[25] \"USA\" \"Turkey\" \"India\" \"Turkey\" \"India\" \"UK\" \"UK\" \"UK\"\n[33] \"UK\" \"USA\" \"USA\" \"UK\" \"Turkey\" \"USA\" \"India\" \"UK\"\n[41] \"India\" \"India\" \"India\" \"Turkey\" \"USA\" \"India\" \"USA\" \"USA\"\n[49] \"India\" \"UK\" \"India\" \"UK\" \"USA\" \"Turkey\" \"Turkey\" \"USA\"\n[57] \"Turkey\" \"Turkey\" \"India\" \"Turkey\" \"Turkey\" \"UK\" \"Turkey\" \"USA\"\n[65] \"Turkey\" \"India\" \"Turkey\" \"USA\" \"USA\" \"UK\" \"UK\" \"Turkey\"\n[73] \"USA\" \"India\" \"Turkey\" \"Turkey\" \"India\" \"Turkey\" \"USA\" \"UK\"\n[81] \"India\" \"Turkey\" \"USA\" \"UK\" \"UK\" \"UK\" \"USA\" \"USA\"\n[89] \"India\" \"India\" \"Turkey\" \"Turkey\" \"USA\" \"Turkey\" \"Turkey\" \"UK\"\n[97] \"USA\" \"UK\" \"UK\" \"UK\""
},
{
"code": null,
"e": 3935,
"s": 3923,
"text": "> table(x4)"
},
{
"code": null,
"e": 3970,
"s": 3935,
"text": "x4\nIndia Turkey UK USA\n24 27 22 27"
},
{
"code": null,
"e": 3980,
"s": 3970,
"text": "Live Demo"
},
{
"code": null,
"e": 4098,
"s": 3980,
"text": "> x5<-sample(c(\"Europe\",\"Asia\",\"North America\",\"Africa\",\"South America\",\"Antartica\",\"Oceania\"),100,replace=TRUE)\n> x5"
},
{
"code": null,
"e": 5373,
"s": 4098,
"text": "[1] \"Asia\" \"Oceania\" \"Antartica\" \"Antartica\"\n[5] \"South America\" \"Oceania\" \"Africa\" \"Africa\"\n[9] \"Europe\" \"Africa\" \"Africa\" \"North America\"\n[13] \"North America\" \"Oceania\" \"Antartica\" \"Asia\"\n[17] \"Antartica\" \"Europe\" \"Asia\" \"North America\"\n[21] \"South America\" \"Europe\" \"Oceania\" \"Oceania\"\n[25] \"Asia\" \"Oceania\" \"South America\" \"Antartica\"\n[29] \"Europe\" \"Europe\" \"Oceania\" \"Africa\"\n[33] \"Asia\" \"Africa\" \"South America\" \"Antartica\"\n[37] \"North America\" \"South America\" \"South America\" \"South America\"\n[41] \"Asia\" \"South America\" \"Europe\" \"South America\"\n[45] \"Asia\" \"North America\" \"Africa\" \"North America\"\n[49] \"North America\" \"South America\" \"North America\" \"Antartica\"\n[53] \"Asia\" \"South America\" \"North America\" \"South America\"\n[57] \"Antartica\" \"South America\" \"Asia\" \"North America\"\n[61] \"North America\" \"Oceania\" \"North America\" \"Asia\"\n[65] \"Europe\" \"Oceania\" \"Africa\" \"Antartica\"\n[69] \"Antartica\" \"Antartica\" \"North America\" \"Asia\"\n[73] \"Antartica\" \"Antartica\" \"South America\" \"Europe\"\n[77] \"Africa\" \"North America\" \"Asia\" \"Africa\"\n[81] \"Oceania\" \"Africa\" \"Europe\" \"Asia\"\n[85] \"North America\" \"Africa\" \"Europe\" \"Oceania\"\n[89] \"Asia\" \"Europe\" \"Europe\" \"Africa\"\n[93] \"Africa\" \"Antartica\" \"Antartica\" \"Africa\"\n[97] \"South America\" \"Oceania\" \"South America\" \"North America\""
},
{
"code": null,
"e": 5385,
"s": 5373,
"text": "> table(x5)"
},
{
"code": null,
"e": 5474,
"s": 5385,
"text": "x5\nAfrica Antartica Asia Europe North America\n15 15 14 12 16\nOceania South America\n12 16"
}
] |
Python | Numpy np.gumbel() method | 24 Oct, 2019
With the help of the np.random.gumbel() method, we can draw samples from a Gumbel distribution in the form of an array (the function lives in the numpy.random module, so it is accessed as np.random.gumbel).
Syntax : np.random.gumbel(loc, scale, size)
Return : Returns an array of samples drawn from the Gumbel distribution.
Example #1 : In this example we can see that by using the np.random.gumbel() method, we are able to get an array of Gumbel-distributed samples.
# import numpy
import numpy as np

# using the np.random.gumbel() method
gfg = np.random.gumbel(12, 0.7, 20)

print(gfg)
Output :
array([12.64947204, 12.37405666, 13.58474571, 14.91257252, 12.52167875, 12.4480617, 14.95250558, 11.70944994, 12.80072181, 11.60226466, 11.877631, 11.5397349, 12.46834902, 13.27323288, 14.93137191, 11.95035735, 12.54426051, 11.1745507, 13.10580066, 14.41014362])
Example #2 :
# import numpy
import numpy as np

# using the np.random.gumbel() method
gfg = np.random.gumbel(0, 1.7, 10)

print(gfg)
Output :
[ 5.34883666 -0.98597658 2.35585763 1.45700115 -2.62043708 -0.61983442 1.8046374 -1.73997392 4.29301495 1.82840768]
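Because the draws are random, the numbers above will differ on every run. Seeding the generator makes the output reproducible; a small sketch (the seed value 0 is arbitrary):

# import numpy
import numpy as np

# fix the seed so repeated runs draw the same samples
np.random.seed(0)
gfg = np.random.gumbel(0, 1.7, 5)

print(gfg)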
Image-Processing
OpenCV
Python
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n24 Oct, 2019"
},
{
"code": null,
"e": 153,
"s": 28,
"text": "With the help of np.gumbel() method, we can get the gumbel distribution in the form of an array by using np.gumbel() method."
},
{
"code": null,
"e": 192,
"s": 153,
"text": "Syntax : np.gumbel(value, scale, size)"
},
{
"code": null,
"e": 242,
"s": 192,
"text": "Return : Return the array of gumbel distribution."
},
{
"code": null,
"e": 385,
"s": 242,
"text": "Example #1 :In this example we can see that by using np.gumbel() method, we are able to get an array of gumbel distribution using this method."
},
{
"code": "# import numpyimport numpy as np # using np.gumbel() methodgfg = np.random.gumbel(12, 0.7, 20) print(gfg)",
"e": 493,
"s": 385,
"text": null
},
{
"code": null,
"e": 502,
"s": 493,
"text": "Output :"
},
{
"code": null,
"e": 762,
"s": 502,
"text": "array([12.64947204, 12.37405666, 13.58474571, 14.91257252, 12.52167875,12.4480617, 14.95250558, 11.70944994, 12.80072181, 11.60226466,11.877631, 11.5397349, 12.46834902, 13.27323288, 14.93137191,11.95035735, 12.54426051, 11.1745507, 13.10580066, 14.41014362])"
},
{
"code": null,
"e": 775,
"s": 762,
"text": "Example #2 :"
},
{
"code": "# import numpyimport numpy as np # using np.gumbel() methodgfg = np.random.gumbel(0, 1.7, 10) print(gfg)",
"e": 882,
"s": 775,
"text": null
},
{
"code": null,
"e": 891,
"s": 882,
"text": "Output :"
},
{
"code": null,
"e": 1006,
"s": 891,
"text": "[ 5.34883666 -0.98597658 2.35585763 1.45700115 -2.62043708 -0.619834421.8046374 -1.73997392 4.29301495 1.82840768]"
},
{
"code": null,
"e": 1023,
"s": 1006,
"text": "Image-Processing"
},
{
"code": null,
"e": 1030,
"s": 1023,
"text": "OpenCV"
},
{
"code": null,
"e": 1037,
"s": 1030,
"text": "Python"
}
] |
How to Install GCC Compiler on Linux? | 06 Oct, 2021
In this article, we will discuss how to install a GCC compiler on Linux.
GCC stands for GNU Compiler Collection, which is used to compile mainly the C and C++ languages. It can also be used to compile Objective-C and Objective-C++. GCC is an open-source collection of compilers and libraries.
Let’s start with the steps to install the GCC on Linux.
To install the GCC open the terminal on Linux.
The terminal takes the input from the user in the form of commands and displays the output on the screen. Hence we have to pass some commands to install the GCC.
Follow the commands step by step to install the GCC.
Command 1: The very first step is to update the packages. This command is used to download package information from all configured sources and to get the info of the updated versions of the packages.
sudo apt-get update
command-1
Note: In the first command it will ask for your password, make sure to enter the password correctly.
Command 2: Now we have to install the build-essential package, which is also known as a meta-package; it contains the GCC compiler and all the other essentials used to compile software written in the C and C++ languages.
sudo apt install build-essential
command-2
It will take some time to install all the essential packages.
Command 3: After the second command, GCC will be installed on your Linux system. To verify that it is installed correctly, check the version of GCC.
gcc --version
command-3
Now, we have successfully installed the GCC on Linux.
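As a final check that the whole toolchain works, we can compile and run a tiny C program (a sketch; the file name hello.c is arbitrary):

// hello.c - minimal program to verify the new compiler
#include <stdio.h>

int main(void) {
   printf("GCC works!\n");
   return 0;
}

Compile and execute it from the same directory:

gcc hello.c -o hello
./hello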
Note: Versions may vary from time to time.
how-to-install
Picked
How To
Installation Guide
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n06 Oct, 2021"
},
{
"code": null,
"e": 102,
"s": 28,
"text": "In this article, we will discuss how to install a GCC compiler on Linux. "
},
{
"code": null,
"e": 321,
"s": 102,
"text": "GCC stands for GNU Compiler Collections which is used to compile mainly C and C++ language. It can also be used to compile Objective C and Objective C++. The GCC is an open-source collection of compilers and libraries."
},
{
"code": null,
"e": 377,
"s": 321,
"text": "Let’s start with the steps to install the GCC on Linux."
},
{
"code": null,
"e": 424,
"s": 377,
"text": "To install the GCC open the terminal on Linux."
},
{
"code": null,
"e": 586,
"s": 424,
"text": "The terminal takes the input from the user in the form of commands and displays the output on the screen. Hence we have to pass some commands to install the GCC."
},
{
"code": null,
"e": 639,
"s": 586,
"text": "Follow the commands step by step to install the GCC."
},
{
"code": null,
"e": 839,
"s": 639,
"text": "Command 1: The very first step is to update the packages. This command is used to download package information from all configured sources and to get the info of the updated versions of the packages."
},
{
"code": null,
"e": 859,
"s": 839,
"text": "sudo apt-get update"
},
{
"code": null,
"e": 869,
"s": 859,
"text": "command-1"
},
{
"code": null,
"e": 970,
"s": 869,
"text": "Note: In the first command it will ask for your password, make sure to enter the password correctly."
},
{
"code": null,
"e": 1187,
"s": 970,
"text": "Command 2: Now we have to install the build-essential packages, which is also known as a meta-package, it contains the GCC compiler all the other essentials used to compile the software written in C and C++ language."
},
{
"code": null,
"e": 1220,
"s": 1187,
"text": "sudo apt install build-essential"
},
{
"code": null,
"e": 1230,
"s": 1220,
"text": "command-2"
},
{
"code": null,
"e": 1292,
"s": 1230,
"text": "It will take some time to install all the essential packages."
},
{
"code": null,
"e": 1430,
"s": 1292,
"text": "Command 3: After the second command it will install GCC on your Linux, to verify it is installed correctly, check the version of the GCC."
},
{
"code": null,
"e": 1444,
"s": 1430,
"text": "gcc --version"
},
{
"code": null,
"e": 1454,
"s": 1444,
"text": "command-3"
},
{
"code": null,
"e": 1508,
"s": 1454,
"text": "Now, we have successfully installed the GCC on Linux."
},
{
"code": null,
"e": 1551,
"s": 1508,
"text": "Note: Versions may vary from time to time."
},
{
"code": null,
"e": 1566,
"s": 1551,
"text": "how-to-install"
},
{
"code": null,
"e": 1573,
"s": 1566,
"text": "Picked"
},
{
"code": null,
"e": 1580,
"s": 1573,
"text": "How To"
},
{
"code": null,
"e": 1599,
"s": 1580,
"text": "Installation Guide"
   }
] |
MySQL - DATEDIFF() Function | The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store the date, the date and time, and time stamp values respectively, where a time stamp is a numerical value representing the number of seconds from '1970-01-01 00:00:01' UTC (the epoch) to the specified time. MySQL provides a set of functions to manipulate these values.
The MYSQL DATEDIFF() function accepts two date or, date-time values as parameters, calculates the difference between them (argument1-argument2) and returns the result. This function returns difference between the given date values in the form of days. This function includes only the date parts of the arguments while calculating the difference.
Following is the syntax of the above function –
DATEDIFF(expr1, expr2)
Following example demonstrates the usage of the DATEDIFF() function –
mysql> SELECT DATEDIFF('2015-09-05', '1989-03-25');
+--------------------------------------+
| DATEDIFF('2015-09-05', '1989-03-25') |
+--------------------------------------+
| 9660 |
+--------------------------------------+
1 row in set (0.09 sec)
Following is another example of this function –
mysql> SELECT DATEDIFF('2019-05-25', '2019-05-05');
+--------------------------------------+
| DATEDIFF('2019-05-25', '2019-05-05') |
+--------------------------------------+
| 20 |
+--------------------------------------+
1 row in set (0.00 sec)
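Note that DATEDIFF() computes expr1 - expr2, so the result is negative whenever the first argument is the earlier date. Reversing the arguments of the previous example illustrates this:

mysql> SELECT DATEDIFF('2019-05-05', '2019-05-25');
+--------------------------------------+
| DATEDIFF('2019-05-05', '2019-05-25') |
+--------------------------------------+
|                                  -20 |
+--------------------------------------+
1 row in set (0.00 sec)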
In the following example we are passing a DATETIME value as an argument to this function –
mysql> SELECT DATEDIFF('2018-05-23 20:40:32', '1996-12-07');
+-----------------------------------------------+
| DATEDIFF('2018-05-23 20:40:32', '1996-12-07') |
+-----------------------------------------------+
| 7837 |
+-----------------------------------------------+
1 row in set (0.00 sec)
mysql> SELECT DATEDIFF('2018-05-23 20:40:32', '1996-12-07 20:15:40');
+--------------------------------------------------------+
| DATEDIFF('2018-05-23 20:40:32', '1996-12-07 20:15:40') |
+--------------------------------------------------------+
| 7837 |
+--------------------------------------------------------+
1 row in set (0.00 sec)
In the following example we are passing the result of CURDATE() as an argument to the DATEDIFF() function —
mysql> SELECT DATEDIFF(CURDATE(), '1995-11-15');
+-----------------------------------+
| DATEDIFF(CURDATE(), '1995-11-15') |
+-----------------------------------+
| 9374 |
+-----------------------------------+
1 row in set (0.00 sec)
mysql> SELECT DATEDIFF('2050-03-25', CURDATE());
+-----------------------------------+
| DATEDIFF('2050-03-25', CURDATE()) |
+-----------------------------------+
| 10480 |
+-----------------------------------+
1 row in set (0.00 sec)
We can also pass current timestamp values as arguments to this function –
mysql> SELECT DATEDIFF(NOW(), '2015-09-05');
+-------------------------------+
| DATEDIFF(NOW(), '2015-09-05') |
+-------------------------------+
| 2140 |
+-------------------------------+
1 row in set (0.00 sec)
mysql> SELECT DATEDIFF(CURRENT_TIMESTAMP(), '2015-09-05');
+---------------------------------------------+
| DATEDIFF(CURRENT_TIMESTAMP(), '2015-09-05') |
+---------------------------------------------+
| 2140 |
+---------------------------------------------+
1 row in set (0.00 sec)
Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below –
mysql> CREATE TABLE MyPlayers(
ID INT,
First_Name VARCHAR(255),
Last_Name VARCHAR(255),
Date_Of_Birth date,
Place_Of_Birth VARCHAR(255),
Country VARCHAR(255),
PRIMARY KEY (ID)
);
Now, we will insert 7 records in MyPlayers table using INSERT statements −
mysql> insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');
mysql> insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');
mysql> insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');
mysql> insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');
mysql> insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');
mysql> insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');
mysql> insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');
Following query calculates the age of the players in days —
mysql> SELECT First_Name, Last_Name, Date_Of_Birth, Country, DATEDIFF(CURDATE(), Date_Of_Birth) as Age_In_Days FROM MyPlayers;
+------------+------------+---------------+-------------+-------------+
| First_Name | Last_Name | Date_Of_Birth | Country | Age_In_Days |
+------------+------------+---------------+-------------+-------------+
| Shikhar | Dhawan | 1981-12-05 | India | 14463 |
| Jonathan | Trott | 1981-04-22 | SouthAfrica | 14690 |
| Kumara | Sangakkara | 1977-10-27 | Srilanka | 15963 |
| Virat | Kohli | 1988-11-05 | India | 11936 |
| Rohit | Sharma | 1987-04-30 | India | 12491 |
| Ravindra | Jadeja | 1988-12-06 | India | 11905 |
| James | Anderson | 1982-06-30 | England | 14256 |
+------------+------------+---------------+-------------+-------------+
7 rows in set (0.11 sec)
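DATEDIFF() always measures the difference in days. If the age is wanted in whole years instead, MySQL's TIMESTAMPDIFF() function can be used on the same table; a sketch (the result depends on the current date, so no output is shown):

mysql> SELECT First_Name, Last_Name, TIMESTAMPDIFF(YEAR, Date_Of_Birth, CURDATE()) AS Age_In_Years FROM MyPlayers;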
Let us create another table with name Sales in MySQL database using CREATE statement as follows –
mysql> CREATE TABLE sales(
ID INT,
ProductName VARCHAR(255),
CustomerName VARCHAR(255),
DispatchDate date,
DispatchTime time,
Price INT,
Location VARCHAR(255)
);
Query OK, 0 rows affected (2.22 sec)
Now, we will insert 5 records in Sales table using INSERT statements −
insert into sales values (1, 'Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad');
insert into sales values (2, 'Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam');
insert into sales values (3, 'Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada');
insert into sales values (4, 'Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai');
insert into sales values (5, 'Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa');
Following is another example of this function —
mysql> SELECT ProductName, CustomerName, DispatchDate, Price, DATEDIFF(CURDATE(), DispatchDate) as difference_in_days FROM sales;
+-------------+--------------+--------------+-------+--------------------+
| ProductName | CustomerName | DispatchDate | Price | difference_in_days |
+-------------+--------------+--------------+-------+--------------------+
| Key-Board | Raja | 2019-09-01 | 7000 | 679 |
| Earphones | Roja | 2019-05-01 | 2000 | 802 |
| Mouse | Puja | 2019-03-01 | 3000 | 863 |
| Mobile | Vanaja | 2019-03-01 | 9000 | 863 |
| Headset | Jalaja | 2019-04-06 | 6000 | 827 |
+-------------+--------------+--------------+-------+--------------------+
5 rows in set (0.00 sec)
Suppose we have created a table named Subscribers with 5 records in it using the following queries –
mysql> CREATE TABLE Subscribers(
SubscriberName VARCHAR(255),
PackageName VARCHAR(255),
SubscriptionDate date
);
insert into Subscribers values('Raja', 'Premium', Date('2020-10-21'));
insert into Subscribers values('Roja', 'Basic', Date('2020-11-26'));
insert into Subscribers values('Puja', 'Moderate', Date('2021-03-07'));
insert into Subscribers values('Vanaja', 'Basic', Date('2021-02-21'));
insert into Subscribers values('Jalaja', 'Premium', Date('2021-01-30'));
Following query calculates and displays the number of days that have passed since each subscription started (aliased as Remaining_Days in the query) —
mysql> SELECT SubscriberName, PackageName, SubscriptionDate, DATEDIFF(CURDATE(), SubscriptionDate) as Remaining_Days FROM Subscribers;
+----------------+-------------+------------------+----------------+
| SubscriberName | PackageName | SubscriptionDate | Remaining_Days |
+----------------+-------------+------------------+----------------+
| Raja | Premium | 2020-10-21 | 263 |
| Roja | Basic | 2020-11-26 | 227 |
| Puja | Moderate | 2021-03-07 | 126 |
| Vanaja | Basic | 2021-02-21 | 140 |
| Jalaja | Premium | 2021-01-30 | 162 |
+----------------+-------------+------------------+----------------+
5 rows in set (0.11 sec) | [
{
"code": null,
"e": 2772,
"s": 2441,
"text": "The DATE, DATETIME and TIMESTAMP datatypes in MySQL are used to store the date, date and time, time stamp values respectively. Where a time stamp is a numerical value representing the number of milliseconds from '1970-01-01 00:00:01' UTC (epoch) to the specified time. MySQL provides a set of functions to manipulate these values."
},
{
"code": null,
"e": 3118,
"s": 2772,
"text": "The MYSQL DATEDIFF() function accepts two date or, date-time values as parameters, calculates the difference between them (argument1-argument2) and returns the result. This function returns difference between the given date values in the form of days. This function includes only the date parts of the arguments while calculating the difference."
},
{
"code": null,
"e": 3166,
"s": 3118,
"text": "Following is the syntax of the above function –"
},
{
"code": null,
"e": 3190,
"s": 3166,
"text": "DATEDIFF(expr1, expr2)\n"
},
{
"code": null,
"e": 3260,
"s": 3190,
"text": "Following example demonstrates the usage of the DATEDIFF() function –"
},
{
"code": null,
"e": 3541,
"s": 3260,
"text": "mysql> SELECT DATEDIFF('2015-09-05', '1989-03-25');\n+--------------------------------------+\n| DATEDIFF('2015-09-05', '1989-03-25') |\n+--------------------------------------+\n| 9660 |\n+--------------------------------------+\n1 row in set (0.09 sec)"
},
{
"code": null,
"e": 3589,
"s": 3541,
"text": "Following is another example of this function –"
},
{
"code": null,
"e": 3870,
"s": 3589,
"text": "mysql> SELECT DATEDIFF('2019-05-25', '2019-05-05');\n+--------------------------------------+\n| DATEDIFF('2019-05-25', '2019-05-05') |\n+--------------------------------------+\n| 20 |\n+--------------------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 3955,
"s": 3870,
"text": "In the following example we are passing DATETIME value as argument to this function–"
},
{
"code": null,
"e": 4679,
"s": 3955,
"text": "mysql> SELECT DATEDIFF('2018-05-23 20:40:32', '1996-12-07');\n+-----------------------------------------------+\n| DATEDIFF('2018-05-23 20:40:32', '1996-12-07') |\n+-----------------------------------------------+\n| 7837 |\n+-----------------------------------------------+\n1 row in set (0.00 sec)\nmysql> SELECT DATEDIFF('2018-05-23 20:40:32', '1996-12-07 20:15:40');\n+--------------------------------------------------------+\n| DATEDIFF('2018-05-23 20:40:32', '1996-12-07 20:15:40') |\n+--------------------------------------------------------+\n| 7837 |\n+--------------------------------------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 4787,
"s": 4679,
"text": "In the following example we are passing the result of CURDATE() as an argument to the DATEDIFF() function —"
},
{
"code": null,
"e": 5313,
"s": 4787,
"text": "mysql> SELECT DATEDIFF(CURDATE(), '1995-11-15');\n+-----------------------------------+\n| DATEDIFF(CURDATE(), '1995-11-15') |\n+-----------------------------------+\n| 9374 |\n+-----------------------------------+\n1 row in set (0.00 sec)\nmysql> SELECT DATEDIFF('2050-03-25', CURDATE());\n+-----------------------------------+\n| DATEDIFF('2050-03-25', CURDATE()) |\n+-----------------------------------+\n| 10480 |\n+-----------------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 5387,
"s": 5313,
"text": "We can also pass current timestamp values as arguments to this function –"
},
{
"code": null,
"e": 5949,
"s": 5387,
"text": "mysql> SELECT DATEDIFF(NOW(), '2015-09-05');\n+-------------------------------+\n| DATEDIFF(NOW(), '2015-09-05') |\n+-------------------------------+\n| 2140 |\n+-------------------------------+\n1 row in set (0.00 sec)\nmysql> SELECT DATEDIFF(CURRENT_TIMESTAMP(), '2015-09-05');\n+---------------------------------------------+\n| DATEDIFF(CURRENT_TIMESTAMP(), '2015-09-05') |\n+---------------------------------------------+\n| 2140 |\n+---------------------------------------------+\n1 row in set (0.00 sec)"
},
{
"code": null,
"e": 6049,
"s": 5949,
"text": "Let us create a table with name MyPlayers in MySQL database using CREATE statement as shown below –"
},
{
"code": null,
"e": 6235,
"s": 6049,
"text": "mysql> CREATE TABLE MyPlayers(\n\tID INT,\n\tFirst_Name VARCHAR(255),\n\tLast_Name VARCHAR(255),\n\tDate_Of_Birth date,\n\tPlace_Of_Birth VARCHAR(255),\n\tCountry VARCHAR(255),\n\tPRIMARY KEY (ID)\n);"
},
{
"code": null,
"e": 6310,
"s": 6235,
"text": "Now, we will insert 7 records in MyPlayers table using INSERT statements −"
},
{
"code": null,
"e": 7021,
"s": 6310,
"text": "mysql> insert into MyPlayers values(1, 'Shikhar', 'Dhawan', DATE('1981-12-05'), 'Delhi', 'India');\nmysql> insert into MyPlayers values(2, 'Jonathan', 'Trott', DATE('1981-04-22'), 'CapeTown', 'SouthAfrica');\nmysql> insert into MyPlayers values(3, 'Kumara', 'Sangakkara', DATE('1977-10-27'), 'Matale', 'Srilanka');\nmysql> insert into MyPlayers values(4, 'Virat', 'Kohli', DATE('1988-11-05'), 'Delhi', 'India');\nmysql> insert into MyPlayers values(5, 'Rohit', 'Sharma', DATE('1987-04-30'), 'Nagpur', 'India');\nmysql> insert into MyPlayers values(6, 'Ravindra', 'Jadeja', DATE('1988-12-06'), 'Nagpur', 'India');\nmysql> insert into MyPlayers values(7, 'James', 'Anderson', DATE('1982-06-30'), 'Burnley', 'England');"
},
{
"code": null,
"e": 7081,
"s": 7021,
"text": "Following query calculates the age of the players in days —"
},
{
"code": null,
"e": 8025,
"s": 7081,
"text": "mysql> SELECT First_Name, Last_Name, Date_Of_Birth, Country, DATEDIFF(CURDATE(), Date_Of_Birth) as Age_In_Days FROM MyPlayers;\n+------------+------------+---------------+-------------+-------------+\n| First_Name | Last_Name | Date_Of_Birth | Country | Age_In_Days |\n+------------+------------+---------------+-------------+-------------+\n| Shikhar | Dhawan | 1981-12-05 | India | 14463 |\n| Jonathan | Trott | 1981-04-22 | SouthAfrica | 14690 |\n| Kumara | Sangakkara | 1977-10-27 | Srilanka | 15963 |\n| Virat | Kohli | 1988-11-05 | India | 11936 |\n| Rohit | Sharma | 1987-04-30 | India | 12491 |\n| Ravindra | Jadeja | 1988-12-06 | India | 11905 |\n| James | Anderson | 1982-06-30 | England | 14256 |\n+------------+------------+---------------+-------------+-------------+\n7 rows in set (0.11 sec)"
},
{
"code": null,
"e": 8123,
"s": 8025,
"text": "Let us create another table with name Sales in MySQL database using CREATE statement as follows –"
},
{
"code": null,
"e": 8329,
"s": 8123,
"text": "mysql> CREATE TABLE sales(\n\tID INT,\n\tProductName VARCHAR(255),\n\tCustomerName VARCHAR(255),\n\tDispatchDate date,\n\tDispatchTime time,\n\tPrice INT,\n\tLocation VARCHAR(255)\n);\nQuery OK, 0 rows affected (2.22 sec)"
},
{
"code": null,
"e": 8400,
"s": 8329,
"text": "Now, we will insert 5 records in Sales table using INSERT statements −"
},
{
"code": null,
"e": 8933,
"s": 8400,
"text": "insert into sales values (1, 'Key-Board', 'Raja', DATE('2019-09-01'), TIME('11:00:00'), 7000, 'Hyderabad');\ninsert into sales values (2, 'Earphones', 'Roja', DATE('2019-05-01'), TIME('11:00:00'), 2000, 'Vishakhapatnam');\ninsert into sales values (3, 'Mouse', 'Puja', DATE('2019-03-01'), TIME('10:59:59'), 3000, 'Vijayawada');\ninsert into sales values (4, 'Mobile', 'Vanaja', DATE('2019-03-01'), TIME('10:10:52'), 9000, 'Chennai');\ninsert into sales values (5, 'Headset', 'Jalaja', DATE('2019-04-06'), TIME('11:08:59'), 6000, 'Goa');"
},
{
"code": null,
"e": 8981,
"s": 8933,
"text": "Following is another example of this function —"
},
{
"code": null,
"e": 9811,
"s": 8981,
"text": "mysql> SELECT ProductName, CustomerName, DispatchDate, Price, DATEDIFF(CURDATE(), DispatchDate) as difference_in_days FROM sales;\n+-------------+--------------+--------------+-------+--------------------+\n| ProductName | CustomerName | DispatchDate | Price | difference_in_days |\n+-------------+--------------+--------------+-------+--------------------+\n| Key-Board | Raja | 2019-09-01 | 7000 | 679 |\n| Earphones | Roja | 2019-05-01 | 2000 | 802 |\n| Mouse | Puja | 2019-03-01 | 3000 | 863 |\n| Mobile | Vanaja | 2019-03-01 | 9000 | 863 |\n| Headset | Jalaja | 2019-04-06 | 6000 | 827 |\n+-------------+--------------+--------------+-------+--------------------+\n5 rows in set (0.00 sec)"
},
{
"code": null,
"e": 9912,
"s": 9811,
"text": "Suppose we have created a table named Subscribers with 5 records in it using the following queries –"
},
{
"code": null,
"e": 10384,
"s": 9912,
"text": "mysql> CREATE TABLE Subscribers(\n\tSubscriberName VARCHAR(255),\n\tPackageName VARCHAR(255),\n\tSubscriptionDate date\n);\ninsert into Subscribers values('Raja', 'Premium', Date('2020-10-21'));\ninsert into Subscribers values('Roja', 'Basic', Date('2020-11-26'));\ninsert into Subscribers values('Puja', 'Moderate', Date('2021-03-07'));\ninsert into Subscribers values('Vanaja', 'Basic', Date('2021-02-21'));\ninsert into Subscribers values('Jalaja', 'Premium', Date('2021-01-30'));"
},
{
"code": null,
"e": 10488,
"s": 10384,
"text": "Following query calculates and displays the remaining number of days for the subscription to complete —"
}
] |
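A sketch of such a query, assuming for illustration that every package is valid for 365 days from its SubscriptionDate (the 365-day duration is a hypothetical choice), could be:

mysql> SELECT SubscriberName, PackageName, DATEDIFF(DATE_ADD(SubscriptionDate, INTERVAL 365 DAY), CURDATE()) AS Remaining_Days FROM Subscribers;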
Merge two sorted linked lists such that merged list is in reverse order | 22 Mar, 2022
Given two linked lists sorted in increasing order, merge them in such a way that the resulting list is in decreasing order (reverse order).
Examples:
Input: a: 5->10->15->40
b: 2->3->20
Output: res: 40->20->15->10->5->3->2
Input: a: NULL
b: 2->3->20
Output: res: 20->3->2
A Simple Solution is to do the following. 1) Reverse the first list 'a'. 2) Reverse the second list 'b'. 3) Merge the two reversed lists. Another Simple Solution is to first merge both lists, then reverse the merged list. Both of the above solutions require two traversals of the linked lists.
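As a quick illustration, a Python sketch of the merge-then-reverse variant might look like this (Node is assumed to expose key and next, matching the implementations further below; both input lists are assumed non-empty):

# Sketch of the two-traversal idea: a standard merge in increasing
# order followed by an in-place reversal of the merged list.
def merge_then_reverse(a, b):
    # First traversal: merge in increasing order
    if a.key <= b.key:
        head = a
        a = a.next
    else:
        head = b
        b = b.next
    tail = head
    while a and b:
        if a.key <= b.key:
            tail.next = a
            a = a.next
        else:
            tail.next = b
            b = b.next
        tail = tail.next
    tail.next = a if a else b
    # Second traversal: reverse the merged list in place
    prev, cur = None, head
    while cur:
        cur.next, prev, cur = prev, cur, cur.next
    return prev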
How to solve without reverse, with O(1) auxiliary space (in-place) and only one traversal of both lists? The idea is to follow a merge-style process. Initialize the result list as empty. Traverse both lists from beginning to end, compare the current nodes of both lists and insert the smaller of the two at the beginning of the result list.
1) Initialize result list as empty: res = NULL.
2) Let 'a' and 'b' be heads first and second lists respectively.
3) While (a != NULL and b != NULL)
a) Find the smaller of two (Current 'a' and 'b')
b) Insert the smaller value node at the front of result.
c) Move ahead in the list of smaller node.
4) If 'b' becomes NULL before 'a', insert all nodes of 'a'
into result list at the beginning.
5) If 'a' becomes NULL before 'b', insert all nodes of 'b'
into result list at the beginning.
Below is the implementation of the above solution.
C++14
Java
Python3
C#
Javascript
/* Given two sorted non-empty linked lists. Merge them in
   such a way that the result list will be in reverse order.
   Reversing of linked list is not allowed. Also, extra
   space should be O(1) */
#include <iostream>
using namespace std;

/* Link list Node */
struct Node
{
    int key;
    struct Node* next;
};

// Given two non-empty linked lists 'a' and 'b'
Node* SortedMerge(Node *a, Node *b)
{
    // If both lists are empty
    if (a == NULL && b == NULL)
        return NULL;

    // Initialize head of resultant list
    Node *res = NULL;

    // Traverse both lists while both of them
    // have nodes.
    while (a != NULL && b != NULL)
    {
        // If a's current value is smaller or equal to
        // b's current value.
        if (a->key <= b->key)
        {
            // Store next of current Node in first list
            Node *temp = a->next;

            // Add 'a' at the front of resultant list
            a->next = res;
            res = a;

            // Move ahead in first list
            a = temp;
        }

        // If a's value is greater. Below steps are similar
        // to above (Only 'a' is replaced with 'b')
        else
        {
            Node *temp = b->next;
            b->next = res;
            res = b;
            b = temp;
        }
    }

    // If second list reached end, but first list has
    // nodes. Add remaining nodes of first list at the
    // front of result list
    while (a != NULL)
    {
        Node *temp = a->next;
        a->next = res;
        res = a;
        a = temp;
    }

    // If first list reached end, but second list has
    // nodes. Add remaining nodes of second list at the
    // front of result list
    while (b != NULL)
    {
        Node *temp = b->next;
        b->next = res;
        res = b;
        b = temp;
    }

    return res;
}

/* Function to print Nodes in a given linked list */
void printList(struct Node *Node)
{
    while (Node != NULL)
    {
        cout << Node->key << " ";
        Node = Node->next;
    }
}

/* Utility function to create a new node with given key */
Node *newNode(int key)
{
    Node *temp = new Node;
    temp->key = key;
    temp->next = NULL;
    return temp;
}

/* Driver program to test above functions*/
int main()
{
    /* Start with the empty list */
    struct Node* res = NULL;

    /* Let us create two sorted linked lists to test
       the above functions. Created lists shall be
       a: 5->10->15
       b: 2->3->20 */
    Node *a = newNode(5);
    a->next = newNode(10);
    a->next->next = newNode(15);

    Node *b = newNode(2);
    b->next = newNode(3);
    b->next->next = newNode(20);

    cout << "List A before merge: \n";
    printList(a);

    cout << "\nList B before merge: \n";
    printList(b);

    /* merge 2 increasing order LLs in decreasing order */
    res = SortedMerge(a, b);

    cout << "\nMerged Linked List is: \n";
    printList(res);

    return 0;
}
// Java program to merge two sorted linked lists such that
// the merged list is in reverse order

// Linked List Class
class LinkedList {

    Node head; // head of list
    static Node a, b;

    /* Node Class */
    static class Node {

        int data;
        Node next;

        // Constructor to create a new node
        Node(int d) {
            data = d;
            next = null;
        }
    }

    void printlist(Node node) {
        while (node != null) {
            System.out.print(node.data + " ");
            node = node.next;
        }
    }

    Node sortedmerge(Node node1, Node node2) {

        // if both the nodes are null
        if (node1 == null && node2 == null) {
            return null;
        }

        // resultant node
        Node res = null;

        // if both of them have nodes present traverse them
        while (node1 != null && node2 != null) {

            // Now compare both nodes current data
            if (node1.data <= node2.data) {
                Node temp = node1.next;
                node1.next = res;
                res = node1;
                node1 = temp;
            } else {
                Node temp = node2.next;
                node2.next = res;
                res = node2;
                node2 = temp;
            }
        }

        // If second list reached end, but first list has
        // nodes. Add remaining nodes of first list at the
        // front of result list
        while (node1 != null) {
            Node temp = node1.next;
            node1.next = res;
            res = node1;
            node1 = temp;
        }

        // If first list reached end, but second list has
        // nodes. Add remaining nodes of second list at the
        // front of result list
        while (node2 != null) {
            Node temp = node2.next;
            node2.next = res;
            res = node2;
            node2 = temp;
        }
        return res;
    }

    public static void main(String[] args) {

        LinkedList list = new LinkedList();
        Node result = null;

        /* Let us create two sorted linked lists to test
           the above functions. Created lists shall be
           a: 5->10->15
           b: 2->3->20 */
        list.a = new Node(5);
        list.a.next = new Node(10);
        list.a.next.next = new Node(15);

        list.b = new Node(2);
        list.b.next = new Node(3);
        list.b.next.next = new Node(20);

        System.out.println("List a before merge :");
        list.printlist(a);

        System.out.println("");
        System.out.println("List b before merge :");
        list.printlist(b);

        // merge two sorted linked lists in decreasing order
        result = list.sortedmerge(a, b);

        System.out.println("");
        System.out.println("Merged linked list : ");
        list.printlist(result);
    }
}

// This code has been contributed by Mayank Jaiswal
# Given two sorted non-empty linked lists. Merge them in
# such a way that the result list will be in reverse
# order. Reversing of linked list is not allowed. Also,
# extra space should be O(1)

# Node of a linked list
class Node:
    def __init__(self, next = None, key = None):
        self.next = next
        self.key = key

# Given two non-empty linked lists 'a' and 'b'
def SortedMerge(a, b):

    # If both lists are empty
    if (a == None and b == None):
        return None

    # Initialize head of resultant list
    res = None

    # Traverse both lists while both of them
    # have nodes.
    while (a != None and b != None):

        # If a's current value is smaller or equal to
        # b's current value.
        if (a.key <= b.key):

            # Store next of current Node in first list
            temp = a.next

            # Add 'a' at the front of resultant list
            a.next = res
            res = a

            # Move ahead in first list
            a = temp

        # If a's value is greater. Below steps are similar
        # to above (Only 'a' is replaced with 'b')
        else:
            temp = b.next
            b.next = res
            res = b
            b = temp

    # If second list reached end, but first list has
    # nodes. Add remaining nodes of first list at the
    # front of result list
    while (a != None):
        temp = a.next
        a.next = res
        res = a
        a = temp

    # If first list reached end, but second list has
    # nodes. Add remaining nodes of second list at the
    # front of result list
    while (b != None):
        temp = b.next
        b.next = res
        res = b
        b = temp

    return res

# Function to print Nodes in a given linked list
def printList(node):
    while (node != None):
        print(node.key, end = " ")
        node = node.next

# Utility function to create a new node with
# given key
def newNode(key):
    temp = Node()
    temp.key = key
    temp.next = None
    return temp

# Driver program to test above functions

# Start with the empty list
res = None

# Let us create two sorted linked lists to test
# the above functions. Created lists shall be
# a: 5->10->15
# b: 2->3->20
a = newNode(5)
a.next = newNode(10)
a.next.next = newNode(15)

b = newNode(2)
b.next = newNode(3)
b.next.next = newNode(20)

print("List A before merge: ")
printList(a)

print("\nList B before merge: ")
printList(b)

# merge 2 increasing order LLs in decreasing order
res = SortedMerge(a, b)

print("\nMerged Linked List is: ")
printList(res)

# This code is contributed by Arnab Kundu
// C# program to merge two sorted
// linked lists such that the merged
// list is in reverse order

// Linked List Class
using System;

class LinkedList
{
    public Node head; // head of list
    static Node a, b;

    /* Node Class */
    public class Node
    {
        public int data;
        public Node next;

        // Constructor to create a new node
        public Node(int d)
        {
            data = d;
            next = null;
        }
    }

    void printlist(Node node)
    {
        while (node != null)
        {
            Console.Write(node.data + " ");
            node = node.next;
        }
    }

    Node sortedmerge(Node node1, Node node2)
    {
        // if both the nodes are null
        if (node1 == null && node2 == null)
        {
            return null;
        }

        // resultant node
        Node res = null;

        // if both of them have nodes
        // present traverse them
        while (node1 != null && node2 != null)
        {
            // Now compare both nodes current data
            if (node1.data <= node2.data)
            {
                Node temp = node1.next;
                node1.next = res;
                res = node1;
                node1 = temp;
            }
            else
            {
                Node temp = node2.next;
                node2.next = res;
                res = node2;
                node2 = temp;
            }
        }

        // If second list reached end, but first
        // list has nodes. Add remaining nodes of
        // first list at the front of result list
        while (node1 != null)
        {
            Node temp = node1.next;
            node1.next = res;
            res = node1;
            node1 = temp;
        }

        // If first list reached end, but second
        // list has nodes. Add remaining nodes of
        // second list at the front of result list
        while (node2 != null)
        {
            Node temp = node2.next;
            node2.next = res;
            res = node2;
            node2 = temp;
        }
        return res;
    }

    // Driver code
    public static void Main(String[] args)
    {
        LinkedList list = new LinkedList();
        Node result = null;

        /* Let us create two sorted linked lists to test
           the above functions. Created lists shall be
           a: 5->10->15
           b: 2->3->20 */
        LinkedList.a = new Node(5);
        LinkedList.a.next = new Node(10);
        LinkedList.a.next.next = new Node(15);

        LinkedList.b = new Node(2);
        LinkedList.b.next = new Node(3);
        LinkedList.b.next.next = new Node(20);

        Console.WriteLine("List a before merge :");
        list.printlist(a);

        Console.WriteLine("");
        Console.WriteLine("List b before merge :");
        list.printlist(b);

        // merge two sorted linked lists in decreasing order
        result = list.sortedmerge(a, b);

        Console.WriteLine("");
        Console.WriteLine("Merged linked list : ");
        list.printlist(result);
    }
}

// This code has been contributed by 29AjayKumar
<script>

// Javascript program to merge two
// sorted linked lists such that the merged
// list is in reverse order

// Node Class
class Node
{
    constructor(d)
    {
        this.data = d;
        this.next = null;
    }
}

// Head of list
let head;

let a, b;

function printlist(node)
{
    while (node != null)
    {
        document.write(node.data + " ");
        node = node.next;
    }
}

function sortedmerge(node1, node2)
{
    // If both the nodes are null
    if (node1 == null && node2 == null)
    {
        return null;
    }

    // Resultant node
    let res = null;

    // If both of them have nodes present
    // traverse them
    while (node1 != null && node2 != null)
    {
        // Now compare both nodes current data
        if (node1.data <= node2.data)
        {
            let temp = node1.next;
            node1.next = res;
            res = node1;
            node1 = temp;
        }
        else
        {
            let temp = node2.next;
            node2.next = res;
            res = node2;
            node2 = temp;
        }
    }

    // If second list reached end, but
    // first list has nodes. Add
    // remaining nodes of first
    // list at the front of result list
    while (node1 != null)
    {
        let temp = node1.next;
        node1.next = res;
        res = node1;
        node1 = temp;
    }

    // If first list reached end, but
    // second list has nodes. Add
    // remaining nodes of second
    // list at the front of result list
    while (node2 != null)
    {
        let temp = node2.next;
        node2.next = res;
        res = node2;
        node2 = temp;
    }
    return res;
}

// Driver code
let result = null;

/* Let us create two sorted linked lists to test
   the above functions. Created lists shall be
   a: 5->10->15
   b: 2->3->20 */
a = new Node(5);
a.next = new Node(10);
a.next.next = new Node(15);

b = new Node(2);
b.next = new Node(3);
b.next.next = new Node(20);

document.write("List a before merge :<br>");
printlist(a);
document.write("<br>");
document.write("List b before merge :<br>");
printlist(b);

// Merge two sorted linked lists in decreasing order
result = sortedmerge(a, b);
document.write("<br>");
document.write("Merged linked list : <br>");
printlist(result);

// This code is contributed by rag2127

</script>
Output:
List A before merge:
5 10 15
List B before merge:
2 3 20
Merged Linked List is:
20 15 10 5 3 2
Time Complexity: O(N)
Auxiliary Space: O(1)
This solution traverses both lists only once, doesn't require reversing and works in-place. This article is contributed by Mohammed Raqeeb. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
29AjayKumar
nidhi_biet
andrew1234
rag2127
simmytarika5
rohitsingh07052
Merge Sort
Microsoft
Reverse
Linked List
{
"code": null,
"e": 54,
"s": 26,
"text": "\n22 Mar, 2022"
},
{
"code": null,
"e": 188,
"s": 54,
"text": "Given two linked lists sorted in increasing order. Merge them such a way that the result list is in decreasing order (reverse order)."
},
{
"code": null,
"e": 199,
"s": 188,
"text": "Examples: "
},
{
"code": null,
"e": 342,
"s": 199,
"text": "Input: a: 5->10->15->40\n b: 2->3->20 \nOutput: res: 40->20->15->10->5->3->2\n\nInput: a: NULL\n b: 2->3->20 \nOutput: res: 20->3->2"
},
{
"code": null,
"e": 611,
"s": 342,
"text": "A Simple Solution is to do following. 1) Reverse first list ‘a’. 2) Reverse second list ‘b’. 3) Merge two reversed lists.Another Simple Solution is first Merge both lists, then reverse the merged list.Both of the above solutions require two traversals of linked list. "
},
{
"code": null,
"e": 930,
"s": 611,
"text": "How to solve without reverse, O(1) auxiliary space (in-place) and only one traversal of both lists? The idea is to follow merge style process. Initialize result list as empty. Traverse both lists from beginning to end. Compare current nodes of both lists and insert smaller of two at the beginning of the result list. "
},
{
"code": null,
"e": 1437,
"s": 930,
"text": "1) Initialize result list as empty: res = NULL.\n2) Let 'a' and 'b' be heads first and second lists respectively.\n3) While (a != NULL and b != NULL)\n a) Find the smaller of two (Current 'a' and 'b')\n b) Insert the smaller value node at the front of result.\n c) Move ahead in the list of smaller node. \n4) If 'b' becomes NULL before 'a', insert all nodes of 'a' \n into result list at the beginning.\n5) If 'a' becomes NULL before 'b', insert all nodes of 'a' \n into result list at the beginning. "
},
{
"code": null,
"e": 1484,
"s": 1437,
"text": "Below is the implementation of above solution."
},
{
"code": null,
"e": 1490,
"s": 1484,
"text": "C++14"
},
{
"code": null,
"e": 1495,
"s": 1490,
"text": "Java"
},
{
"code": null,
"e": 1503,
"s": 1495,
"text": "Python3"
},
{
"code": null,
"e": 1506,
"s": 1503,
"text": "C#"
},
{
"code": null,
"e": 1517,
"s": 1506,
"text": "Javascript"
},
{
"code": "/* Given two sorted non-empty linked lists. Merge them in such a way that the result list will be in reverse order. Reversing of linked list is not allowed. Also, extra space should be O(1) */#include<iostream>using namespace std; /* Link list Node */struct Node{ int key; struct Node* next;}; // Given two non-empty linked lists 'a' and 'b'Node* SortedMerge(Node *a, Node *b){ // If both lists are empty if (a==NULL && b==NULL) return NULL; // Initialize head of resultant list Node *res = NULL; // Traverse both lists while both of then // have nodes. while (a != NULL && b != NULL) { // If a's current value is smaller or equal to // b's current value. if (a->key <= b->key) { // Store next of current Node in first list Node *temp = a->next; // Add 'a' at the front of resultant list a->next = res; res = a; // Move ahead in first list a = temp; } // If a's value is greater. Below steps are similar // to above (Only 'a' is replaced with 'b') else { Node *temp = b->next; b->next = res; res = b; b = temp; } } // If second list reached end, but first list has // nodes. Add remaining nodes of first list at the // front of result list while (a != NULL) { Node *temp = a->next; a->next = res; res = a; a = temp; } // If first list reached end, but second list has // node. Add remaining nodes of first list at the // front of result list while (b != NULL) { Node *temp = b->next; b->next = res; res = b; b = temp; } return res;} /* Function to print Nodes in a given linked list */void printList(struct Node *Node){ while (Node!=NULL) { cout << Node->key << \" \"; Node = Node->next; }} /* Utility function to create a new node with given key */Node *newNode(int key){ Node *temp = new Node; temp->key = key; temp->next = NULL; return temp;} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ struct Node* res = NULL; /* Let us create two sorted linked lists to test the above functions. Created lists shall be a: 5->10->15 b: 2->3->20 */ Node *a = newNode(5); a->next = newNode(10); a->next->next = newNode(15); Node *b = newNode(2); b->next = newNode(3); b->next->next = newNode(20); cout << \"List A before merge: \\n\"; printList(a); cout << \"\\nList B before merge: \\n\"; printList(b); /* merge 2 increasing order LLs in decreasing order */ res = SortedMerge(a, b); cout << \"\\nMerged Linked List is: \\n\"; printList(res); return 0;}",
"e": 4341,
"s": 1517,
"text": null
},
{
"code": "// Java program to merge two sorted linked list such that merged// list is in reverse order // Linked List Classclass LinkedList { Node head; // head of list static Node a, b; /* Node Class */ static class Node { int data; Node next; // Constructor to create a new node Node(int d) { data = d; next = null; } } void printlist(Node node) { while (node != null) { System.out.print(node.data + \" \"); node = node.next; } } Node sortedmerge(Node node1, Node node2) { // if both the nodes are null if (node1 == null && node2 == null) { return null; } // resultant node Node res = null; // if both of them have nodes present traverse them while (node1 != null && node2 != null) { // Now compare both nodes current data if (node1.data <= node2.data) { Node temp = node1.next; node1.next = res; res = node1; node1 = temp; } else { Node temp = node2.next; node2.next = res; res = node2; node2 = temp; } } // If second list reached end, but first list has // nodes. Add remaining nodes of first list at the // front of result list while (node1 != null) { Node temp = node1.next; node1.next = res; res = node1; node1 = temp; } // If first list reached end, but second list has // node. Add remaining nodes of first list at the // front of result list while (node2 != null) { Node temp = node2.next; node2.next = res; res = node2; node2 = temp; } return res; } public static void main(String[] args) { LinkedList list = new LinkedList(); Node result = null; /*Let us create two sorted linked lists to test the above functions. Created lists shall be a: 5->10->15 b: 2->3->20*/ list.a = new Node(5); list.a.next = new Node(10); list.a.next.next = new Node(15); list.b = new Node(2); list.b.next = new Node(3); list.b.next.next = new Node(20); System.out.println(\"List a before merge :\"); list.printlist(a); System.out.println(\"\"); System.out.println(\"List b before merge :\"); list.printlist(b); // merge two sorted linkedlist in decreasing order result = list.sortedmerge(a, b); System.out.println(\"\"); System.out.println(\"Merged linked list : \"); list.printlist(result); }} // This code has been contributed by Mayank Jaiswal",
"e": 7163,
"s": 4341,
"text": null
},
{
"code": "# Given two sorted non-empty linked lists. Merge them in# such a way that the result list will be in reverse# order. Reversing of linked list is not allowed. Also,# extra space should be O(1) # Node of a linked listclass Node: def __init__(self, next = None, data = None): self.next = next self.data = data # Given two non-empty linked lists 'a' and 'b'def SortedMerge(a,b): # If both lists are empty if (a == None and b == None): return None # Initialize head of resultant list res = None # Traverse both lists while both of then # have nodes. while (a != None and b != None): # If a's current value is smaller or equal to # b's current value. if (a.key <= b.key): # Store next of current Node in first list temp = a.next # Add 'a' at the front of resultant list a.next = res res = a # Move ahead in first list a = temp # If a's value is greater. Below steps are similar # to above (Only 'a' is replaced with 'b') else: temp = b.next b.next = res res = b b = temp # If second list reached end, but first list has # nodes. Add remaining nodes of first list at the # front of result list while (a != None): temp = a.next a.next = res res = a a = temp # If first list reached end, but second list has # node. Add remaining nodes of first list at the # front of result list while (b != None): temp = b.next b.next = res res = b b = temp return res # Function to print Nodes in a given linked listdef printList(Node): while (Node != None): print( Node.key, end = \" \") Node = Node.next # Utility function to create a new node with# given keydef newNode( key): temp = Node() temp.key = key temp.next = None return temp # Driver program to test above functions # Start with the empty listres = None # Let us create two sorted linked lists to test# the above functions. Created lists shall be# a: 5.10.15# b: 2.3.20a = newNode(5)a.next = newNode(10)a.next.next = newNode(15) b = newNode(2)b.next = newNode(3)b.next.next = newNode(20) print( \"List A before merge: \")printList(a) print( \"\\nList B before merge: \")printList(b) # merge 2 increasing order LLs in decreasing orderres = SortedMerge(a, b) print(\"\\nMerged Linked List is: \")printList(res) # This code is contributed by Arnab Kundu",
"e": 9749,
"s": 7163,
"text": null
},
{
"code": "// C# program to merge two sorted// linked list such that merged// list is in reverse order // Linked List Classusing System; class LinkedList{ public Node head; // head of list static Node a, b; /* Node Class */ public class Node { public int data; public Node next; // Constructor to create a new node public Node(int d) { data = d; next = null; } } void printlist(Node node) { while (node != null) { Console.Write(node.data + \" \"); node = node.next; } } Node sortedmerge(Node node1, Node node2) { // if both the nodes are null if (node1 == null && node2 == null) { return null; } // resultant node Node res = null; // if both of them have nodes // present traverse them while (node1 != null && node2 != null) { // Now compare both nodes current data if (node1.data <= node2.data) { Node temp = node1.next; node1.next = res; res = node1; node1 = temp; } else { Node temp = node2.next; node2.next = res; res = node2; node2 = temp; } } // If second list reached end, but first // list has nodes. Add remaining nodes of // first list at the front of result list while (node1 != null) { Node temp = node1.next; node1.next = res; res = node1; node1 = temp; } // If first list reached end, but second // list has node. Add remaining nodes of // first list at the front of result list while (node2 != null) { Node temp = node2.next; node2.next = res; res = node2; node2 = temp; } return res; } // Driver code public static void Main(String[] args) { LinkedList list = new LinkedList(); Node result = null; /*Let us create two sorted linked lists to test the above functions. Created lists shall be a: 5->10->15 b: 2->3->20*/ LinkedList.a = new Node(5); LinkedList.a.next = new Node(10); LinkedList.a.next.next = new Node(15); LinkedList.b = new Node(2); LinkedList.b.next = new Node(3); LinkedList.b.next.next = new Node(20); Console.WriteLine(\"List a before merge :\"); list.printlist(a); Console.WriteLine(\"\"); Console.WriteLine(\"List b before merge :\"); list.printlist(b); // merge two sorted linkedlist in decreasing order result = list.sortedmerge(a, b); Console.WriteLine(\"\"); Console.WriteLine(\"Merged linked list : \"); list.printlist(result); }} // This code has been contributed by 29AjayKumar",
"e": 12747,
"s": 9749,
"text": null
},
{
"code": "<script> // Javascript program to merge two// sorted linked list such that merged// list is in reverse order // Node Classclass Node{ constructor(d) { this.data = d; this.next = null; }} // Head of listlet head; let a, b; function printlist(node){ while (node != null) { document.write(node.data + \" \"); node = node.next; }} function sortedmerge(node1, node2){ // If both the nodes are null if (node1 == null && node2 == null) { return null; } // Resultant node let res = null; // If both of them have nodes present // traverse them while (node1 != null && node2 != null) { // Now compare both nodes current data if (node1.data <= node2.data) { let temp = node1.next; node1.next = res; res = node1; node1 = temp; } else { let temp = node2.next; node2.next = res; res = node2; node2 = temp; } } // If second list reached end, but // first list has nodes. Add // remaining nodes of first // list at the front of result list while (node1 != null) { let temp = node1.next; node1.next = res; res = node1; node1 = temp; } // If first list reached end, but // second list has node. Add // remaining nodes of first // list at the front of result list while (node2 != null) { let temp = node2.next; node2.next = res; res = node2; node2 = temp; } return res;} // Driver codelet result = null; /*Let us create two sorted linked lists to test the above functions. Created lists shall be a: 5->10->15 b: 2->3->20*/a = new Node(5);a.next = new Node(10);a.next.next = new Node(15); b = new Node(2);b.next = new Node(3);b.next.next = new Node(20); document.write(\"List a before merge :<br>\");printlist(a);document.write(\"<br>\");document.write(\"List b before merge :<br>\");printlist(b); // Merge two sorted linkedlist in decreasing orderresult = sortedmerge(a, b);document.write(\"<br>\");document.write(\"Merged linked list : <br>\");printlist(result); // This code is contributed by rag2127 </script>",
"e": 14999,
"s": 12747,
"text": null
},
{
"code": null,
"e": 15008,
"s": 14999,
"text": "Output: "
},
{
"code": null,
"e": 15109,
"s": 15008,
"text": "List A before merge: \n5 10 15 \nList B before merge: \n2 3 20 \nMerged Linked List is: \n20 15 10 5 3 2 "
},
{
"code": null,
"e": 15131,
"s": 15109,
"text": "Time Complexity: O(N)"
},
{
"code": null,
"e": 15153,
"s": 15131,
"text": "Auxiliary Space: O(1)"
},
{
"code": null,
"e": 15415,
"s": 15153,
"text": "This solution traverses both lists only once, doesn’t require reverse and works in-place.This article is contributed by Mohammed Raqeeb. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above "
},
{
"code": null,
"e": 15427,
"s": 15415,
"text": "29AjayKumar"
},
{
"code": null,
"e": 15438,
"s": 15427,
"text": "nidhi_biet"
},
{
"code": null,
"e": 15449,
"s": 15438,
"text": "andrew1234"
},
{
"code": null,
"e": 15457,
"s": 15449,
"text": "rag2127"
},
{
"code": null,
"e": 15470,
"s": 15457,
"text": "simmytarika5"
},
{
"code": null,
"e": 15486,
"s": 15470,
"text": "rohitsingh07052"
},
{
"code": null,
"e": 15497,
"s": 15486,
"text": "Merge Sort"
},
{
"code": null,
"e": 15507,
"s": 15497,
"text": "Microsoft"
},
{
"code": null,
"e": 15515,
"s": 15507,
"text": "Reverse"
},
{
"code": null,
"e": 15527,
"s": 15515,
"text": "Linked List"
},
{
"code": null,
"e": 15537,
"s": 15527,
"text": "Microsoft"
},
{
"code": null,
"e": 15549,
"s": 15537,
"text": "Linked List"
},
{
"code": null,
"e": 15560,
"s": 15549,
"text": "Merge Sort"
},
{
"code": null,
"e": 15568,
"s": 15560,
"text": "Reverse"
},
{
"code": null,
"e": 15666,
"s": 15568,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 15698,
"s": 15666,
"text": "Introduction to Data Structures"
},
{
"code": null,
"e": 15762,
"s": 15698,
"text": "What is Data Structure: Types, Classifications and Applications"
},
{
"code": null,
"e": 15790,
"s": 15762,
"text": "Merge Sort for Linked Lists"
},
{
"code": null,
"e": 15837,
"s": 15790,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 15892,
"s": 15837,
"text": "Find Length of a Linked List (Iterative and Recursive)"
},
{
"code": null,
"e": 15932,
"s": 15892,
"text": "Detect and Remove Loop in a Linked List"
},
{
"code": null,
"e": 15984,
"s": 15932,
"text": "Add two numbers represented by linked lists | Set 1"
},
{
"code": null,
"e": 16040,
"s": 15984,
"text": "Function to check if a singly linked list is palindrome"
},
{
"code": null,
"e": 16107,
"s": 16040,
"text": "Write a function to get the intersection point of two Linked Lists"
}
] |
Selenium - Multi Browser Testing | Users can execute scripts in multiple browsers simultaneously. For demonstration, we will use the same scenario that we had taken for Selenium Grid. In the Selenium Grid example, we had executed the scripts remotely; here we will execute the scripts locally.
First of all, ensure that you have the appropriate drivers downloaded. Please refer to the chapter "Selenium Grid" for downloading the IE and Chrome drivers.
For demonstration, we will run the percent calculator scenario in all the browsers simultaneously.
package TestNG;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.ie.InternetExplorerDriver;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.*;
import org.testng.annotations.*;
public class TestNGClass {
private WebDriver driver;
private String URL = "http://www.calculator.net";
@Parameters("browser")
@BeforeTest
public void launchapp(String browser) {
if (browser.equalsIgnoreCase("firefox")) {
System.out.println(" Executing on FireFox");
driver = new FirefoxDriver();
driver.get(URL);
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
driver.manage().window().maximize();
} else if (browser.equalsIgnoreCase("chrome")) {
System.out.println(" Executing on CHROME");
System.out.println("Executing on IE");
System.setProperty("webdriver.chrome.driver", "D:\\chromedriver.exe");
driver = new ChromeDriver();
driver.get(URL);
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
driver.manage().window().maximize();
} else if (browser.equalsIgnoreCase("ie")) {
System.out.println("Executing on IE");
System.setProperty("webdriver.ie.driver", "D:\\IEDriverServer.exe");
driver = new InternetExplorerDriver();
driver.get(URL);
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
driver.manage().window().maximize();
} else {
throw new IllegalArgumentException("The Browser Type is Undefined");
}
}
@Test
public void calculatepercent() {
// Click on Math Calculators
driver.findElement(By.xpath(".//*[@id = 'menu']/div[3]/a")).click();
// Click on Percent Calculators
driver.findElement(By.xpath(".//*[@id = 'menu']/div[4]/div[3]/a")).click();
// Enter value 10 in the first number of the percent Calculator
driver.findElement(By.id("cpar1")).sendKeys("10");
// Enter value 50 in the second number of the percent Calculator
driver.findElement(By.id("cpar2")).sendKeys("50");
// Click Calculate Button
driver.findElement(By.xpath(".//*[@id = 'content']/table/tbody/tr/td[2]/input")).click();
// Get the Result Text based on its xpath
String result =
driver.findElement(By.xpath(".//*[@id = 'content']/p[2]/span/font/b")).getText();
// Print a Log In message to the screen
System.out.println(" The Result is " + result);
if(result.equals("5")) {
System.out.println(" The Result is Pass");
} else {
System.out.println(" The Result is Fail");
}
}
@AfterTest
public void closeBrowser() {
driver.close();
}
}
Create an XML which will help us in parameterizing the browser name and don't forget to mention parallel="tests" in order to execute in all the browsers simultaneously.
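A minimal testng.xml along these lines might look as follows (the suite and test names here are illustrative; the parameter name and class name match the TestNGClass above):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<!-- parallel="tests" makes each <test> block, and hence each browser, run concurrently -->
<suite name="MultiBrowserSuite" parallel="tests" thread-count="3">
   <test name="ChromeTest">
      <parameter name="browser" value="chrome"/>
      <classes>
         <class name="TestNG.TestNGClass"/>
      </classes>
   </test>
   <test name="FirefoxTest">
      <parameter name="browser" value="firefox"/>
      <classes>
         <class name="TestNG.TestNGClass"/>
      </classes>
   </test>
   <test name="IETest">
      <parameter name="browser" value="ie"/>
      <classes>
         <class name="TestNG.TestNGClass"/>
      </classes>
   </test>
</suite>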
Execute the script by right-clicking on the XML file and selecting 'Run As' >> 'TestNG Suite'.
All the browsers will be launched simultaneously and the results will be printed in the console.
Note − To execute on IE successfully, ensure that the check box 'Enable Protected Mode' under the security Tab of 'IE Option' is either checked or unchecked across all zones.
TestNG results can be viewed in HTML format for detailed analysis. | [
{
"code": null,
"e": 2268,
"s": 2009,
"text": "Users can execute scripts in multiple browsers simultaneously. For demonstration, we will use the same scenario that we had taken for Selenium Grid. In the Selenium Grid example, we had executed the scripts remotely; here we will execute the scripts locally."
},
{
"code": null,
"e": 2415,
"s": 2268,
"text": "First of all, ensure that you have appropriate drivers downloaded. Please refer the chapter \"Selenium Grid\" for downloading IE and Chrome drivers."
},
{
"code": null,
"e": 2505,
"s": 2415,
"text": "For demonstration, we will perform percent calculator in all the browsers simultaneously."
},
{
"code": null,
"e": 5325,
"s": 2505,
"text": "package TestNG;\n\nimport org.openqa.selenium.chrome.ChromeDriver;\nimport org.openqa.selenium.firefox.FirefoxDriver;\nimport org.openqa.selenium.ie.InternetExplorerDriver;\nimport java.util.concurrent.TimeUnit;\nimport org.openqa.selenium.*;\nimport org.testng.annotations.*;\n\npublic class TestNGClass {\n private WebDriver driver;\n private String URL = \"http://www.calculator.net\";\n\n @Parameters(\"browser\")\n @BeforeTest\n public void launchapp(String browser) {\n\n if (browser.equalsIgnoreCase(\"firefox\")) {\n System.out.println(\" Executing on FireFox\");\n driver = new FirefoxDriver();\n driver.get(URL);\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n driver.manage().window().maximize();\n } else if (browser.equalsIgnoreCase(\"chrome\")) {\n System.out.println(\" Executing on CHROME\");\n System.out.println(\"Executing on IE\");\n System.setProperty(\"webdriver.chrome.driver\", \"D:\\\\chromedriver.exe\");\n driver = new ChromeDriver();\n driver.get(URL);\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n driver.manage().window().maximize();\n } else if (browser.equalsIgnoreCase(\"ie\")) {\n System.out.println(\"Executing on IE\");\n System.setProperty(\"webdriver.ie.driver\", \"D:\\\\IEDriverServer.exe\");\n driver = new InternetExplorerDriver();\n driver.get(URL);\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n driver.manage().window().maximize();\n } else {\n throw new IllegalArgumentException(\"The Browser Type is Undefined\");\n }\n }\n\n @Test\n public void calculatepercent() {\n // Click on Math Calculators\n driver.findElement(By.xpath(\".//*[@id = 'menu']/div[3]/a\")).click();\n\n // Click on Percent Calculators\n driver.findElement(By.xpath(\".//*[@id = 'menu']/div[4]/div[3]/a\")).click();\n\n // Enter value 10 in the first number of the percent Calculator\n driver.findElement(By.id(\"cpar1\")).sendKeys(\"10\");\n\n // Enter value 50 in the second number of the percent Calculator\n driver.findElement(By.id(\"cpar2\")).sendKeys(\"50\");\n\n // Click Calculate Button\n driver.findElement(By.xpath(\".//*[@id = 'content']/table/tbody/tr/td[2]/input\")).click();\n\n // Get the Result Text based on its xpath\n String result =\n driver.findElement(By.xpath(\".//*[@id = 'content']/p[2]/span/font/b\")).getText();\t\t \n\n // Print a Log In message to the screen\n System.out.println(\" The Result is \" + result);\n\n if(result.equals(\"5\")) {\n System.out.println(\" The Result is Pass\");\n } else {\n System.out.println(\" The Result is Fail\");\n }\n }\n\n @AfterTest\n public void closeBrowser() {\n driver.close();\n }\n}"
},
{
"code": null,
"e": 5494,
"s": 5325,
"text": "Create an XML which will help us in parameterizing the browser name and don't forget to mention parallel=\"tests\" in order to execute in all the browsers simultaneously."
},
{
"code": null,
"e": 5609,
"s": 5494,
"text": "Execute the script by performing right-click on the XML file and select 'Run As' >> 'TestNG' Suite as shown below."
},
{
"code": null,
"e": 5706,
"s": 5609,
"text": "All the browser would be launched simultaneously and the result would be printed in the console."
},
{
"code": null,
"e": 5881,
"s": 5706,
"text": "Note − To execute on IE successfully, ensure that the check box 'Enable Protected Mode' under the security Tab of 'IE Option' is either checked or unchecked across all zones."
}
] |
Half clique NP Complete problem | 08 Jun, 2021
A half clique in a graph is a set of n/2 vertices such that each vertex shares an edge with every other vertex, that is, the k = n/2 vertices of the graph form a complete graph.
Problem – Given a graph G(V, E), the problem is to determine whether the graph contains a clique of size at least k = |V|/2.
Explanation – An instance of the problem is an input specified to the problem. An instance of the Half-Clique problem is a graph G(V, E) and a positive integer k, and the problem is to check whether a clique of size k = |V|/2 exists in G. Since an NP Complete problem, by definition, is a problem which is both in NP and NP hard, the proof for the statement that a problem is NP Complete consists of two parts:
1. Half-Clique Problem is in NP: If any problem is in NP, then, given a ‘certificate’, which is a solution to the problem, and an instance of the problem (a graph G and a positive integer k, in this case), we will be able to verify (check whether the given solution is correct or not) the certificate in polynomial time. The certificate is a subset V’ of the vertices, which comprises the vertices belonging to the half-clique. We can validate this solution by checking that each pair of vertices belonging to the solution is adjacent, simply by verifying that they share an edge with each other. This can be done in polynomial time using the following strategy on a graph G(V,E):
flag=true
Count the number of vertices in the subset V'
If not equal to V/2 :
flag = false
Else :
For every pair {u,v} in the subset V’:
Check that these two vertices {u,v} share an edge
If there is no edge ,set flag to false and break
If flag is true:
Solution is correct
Else:
Solution is incorrect
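As a concrete illustration, a direct Python rendering of this check (the graph is assumed to be given as an adjacency-set dictionary; all names are illustrative) could be:

# A direct rendering of the verifier above. The graph is an
# adjacency-set dictionary mapping each vertex to its neighbours.
def verify_half_clique(adj, certificate):
    # The certificate must contain exactly |V|/2 vertices
    if len(certificate) != len(adj) // 2:
        return False
    vs = list(certificate)
    # Every pair {u, v} in the certificate must share an edge
    for i in range(len(vs)):
        for j in range(i + 1, len(vs)):
            if vs[j] not in adj[vs[i]]:
                return False
    return True

# Example: in this 4-vertex graph the set {0, 1} is a half clique
adj = {0: {1, 2}, 1: {0}, 2: {0}, 3: set()}
print(verify_half_clique(adj, {0, 1}))   # True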
2. Half-Clique Problem is NP Hard: To prove that the half-clique problem is NP Hard, we take the help of a problem which is already NP Hard and show that it can be reduced to the half-clique problem. For this, we consider the Clique problem, which is NP Complete (and hence NP Hard). Every instance of the clique problem, consisting of the graph G(V,E) and an integer k, can be converted to the required graph G’(V’,E’) and k’ of the half-clique problem. The deduction that can be made is that the graph G’ will have a clique of size n/2 iff the graph G has a clique of size k. Let m be the number of nodes in the graph G. We will now prove that the problem of computing the clique indeed boils down to the computation of the half-clique. The reduction can be proved by the following two propositions:
If k >= m/2, then for a constant number t, we add t nodes, each of degree 0, to form the graph G’. The graph G’ has a total of n = m + t nodes, that is, all the nodes of graph G along with the extra nodes, chosen so that n is equal to 2k for the given value of k. Now k = n/2. This can be done by taking t = 2k - m. Then, the graph G has a clique of size k if and only if the graph G’ has a clique of size k, which is exactly n/2.
If k < m/2 , then we add t additional nodes for the creation of graph G’. Edges can also be added from each new node to every other node in the graph. Therefore, any k-clique in G, for any arbitrary value of k combines with the t new nodes to make a (k+t)-clique in G’, since edges have been added between each pair of vertices. A k+t-sized clique in G’ must include at least k old nodes, which form a clique in the graph G. Therefore, the value of t is picked such that k+t = (m+t)/2, or t = m-2k, which makes the clique size in G’ equivalent to n/2 exactly.
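A hypothetical Python sketch of this reduction, covering both cases above (vertices are assumed to be the integers 0..m-1), might look like:

# Hypothetical sketch of the reduction: given a Clique instance
# (G, k), build a graph G' that contains a half clique iff G
# contains a k-clique. Vertices are the integers 0..m-1.
def reduce_clique_to_half_clique(adj, k):
    m = len(adj)
    g2 = {v: set(nbrs) for v, nbrs in adj.items()}
    if 2 * k >= m:
        # Case k >= m/2: add t = 2k - m isolated nodes, so that
        # |V(G')| = 2k and the target clique size is k = n/2
        for v in range(m, m + (2 * k - m)):
            g2[v] = set()
    else:
        # Case k < m/2: add t = m - 2k nodes joined to every other
        # node, so any k-clique of G extends to a (k + t)-clique,
        # and k + t = (m + t) / 2 is exactly half of |V(G')|
        for v in range(m, m + (m - 2 * k)):
            g2[v] = set(g2)        # v is not yet a key of g2
            for u in g2[v]:
                g2[u].add(v)
    return g2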
Picked
GATE CS
Theory of Computation & Automata
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Difference between Clustered and Non-clustered index
Introduction of Process Synchronization
Three address code in Compiler
Differences between IPv4 and IPv6
Phases of a Compiler
Introduction of Finite Automata
Difference between DFA and NFA
Turing Machine in TOC
Chomsky Hierarchy in Theory of Computation
Boyer-Moore Majority Voting Algorithm | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n08 Jun, 2021"
},
{
"code": null,
"e": 207,
"s": 28,
"text": "A half clique in a graph is a set of n/2 vertices such that each vertex shares an edge with every other vertex, that is, the k = n/2 vertices of the graph form a complete graph. "
},
{
"code": null,
"e": 326,
"s": 207,
"text": "Problem – Given a graph G(V,E) ,the problem is to determine if the graph contains a clique of size atleast k = |V/2|. "
},
{
"code": null,
"e": 738,
"s": 326,
"text": "Explanation- An instance of the problem is an input specified to the problem. An instance of the Half-Clique problem is a graph G (V, E) and a positive integer k, and the problem is to check whether a clique of size k = |V/2| exists in G. Since an NP Complete problem, by definition, is a problem which is both in NP and NP hard, the proof for the statement that a problem is NP Complete consists of two parts: "
},
{
"code": null,
"e": 1438,
"s": 738,
"text": "1. Half – Clique Problem is in NP :If any problem is in NP, then, given a ‘certificate’, which is a solution to the problem and an instance of the problem (a graph G and a positive integer k, in this case), we will be able to verify (check whether the solution given is correct or not) the certificate in polynomial time. The certificate is a subset V’ of the vertices, which comprises the vertices belonging to the half-clique. We can validate this solution by checking that each pair of vertices belonging to the solution are adjacent, by simply verifying that they share an edge with each other. This can be done in polynomial time, that is O(V +E) using the following strategy of graph G(V,E): "
},
{
"code": null,
"e": 1774,
"s": 1438,
"text": "flag=true\nCount the number of vertices in the subset V' \nIf not equal to V/2 :\n flag = false\nElse : \n For every pair {u,v} in the subset V’:\n Check that these two vertices {u,v} share an edge\n If there is no edge ,set flag to false and break\nIf flag is true:\n Solution is correct\nElse:\n Solution is incorrect "
},
{
"code": null,
"e": 2594,
"s": 1774,
"text": " 2. Half-Clique Problem is NP Hard :To prove that the half-clique problem is NP Hard, we take the help of a problem which is already NP Hard and show that this problem can be reduced to the half-clique problem. For this, we consider the Clique problem, which is NP Complete (and hence NP Hard). Every instance of the clique problem consisting of the graph G (V,E) and an integer k can be converted to the required graph G’ (V’,E’) and k’ of the half clique problem. The deduction that can be made is that the graph G’ will have a clique of size n/2 , iff the graph G has a clique of size k. Let m be the number of nodes in the graph G. We will now prove that the problem of computing the clique indeed boils down to the computation of the independent set. The reduction can be proved by the following two propositions: "
},
{
"code": null,
"e": 3041,
"s": 2594,
"text": "If k > = m/2, then for a constant number t, we add t nodes each of degree 0, for a graph G’. The graph G’ has a total number of nodes equivalent to n = m + t, that is, the summation of all the nodes of graph G along with the extra nodes, such that it is equivalent to 2k, for any arbitrary value of k. Now k = n/2. This can be done by taking t = 2k -m. Then, the graph G has a clique of size k if and only if the graph G’ has a clique of size k. "
},
{
"code": null,
"e": 3602,
"s": 3041,
"text": "If k < m/2 , then we add t additional nodes for the creation of graph G’. Edges can also be added from each new node to every other node in the graph. Therefore, any k-clique in G, for any arbitrary value of k combines with the t new nodes to make a (k+t)-clique in G’, since edges have been added between each pair of vertices. A k+t-sized clique in G’ must include at least k old nodes, which form a clique in the graph G. Therefore, the value of t is picked such that k+t = (m+t)/2, or t = m-2k, which makes the clique size in G’ equivalent to n/2 exactly."
},
{
"code": null,
"e": 3609,
"s": 3602,
"text": "Picked"
},
{
"code": null,
"e": 3617,
"s": 3609,
"text": "GATE CS"
},
{
"code": null,
"e": 3650,
"s": 3617,
"text": "Theory of Computation & Automata"
},
{
"code": null,
"e": 3748,
"s": 3650,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3801,
"s": 3748,
"text": "Difference between Clustered and Non-clustered index"
},
{
"code": null,
"e": 3841,
"s": 3801,
"text": "Introduction of Process Synchronization"
},
{
"code": null,
"e": 3872,
"s": 3841,
"text": "Three address code in Compiler"
},
{
"code": null,
"e": 3906,
"s": 3872,
"text": "Differences between IPv4 and IPv6"
},
{
"code": null,
"e": 3927,
"s": 3906,
"text": "Phases of a Compiler"
},
{
"code": null,
"e": 3959,
"s": 3927,
"text": "Introduction of Finite Automata"
},
{
"code": null,
"e": 3990,
"s": 3959,
"text": "Difference between DFA and NFA"
},
{
"code": null,
"e": 4012,
"s": 3990,
"text": "Turing Machine in TOC"
},
{
"code": null,
"e": 4055,
"s": 4012,
"text": "Chomsky Hierarchy in Theory of Computation"
}
] |
jQuery | fadeIn() Method | 19 Feb, 2019
The fadeIn() method in jQuery is used to change the opacity of the selected elements from hidden to visible; hidden elements are not displayed until the fade runs.
Syntax:
$(selector).fadeIn( speed, easing, callback )
Parameters: This method accepts three parameters as mentioned above and described below:
Speed: It is an optional parameter used to specify the speed of the fading effect. The default value of speed is 400 milliseconds. The possible values of speed are:
milliseconds
“slow”
“fast”
Easing: It is an optional parameter used to specify the speed of the element at different points of the animation. The default value of easing is “swing”. The possible values of easing are:
“swing”
“linear”
Callback: It is an optional parameter. The callback function is executed after the fadeIn() method is completed.
The examples below illustrate the fadeIn() method in jQuery:
Example 1: This example uses the fadeIn() method with a speed of 1000 milliseconds.
<!DOCTYPE html>
<html>

<head>
    <title>
        fadeIn() Method in jQuery
    </title>

    <style>
        #Outer {
            border: 1px solid black;
            padding-top: 40px;
            height: 140px;
            background: green;
            display: none;
        }
    </style>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body style="text-align:center;">

    <div id="Outer">
        <h1 style="color:white;">
            GeeksForGeeks
        </h1>
    </div><br>

    <button id="btn">
        Fade In
    </button>

    <!-- jQuery script of fadeIn() method -->
    <script>
        $(document).ready(function() {
            $("#btn").click(function() {
                $("#Outer").fadeIn(1000);
            });
        });
    </script>
</body>

</html>
Output: the green box is hidden before the button is clicked; after the click it fades into view over 1000 milliseconds (before/after screenshots omitted).
Example 2: This example uses the fadeIn() method with a 1000 millisecond duration together with the “swing” easing.
<!DOCTYPE html>
<html>

<head>
    <title>
        fadeIn() Method in jQuery
    </title>

    <style>
        #Outer {
            border: 1px solid black;
            padding-top: 40px;
            height: 140px;
            background: green;
            display: none;
        }
    </style>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js">
    </script>
</head>

<body style="text-align:center;">

    <div id="Outer">
        <h1 style="color:white;">
            GeeksForGeeks
        </h1>
    </div><br>

    <button id="btn">
        Fade In
    </button>

    <!-- jQuery script of fadeIn() method -->
    <script>
        $(document).ready(function() {
            $("#btn").click(function() {
                $("#Outer").fadeIn(1000, "swing");
            });
        });
    </script>
</body>

</html>
Output: as in the first example, the box fades in once the button is clicked, this time animated with the “swing” easing (before/after screenshots omitted).
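Neither example above exercises the callback parameter; a minimal sketch that reuses the same markup might attach one like this (the alert message is illustrative):

    $(document).ready(function() {
        $("#btn").click(function() {
            // The callback runs only once the fade has fully completed
            $("#Outer").fadeIn("slow", function() {
                alert("fadeIn() has completed");
            });
        });
    });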
jQuery-Effects
Picked
JQuery
Web Technologies
{
"code": null,
"e": 28,
"s": 0,
"text": "\n19 Feb, 2019"
},
{
"code": null,
"e": 174,
"s": 28,
"text": "The fadeIn() Method in jQuery is used to change the opacity of selected elements from hidden to visible. The hidden elements will not be display."
},
{
"code": null,
"e": 182,
"s": 174,
"text": "Syntax:"
},
{
"code": null,
"e": 228,
"s": 182,
"text": "$(selector).fadeIn( speed, easing, callback )"
},
{
"code": null,
"e": 317,
"s": 228,
"text": "Parameters: This method accepts three parameters as mentioned above and described below:"
},
{
"code": null,
"e": 508,
"s": 317,
"text": "Speed: It is an optional parameter and used to specify the speed of the fading effect. The default value of speed is 400 millisecond. The possible value of speed are:milliseconds“slow”“fast”"
},
{
"code": null,
"e": 521,
"s": 508,
"text": "milliseconds"
},
{
"code": null,
"e": 528,
"s": 521,
"text": "“slow”"
},
{
"code": null,
"e": 535,
"s": 528,
"text": "“fast”"
},
{
"code": null,
"e": 735,
"s": 535,
"text": "Easing: It is an optional parameter and used to specify the speed of element to different points of animation. The default value of easing is “swing”. The possible value of easing are:“swing”“linear”"
},
{
"code": null,
"e": 743,
"s": 735,
"text": "“swing”"
},
{
"code": null,
"e": 752,
"s": 743,
"text": "“linear”"
},
{
"code": null,
"e": 858,
"s": 752,
"text": "Callback: It is optional parameter. The callback function is executed after fadeIn() method is completed."
},
{
"code": null,
"e": 915,
"s": 858,
"text": "Below examples illustrate the fadeIn() method in jQuery:"
},
{
"code": null,
"e": 995,
"s": 915,
"text": "Example 1: This example describes fadeIn() method with speed 1000 milliseconds."
},
{
"code": "<!DOCTYPE html> <html> <head> <title> fadeIn() Method in jQuery </title> <style> #Outer { border: 1px solid black; padding-top: 40px; height: 140px; background: green; display: none; } </style> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> </head> <body style = \"text-align:center;\"> <div id= \"Outer\"> <h1 style = \"color:white;\" > GeeksForGeeks </h1> </div><br> <button id = \"btn\"> Fade In </button> <!-- jQuery script of fadeIn() method --> <script> $(document).ready(function() { $(\"#btn\").click(function() { $(\"#Outer\").fadeIn(1000); }); }); </script> </body> </html> ",
"e": 2015,
"s": 995,
"text": null
},
{
"code": null,
"e": 2076,
"s": 2015,
"text": "Output:Before click on the button:After click on the button:"
},
{
"code": null,
"e": 2147,
"s": 2076,
"text": "Example 2: This example describes fadeIn() method with easing “swing”."
},
{
"code": "<!DOCTYPE html> <html> <head> <title> fadeIn() Method in jQuery </title> <style> #Outer { border: 1px solid black; padding-top: 40px; height: 140px; background: green; display: none; } </style> <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js\"> </script> </head> <body style = \"text-align:center;\"> <div id= \"Outer\"> <h1 style = \"color:white;\" > GeeksForGeeks </h1> </div><br> <button id = \"btn\"> Fade In </button> <!-- jQuery script of fadeIn() method --> <script> $(document).ready(function() { $(\"#btn\").click(function() { $(\"#Outer\").fadeIn(\"swing\"); }); }); </script> </body> </html> ",
"e": 3170,
"s": 2147,
"text": null
},
{
"code": null,
"e": 3231,
"s": 3170,
"text": "Output:Before click on the button:After click on the button:"
},
{
"code": null,
"e": 3246,
"s": 3231,
"text": "jQuery-Effects"
},
{
"code": null,
"e": 3253,
"s": 3246,
"text": "Picked"
},
{
"code": null,
"e": 3260,
"s": 3253,
"text": "JQuery"
},
{
"code": null,
"e": 3277,
"s": 3260,
"text": "Web Technologies"
},
{
"code": null,
"e": 3375,
"s": 3277,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3421,
"s": 3375,
"text": "JQuery | Set the value of an input text field"
},
{
"code": null,
"e": 3450,
"s": 3421,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 3513,
"s": 3450,
"text": "How to change selected value of a drop-down list using jQuery?"
},
{
"code": null,
"e": 3566,
"s": 3513,
"text": "How to add options to a select element using jQuery?"
},
{
"code": null,
"e": 3640,
"s": 3566,
"text": "How to fetch data from JSON file and display in HTML table using jQuery ?"
},
{
"code": null,
"e": 3702,
"s": 3640,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 3735,
"s": 3702,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 3796,
"s": 3735,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 3846,
"s": 3796,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
How does auto property work in margin:0 auto in CSS ? | 17 Mar, 2021
In this article, we will learn how the auto property works in margin: 0 auto in CSS. The margin property is used to set the margins for an element. The margin shorthand covers four sides: margin-top, margin-right, margin-bottom, and margin-left.
Syntax:
margin: top_margin right_margin bottom_margin left_margin;
/* We can also use the shortened syntax
of margin that takes only two parameters */
margin: top_and_bottom_margin left_and_right_margin;
So in margin: 0 auto, the top/bottom margin is 0 and the left/right margin is auto, where auto means that the left and right margins are set automatically by the browser, based on the container, to center the element. Thus margin: 0 auto is equivalent to:
margin-top:0;
margin-bottom:0;
margin-left:auto;
margin-right:auto;
Example:
HTML
<!DOCTYPE html>
<html lang="en">

<head>
    <style>
        .parent {
            background-color: yellow;

            /* Set the top and bottom margin to 5%;
               the left and right margins are set
               automatically by the browser */
            margin: 5% auto;
        }

        .h1 {
            color: rgb(5, 138, 5);
            font-size: 50px;
        }
    </style>
</head>

<body>
    <div class="parent">
        <h1 class="h1">GeeksforGeeks</h1>
    </div>
</body>

</html>
Output:
Before setting the margin:
After setting the margin:
[
{
"code": null,
"e": 28,
"s": 0,
"text": "\n17 Mar, 2021"
},
{
"code": null,
"e": 265,
"s": 28,
"text": "In this article, we will learn how auto property works in margin:0 auto in CSS. The margin property is used to set the margins for an element. The margin property has four values margin-top, margin-right, margin-bottom, and margin-left."
},
{
"code": null,
"e": 273,
"s": 265,
"text": "Syntax:"
},
{
"code": null,
"e": 475,
"s": 273,
"text": "margin: top_margin right_margin bottom_margin left_margin;\n\n/* We can also use the shortened syntax \n of margin that takes only two parameters */\n\nmargin: top_and_bottom_margin left_and_right_margin;"
},
{
"code": null,
"e": 729,
"s": 475,
"text": "So in margin: 0 auto, the top/bottom margin is 0, and the left/right margin is auto, Where auto means that the left and right margin are automatically set by the browser based on the container, to make element centered. The margin: 0 auto equivalent to:"
},
{
"code": null,
"e": 797,
"s": 729,
"text": "margin-top:0;\nmargin-bottom:0;\nmargin-left:auto;\nmargin-right:auto;"
},
{
"code": null,
"e": 806,
"s": 797,
"text": "Example:"
},
{
"code": null,
"e": 811,
"s": 806,
"text": "HTML"
},
{
"code": "<!DOCTYPE html><html lang=\"en\"> <head> <style> .parent{ background-color: yellow; /* It set top and bottom margin to 5% and, the left and right are automatically set by browser */ margin: 5% auto; } .h1{ color: rgb(5, 138, 5); font-size: 50px; } </style></head> <body> <div class=\"parent\"> <h1 class=\"h1\">GeeksforGeeks</h1> </div></body> </html>",
"e": 1210,
"s": 811,
"text": null
},
{
"code": null,
"e": 1219,
"s": 1210,
"text": "Output: "
},
{
"code": null,
"e": 1238,
"s": 1219,
"text": "Before set margin:"
},
{
"code": null,
"e": 1257,
"s": 1238,
"text": "Before set margin:"
},
{
"code": null,
"e": 1275,
"s": 1257,
"text": "After set margin:"
},
{
"code": null,
"e": 1293,
"s": 1275,
"text": "After set margin:"
},
{
"code": null,
"e": 1308,
"s": 1293,
"text": "CSS-Properties"
},
{
"code": null,
"e": 1322,
"s": 1308,
"text": "CSS-Questions"
},
{
"code": null,
"e": 1329,
"s": 1322,
"text": "Picked"
},
{
"code": null,
"e": 1333,
"s": 1329,
"text": "CSS"
},
{
"code": null,
"e": 1350,
"s": 1333,
"text": "Web Technologies"
},
{
"code": null,
"e": 1448,
"s": 1350,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1487,
"s": 1448,
"text": "Design a Tribute Page using HTML & CSS"
},
{
"code": null,
"e": 1526,
"s": 1487,
"text": "How to set space between the flexbox ?"
},
{
"code": null,
"e": 1565,
"s": 1526,
"text": "Build a Survey Form using HTML and CSS"
},
{
"code": null,
"e": 1594,
"s": 1565,
"text": "Form validation using jQuery"
},
{
"code": null,
"e": 1631,
"s": 1594,
"text": "Design a web page using HTML and CSS"
},
{
"code": null,
"e": 1664,
"s": 1631,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 1725,
"s": 1664,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 1768,
"s": 1725,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 1840,
"s": 1768,
"text": "Differences between Functional Components and Class Components in React"
}
] |
Python program to add two numbers | 04 Jun, 2022
Given two numbers num1 and num2, the task is to write a Python program to find the sum of these two numbers. Examples:
Input: num1 = 5, num2 = 3
Output: 8
Input: num1 = 13, num2 = 6
Output: 19
In the below program to add two numbers, the user is first asked to enter two numbers and the input is scanned using the input() function and stored in the variables number1 and number2. Then, the variables number1 and number2 are added using the arithmetic operator + and the result is stored in the variable sum. Below is the Python program to add two numbers:
Example 1:
Python3
# Python3 program to add two numbers
num1 = 15
num2 = 12

# Adding two numbers
sum = num1 + num2

# Printing the values
print("Sum of {0} and {1} is {2}".format(num1, num2, sum))
Output:
Sum of 15 and 12 is 27
Example 2: Adding two numbers provided by user input
Python3
# Python3 program to add two numbers
number1 = input("First number: ")
number2 = input("\nSecond number: ")

# Adding two numbers
# User might also enter float numbers
sum = float(number1) + float(number2)

# Display the sum
# Will print the value as a float
print("The sum of {0} and {1} is {2}".format(number1, number2, sum))
Output:
First number: 13.5 Second number: 1.54
The sum of 13.5 and 1.54 is 15.04
Example 3:
Python3
# Python3 program to add two numbers

# Driver code
if __name__ == "__main__":

    num1 = 15
    num2 = 12

    # Adding two numbers with a lambda
    sum_twoNum = lambda num1, num2: num1 + num2

    # Printing the values
    print("Sum of {0} and {1} is {2};".format(num1, num2, sum_twoNum(num1, num2)))
Output:
Sum of 15 and 12 is 27;
[
{
"code": null,
"e": 53,
"s": 25,
"text": "\n04 Jun, 2022"
},
{
"code": null,
"e": 177,
"s": 53,
"text": "Given two numbers num1 and num2. The task is to write a Python program to find the addition of these two numbers. Examples:"
},
{
"code": null,
"e": 252,
"s": 177,
"text": "Input: num1 = 5, num2 = 3\nOutput: 8\n\nInput: num1 = 13, num2 = 6\nOutput: 19"
},
{
"code": null,
"e": 616,
"s": 252,
"text": "In the below program to add two numbers, the user is first asked to enter two numbers and the input is scanned using the input() function and stored in the variables number1 and number2. Then, the variables number1 and number2 are added using the arithmetic operator + and the result is stored in the variable sum. Below is the Python program to add two numbers: "
},
{
"code": null,
"e": 628,
"s": 616,
"text": "Example 1: "
},
{
"code": null,
"e": 636,
"s": 628,
"text": "Python3"
},
{
"code": "# Python3 program to add two numbers num1 = 15num2 = 12 # Adding two nossum = num1 + num2 # printing valuesprint(\"Sum of {0} and {1} is {2}\" .format(num1, num2, sum))",
"e": 803,
"s": 636,
"text": null
},
{
"code": null,
"e": 811,
"s": 803,
"text": "Output:"
},
{
"code": null,
"e": 834,
"s": 811,
"text": "Sum of 15 and 12 is 27"
},
{
"code": null,
"e": 887,
"s": 834,
"text": "Example 2: Adding two number provided by user input "
},
{
"code": null,
"e": 895,
"s": 887,
"text": "Python3"
},
{
"code": "# Python3 program to add two numbers number1 = input(\"First number: \")number2 = input(\"\\nSecond number: \") # Adding two numbers# User might also enter float numberssum = float(number1) + float(number2) # Display the sum# will print value in floatprint(\"The sum of {0} and {1} is {2}\" .format(number1, number2, sum))",
"e": 1211,
"s": 895,
"text": null
},
{
"code": null,
"e": 1219,
"s": 1211,
"text": "Output:"
},
{
"code": null,
"e": 1292,
"s": 1219,
"text": "First number: 13.5 Second number: 1.54\nThe sum of 13.5 and 1.54 is 15.04"
},
{
"code": null,
"e": 1303,
"s": 1292,
"text": "Example 3:"
},
{
"code": null,
"e": 1311,
"s": 1303,
"text": "Python3"
},
{
"code": "# Python3 program to add two numbers # Driver Codeif __name__ == \"__main__\" : num1 = 15 num2 = 12 # Adding two numbers sum_twoNum = lambda num1, num2 : num1 + num2 # printing values print(\"Sum of {0} and {1} is {2};\" .format(num1, num2, sum_twoNum(num1, num2)))",
"e": 1588,
"s": 1311,
"text": null
},
{
"code": null,
"e": 1596,
"s": 1588,
"text": "Output:"
},
{
"code": null,
"e": 1620,
"s": 1596,
"text": "Sum of 15 and 12 is 27;"
},
{
"code": null,
"e": 1628,
"s": 1620,
"text": "ankthon"
},
{
"code": null,
"e": 1635,
"s": 1628,
"text": "Python"
},
{
"code": null,
"e": 1651,
"s": 1635,
"text": "Python Programs"
},
{
"code": null,
"e": 1670,
"s": 1651,
"text": "School Programming"
},
{
"code": null,
"e": 1768,
"s": 1670,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1786,
"s": 1768,
"text": "Python Dictionary"
},
{
"code": null,
"e": 1828,
"s": 1786,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 1850,
"s": 1828,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 1885,
"s": 1850,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 1911,
"s": 1885,
"text": "Python String | replace()"
},
{
"code": null,
"e": 1954,
"s": 1911,
"text": "Python program to convert a list to string"
},
{
"code": null,
"e": 1976,
"s": 1954,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 2015,
"s": 1976,
"text": "Python | Get dictionary keys as a list"
},
{
"code": null,
"e": 2053,
"s": 2015,
"text": "Python | Convert a list to dictionary"
}
] |
Difference between Private and Public IP addresses | 07 Jun, 2022
The private IP address of a system is the IP address used to communicate within the same network: using a private IP, data or information can be sent or received only within that network.
The public IP address of a system is the IP address used to communicate outside the network; it is assigned by the ISP (Internet Service Provider).
Private IP address ranges:
10.0.0.0 – 10.255.255.255,
172.16.0.0 – 172.31.255.255,
192.168.0.0 – 192.168.255.255
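To make the distinction concrete, here is a minimal Python sketch using the standard ipaddress module (the addresses below are only illustrative):

import ipaddress

# is_private is True for the RFC 1918 ranges listed above
for ip in ["10.1.2.3", "172.20.0.5", "192.168.0.10", "8.8.8.8"]:
    kind = "private" if ipaddress.ip_address(ip).is_private else "public"
    print(ip, "->", kind)  # the first three are private, 8.8.8.8 is public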
[
{
"code": null,
"e": 53,
"s": 25,
"text": "\n07 Jun, 2022"
},
{
"code": null,
"e": 242,
"s": 53,
"text": "Private IP address of a system is the IP address that is used to communicate within the same network. Using private IP data or information can be sent or received within the same network. "
},
{
"code": null,
"e": 422,
"s": 242,
"text": "Public IP address of a system is the IP address that is used to communicate outside the network. A public IP address is basically assigned by the ISP (Internet Service Provider). "
},
{
"code": null,
"e": 429,
"s": 422,
"text": "Range:"
},
{
"code": null,
"e": 518,
"s": 429,
"text": "10.0.0.0 – 10.255.255.255, \n172.16.0.0 – 172.31.255.255, \n192.168.0.0 – 192.168.255.255 "
},
{
"code": null,
"e": 533,
"s": 518,
"text": "guptavivek0503"
},
{
"code": null,
"e": 547,
"s": 533,
"text": "IP Addressing"
},
{
"code": null,
"e": 565,
"s": 547,
"text": "Computer Networks"
},
{
"code": null,
"e": 584,
"s": 565,
"text": "Difference Between"
},
{
"code": null,
"e": 592,
"s": 584,
"text": "GATE CS"
},
{
"code": null,
"e": 610,
"s": 592,
"text": "Computer Networks"
},
{
"code": null,
"e": 708,
"s": 610,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 740,
"s": 708,
"text": "Differences between TCP and UDP"
},
{
"code": null,
"e": 766,
"s": 740,
"text": "Types of Network Topology"
},
{
"code": null,
"e": 804,
"s": 766,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 833,
"s": 804,
"text": "Socket Programming in Python"
},
{
"code": null,
"e": 863,
"s": 833,
"text": "GSM in Wireless Communication"
},
{
"code": null,
"e": 903,
"s": 863,
"text": "Class method vs Static method in Python"
},
{
"code": null,
"e": 934,
"s": 903,
"text": "Difference between BFS and DFS"
},
{
"code": null,
"e": 995,
"s": 934,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 1063,
"s": 995,
"text": "Difference Between Method Overloading and Method Overriding in Java"
}
] |
How to use Boto3 library in Python to upload an object in S3 using AWS Resource? | Problem Statement − Use the Boto3 library in Python to upload an object into S3. For example, upload test.zip into Bucket_1 of S3.
Step 1 − Import boto3 and botocore exceptions to handle exceptions.
Step 2 − From pathlib, import PurePosixPath to retrieve the file name from a path.
Step 3 − s3_path and filepath are the two parameters of the function upload_object_into_s3.
Step 4 − Validate that s3_path is passed in the AWS format s3://bucket_name/key and filepath as a local path such as C://users/filename.
Step 5 − Create an AWS session using the boto3 library.
Step 6 − Create an AWS resource for S3.
Step 7 − Split the S3 path and perform operations to separate the root bucket name and key path
Step 8 − Get the file name for complete filepath and add into S3 key path.
Step 9 − Now use the function upload_fileobj to upload the local file into S3.
Step 10 − Use the function wait_until_exists to wait until operation is finished.
Step 11 − Handle the exception based on the response code to validate whether the file was uploaded or not.
Step 12 − Handle the generic exception if something goes wrong while uploading the file.
Use the following code to upload a file into AWS S3 −
import boto3
from botocore.exceptions import ClientError
from pathlib import PurePosixPath
def upload_object_into_s3(s3_path, filepath):
if 's3://' in filepath:
print('SourcePath is not a valid path.' + filepath)
raise Exception('SourcePath is not a valid path.')
elif s3_path.find('s3://') == -1:
print('DestinationPath is not a s3 path.' + s3_path)
raise Exception('DestinationPath is not a valid path.')
session = boto3.session.Session()
s3_resource = session.resource('s3')
tokens = s3_path.split('/')
target_key = ""
if len(tokens) > 3:
for tokn in range(3, len(tokens)):
if tokn == 3:
target_key += tokens[tokn]
else:
target_key += "/" + tokens[tokn]
target_bucket_name = tokens[2]
file_name = PurePosixPath(filepath).name
if target_key != '':
target_key.strip()
key_path = target_key + "/" + file_name
else:
key_path = file_name
print(("key_path: " + key_path, 'target_bucket: ' + target_bucket_name))
try:
# uploading Entity from local path
with open(filepath, "rb") as file:
s3_resource.meta.client.upload_fileobj(file, target_bucket_name, key_path)
try:
s3_resource.Object(target_bucket_name, key_path).wait_until_exists()
file.close()
except ClientError as error:
error_code = int(error.response['Error']['Code'])
if error_code == 412 or error_code == 304:
print("Object didn't Upload Successfully ", target_bucket_name)
raise error
return "Object Uploaded Successfully"
except Exception as error:
print("Error in upload object function of s3 helper: " + error.__str__())
raise error
print(upload_object_into_s3('s3://Bucket_1/testfolder', 'c://test.zip'))
key_path:/testfolder/test.zip, target_bucket: Bucket_1
Object Uploaded Successfully | [
{
"code": null,
"e": 1196,
"s": 1062,
"text": "Problem Statement − Use Boto3 library in Python to upload an object into S3. For example, how to upload test.zip into Bucket_1 of S3."
},
{
"code": null,
"e": 1264,
"s": 1196,
"text": "Step 1 − Import boto3 and botocore exceptions to handle exceptions."
},
{
"code": null,
"e": 1338,
"s": 1264,
"text": "Step 2 − From pathlib, import PurePosixPath to retrive filename from path"
},
{
"code": null,
"e": 1425,
"s": 1338,
"text": "Step 3 − s3_path and filepath are the two parameters in function upload_object_into_s3"
},
{
"code": null,
"e": 1549,
"s": 1425,
"text": "Step 4 − Validate the s3_path is passed in AWS format as s3://bucket_name/key and filepath as local path C://users/filename"
},
{
"code": null,
"e": 1601,
"s": 1549,
"text": "Step 5 − Create an AWS session using boto3 library."
},
{
"code": null,
"e": 1641,
"s": 1601,
"text": "Step 6 − Create an AWS resource for S3."
},
{
"code": null,
"e": 1737,
"s": 1641,
"text": "Step 7 − Split the S3 path and perform operations to separate the root bucket name and key path"
},
{
"code": null,
"e": 1812,
"s": 1737,
"text": "Step 8 − Get the file name for complete filepath and add into S3 key path."
},
{
"code": null,
"e": 1891,
"s": 1812,
"text": "Step 9 − Now use the function upload_fileobj to upload the local file into S3."
},
{
"code": null,
"e": 1973,
"s": 1891,
"text": "Step 10 − Use the function wait_until_exists to wait until operation is finished."
},
{
"code": null,
"e": 2072,
"s": 1973,
"text": "Step 11 − Handle the exception based on response code to validate whether file is uploaded or not."
},
{
"code": null,
"e": 2160,
"s": 2072,
"text": "Step 12 − Handle the generic exception if something went wrong while uploading the file"
},
{
"code": null,
"e": 2214,
"s": 2160,
"text": "Use the following code to upload a file into AWS S3 −"
},
{
"code": null,
"e": 4026,
"s": 2214,
"text": "import boto3\nfrom botocore.exceptions import ClientError\nfrom pathlib import PurePosixPath\n\ndef upload_object_into_s3(s3_path, filepath):\n\n if 's3://' in filepath:\n print('SourcePath is not a valid path.' + filepath)\n raise Exception('SourcePath is not a valid path.')\n elif s3_path.find('s3://') == -1:\n print('DestinationPath is not a s3 path.' + s3_path)\n raise Exception('DestinationPath is not a valid path.')\n session = boto3.session.Session()\n s3_resource = session.resource('s3')\n tokens = s3_path.split('/')\n target_key = \"\"\n if len(tokens) > 3:\n for tokn in range(3, len(tokens)):\n if tokn == 3:\n target_key += tokens[tokn]\n else:\n target_key += \"/\" + tokens[tokn]\n target_bucket_name = tokens[2]\n\n file_name = PurePosixPath(filepath).name\n if target_key != '':\n target_key.strip()\n key_path = target_key + \"/\" + file_name\n else:\n key_path = file_name\n print((\"key_path: \" + key_path, 'target_bucket: ' + target_bucket_name))\n\n try:\n # uploading Entity from local path\n with open(filepath, \"rb\") as file:\n s3_resource.meta.client.upload_fileobj(file, target_bucket_name, key_path)\n try:\n s3_resource.Object(target_bucket_name, key_path).wait_until_exists()\n file.close()\n except ClientError as error:\n error_code = int(error.response['Error']['Code'])\n if error_code == 412 or error_code == 304:\n print(\"Object didn't Upload Successfully \", target_bucket_name)\n raise error\n return \"Object Uploaded Successfully\"\n except Exception as error:\n print(\"Error in upload object function of s3 helper: \" + error.__str__())\n raise error\nprint(upload_object_into_s3('s3://Bucket_1/testfolder', 'c://test.zip'))"
},
{
"code": null,
"e": 4110,
"s": 4026,
"text": "key_path:/testfolder/test.zip, target_bucket: Bucket_1\nObject Uploaded Successfully"
}
] |
How to Convert CSV to JSON file having Comma Separated values in Node.js ? | 19 May, 2021
A CSV is a comma-separated values file with the .csv extension, which allows data to be saved in a tabular format. Here is an article to convert the data of a CSV file to JavaScript Object Notation (JSON) without using any third-party npm package. The main difference from a normal conversion is that the values of any row can themselves be comma separated while, as we know, the columns are also comma separated.
In this approach, we will input the contents of the CSV file in an array and split the content of the array based on a delimiter. All the rows of the CSV will be converted to JSON objects which will be added to the resultant array which will then be converted to JSON and a corresponding JSON output file will be generated.
Approach: Follow the steps below to achieve the solution:
Read the csv file using default fs npm package.
Convert the data to String and split it in an array.
Generate a headers array.
For all the remaining n-1 rows, do the following:
Create an empty object to add values of current row to it.
Declare a string str as current array value to change the delimiter and store the generated string in a new string s.
If we encounter an opening quote (“), we keep commas as they are; otherwise we replace them with the pipe “|”
Keep adding the characters we traverse to a String s.
Split the string using pipe delimiter | and store the values in a properties array.
For each header, if the value contains multiple comma-separated items, we store it in the form of an array; otherwise the value is stored directly.
Add the generated object to our result array.
Convert the resultant array to json and generate the JSON output file.
Filename: app.js
javascript
// Reading the file using the default
// fs npm package
const fs = require("fs");
csv = fs.readFileSync("CSV_file.csv")

// Convert the data to String and
// split it into an array of rows
// (rows are separated by carriage returns)
var array = csv.toString().split("\r");

// All the rows of the CSV will be
// converted to JSON objects which
// will be added to result in an array
let result = [];

// The array[0] contains all the
// header columns so we store them
// in the headers array
let headers = array[0].split(",")

// Since headers are separated, we
// need to traverse remaining n-1 rows.
for (let i = 1; i < array.length - 1; i++) {
    let obj = {}

    // Create an empty object to later add
    // values of the current row to it
    // Declare string str as current array
    // value to change the delimiter and
    // store the generated string in a new
    // string s
    let str = array[i]
    let s = ''

    // By default, we get the comma separated
    // values of a cell in quotes " " so we
    // use flag to keep track of quotes and
    // split the string accordingly
    // If we encounter an opening quote (")
    // then we keep commas as they are, otherwise
    // we replace them with pipe |
    // We keep adding the characters we
    // traverse to the string s
    let flag = 0
    for (let ch of str) {
        if (ch === '"' && flag === 0) {
            flag = 1
        }
        else if (ch === '"' && flag === 1)
            flag = 0
        if (ch === ',' && flag === 0)
            ch = '|'
        if (ch !== '"')
            s += ch
    }

    // Split the string using pipe delimiter |
    // and store the values in a properties array
    let properties = s.split("|")

    // For each header, if the value contains
    // multiple comma separated data, then we
    // store it in the form of an array, otherwise
    // the value is stored directly
    for (let j in headers) {
        if (properties[j].includes(",")) {
            obj[headers[j]] = properties[j]
                .split(",").map(item => item.trim())
        }
        else
            obj[headers[j]] = properties[j]
    }

    // Add the generated object to our
    // result array
    result.push(obj)
}

// Convert the resultant array to json and
// generate the JSON output file.
let json = JSON.stringify(result);
fs.writeFileSync('output.json', json);
Input:
Run the command “node app.js” on the terminal to run the program.
Output:
The output.json file gets created and the content of the file has been logged on the terminal.
[
{
"code": null,
"e": 25182,
"s": 25154,
"text": "\n19 May, 2021"
},
{
"code": null,
"e": 25583,
"s": 25182,
"text": "A CSV is a comma-separated values file with .csv extension, which allows data to be saved in a tabular format. Here is a article to convert the data of a csv file to a JavaScript Object Notation (JSON) without using any third party npm package. The main difference from normal conversion is that the values of any row can be Comma Separated and as we know, different columns are also comma separated."
},
{
"code": null,
"e": 25907,
"s": 25583,
"text": "In this approach, we will input the contents of the CSV file in an array and split the content of the array based on a delimiter. All the rows of the CSV will be converted to JSON objects which will be added to the resultant array which will then be converted to JSON and a corresponding JSON output file will be generated."
},
{
"code": null,
"e": 25967,
"s": 25907,
"text": "Approach: Follow the steps below to achieve the solution: "
},
{
"code": null,
"e": 26816,
"s": 25967,
"text": "Read the csv file using default fs npm package.Convert the data to String and split it in an array.Generate a headers array.For all the remaining n-1 rows do the following: Create an empty object to add values of current row to it.Declare a string str as current array value to change the delimiter and store the generated string in a new string s.If we encounter opening quote (“) then we keep commas as it is otherwise we replace them with pipe “|”Keep adding the characters we traverse to a String s.Split the string using pipe delimiter | and store the values in a properties array.For each header, if the value contains multiple comma separated data, then we store it in the form of array otherwise directly the value is stored.Add the generated object to our result array.Convert the resultant array to json and generate the JSON output file."
},
{
"code": null,
"e": 26864,
"s": 26816,
"text": "Read the csv file using default fs npm package."
},
{
"code": null,
"e": 26917,
"s": 26864,
"text": "Convert the data to String and split it in an array."
},
{
"code": null,
"e": 26943,
"s": 26917,
"text": "Generate a headers array."
},
{
"code": null,
"e": 27598,
"s": 26943,
"text": "For all the remaining n-1 rows do the following: Create an empty object to add values of current row to it.Declare a string str as current array value to change the delimiter and store the generated string in a new string s.If we encounter opening quote (“) then we keep commas as it is otherwise we replace them with pipe “|”Keep adding the characters we traverse to a String s.Split the string using pipe delimiter | and store the values in a properties array.For each header, if the value contains multiple comma separated data, then we store it in the form of array otherwise directly the value is stored.Add the generated object to our result array."
},
{
"code": null,
"e": 27657,
"s": 27598,
"text": "Create an empty object to add values of current row to it."
},
{
"code": null,
"e": 27775,
"s": 27657,
"text": "Declare a string str as current array value to change the delimiter and store the generated string in a new string s."
},
{
"code": null,
"e": 27878,
"s": 27775,
"text": "If we encounter opening quote (“) then we keep commas as it is otherwise we replace them with pipe “|”"
},
{
"code": null,
"e": 27932,
"s": 27878,
"text": "Keep adding the characters we traverse to a String s."
},
{
"code": null,
"e": 28016,
"s": 27932,
"text": "Split the string using pipe delimiter | and store the values in a properties array."
},
{
"code": null,
"e": 28164,
"s": 28016,
"text": "For each header, if the value contains multiple comma separated data, then we store it in the form of array otherwise directly the value is stored."
},
{
"code": null,
"e": 28210,
"s": 28164,
"text": "Add the generated object to our result array."
},
{
"code": null,
"e": 28281,
"s": 28210,
"text": "Convert the resultant array to json and generate the JSON output file."
},
{
"code": null,
"e": 28300,
"s": 28281,
"text": "Filename: app.js "
},
{
"code": null,
"e": 28311,
"s": 28300,
"text": "javascript"
},
{
"code": "// Reading the file using default// fs npm packageconst fs = require(\"fs\");csv = fs.readFileSync(\"CSV_file.csv\") // Convert the data to String and// split it in an arrayvar array = csv.toString().split(\"\\r\"); // All the rows of the CSV will be// converted to JSON objects which// will be added to result in an arraylet result = []; // The array[0] contains all the// header columns so we store them// in headers arraylet headers = array[0].split(\", \") // Since headers are separated, we// need to traverse remaining n-1 rows.for (let i = 1; i < array.length - 1; i++) { let obj = {} // Create an empty object to later add // values of the current row to it // Declare string str as current array // value to change the delimiter and // store the generated string in a new // string s let str = array[i] let s = '' // By Default, we get the comma separated // values of a cell in quotes \" \" so we // use flag to keep track of quotes and // split the string accordingly // If we encounter opening quote (\") // then we keep commas as it is otherwise // we replace them with pipe | // We keep adding the characters we // traverse to a String s let flag = 0 for (let ch of str) { if (ch === '\"' && flag === 0) { flag = 1 } else if (ch === '\"' && flag == 1) flag = 0 if (ch === ', ' && flag === 0) ch = '|' if (ch !== '\"') s += ch } // Split the string using pipe delimiter | // and store the values in a properties array let properties = s.split(\"|\") // For each header, if the value contains // multiple comma separated data, then we // store it in the form of array otherwise // directly the value is stored for (let j in headers) { if (properties[j].includes(\", \")) { obj[headers[j]] = properties[j] .split(\", \").map(item => item.trim()) } else obj[headers[j]] = properties[j] } // Add the generated object to our // result array result.push(obj)} // Convert the resultant array to json and// generate the JSON output file.let json = JSON.stringify(result);fs.writeFileSync('output.json', json);",
"e": 30384,
"s": 28311,
"text": null
},
{
"code": null,
"e": 30391,
"s": 30384,
"text": "Input:"
},
{
"code": null,
"e": 30457,
"s": 30391,
"text": "Run the command “node app.js” on the terminal to run the program."
},
{
"code": null,
"e": 30465,
"s": 30457,
"text": "Output:"
},
{
"code": null,
"e": 30561,
"s": 30465,
"text": "The output.json file gets created and the content of the file has been loggeed on the terminal."
},
{
"code": null,
"e": 30578,
"s": 30561,
"text": "akshaysingh98088"
},
{
"code": null,
"e": 30582,
"s": 30578,
"text": "CSV"
},
{
"code": null,
"e": 30587,
"s": 30582,
"text": "JSON"
},
{
"code": null,
"e": 30600,
"s": 30587,
"text": "Node.js-Misc"
},
{
"code": null,
"e": 30611,
"s": 30600,
"text": "JavaScript"
},
{
"code": null,
"e": 30619,
"s": 30611,
"text": "Node.js"
},
{
"code": null,
"e": 30636,
"s": 30619,
"text": "Web Technologies"
},
{
"code": null,
"e": 30734,
"s": 30636,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30779,
"s": 30734,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 30840,
"s": 30779,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 30912,
"s": 30840,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 30964,
"s": 30912,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 31010,
"s": 30964,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 31043,
"s": 31010,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 31076,
"s": 31043,
"text": "Node.js fs.readFileSync() Method"
},
{
"code": null,
"e": 31124,
"s": 31076,
"text": "How to update Node.js and NPM to next version ?"
},
{
"code": null,
"e": 31153,
"s": 31124,
"text": "Node.js fs.readFile() Method"
}
] |
Machine Learning with Python: Classification (complete tutorial) | by Mauro Di Pietro | Towards Data Science | In this article, using Data Science and Python, I will explain the main steps of a Classification use case, from data analysis to understanding the model output.
Since this tutorial can be a good starting point for beginners, I will use the “Titanic dataset” from the famous Kaggle competition, in which you are provided with passengers data and the task is to build a predictive model that answers the question: “what sorts of people were more likely to survive?” (linked below).
www.kaggle.com
I will present some useful Python code that can be easily used in other similar cases (just copy, paste, run) and walk through every line of code with comments, so that you can easily replicate this example (link to the full code below).
github.com
In particular, I will go through:
Environment setup: import libraries and read data
Data Analysis: understand the meaning and the predictive power of the variables
Feature engineering: extract features from raw data
Preprocessing: data partitioning, handle missing values, encode categorical variables, scale
Feature Selection: keep only the most relevant variables
Model design: train, tune hyperparameters, validation, test
Performance evaluation: read the metrics
Explainability: understand how the model produces results
First of all, I need to import the following libraries.
## for data
import pandas as pd
import numpy as np

## for plotting
import matplotlib.pyplot as plt
import seaborn as sns

## for statistical tests
import scipy
import statsmodels.formula.api as smf
import statsmodels.api as sm

## for machine learning
from sklearn import model_selection, preprocessing, feature_selection, ensemble, linear_model, metrics, decomposition

## for explainer
from lime import lime_tabular
Then I will read the data into a pandas Dataframe.
dtf = pd.read_csv('data_titanic.csv')
dtf.head()
Details about the columns can be found in the provided link to the dataset.
Please note that each row of the table represents a specific passenger (or observation). If you are working with a different dataset that doesn’t have a structure like that, in which each row represents an observation, then you need to summarize data and transform it.
Now that it’s all set, I will start by analyzing data, then select the features, build a machine learning model and predict.
Let’s get started, shall we?
In statistics, exploratory data analysis is the process of summarizing the main characteristics of a dataset to understand what the data can tell us beyond the formal modeling or hypothesis testing task.
I always start by getting an overview of the whole dataset, in particular I want to know how many categorical and numerical variables there are and the proportion of missing data. Recognizing a variable’s type sometimes can be tricky because categories can be expressed as numbers (the Survived column is made of 1s and 0s). To this end, I am going to write a simple function that will do that for us:
'''
Recognize whether a column is numerical or categorical.
:parameter
    :param dtf: dataframe - input data
    :param col: str - name of the column to analyze
    :param max_cat: num - max number of unique values to recognize a column as categorical
:return
    "cat" if the column is categorical or "num" otherwise
'''
def utils_recognize_type(dtf, col, max_cat=20):
    if (dtf[col].dtype == "O") | (dtf[col].nunique() < max_cat):
        return "cat"
    else:
        return "num"
This function is very useful and can be used on several occasions. To give an illustration, I'll plot a heatmap of the dataframe to visualize column types and missing data.
dic_cols = {col: utils_recognize_type(dtf, col, max_cat=20) for col in dtf.columns}

heatmap = dtf.isnull()
for k, v in dic_cols.items():
    if v == "num":
        heatmap[k] = heatmap[k].apply(lambda x: 0.5 if x is False else 1)
    else:
        heatmap[k] = heatmap[k].apply(lambda x: 0 if x is False else 1)

sns.heatmap(heatmap, cbar=False).set_title('Dataset Overview')
plt.show()

print("\033[1;37;40m Categorical ", "\033[1;30;41m Numeric ", "\033[1;30;47m NaN ")
There are 885 rows and 12 columns:
each row of the table represents a specific passenger (or observation) identified by PassengerId, so I’ll set it as index (or primary key of the table for SQL lovers).
Survived is the phenomenon that we want to understand and predict (or target variable), so I’ll rename the column as “Y”. It contains two classes: 1 if the passenger survived and 0 otherwise, therefore this use case is a binary classification problem.
Age and Fare are numerical variables while the others are categorical.
Only Age and Cabin contain missing data (quantified in the quick check below).
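A minimal sketch, assuming the dataframe loaded above, that counts the missing values per column:

## Count missing values per column: Age and Cabin should top the list
dtf.isnull().sum().sort_values(ascending=False)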
dtf = dtf.set_index("PassengerId")
dtf = dtf.rename(columns={"Survived": "Y"})
I believe visualization is the best tool for data analysis, but you need to know what kind of plots are more suitable for the different types of variables. Therefore, I’ll provide the code to plot the appropriate visualization for different examples.
First, let’s have a look at the univariate distributions (probability distribution of just one variable). A bar plot is appropriate to understand labels frequency for a single categorical variable. For example, let’s plot the target variable:
y = "Y"ax = dtf[y].value_counts().sort_values().plot(kind="barh")totals= []for i in ax.patches: totals.append(i.get_width())total = sum(totals)for i in ax.patches: ax.text(i.get_width()+.3, i.get_y()+.20, str(round((i.get_width()/total)*100, 2))+'%', fontsize=10, color='black')ax.grid(axis="x")plt.suptitle(y, fontsize=20)plt.show()
Around 300 passengers survived and about 550 didn’t; in other words, the survival rate (the population mean) is 38%.
Moreover, a histogram is perfect to give a rough sense of the density of the underlying distribution of a single numerical data. I recommend using a box plot to graphically depict data groups through their quartiles. Let’s take the Age variable for instance:
x = "Age"fig, ax = plt.subplots(nrows=1, ncols=2, sharex=False, sharey=False)fig.suptitle(x, fontsize=20)### distributionax[0].title.set_text('distribution')variable = dtf[x].fillna(dtf[x].mean())breaks = np.quantile(variable, q=np.linspace(0, 1, 11))variable = variable[ (variable > breaks[0]) & (variable < breaks[10]) ]sns.distplot(variable, hist=True, kde=True, kde_kws={"shade": True}, ax=ax[0])des = dtf[x].describe()ax[0].axvline(des["25%"], ls='--')ax[0].axvline(des["mean"], ls='--')ax[0].axvline(des["75%"], ls='--')ax[0].grid(True)des = round(des, 2).apply(lambda x: str(x))box = '\n'.join(("min: "+des["min"], "25%: "+des["25%"], "mean: "+des["mean"], "75%: "+des["75%"], "max: "+des["max"]))ax[0].text(0.95, 0.95, box, transform=ax[0].transAxes, fontsize=10, va='top', ha="right", bbox=dict(boxstyle='round', facecolor='white', alpha=1))### boxplot ax[1].title.set_text('outliers (log scale)')tmp_dtf = pd.DataFrame(dtf[x])tmp_dtf[x] = np.log(tmp_dtf[x])tmp_dtf.boxplot(column=x, ax=ax[1])plt.show()
The passengers were, on average, pretty young: the distribution is skewed towards the left side (the mean is 30 y.o. and the 75th percentile is 38 y.o.). Coupled with the outliers in the box plot, the first spike in the left tail says that there was a significant number of children.
I’ll take the analysis to the next level and look into the bivariate distribution to understand whether Age has the power to predict Y. This is the case of a categorical (Y) vs numerical (Age) pair, therefore I shall proceed like this:
split the population (the whole set of observations) into 2 samples: the portion of passengers with Y = 1 (Survived) and Y = 0 (Not Survived).
Plot and compare the densities of the two samples: if the distributions are different, then the variable is predictive because the two groups have different patterns.
Group the numerical variable (Age) in bins (subsamples) and plot the composition of each bin, if the proportion of 1s is similar in all of them then the variable is not predictive.
Plot and compare the box plots of the two samples to spot different behaviors of the outliers.
cat, num = "Y", "Age"fig, ax = plt.subplots(nrows=1, ncols=3, sharex=False, sharey=False)fig.suptitle(x+" vs "+y, fontsize=20) ### distributionax[0].title.set_text('density')for i in dtf[cat].unique(): sns.distplot(dtf[dtf[cat]==i][num], hist=False, label=i, ax=ax[0])ax[0].grid(True)### stackedax[1].title.set_text('bins')breaks = np.quantile(dtf[num], q=np.linspace(0,1,11))tmp = dtf.groupby([cat, pd.cut(dtf[num], breaks, duplicates='drop')]).size().unstack().Ttmp = tmp[dtf[cat].unique()]tmp["tot"] = tmp.sum(axis=1)for col in tmp.drop("tot", axis=1).columns: tmp[col] = tmp[col] / tmp["tot"]tmp.drop("tot", axis=1).plot(kind='bar', stacked=True, ax=ax[1], legend=False, grid=True)### boxplot ax[2].title.set_text('outliers')sns.catplot(x=cat, y=num, data=dtf, kind="box", ax=ax[2])ax[2].grid(True)plt.show()
These 3 plots are just different perspectives of the conclusion that Age is predictive. The survival rate is higher for younger passengers: there is a spike in the left tail of 1s distribution and the first bin (0–16 y.o.) contains the highest percentage of survived passengers.
When not convinced by the “eye intuition”, you can always resort to good old statistics and run a test. In this case of categorical (Y) vs numerical (Age), I would use a one-way ANOVA test. Basically, it tests whether the means of two or more independent samples are significantly different, so if the p-value is small enough (<0.05) the null hypothesis of samples means equality can be rejected.
cat, num = "Y", "Age"model = smf.ols(num+' ~ '+cat, data=dtf).fit()table = sm.stats.anova_lm(model)p = table["PR(>F)"][0]coeff, p = None, round(p, 3)conclusion = "Correlated" if p < 0.05 else "Non-Correlated"print("Anova F: the variables are", conclusion, "(p-value: "+str(p)+")")
Apparently the passengers' age contributed to determining their survival. That makes sense, as the lives of women and children were to be saved first in a life-threatening situation, typically abandoning ship, when survival resources such as lifeboats were limited (the “women and children first” code).
In order to check the validity of this first conclusion, I will have to analyze the behavior of the Sex variable with respect to the target variable. This is a case of categorical (Y) vs categorical (Sex), so I’ll plot 2 bar plots, one with the amount of 1s and 0s among the two categories of Sex (male and female) and the other with the percentages.
x, y = "Sex", "Y"fig, ax = plt.subplots(nrows=1, ncols=2, sharex=False, sharey=False)fig.suptitle(x+" vs "+y, fontsize=20)### countax[0].title.set_text('count')order = dtf.groupby(x)[y].count().index.tolist()sns.catplot(x=x, hue=y, data=dtf, kind='count', order=order, ax=ax[0])ax[0].grid(True)### percentageax[1].title.set_text('percentage')a = dtf.groupby(x)[y].count().reset_index()a = a.rename(columns={y:"tot"})b = dtf.groupby([x,y])[y].count()b = b.rename(columns={y:0}).reset_index()b = b.merge(a, how="left")b["%"] = b[0] / b["tot"] *100sns.barplot(x=x, y="%", hue=y, data=b, ax=ax[1]).get_legend().remove()ax[1].grid(True)plt.show()
More than 200 female passengers (75% of the total number of women onboard) and about 100 male passengers (less than 20%) survived. To put it another way, the survival rate among women is 75% and among men it is 20%, therefore Sex is predictive. Moreover, this confirms that they gave priority to women and children.
Just like before, we can test the correlation of these 2 variables. Since they are both categorical, I’d use a Chi-Square test: assuming that two variables are independent (null hypothesis), it tests whether the values of the contingency table for these variables are uniformly distributed. If the p-value is small enough (<0.05), the null hypothesis can be rejected and we can say that the two variables are probably dependent. It’s possible to calculate Cramer’s V that is a measure of correlation that follows from this test, which is symmetrical (like traditional Pearson’s correlation) and ranges between 0 and 1 (unlike traditional Pearson’s correlation there are no negative values).
x, y = "Sex", "Y"cont_table = pd.crosstab(index=dtf[x], columns=dtf[y])chi2_test = scipy.stats.chi2_contingency(cont_table)chi2, p = chi2_test[0], chi2_test[1]n = cont_table.sum().sum()phi2 = chi2/nr,k = cont_table.shapephi2corr = max(0, phi2-((k-1)*(r-1))/(n-1))rcorr = r-((r-1)**2)/(n-1)kcorr = k-((k-1)**2)/(n-1)coeff = np.sqrt(phi2corr/min((kcorr-1), (rcorr-1)))coeff, p = round(coeff, 3), round(p, 3)conclusion = "Significant" if p < 0.05 else "Non-Significant"print("Cramer Correlation:", coeff, conclusion, "(p-value:"+str(p)+")")
Age and Sex are examples of predictive features, but not all of the columns in the dataset are like that. For instance, Cabin seems to be a useless variable as it doesn’t provide any useful information, there are too many missing values and categories.
This kind of analysis should be carried out for each variable in the dataset to decide what should be kept as a potential feature and what can be dropped because it isn't predictive (check out the link to the full code).
It’s time to create new features from raw data using domain knowledge. I will provide one example: I’ll try to create a useful feature by extracting information from the Cabin column. I’m assuming that the letter at the beginning of each cabin number (e.g. “B96”) indicates some kind of section, maybe there were some lucky sections near the lifeboats. I will summarize the observations into clusters by extracting the section of each cabin:
## Create new column
dtf["Cabin_section"] = dtf["Cabin"].apply(lambda x: str(x)[0])

## Plot contingency table
cont_table = pd.crosstab(index=dtf["Cabin_section"], columns=dtf["Pclass"],
                         values=dtf["Y"], aggfunc="sum")
sns.heatmap(cont_table, annot=True, cmap="YlGnBu", fmt='.0f',
            linewidths=.5).set_title('Cabin_section vs Pclass (filter: Y)')
This plot shows how survivors are distributed among cabin sections and classes (7 survivors are in section A, 35 in B, ...). Most of the sections are assigned to the 1st and the 2nd classes, while the majority of missing sections (“n”) belong to the 3rd class. I am going to keep this new feature in place of the column Cabin.
Data preprocessing is the phase of preparing the raw data to make it suitable for a machine learning model. In particular:
each observation must be represented by a single row, in other words you can’t have two rows describing the same passenger because they will be processed separately by the model (the dataset is already in such form, so ✅). Moreover, each column should be a feature, so you shouldn’t use PassengerId as a predictor, that’s why this kind of table is called “feature matrix”.
The dataset must be partitioned into at least two sets: the model shall be trained on a significant portion of your dataset (so-called “train set”) and tested on a smaller set (“test set”).
Missing values should be replaced with something, otherwise your model may freak out.
Categorical data must be encoded, which means converting labels into integers, because machine learning expects numbers not strings.
It’s good practice to scale the data, it helps to normalize the data within a particular range and speed up the calculations in an algorithm.
Alright, let’s begin by partitioning the dataset. When splitting data into train and test sets you must follow one basic rule: rows in the train set shouldn’t appear in the test set as well. That’s because the model sees the target values during training and uses them to understand the phenomenon. In other words, the model already knows the right answer for the training observations and testing it on those would be like cheating. I’ve seen a lot of people pitching their machine learning models claiming 99.99% accuracy while in fact ignoring this rule. Luckily, the Scikit-learn package knows that:
## split data
dtf_train, dtf_test = model_selection.train_test_split(dtf, test_size=0.3)

## print info
print("X_train shape:", dtf_train.drop("Y", axis=1).shape,
      "| X_test shape:", dtf_test.drop("Y", axis=1).shape)
print("y_train mean:", round(np.mean(dtf_train["Y"]), 2),
      "| y_test mean:", round(np.mean(dtf_test["Y"]), 2))
print(dtf_train.shape[1], "features:", dtf_train.drop("Y", axis=1).columns.to_list())
Next step: the Age column contains some missing data (19%) that need to be handled. In practice, you can replace missing data with a specific value, like 9999, that keeps track of the missing information but changes the variable distribution. Alternatively, you can use the average of the column, like I’m going to do. I’d like to underline that from a Machine Learning perspective, it’s correct to first split into train and test and then replace NAs with the average of the training set only.
dtf_train["Age"] = dtf_train["Age"].fillna(dtf_train["Age"].mean())
There are still some categorical data that should be encoded. The two most common encoders are the Label-Encoder (each unique label is mapped to an integer) and the One-Hot-Encoder (each label is mapped to a binary vector). The first one is suited for data with ordinality only. If applied to a column with no ordinality, like Sex, it would turn the vector [male, female, female, male, ...] into [1, 2, 2, 1, ...] and we would have that female > male and with an average of 1.5 which makes no sense. On the other hand, the One-Hot-Encoder would transform the previous example into two dummy variables (dichotomous quantitative variables): Male [1, 0, 0, 1, ...] and Female [0, 1, 1, 0, ...]. It has the advantage that the result is binary rather than ordinal and that everything sits in an orthogonal vector space, but features with high cardinality can lead to a dimensionality issue. I shall use the One-Hot-Encoding method, transforming 1 categorical column with n unique values into n-1 dummies. Let’s encode Sex as an example:
## create dummy
dummy = pd.get_dummies(dtf_train["Sex"], prefix="Sex", drop_first=True)
dtf_train = pd.concat([dtf_train, dummy], axis=1)
print(dtf_train.filter(like="Sex", axis=1).head())

## drop the original categorical column
dtf_train = dtf_train.drop("Sex", axis=1)
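The same recipe can be repeated for the remaining categorical columns; a sketch, under the assumption that the columns left to encode are Embarked, Pclass and Cabin_section:

## One-Hot-Encode the other categorical columns with the same pattern
for col in ["Embarked", "Pclass", "Cabin_section"]:
    dummy = pd.get_dummies(dtf_train[col], prefix=col, drop_first=True)
    dtf_train = pd.concat([dtf_train, dummy], axis=1)
    dtf_train = dtf_train.drop(col, axis=1)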
Last but not least, I’m going to scale the features. There are several different ways to do that, I’ll present just the most used ones: the Standard-Scaler and the MinMax-Scaler. The first one assumes data is normally distributed and rescales it such that the distribution centres around 0 with a standard deviation of 1. However, the outliers have an influence when computing the empirical mean and standard deviation which shrink the range of the feature values, therefore this scaler can’t guarantee balanced feature scales in the presence of outliers. On the other hand, the MinMax-Scaler rescales the data set such that all feature values are in the same range (0–1). It is less affected by outliers but compresses all inliers in a narrow range. Since my data is not normally distributed, I’ll go with the MinMax-Scaler:
scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))

X = scaler.fit_transform(dtf_train.drop("Y", axis=1))
dtf_scaled = pd.DataFrame(X, columns=dtf_train.drop("Y", axis=1).columns,
                          index=dtf_train.index)
dtf_scaled["Y"] = dtf_train["Y"]
dtf_scaled.head()
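As with the imputation step, the scaler fitted on the training set is the one that should transform the test set (assuming the test set has gone through the same encoding steps); for example:

## Transform (do not re-fit) the test features with the fitted scaler
X_test = scaler.transform(dtf_test.drop("Y", axis=1))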
Feature selection is the process of selecting a subset of relevant variables to build the machine learning model. It makes the model easier to interpret and reduces overfitting (when the model adapts too much to the training data and performs badly outside the train set).
I already did a first “manual” feature selection during data analysis by excluding irrelevant columns. Now it’s going to be a bit different, because we assume that all the features in the matrix are relevant and we want to drop the unnecessary ones. When is a feature unnecessary? Well, the answer is easy: when there is an equivalent one, or another that does the same job better.
I’ll explain with an example: Pclass is highly correlated with Cabin_section because, as we’ve seen before, certain sections were assigned to the 1st class and others to the 2nd. Let’s compute the correlation matrix to see it:
corr_matrix = dtf.copy()
for col in corr_matrix.columns:
    if corr_matrix[col].dtype == "O":
        corr_matrix[col] = corr_matrix[col].factorize(sort=True)[0]
corr_matrix = corr_matrix.corr(method="pearson")
sns.heatmap(corr_matrix, vmin=-1., vmax=1., annot=True, fmt='.2f', cmap="YlGnBu", cbar=True, linewidths=0.5)
plt.title("pearson correlation")
One of Pclass and Cabin_section could be unnecessary and we may decide to drop it and keep the more useful one (i.e. the one with the lowest p-value or the one that most reduces entropy).
I will show two different ways to perform automatic feature selection: first I will use a regularization method and compare it with the ANOVA test already mentioned before, then I will show how to get feature importance from ensemble methods.
LASSO regularization is a regression analysis method that performs both variable selection and regularization in order to enhance accuracy and interpretability.
X = dtf_train.drop("Y", axis=1).values
y = dtf_train["Y"].values
feature_names = dtf_train.drop("Y", axis=1).columns
## Anova
selector = feature_selection.SelectKBest(score_func=feature_selection.f_classif, k=10).fit(X, y)
anova_selected_features = feature_names[selector.get_support()]
## Lasso regularization
selector = feature_selection.SelectFromModel(estimator=linear_model.LogisticRegression(C=1, penalty="l1", solver='liblinear'), max_features=10).fit(X, y)
lasso_selected_features = feature_names[selector.get_support()]
## Plot
dtf_features = pd.DataFrame({"features": feature_names})
dtf_features["anova"] = dtf_features["features"].apply(lambda x: "anova" if x in anova_selected_features else "")
dtf_features["num1"] = dtf_features["features"].apply(lambda x: 1 if x in anova_selected_features else 0)
dtf_features["lasso"] = dtf_features["features"].apply(lambda x: "lasso" if x in lasso_selected_features else "")
dtf_features["num2"] = dtf_features["features"].apply(lambda x: 1 if x in lasso_selected_features else 0)
dtf_features["method"] = dtf_features[["anova","lasso"]].apply(lambda x: (x[0]+" "+x[1]).strip(), axis=1)
dtf_features["selection"] = dtf_features["num1"] + dtf_features["num2"]
sns.barplot(y="features", x="selection", hue="method", data=dtf_features.sort_values("selection", ascending=False), dodge=False)
The blue features are the ones selected by both ANOVA and LASSO, the others are selected by just one of the two methods.
Random forest is an ensemble method that consists of a number of decision trees in which every node is a condition on a single feature, designed to split the dataset into two so that similar response values end up in the same set. Feature importance is computed from how much each feature decreases the entropy in a tree.
X = dtf_train.drop("Y", axis=1).values
y = dtf_train["Y"].values
feature_names = dtf_train.drop("Y", axis=1).columns.tolist()
## Importance
model = ensemble.RandomForestClassifier(n_estimators=100, criterion="entropy", random_state=0)
model.fit(X, y)
importances = model.feature_importances_
## Put in a pandas dtf
dtf_importances = pd.DataFrame({"IMPORTANCE": importances, "VARIABLE": feature_names}).sort_values("IMPORTANCE", ascending=False)
dtf_importances['cumsum'] = dtf_importances['IMPORTANCE'].cumsum(axis=0)
dtf_importances = dtf_importances.set_index("VARIABLE")
## Plot
fig, ax = plt.subplots(nrows=1, ncols=2, sharex=False, sharey=False)
fig.suptitle("Features Importance", fontsize=20)
ax[0].title.set_text('variables')
dtf_importances[["IMPORTANCE"]].sort_values(by="IMPORTANCE").plot(kind="barh", legend=False, ax=ax[0]).grid(axis="x")
ax[0].set(ylabel="")
ax[1].title.set_text('cumulative')
dtf_importances[["cumsum"]].plot(kind="line", linewidth=4, legend=False, ax=ax[1])
ax[1].set(xlabel="", xticks=np.arange(len(dtf_importances)), xticklabels=dtf_importances.index)
plt.xticks(rotation=70)
plt.grid(axis='both')
plt.show()
It’s really interesting that Age and Fare, which are the most important features this time, weren’t the top features before and that on the contrary Cabin_section E, F and D don’t appear really useful here.
Personally, I always try to use as few features as possible, so here I select the following ones and proceed with the design, train, test and evaluation of the machine learning model:
X_names = ["Age", "Fare", "Sex_male", "SibSp", "Pclass_3", "Parch", "Cabin_section_n", "Embarked_S", "Pclass_2", "Cabin_section_F", "Cabin_section_E", "Cabin_section_D"]
X_train = dtf_train[X_names].values
y_train = dtf_train["Y"].values
X_test = dtf_test[X_names].values
y_test = dtf_test["Y"].values
Please note that before using test data for prediction you have to preprocess it just like we did for the train data.
Finally, it’s time to build the machine learning model. First, we need to choose an algorithm that is able to learn from training data how to recognize the two classes of the target variable by minimizing some error function.
I suggest always trying a gradient boosting algorithm (like XGBoost). It’s a machine learning technique that produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. Basically, it’s similar to a Random Forest with the difference that every tree is fitted on the error of the previous one.
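To make the residual-fitting idea concrete, here is a purely illustrative sketch of boosting with a squared loss; the real GradientBoostingClassifier optimizes a deviance loss and adds a lot more machinery:

from sklearn import tree
## start from a constant prediction
F = np.full(len(y_train), y_train.mean())
for m in range(3):  ## a few boosting rounds
    residuals = y_train - F  ## errors of the current ensemble
    weak = tree.DecisionTreeRegressor(max_depth=2).fit(X_train, residuals)
    F = F + 0.1 * weak.predict(X_train)  ## shrink the correction and add it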
There are a lot of hyperparameters and there is no general rule about what is best, so you just have to find the right combination that fits your data best. You could do different tries manually, or you can let the computer do this tedious job with a GridSearch (tries every possible combination but takes time) or with a RandomSearch (randomly tries a fixed number of iterations). I’ll try a RandomSearch for my hyperparameter tuning: the machine will iterate n times (1000) through training data to find the combination of parameters (specified in the code below) that maximizes a scoring function used as KPI (accuracy, the ratio of the number of correct predictions to the total number of input samples):
## call model
model = ensemble.GradientBoostingClassifier()
## define hyperparameters combinations to try
param_dic = {
 'learning_rate': [0.15,0.1,0.05,0.01,0.005,0.001],      #weighting factor for the corrections by new trees when added to the model
 'n_estimators': [100,250,500,750,1000,1250,1500,1750],  #number of trees added to the model
 'max_depth': [2,3,4,5,6,7],                             #maximum depth of the tree
 'min_samples_split': [2,4,6,8,10,20,40,60,100],         #sets the minimum number of samples to split
 'min_samples_leaf': [1,3,5,7,9],                        #the minimum number of samples to form a leaf
 'max_features': [2,3,4,5,6,7],                          #square root of features is usually a good starting point
 'subsample': [0.7,0.75,0.8,0.85,0.9,0.95,1]             #the fraction of samples to be used for fitting the individual base learners; values lower than 1 generally lead to a reduction of variance and an increase in bias
}
## random search
random_search = model_selection.RandomizedSearchCV(model, param_distributions=param_dic, n_iter=1000, scoring="accuracy").fit(X_train, y_train)
print("Best Model parameters:", random_search.best_params_)
print("Best Model mean accuracy:", random_search.best_score_)
model = random_search.best_estimator_
Cool, that’s the best model, with a mean accuracy of 0.85, so probably 85% of predictions on the test set will be correct.
We can also validate this model using a k-fold cross-validation, a procedure that consists in splitting the data k times into train and validation sets, training and testing the model on each split. It’s used to check how well the model is able to learn from some data and generalize to unseen data.
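For a quick numeric check, Scikit-learn can run the whole k-fold procedure in a single call; a minimal sketch:

## 10-fold cross-validated accuracy on the train set
scores = model_selection.cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy")
print("Mean CV accuracy:", round(scores.mean(), 2))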
I’d like to clarify that I call validation set a set of examples used to tune the hyperparameters of a classifier, extracted from splitting training data. On the other hand, a test set is a simulation of how the model would perform in production when it’s asked to predict observations never seen before.
It’s common to plot a ROC curve for every fold, a plot that illustrates how the ability of a binary classifier changes as its discrimination threshold is varied. It is created by plotting the true positive rate (1s predicted correctly) against the false positive rate (1s predicted that are actually 0s) at various threshold settings. The AUC (area under the ROC curve) indicates the probability that the classifier will rank a randomly chosen positive observation (Y=1) higher than a randomly chosen negative one (Y=0).
Now I’ll show an example with 10 folds (k=10):
cv = model_selection.StratifiedKFold(n_splits=10, shuffle=True)
tprs, aucs = [], []
mean_fpr = np.linspace(0, 1, 100)
fig = plt.figure()
i = 1
for train, test in cv.split(X_train, y_train):
    prediction = model.fit(X_train[train], y_train[train]).predict_proba(X_train[test])
    fpr, tpr, t = metrics.roc_curve(y_train[test], prediction[:, 1])
    tprs.append(np.interp(mean_fpr, fpr, tpr))
    roc_auc = metrics.auc(fpr, tpr)
    aucs.append(roc_auc)
    plt.plot(fpr, tpr, lw=2, alpha=0.3, label='ROC fold %d (AUC = %0.2f)' % (i, roc_auc))
    i = i + 1
plt.plot([0,1], [0,1], linestyle='--', lw=2, color='black')
mean_tpr = np.mean(tprs, axis=0)
mean_auc = metrics.auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, color='blue', label=r'Mean ROC (AUC = %0.2f)' % (mean_auc), lw=2, alpha=1)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('K-Fold Validation')
plt.legend(loc="lower right")
plt.show()
According to this validation, we should expect an AUC score of around 0.84 when making predictions on the test set.
For the purpose of this tutorial, I’d say that the performance is fine and we can proceed with the model selected by the RandomSearch. Once the right model is selected, it can be trained on the whole train set and then tested on the test set.
## train
model.fit(X_train, y_train)
## test
predicted_prob = model.predict_proba(X_test)[:,1]
predicted = model.predict(X_test)
In the code above I made two kinds of predictions: the first one is the probability that an observation is a 1, and the second is the prediction of the label (1 or 0). To get the latter, you have to decide a probability threshold above which an observation is considered a 1; I used the default threshold of 0.5.
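Changing the threshold is a one-liner; for instance, with a stricter (purely illustrative) cut-off of 0.75, fewer observations would be labeled as 1:

## label as 1 only the observations with predicted probability above 0.75
predicted_strict = (predicted_prob >= 0.75).astype(int)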
Moment of truth, we’re about to see if all this hard work is worth it. The whole point is to study how many correct predictions and error types the model makes.
I’ll evaluate the model using the following common metrics: Accuracy, AUC, Precision and Recall. I already mentioned the first two, but I reckon that the others are way more important. Precision is the fraction of 1s (or 0s) that the model predicted correctly among all predicted 1s (or 0s), so it can be seen as a sort of confidence level when predicting a 1 (or a 0). Recall is the portion of 1s (or 0s) that the model predicted correctly among all 1s (or 0s) in the test set, basically it’s the true 1 rate. Combining Precision and Recall with a harmonic mean, you get the F1-score.
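As a quick reference, this is how the F1-score follows from the two (a sketch using the test predictions computed above):

## F1 = 2 * (Precision * Recall) / (Precision + Recall)
f1 = metrics.f1_score(y_test, predicted)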
Let’s see how the model did on the test set:
## Accuracy and AUC
accuracy = metrics.accuracy_score(y_test, predicted)
auc = metrics.roc_auc_score(y_test, predicted_prob)
print("Accuracy (overall correct predictions):", round(accuracy,2))
print("Auc:", round(auc,2))
## Precision and Recall
recall = metrics.recall_score(y_test, predicted)
precision = metrics.precision_score(y_test, predicted)
print("Recall (all 1s predicted right):", round(recall,2))
print("Precision (confidence when predicting a 1):", round(precision,2))
print("Detail:")
print(metrics.classification_report(y_test, predicted, target_names=[str(i) for i in np.unique(y_test)]))
As expected, the general accuracy of the model is around 85%. It predicted 71% of 1s correctly with a precision of 84% and 92% of 0s with a precision of 85%. In order to understand these metrics better, I’ll break down the results in a confusion matrix:
classes = np.unique(y_test)
fig, ax = plt.subplots()
cm = metrics.confusion_matrix(y_test, predicted, labels=classes)
sns.heatmap(cm, annot=True, fmt='d', cmap=plt.cm.Blues, cbar=False)
ax.set(xlabel="Pred", ylabel="True", title="Confusion matrix")
ax.set_yticklabels(labels=classes, rotation=0)
plt.show()
We can see that the model predicted 85 (70+15) 1s of which 70 are true positives and 15 are false positives, so it has a Precision of 70/85 = 0.82 when predicting 1s. On the other hand, the model got 70 1s right of all the 96 (70+26) 1s in the test set, so its Recall is 70/96 = 0.73.
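The same numbers can be read straight off the matrix in code; a minimal sketch:

## with classes [0,1], sklearn orders the matrix as [[tn, fp], [fn, tp]]
tn, fp, fn, tp = cm.ravel()
print("Precision:", round(tp/(tp+fp), 2))  ## 70/85 = 0.82
print("Recall:", round(tp/(tp+fn), 2))     ## 70/96 = 0.73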
Choosing a threshold of 0.5 to decide whether a prediction is a 1 or a 0 led to this result. Would it be different with another one? Definitely yes, but there is no threshold that would bring the top score on both precision and recall; choosing a threshold means making a compromise between these two metrics. I’ll show what I mean by plotting the ROC curve and the precision-recall curve of the test result:
classes = np.unique(y_test)
fig, ax = plt.subplots(nrows=1, ncols=2)
## plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, predicted_prob)
roc_auc = metrics.auc(fpr, tpr)
ax[0].plot(fpr, tpr, color='darkorange', lw=3, label='area = %0.2f' % roc_auc)
ax[0].plot([0,1], [0,1], color='navy', lw=3, linestyle='--')
ax[0].hlines(y=recall, xmin=0, xmax=1-cm[0,0]/(cm[0,0]+cm[0,1]), color='red', linestyle='--', alpha=0.7, label="chosen threshold")
ax[0].vlines(x=1-cm[0,0]/(cm[0,0]+cm[0,1]), ymin=0, ymax=recall, color='red', linestyle='--', alpha=0.7)
ax[0].set(xlabel='False Positive Rate', ylabel="True Positive Rate (Recall)", title="Receiver operating characteristic")
ax[0].legend(loc="lower right")
ax[0].grid(True)
## annotate ROC thresholds
thres_in_plot = []
for i,t in enumerate(thresholds):
    t = np.round(t, 1)
    if t not in thres_in_plot:
        ax[0].annotate(t, xy=(fpr[i],tpr[i]), xytext=(fpr[i],tpr[i]), textcoords='offset points', ha='left', va='bottom')
        thres_in_plot.append(t)
## plot P-R curve
precisions, recalls, thresholds = metrics.precision_recall_curve(y_test, predicted_prob)
roc_auc = metrics.auc(recalls, precisions)
ax[1].plot(recalls, precisions, color='darkorange', lw=3, label='area = %0.2f' % roc_auc)
## no-skill baseline: the share of actual 1s in the test set
ax[1].plot([0,1], [(cm[1,0]+cm[1,1])/len(y_test), (cm[1,0]+cm[1,1])/len(y_test)], linestyle='--', color='navy', lw=3)
ax[1].hlines(y=precision, xmin=0, xmax=recall, color='red', linestyle='--', alpha=0.7, label="chosen threshold")
ax[1].vlines(x=recall, ymin=0, ymax=precision, color='red', linestyle='--', alpha=0.7)
ax[1].set(xlabel='Recall', ylabel="Precision", title="Precision-Recall curve")
ax[1].legend(loc="lower left")
ax[1].grid(True)
## annotate P-R thresholds
thres_in_plot = []
for i,t in enumerate(thresholds):
    t = np.round(t, 1)
    if t not in thres_in_plot:
        ax[1].annotate(t, xy=(recalls[i],precisions[i]), xytext=(recalls[i],precisions[i]), textcoords='offset points', ha='left', va='bottom')
        thres_in_plot.append(t)
plt.show()
Every point of these curves represents a confusion matrix obtained with a different threshold (the numbers printed on the curves). I could use a threshold of 0.1 and gain a recall of 0.9, meaning that the model would predict 90% of 1s correctly, but the precision would drop to 0.4, meaning that the model would predict a lot of false positives. So it really depends on the type of use case and, in particular, on whether a false positive has a higher cost than a false negative.
When the dataset is balanced and metrics aren’t specified by the project stakeholder, I usually choose the threshold that maximizes the F1-score. Here’s how:
## calculate scores for different thresholds
dic_scores = {'accuracy':[], 'precision':[], 'recall':[], 'f1':[]}
XX_train, XX_test, yy_train, yy_test = model_selection.train_test_split(X_train, y_train, test_size=0.2)
## use dedicated names so the test-set predictions computed above aren't overwritten
predicted_prob_val = model.fit(XX_train, yy_train).predict_proba(XX_test)[:,1]
thresholds = []
for threshold in np.arange(0.1, 1, step=0.1):
    predicted_val = (predicted_prob_val > threshold)
    thresholds.append(threshold)
    dic_scores["accuracy"].append(metrics.accuracy_score(yy_test, predicted_val))
    dic_scores["precision"].append(metrics.precision_score(yy_test, predicted_val))
    dic_scores["recall"].append(metrics.recall_score(yy_test, predicted_val))
    dic_scores["f1"].append(metrics.f1_score(yy_test, predicted_val))
## plot
dtf_scores = pd.DataFrame(dic_scores).set_index(pd.Index(thresholds))
fig, ax = plt.subplots()
dtf_scores.plot(ax=ax, title="Threshold Selection")
plt.show()
## refit on the full train set so the following sections use the original model
model.fit(X_train, y_train)
Before moving forward with the last section of this long tutorial, I’d like to point out that we can’t call the model good or bad yet. The accuracy is 0.85: is it high? Compared to what? You need a baseline to compare your model with. Maybe the project you’re working on is about building a new model to replace an old one that can be used as baseline, or you can train different machine learning models on the same train set and compare the performance on a test set.
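If there is no previous model, a quick sanity-check baseline is a classifier that always predicts the majority class; a minimal sketch:

from sklearn.dummy import DummyClassifier
## naive baseline: always predict the most frequent class of the train set
dummy = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("Baseline accuracy:", round(metrics.accuracy_score(y_test, dummy.predict(X_test)), 2))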
You analyzed and understood the data, you trained a model and tested it, you’re even satisfied with the performance. You think you’re done? Wrong. There’s a high chance that the project stakeholder doesn’t care about your metrics and doesn’t understand your algorithm, so you have to show that your machine learning model is not a black box.
The Lime package can help us to build an explainer. To give an illustration I will take a random observation from the test set and see what the model predicts:
print("True:", y_test[4], "--> Pred:", predicted[4], "| Prob:", round(predicted_prob[4], 2))
The model thinks that this observation is a 1 with a probability of 0.93 and in fact this passenger did survive. Why? Let’s use the explainer:
explainer = lime_tabular.LimeTabularExplainer(training_data=X_train, feature_names=X_names, class_names=np.unique(y_train), mode="classification")
explained = explainer.explain_instance(X_test[4], model.predict_proba, num_features=10)
explained.as_pyplot_figure()
The main factors for this particular prediction are that the passenger is female (Sex_male = 0), young (Age ≤ 22) and traveling in 1st class (Pclass_3 = 0 and Pclass_2 = 0).
The confusion matrix is a great tool to show how the testing went, but I also plot the classification regions to give a visual aid of what observations the model predicted correctly and what it missed. In order to plot the data in 2 dimensions some dimensionality reduction is required (the process of reducing the number of features by obtaining a set of principal variables). I will give an example using the PCA algorithm to summarize the data into 2 variables obtained with linear combinations of the features.
## PCA
pca = decomposition.PCA(n_components=2)
X_train_2d = pca.fit_transform(X_train)
X_test_2d = pca.transform(X_test)
## train 2d model
model_2d = ensemble.GradientBoostingClassifier()
model_2d.fit(X_train_2d, y_train)
## plot classification regions
from matplotlib.colors import ListedColormap
colors = {np.unique(y_test)[0]: "black", np.unique(y_test)[1]: "green"}
X1, X2 = np.meshgrid(np.arange(start=X_test_2d[:,0].min()-1, stop=X_test_2d[:,0].max()+1, step=0.01),
                     np.arange(start=X_test_2d[:,1].min()-1, stop=X_test_2d[:,1].max()+1, step=0.01))
fig, ax = plt.subplots()
Y = model_2d.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape)
ax.contourf(X1, X2, Y, alpha=0.5, cmap=ListedColormap(list(colors.values())))
ax.set(xlim=[X1.min(),X1.max()], ylim=[X2.min(),X2.max()], title="Classification regions")
for i in np.unique(y_test):
    ax.scatter(X_test_2d[y_test==i, 0], X_test_2d[y_test==i, 1], c=colors[i], label="true "+str(i))
plt.legend()
plt.show()
This article has been a tutorial to demonstrate how to approach a classification use case with data science. I used the Titanic dataset as an example, going through every step from data analysis to the machine learning model.
In the exploratory section, I analyzed the case of a single categorical variable, a single numerical variable and how they interact together. I gave an example of feature engineering extracting a feature from raw data. Regarding preprocessing, I explained how to handle missing values and categorical data. I showed different ways to select the right features, how to use them to build a machine learning classifier and how to assess the performance. In the final section, I gave some suggestions on how to improve the explainability of your machine learning model.
An important note is that I haven’t covered what happens after your model is approved for deployment. Just keep in mind that you need to build a pipeline to automatically process new data that you will get periodically.
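For instance, Scikit-learn’s Pipeline object can chain preprocessing and model into a single estimator; a minimal sketch (only scaling is shown here, imputation and encoding would be added as further steps):

from sklearn.pipeline import Pipeline
## chain preprocessing and model so new data always goes through the same steps
pipe = Pipeline([("scaler", preprocessing.MinMaxScaler(feature_range=(0,1))),
                 ("classifier", ensemble.GradientBoostingClassifier())])
pipe.fit(X_train, y_train)
new_predictions = pipe.predict(X_test)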
Now that you know how to approach a data science use case, you can apply this code and method to any kind of binary classification problem, carry out your own analysis, build your own model and even explain it.
I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects.
👉 Let’s Connect 👈
This article is part of the series Machine Learning with Python, see also:
},
{
"code": null,
"e": 34240,
"s": 33832,
"text": "Choosing a threshold of 0.5 to decide whether a prediction is a 1 or 0 led to this result. Would it be different with another one? Definitely yes, but there is no threshold that would bring the top score on both precision and recall, choosing a threshold means to make a compromise between these two metrics. I’ll show what I mean by plotting the ROC curve and the precision-recall curve of the test result:"
},
{
"code": null,
"e": 36331,
"s": 34240,
"text": "classes = np.unique(y_test)fig, ax = plt.subplots(nrows=1, ncols=2)## plot ROC curvefpr, tpr, thresholds = metrics.roc_curve(y_test, predicted_prob)roc_auc = metrics.auc(fpr, tpr) ax[0].plot(fpr, tpr, color='darkorange', lw=3, label='area = %0.2f' % roc_auc)ax[0].plot([0,1], [0,1], color='navy', lw=3, linestyle='--')ax[0].hlines(y=recall, xmin=0, xmax=1-cm[0,0]/(cm[0,0]+cm[0,1]), color='red', linestyle='--', alpha=0.7, label=\"chosen threshold\")ax[0].vlines(x=1-cm[0,0]/(cm[0,0]+cm[0,1]), ymin=0, ymax=recall, color='red', linestyle='--', alpha=0.7)ax[0].set(xlabel='False Positive Rate', ylabel=\"True Positive Rate (Recall)\", title=\"Receiver operating characteristic\") ax.legend(loc=\"lower right\")ax.grid(True)## annotate ROC thresholdsthres_in_plot = []for i,t in enumerate(thresholds): t = np.round(t,1) if t not in thres_in_plot: ax.annotate(t, xy=(fpr[i],tpr[i]), xytext=(fpr[i],tpr[i]), textcoords='offset points', ha='left', va='bottom') thres_in_plot.append(t) else: next## plot P-R curveprecisions, recalls, thresholds = metrics.precision_recall_curve(y_test, predicted_prob)roc_auc = metrics.auc(recalls, precisions)ax[1].plot(recalls, precisions, color='darkorange', lw=3, label='area = %0.2f' % roc_auc)ax[1].plot([0,1], [(cm[1,0]+cm[1,0])/len(y_test), (cm[1,0]+cm[1,0])/len(y_test)], linestyle='--', color='navy', lw=3)ax[1].hlines(y=precision, xmin=0, xmax=recall, color='red', linestyle='--', alpha=0.7, label=\"chosen threshold\")ax[1].vlines(x=recall, ymin=0, ymax=precision, color='red', linestyle='--', alpha=0.7)ax[1].set(xlabel='Recall', ylabel=\"Precision\", title=\"Precision-Recall curve\")ax[1].legend(loc=\"lower left\")ax[1].grid(True)## annotate P-R thresholdsthres_in_plot = []for i,t in enumerate(thresholds): t = np.round(t,1) if t not in thres_in_plot: ax.annotate(np.round(t,1), xy=(recalls[i],precisions[i]), xytext=(recalls[i],precisions[i]), textcoords='offset points', ha='left', va='bottom') thres_in_plot.append(t) else: nextplt.show()"
},
{
"code": null,
"e": 36805,
"s": 36331,
"text": "Every point of these curves represents a confusion matrix obtained with a different threshold (the numbers printed on the curves). I could use a threshold of 0.1 and gain a recall of 0.9, meaning that the model would predict 90% of 1s correctly, but the precision would drop to 0.4, meaning that the model would predict a lot of false positives. So it really depends on the type of use case and in particular whether a false positive has an higher cost of a false negative."
},
{
"code": null,
"e": 36958,
"s": 36805,
"text": "When the dataset is balanced and metrics aren’t specified by project stakeholder, I usually choose the threshold that maximize the F1-score. Here’s how:"
},
{
"code": null,
"e": 37818,
"s": 36958,
"text": "## calculate scores for different thresholdsdic_scores = {'accuracy':[], 'precision':[], 'recall':[], 'f1':[]}XX_train, XX_test, yy_train, yy_test = model_selection.train_test_split(X_train, y_train, test_size=0.2)predicted_prob = model.fit(XX_train, yy_train).predict_proba(XX_test)[:,1]thresholds = []for threshold in np.arange(0.1, 1, step=0.1): predicted = (predicted_prob > threshold) thresholds.append(threshold) dic_scores[\"accuracy\"].append(metrics.accuracy_score(yy_test, predicted))dic_scores[\"precision\"].append(metrics.precision_score(yy_test, predicted))dic_scores[\"recall\"].append(metrics.recall_score(yy_test, predicted))dic_scores[\"f1\"].append(metrics.f1_score(yy_test, predicted)) ## plotdtf_scores = pd.DataFrame(dic_scores).set_index(pd.Index(thresholds)) dtf_scores.plot(ax=ax, title=\"Threshold Selection\")plt.show()"
},
{
"code": null,
"e": 38288,
"s": 37818,
"text": "Before moving forward with the last section of this long tutorial, I’d like to say that we can’t say that the model is good or bad yet. The accuracy is 0.85, is it high? Compared to what? You need a baseline to compare your model with. Maybe the project you’re working on is about building a new model to replace an old one that can be used as baseline, or you can train different machine learning models on the same train set and compare the performance on a test set."
},
{
"code": null,
"e": 38620,
"s": 38288,
"text": "You analyzed and understood the data, you trained a model and tested it, you’re even satisfied with the performance. You think you’re done? Wrong. High chance that the project stakeholder doesn’t care about your metrics and doesn’t understand your algorithm, so you have to show that your machine learning model is not a black box."
},
{
"code": null,
"e": 38780,
"s": 38620,
"text": "The Lime package can help us to build an explainer. To give an illustration I will take a random observation from the test set and see what the model predicts:"
},
{
"code": null,
"e": 38871,
"s": 38780,
"text": "print(\"True:\", y_test[4], \"--> Pred:\", predicted[4], \"| Prob:\", np.max(predicted_prob[4]))"
},
{
"code": null,
"e": 39014,
"s": 38871,
"text": "The model thinks that this observation is a 1 with a probability of 0.93 and in fact this passenger did survive. Why? Let’s use the explainer:"
},
{
"code": null,
"e": 39276,
"s": 39014,
"text": "explainer = lime_tabular.LimeTabularExplainer(training_data=X_train, feature_names=X_names, class_names=np.unique(y_train), mode=\"classification\")explained = explainer.explain_instance(X_test[4], model.predict_proba, num_features=10)explained.as_pyplot_figure()"
},
{
"code": null,
"e": 39450,
"s": 39276,
"text": "The main factors for this particular prediction are that the passenger is female (Sex_male = 0), young (Age ≤ 22) and traveling in 1st class (Pclass_3 = 0 and Pclass_2 = 0)."
},
{
"code": null,
"e": 39965,
"s": 39450,
"text": "The confusion matrix is a great tool to show how the testing went, but I also plot the classification regions to give a visual aid of what observations the model predicted correctly and what it missed. In order to plot the data in 2 dimensions some dimensionality reduction is required (the process of reducing the number of features by obtaining a set of principal variables). I will give an example using the PCA algorithm to summarize the data into 2 variables obtained with linear combinations of the features."
},
{
"code": null,
"e": 40920,
"s": 39965,
"text": "## PCApca = decomposition.PCA(n_components=2)X_train_2d = pca.fit_transform(X_train)X_test_2d = pca.transform(X_test)## train 2d modelmodel_2d = ensemble.GradientBoostingClassifier()model_2d.fit(X_train, y_train) ## plot classification regionsfrom matplotlib.colors import ListedColormapcolors = {np.unique(y_test)[0]:\"black\", np.unique(y_test)[1]:\"green\"}X1, X2 = np.meshgrid(np.arange(start=X_test[:,0].min()-1, stop=X_test[:,0].max()+1, step=0.01),np.arange(start=X_test[:,1].min()-1, stop=X_test[:,1].max()+1, step=0.01))fig, ax = plt.subplots()Y = model_2d.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape)ax.contourf(X1, X2, Y, alpha=0.5, cmap=ListedColormap(list(colors.values())))ax.set(xlim=[X1.min(),X1.max()], ylim=[X2.min(),X2.max()], title=\"Classification regions\")for i in np.unique(y_test): ax.scatter(X_test[y_test==i, 0], X_test[y_test==i, 1], c=colors[i], label=\"true \"+str(i)) plt.legend()plt.show()"
},
{
"code": null,
"e": 41146,
"s": 40920,
"text": "This article has been a tutorial to demonstrate how to approach a classification use case with data science. I used the Titanic dataset as an example, going through every step from data analysis to the machine learning model."
},
{
"code": null,
"e": 41712,
"s": 41146,
"text": "In the exploratory section, I analyzed the case of a single categorical variable, a single numerical variable and how they interact together. I gave an example of feature engineering extracting a feature from raw data. Regarding preprocessing, I explained how to handle missing values and categorical data. I showed different ways to select the right features, how to use them to build a machine learning classifier and how to assess the performance. In the final section, I gave some suggestions on how to improve the explainability of your machine learning model."
},
{
"code": null,
"e": 41932,
"s": 41712,
"text": "An important note is that I haven’t covered what happens after your model is approved for deployment. Just keep in mind that you need to build a pipeline to automatically process new data that you will get periodically."
},
{
"code": null,
"e": 42143,
"s": 41932,
"text": "Now that you know how to approach a data science use case, you can apply this code and method to any kind of binary classification problem, carry out your own analysis, build your own model and even explain it."
},
{
"code": null,
"e": 42261,
"s": 42143,
"text": "I hope you enjoyed it! Feel free to contact me for questions and feedback or just to share your interesting projects."
}
] |
Boolean.TryParse() Method in C# | The Boolean.TryParse() method in C# is used to convert the specified string representation of a logical value to its Boolean equivalent. It returns true if the conversion succeeded and false otherwise, and it writes the converted value to an out parameter.
Following is the syntax −
public static bool TryParse (string value, out bool result);
Let us now see an example to implement the Boolean.TryParse() method −
using System;
public class Demo {
public static void Main(){
bool val;
bool flag;
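      // "true" is a valid Boolean string, so TryParse returns true
      // and stores the parsed value in flag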
val = Boolean.TryParse("true", out flag);
Console.WriteLine("Result = "+val);
}
}
This will produce the following output −
Result = True
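In modern C# you can also declare the out variable inline and branch on the return value. The following is a minimal sketch (the input string is hypothetical; parsing ignores case and surrounding white space) −
using System;
public class Demo {
   public static void Main(){
      string input = "TRUE";
      if (Boolean.TryParse(input, out bool result)) {
         Console.WriteLine("Parsed value: " + result);
      } else {
         Console.WriteLine("Not a valid Boolean string");
      }
   }
}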
Let us now see another example to implement the Boolean.TryParse() method −
using System;
public class Demo {
public static void Main(){
bool val;
bool flag;
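      // "$" is not a valid Boolean string, so TryParse returns false
      // and flag is set to false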
val = Boolean.TryParse("$", out flag);
Console.WriteLine("Result = "+val);
}
}
This will produce the following output −
Result = False | [
{
"code": null,
"e": 1199,
"s": 1062,
"text": "The Boolean.TryParse() method in C# is used to convert the specified string representation of a logical value to its Boolean equivalent."
},
{
"code": null,
"e": 1225,
"s": 1199,
"text": "Following is the syntax −"
},
{
"code": null,
"e": 1283,
"s": 1225,
"text": "public static bool TryParse (string val out bool result);"
},
{
"code": null,
"e": 1354,
"s": 1283,
"text": "Let us now see an example to implement the Boolean.TryParse() method −"
},
{
"code": null,
"e": 1548,
"s": 1354,
"text": "using System;\npublic class Demo {\n public static void Main(){\n bool val;\n bool flag;\n val = Boolean.TryParse(\"true\", out flag);\n Console.WriteLine(\"Result = \"+val);\n }\n}"
},
{
"code": null,
"e": 1589,
"s": 1548,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1603,
"s": 1589,
"text": "Result = True"
},
{
"code": null,
"e": 1679,
"s": 1603,
"text": "Let us now see another example to implement the Boolean.TryParse() method −"
},
{
"code": null,
"e": 1870,
"s": 1679,
"text": "using System;\npublic class Demo {\n public static void Main(){\n bool val;\n bool flag;\n val = Boolean.TryParse(\"$\", out flag);\n Console.WriteLine(\"Result = \"+val);\n }\n}"
},
{
"code": null,
"e": 1911,
"s": 1870,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1926,
"s": 1911,
"text": "Result = False"
}
] |
How to place a div inside an iframe for IE ? - GeeksforGeeks | 17 Jun, 2020
Introduction: An iframe (inline frame) is used to display a web page (or possibly a document) within a web page, i.e., it loads the contents of a page (or document) inside the current page.
Iframe Syntax:
<iframe src="URL"></iframe>
Approach 1: For adding additional divs alongside an iframe, you need to use a wrapper div that wraps the contents of your intended div and the iframe into one unit. This way you can display the contents of your div along with the iframe embedded in the document/webpage.
Example:
<!DOCTYPE html>
<html>
<head>
    <title>
        How to place a div inside an iframe for IE?
    </title>
</head>
<body>
    <h2>Div in an Iframe</h2>
    <div id="container" style="border: solid 2px #000;">
        <iframe height="500px" width="100%" name="iframe_a">
        </iframe>
        <div>
            <h2>This goes in the iframe</h2>
            <h1>This is another heading</h1>
        </div>
        <!-- Put all your contents of the div here -->
    </div>
    <p>
        <a href="https://ide.geeksforgeeks.org/" target="iframe_a">
            Click Here
        </a>
    </p>
</body>
</html>
Output:
Explanation: The div with id = “container” is the wrapper/container for the iframe and the contents of the div, thus the contents of the div are displayed along with the iframe.
Approach 2: Another way of handling the problem is to use the iframe itself for displaying the contents of the div instead of displaying them along with the iframe. However, this way the iframe only displays either the contents of the div or the webpage/document at any point in time, so this approach is a bit limited.
Syntax:
<iframe srcdoc="div goes here"></iframe>
Example:
<!DOCTYPE html>
<html>
<head>
    <title>
        How to place a div inside an iframe for IE?
    </title>
</head>
<body>
    <h2>Div in an Iframe</h2>
    <div id="container" style="border: solid 2px #000;">
        <iframe srcdoc="<div>
                            <h2>This goes in the iframe</h2>
                        </div>"
                height="500px" width="100%" name="iframe_a">
        </iframe>
        <!-- Put all your contents of the div here -->
    </div>
    <p>
        <a href="https://ide.geeksforgeeks.org/" target="iframe_a">
            Click Here
        </a>
    </p>
</body>
</html>
Output:
Explanation: In the above example, the iframe displays the div containing the text: “This goes in the iframe”, and whenever the “Click Here” link is clicked the iframe loads the target URL page in the iframe, overwriting the div that was present earlier. Doing this would load the linked page; however, to display the contents of the div again you would need to reload the page.
[
{
"code": null,
"e": 31758,
"s": 31730,
"text": "\n17 Jun, 2020"
},
{
"code": null,
"e": 31932,
"s": 31758,
"text": "Introduction: An Iframe is used to display a web page (or possibly a document) within a web page, i.e. it loads the contents of a page (or document) inside the current page."
},
{
"code": null,
"e": 31947,
"s": 31932,
"text": "Iframe Syntax:"
},
{
"code": null,
"e": 31976,
"s": 31947,
"text": "<iframe src=\"URL\"></iframe>\n"
},
{
"code": null,
"e": 32243,
"s": 31976,
"text": "Approach 1: For adding additional div’s in an iframe, you need to use a wrapper div, that wraps the contents of your intended div and the iframe into one unit. This way you can display the contents of your div along with the iframe embedding to the document/webpage."
},
{
"code": null,
"e": 32252,
"s": 32243,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> How to place a div inside an iframe for IE? </title></head> <body> <h2>Div in an Iframe</h2> <div id=\"container\" style=\"border: solid 2px #000;\"> <iframe height=\"500px\" width=\"100%\" name=\"iframe_a\"> </iframe> <div> <h2>This goes in the iframe</h2> <h1> This is another heading </h1> </div> <!-- Put all your contents of the div here --> </div> <p><a href=\"https://ide.geeksforgeeks.org/\" target=\"iframe_a\"> Click Here </a> </p></body> </html> ",
"e": 32917,
"s": 32252,
"text": null
},
{
"code": null,
"e": 32925,
"s": 32917,
"text": "Output:"
},
{
"code": null,
"e": 33103,
"s": 32925,
"text": "Explanation: The div with id = “container” is the wrapper/container for the iframe and the contents of the div, thus the contents of the div are displayed along with the iframe."
},
{
"code": null,
"e": 33428,
"s": 33103,
"text": "Approach 2: Another way of handling the problem is to use the iframe itself for displaying the contents of the div instead of displaying them along with the iframe. However, this way the iframe only displays either the contents of the div or of the webpage/document at any point of time, thus this approach is a bit limited."
},
{
"code": null,
"e": 33436,
"s": 33428,
"text": "Syntax:"
},
{
"code": null,
"e": 33478,
"s": 33436,
"text": "<iframe srcdoc=\"div goes here\"></iframe>\n"
},
{
"code": null,
"e": 33487,
"s": 33478,
"text": "Example:"
},
{
"code": "<!DOCTYPE html><html> <head> <title> How to place a div inside an iframe for IE? </title></head> <body> <h2>Div in an Iframe</h2> <div id=\"container\" style= \"border: solid 2px #000;\"> <iframe srcdoc=\"<div> <h2>This goes in the iframe</h2> </div>\" height=\"500px\" width=\"100%\" name=\"iframe_a\"> </iframe> <!-- Put all your contents of the div here --> </div> <p><a href=\"https://ide.geeksforgeeks.org/\" target=\"iframe_a\"> Click Here </a> </p></body> </html>",
"e": 34091,
"s": 33487,
"text": null
},
{
"code": null,
"e": 34099,
"s": 34091,
"text": "Output:"
},
{
"code": null,
"e": 34484,
"s": 34099,
"text": "Explanation: In the above example, the iframe displays the div containing the text: “This goes in the iframe”, and whenever the “Click Here” link is clicked the iframe loads the target URL page in the iframe, overwriting the div that was present earlier. Doing this would load the Wikipedia page, however, to display the contents of the div again you would require to reload the page."
}
] |
Comparator thenComparingDouble() method in Java with examples - GeeksforGeeks | 29 Apr, 2019
The thenComparingDouble(java.util.function.ToDoubleFunction) method of the Comparator interface in Java returns a lexicographic-order comparator with a function that extracts a double sort key. This method is applied after the comparing() method when you want to apply a secondary comparison to those values that are equal under the first comparison.
Syntax:
default Comparator<T> thenComparingDouble(
        ToDoubleFunction<? super T> keyExtractor)
Parameters: This method accepts keyExtractor which is the function used to extract the Double sort key.
Return value: This method returns a lexicographic-order comparator composed of this and then the Double sort key.
Exception: This method throws NullPointerException if the argument is null.
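To see the tie-breaking behaviour in isolation, here is a minimal sketch (the row data below is made up for illustration): rows are sorted by their first value, and rows whose first values are equal are ordered by the double key extracted from their second value.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ThenComparingDoubleSketch {
    public static void main(String[] args)
    {
        // each row holds {group, score}
        List<double[]> rows = new ArrayList<>(
            Arrays.asList(new double[] { 2, 55.5 },
                          new double[] { 1, 90.0 },
                          new double[] { 1, 70.5 }));

        // primary key: group; tie-breaking key: score
        rows.sort(Comparator.<double[]>comparingDouble(r -> r[0])
                      .thenComparingDouble(r -> r[1]));

        // prints [1.0, 70.5], [1.0, 90.0], [2.0, 55.5]
        rows.forEach(r -> System.out.println(Arrays.toString(r)));
    }
}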
Below programs illustrate the thenComparingDouble(java.util.function.ToDoubleFunction) method:
Program 1:
// Java program to demonstrate Comparator
// thenComparingDouble(ToDoubleFunction) method

import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class GFG {
    public static void main(String[] args)
    {
        List<Student> list = getStudentList();
        Comparator<Student> comparator
            = Comparator.comparing(Student::getSchool)
                  .thenComparingDouble(Student::getpercentageMarks);
        Collections.sort(list, comparator);
        System.out.println("After sort");
        list.forEach(s -> System.out.println(s));
    }

    public static List<Student> getStudentList()
    {
        Student s1 = new Student("Ram", 85.5, "SJV");
        Student s2 = new Student("Shyam", 83.25, "MSH");
        Student s3 = new Student("Mohan", 86.55, "SJV");
        Student s4 = new Student("Sohan", 81.00, "MSH");
        Student s5 = new Student("Rabi", 55.6, "SJV");
        List<Student> list = Arrays.asList(s1, s2, s3, s4, s5);
        return list;
    }
}

class Student {
    private String name;
    private double percentageMarks;
    private String school;

    public Student(String name, double percentageMarks, String school)
    {
        this.name = name;
        this.percentageMarks = percentageMarks;
        this.school = school;
    }

    public String getName() { return name; }

    public void setName(String name) { this.name = name; }

    public double getpercentageMarks() { return percentageMarks; }

    public void setpercentageMarks(int percentageMarks)
    {
        this.percentageMarks = percentageMarks;
    }

    public String getSchool() { return school; }

    public void setSchool(String school) { this.school = school; }

    @Override
    public String toString()
    {
        return "Student [name=" + name
            + ", percentageMarks = " + percentageMarks
            + ", school=" + school + "]";
    }
}
The output printed on the console of the IDE is shown below.
Output: You can see in the example that sorting is done school-wise first, and if the school is the same, then by percentageMarks.
Program 2:
// Java program to demonstrate Comparator
// thenComparingDouble(ToDoubleFunction) method

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class GFG {
    public static void main(String... args)
    {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6);
        try {
            // apply thenComparingDouble
            Comparator.comparing(list::get)
                .thenComparingDouble(null);
        }
        catch (Exception e) {
            System.out.printf("Exception:" + e);
        }
    }
}
The output printed on the console is shown below.
Output:
References: https://docs.oracle.com/javase/10/docs/api/java/util/Comparator.html#thenComparingDouble(java.util.function.ToDoubleFunction)()
[
{
"code": null,
"e": 23948,
"s": 23920,
"text": "\n29 Apr, 2019"
},
{
"code": null,
"e": 24281,
"s": 23948,
"text": "The thenComparingDouble(java.util.function.ToDoubleFunction) method of Comparator Interface in Java returns a lexicographic-order comparator with a function that extracts a double sort key. This method is applied after comparing method if you want to apply another comparing for those values which are equal in the comparing method."
},
{
"code": null,
"e": 24289,
"s": 24281,
"text": "Syntax:"
},
{
"code": null,
"e": 24373,
"s": 24289,
"text": "default Comparator <T> thenComparingDouble(\n ToDoubleFunction <T> keyExtractor)\n"
},
{
"code": null,
"e": 24477,
"s": 24373,
"text": "Parameters: This method accepts keyExtractor which is the function used to extract the Double sort key."
},
{
"code": null,
"e": 24591,
"s": 24477,
"text": "Return value: This method returns a lexicographic-order comparator composed of this and then the Double sort key."
},
{
"code": null,
"e": 24667,
"s": 24591,
"text": "Exception: This method throws NullPointerException if the argument is null."
},
{
"code": null,
"e": 24768,
"s": 24667,
"text": "Below programs illustrate thenComparingDouble(java.util.function.ToDoubleFunction) method:Program 1:"
},
{
"code": "// Java program to demonstrate Comparator// thenComparingDouble(ToDoubleFunction) method import java.util.Arrays;import java.util.Collections;import java.util.Comparator;import java.util.List; public class GFG { public static void main(String[] args) { List<Student> list = getStudentList(); Comparator<Student> comparator = Comparator .comparing(Student::getSchool) .thenComparingDouble(Student::getpercentageMarks); Collections.sort(list, comparator); System.out.println(\"After sort\"); list.forEach(s -> System.out.println(s)); } public static List<Student> getStudentList() { Student s1 = new Student(\"Ram\", 85.5, \"SJV\"); Student s2 = new Student(\"Shyam\", 83.25, \"MSH\"); Student s3 = new Student(\"Mohan\", 86.55, \"SJV\"); Student s4 = new Student(\"Sohan\", 81.00, \"MSH\"); Student s5 = new Student(\"Rabi\", 55.6, \"SJV\"); List<Student> list = Arrays.asList(s1, s2, s3, s4, s5); return list; }} class Student { private String name; private double percentageMarks; private String school; public Student(String name, double percentageMarks, String school) { this.name = name; this.percentageMarks = percentageMarks; this.school = school; } public String getName() { return name; } public void setName(String name) { this.name = name; } public double getpercentageMarks() { return percentageMarks; } public void setpercentageMarks(int percentageMarks) { this.percentageMarks = percentageMarks; } public String getSchool() { return school; } public void setSchool(String school) { this.school = school; } @Override public String toString() { return \"Student [name=\" + name + \", percentageMarks = \" + percentageMarks + \", school=\" + school + \"]\"; }}",
"e": 26816,
"s": 24768,
"text": null
},
{
"code": null,
"e": 26988,
"s": 26816,
"text": "The output printed on console of IDE is shown below.Output:You can see in example first sorting is done on school wise and if the school is same then percentageMarks wise."
},
{
"code": null,
"e": 26999,
"s": 26988,
"text": "Program 2:"
},
{
"code": "// Java program to demonstrate Comparator// thenComparingDouble(ToDoubleFunction) method import java.util.Arrays;import java.util.Comparator;import java.util.List; public class GFG { public static void main(String... args) { List<Integer> list = Arrays.asList(1, 2, 3, 4, 5, 6); try { // apply thenComparingDouble Comparator.comparing(list::get) .thenComparingDouble(null); } catch (Exception e) { System.out.printf(\"Exception:\" + e); } }}",
"e": 27555,
"s": 26999,
"text": null
},
{
"code": null,
"e": 27608,
"s": 27555,
"text": "The output printed on console is shown below.Output:"
},
{
"code": null,
"e": 27748,
"s": 27608,
"text": "References: https://docs.oracle.com/javase/10/docs/api/java/util/Comparator.html#thenComparingDouble(java.util.function.ToDoubleFunction)()"
}
] |
SWING - JComboBox Class | The class JComboBox is a component which combines a button or editable field and a drop-down list.
Following is the declaration for javax.swing.JComboBox class −
public class JComboBox
extends JComponent
implements ItemSelectable, ListDataListener, ActionListener, Accessible
Following are the fields for the javax.swing.JComboBox class −
protected String actionCommand − This protected field is implementation specific.

protected ComboBoxModel dataModel − This protected field is implementation specific.

protected ComboBoxEditor editor − This protected field is implementation specific.

protected boolean isEditable − This protected field is implementation specific.

protected JComboBox.KeySelectionManager keySelectionManager − This protected field is implementation specific.

protected boolean lightWeightPopupEnabled − This protected field is implementation specific.

protected int maximumRowCount − This protected field is implementation specific.

protected ListCellRenderer renderer − This protected field is implementation specific.

protected Object selectedItemReminder − This protected field is implementation specific.
JComboBox()
Creates a JComboBox with a default data model.
JComboBox(ComboBoxModel aModel)
Creates a JComboBox that takes its items from an existing ComboBoxModel.
JComboBox(Object[] items)
Creates a JComboBox that contains the elements in the specified array.
JComboBox(Vector<?> items)
Creates a JComboBox that contains the elements in the specified Vector.
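As a quick sketch, the array and model constructors are often the most convenient starting points (the item strings below are made up for illustration) −

// from an array of items
JComboBox arrayCombo = new JComboBox(new String[]{"Apple", "Grapes", "Mango"});

// from a model, which also allows items to be added or removed later
DefaultComboBoxModel comboModel = new DefaultComboBoxModel();
comboModel.addElement("Apple");
comboModel.addElement("Grapes");
JComboBox modelCombo = new JComboBox(comboModel);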
void actionPerformed(ActionEvent e)
This method is public as an implementation side effect.
protected void actionPropertyChanged(Action action, String propertyName)
Updates the ComboBox's state in response to property changes in associated action.
void addActionListener(ActionListener l)
Adds an ActionListener.
void addItem(Object anObject)
Adds an item to the item list.
void addItemListener(ItemListener aListener)
Adds an ItemListener.
void addPopupMenuListener(PopupMenuListener l)
Adds a PopupMenu listener which will listen to notification messages from the popup portion of the ComboBox.
void configureEditor(ComboBoxEditor anEditor, Object anItem)
Initializes the editor with the specified item.
protected void configurePropertiesFromAction(Action a)
Sets the properties on this ComboBox to match those in the specified Action.
void contentsChanged(ListDataEvent e)
This method is public as an implementation side effect.
protected PropertyChangeListener createActionPropertyChangeListener(Action a)
Creates and returns a PropertyChangeListener responsible for listening to changes from the specified Action and updating the appropriate properties.
protected JComboBox.KeySelectionManager createDefaultKeySelectionManager()
Returns an instance of the default key-selection manager.
protected void fireActionEvent()
Notifies all listeners who have registered interest for notification on this event type.
protected void fireItemStateChanged(ItemEvent e)
Notifies all listeners who have registered interest for notification on this event type.
void firePopupMenuCanceled()
Notifies PopupMenuListeners that the popup portion of the ComboBox has been canceled.
void firePopupMenuWillBecomeInvisible()
Notifies PopupMenuListeners that the popup portion of the ComboBox has become invisible.
void firePopupMenuWillBecomeVisible()
Notifies PopupMenuListeners that the popup portion of the ComboBox will become visible.
AccessibleContext getAccessibleContext()
Gets the AccessibleContext associated with this JComboBox.
Action getAction()
Returns the currently set Action for this ActionEvent source, or null if no Action is set.
String getActionCommand()
Returns the action command that is included in the event sent to action listeners.
ActionListener[] getActionListeners()
Returns an array of all the ActionListeners added to this JComboBox with addActionListener().
ComboBoxEditor getEditor()
Returns the editor used to paint and edit the selected item in the JComboBox field.
Object getItemAt(int index)
Returns the list item at the specified index.
int getItemCount()
Returns the number of items in the list.
ItemListener[] getItemListeners()
Returns an array of all the ItemListeners added to this JComboBox with addItemListener().
JComboBox.KeySelectionManager getKeySelectionManager()
Returns the list's key-selection manager.
int getMaximumRowCount()
Returns the maximum number of items the combo box can display without a scrollbar.
ComboBoxModel getModel()
Returns the data model currently used by the JComboBox.
PopupMenuListener[] getPopupMenuListeners()
Returns an array of all the PopupMenuListeners added to this JComboBox with addPopupMenuListener().
Object getPrototypeDisplayValue()
Returns the "prototypical display" value - an Object used for the calculation of the display height and width.
ListCellRenderer getRenderer()
Returns the renderer used to display the selected item in the JComboBox field.
int getSelectedIndex()
Returns the first item in the list that matches the given item.
Object getSelectedItem()
Returns the current selected item.
Object[] getSelectedObjects()
Returns an array containing the selected item.
ComboBoxUI getUI()
Returns the L&F object that renders this component.
String getUIClassID()
Returns the name of the L&F class that renders this component.
void hidePopup()
Causes the ComboBox to close its popup window.
void insertItemAt(Object anObject, int index)
Inserts an item into the item list at a given index.
protected void installAncestorListener()
void intervalAdded(ListDataEvent e)
This method is public as an implementation side effect.
void intervalRemoved(ListDataEvent e)
This method is public as an implementation side effect.
boolean isEditable()
Returns true if the JComboBox is editable.
boolean isLightWeightPopupEnabled()
Gets the value of the lightWeightPopupEnabled property.
boolean isPopupVisible()
Determines the visibility of the popup.
protected String paramString()
Returns a string representation of this JComboBox.
void processKeyEvent(KeyEvent e)
Handles KeyEvents, looking for the Tab key.
void removeActionListener(ActionListener l)
Removes an ActionListener.
void removeAllItems()
Removes all items from the item list.
void removeItem(Object anObject)
Removes an item from the item list.
void removeItemAt(int anIndex)
Removes the item at anIndex. This method works only if the JComboBox uses a mutable data model.
void removeItemListener(ItemListener aListener)
Removes an ItemListener.
void removePopupMenuListener(PopupMenuListener l)
Removes a PopupMenuListener.
protected void selectedItemChanged()
This protected method is implementation specific.
boolean selectWithKeyChar(char keyChar)
Selects the list item that corresponds to the specified keyboard character and returns true, if there is an item corresponding to that character.
void setAction(Action a)
Sets the Action for the ActionEvent source.
void setActionCommand(String aCommand)
Sets the action command that should be included in the event sent to action listeners.
void setEditable(boolean aFlag)
Determines whether the JComboBox field is editable.
void setEditor(ComboBoxEditor anEditor)
Sets the editor used to paint and edit the selected item in the JComboBox field.
void setEnabled(boolean b)
Enables the ComboBox so that items can be selected.
void setKeySelectionManager(JComboBox.KeySelectionManager aManager)
Sets the object that translates a keyboard character into a list selection.
void setLightWeightPopupEnabled(boolean aFlag)
Sets the lightWeightPopupEnabled property, which provides a hint as to whether or not a lightweight Component should be used to contain the JComboBox, versus a heavyweight Component such as a Panel or a Window.
void setMaximumRowCount(int count)
Sets the maximum number of rows the JComboBox displays.
void setModel(ComboBoxModel aModel)
Sets the data model that the JComboBox uses to obtain the list of items.
void setPopupVisible(boolean v)
Sets the visibility of the popup.
void setPrototypeDisplayValue(Object prototypeDisplayValue)
Sets the prototype display value used to calculate the size of the display for the UI portion.
void setRenderer(ListCellRenderer aRenderer)
Sets the renderer that paints the list items and the item selected from the list in the JComboBox field.
void setSelectedIndex(int anIndex)
Selects the item at index anIndex.
void setSelectedItem(Object anObject)
Sets the selected item in the ComboBox display area to the object in the argument.
void setUI(ComboBoxUI ui)
Sets the L&F object that renders this component.
void showPopup()
Causes the ComboBox to display its popup window.
void updateUI()
Resets the UI property to a value from the current look and feel.
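A short sketch tying a few of these methods together (the item names are made up for illustration) −

JComboBox sizeCombo = new JComboBox();
sizeCombo.addItem("Small");
sizeCombo.addItem("Medium");
sizeCombo.addItem("Large");
sizeCombo.setMaximumRowCount(2);   // show a scrollbar beyond two rows
sizeCombo.setSelectedIndex(1);     // selects "Medium"
sizeCombo.addActionListener(e -> System.out.println(sizeCombo.getSelectedItem()));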
This class inherits methods from the following classes −
javax.swing.JComponent
java.awt.Container
java.awt.Component
java.lang.Object
Create the following Java program using any editor of your choice in say D:/ > SWING > com > tutorialspoint > gui >
SwingControlDemo.java
package com.tutorialspoint.gui;
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;
public class SwingControlDemo {
private JFrame mainFrame;
private JLabel headerLabel;
private JLabel statusLabel;
private JPanel controlPanel;
public SwingControlDemo(){
prepareGUI();
}
public static void main(String[] args){
SwingControlDemo swingControlDemo = new SwingControlDemo();
swingControlDemo.showComboboxDemo();
}
private void prepareGUI(){
mainFrame = new JFrame("Java Swing Examples");
mainFrame.setSize(400,400);
mainFrame.setLayout(new GridLayout(3, 1));
mainFrame.addWindowListener(new WindowAdapter() {
public void windowClosing(WindowEvent windowEvent){
System.exit(0);
}
});
headerLabel = new JLabel("", JLabel.CENTER);
statusLabel = new JLabel("",JLabel.CENTER);
statusLabel.setSize(350,100);
controlPanel = new JPanel();
controlPanel.setLayout(new FlowLayout());
mainFrame.add(headerLabel);
mainFrame.add(controlPanel);
mainFrame.add(statusLabel);
mainFrame.setVisible(true);
}
private void showComboboxDemo(){
headerLabel.setText("Control in action: JComboBox");
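      // build the data model that backs the combo box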
final DefaultComboBoxModel fruitsName = new DefaultComboBoxModel();
fruitsName.addElement("Apple");
fruitsName.addElement("Grapes");
fruitsName.addElement("Mango");
fruitsName.addElement("Peer");
final JComboBox fruitCombo = new JComboBox(fruitsName);
fruitCombo.setSelectedIndex(0);
JScrollPane fruitListScrollPane = new JScrollPane(fruitCombo);
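      // the Show button reads the current selection from the combo box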
JButton showButton = new JButton("Show");
showButton.addActionListener(new ActionListener() {
public void actionPerformed(ActionEvent e) {
String data = "";
if (fruitCombo.getSelectedIndex() != -1) {
data = "Fruits Selected: "
+ fruitCombo.getItemAt
(fruitCombo.getSelectedIndex());
}
statusLabel.setText(data);
}
});
controlPanel.add(fruitListScrollPane);
controlPanel.add(showButton);
mainFrame.setVisible(true);
}
}
Compile the program using the command prompt. Go to D:/ > SWING and type the following command.
D:\SWING>javac com\tutorialspoint\gui\SwingControlDemo.java
If no error occurs, it means the compilation is successful. Run the program using the following command.
D:\SWING>java com.tutorialspoint.gui.SwingControlDemo
Verify the following output.
[
{
"code": null,
"e": 1862,
"s": 1763,
"text": "The class JComboBox is a component which combines a button or editable field and a drop-down list."
},
{
"code": null,
"e": 1925,
"s": 1862,
"text": "Following is the declaration for javax.swing.JComboBox class −"
},
{
"code": null,
"e": 2049,
"s": 1925,
"text": "public class JComboBox\n extends JComponent\n implements ItemSelectable, ListDataListener, ActionListener, Accessible\n"
},
{
"code": null,
"e": 2104,
"s": 2049,
"text": "Following are the fields for javax.swing.JList class −"
},
{
"code": null,
"e": 2186,
"s": 2104,
"text": "protected String actionCommand − This protected field is implementation specific."
},
{
"code": null,
"e": 2353,
"s": 2268,
"text": "protected ComboBoxModel dataModel − This protected field is implementation specific."
},
{
"code": null,
"e": 2521,
"s": 2438,
"text": "protected ComboBoxEditor editor − This protected field is implementation specific."
},
{
"code": null,
"e": 2684,
"s": 2604,
"text": "protected boolean isEditable − This protected field is implementation specific."
},
{
"code": null,
"e": 2875,
"s": 2764,
"text": "protected JComboBox.KeySelectionManager keySelectionManager − This protected field is implementation specific."
},
{
"code": null,
"e": 3079,
"s": 2986,
"text": "protected boolean lightWeightPopupEnabled − This protected field is implementation specific."
},
{
"code": null,
"e": 3253,
"s": 3172,
"text": "protected int maximumRowCount − This protected field is implementation specific."
},
{
"code": null,
"e": 3421,
"s": 3334,
"text": "protected ListCellRenderer renderer − This protected field is implementation specific."
},
{
"code": null,
"e": 3597,
"s": 3508,
"text": "protected Object selectedItemReminder − This protected field is implementation specific."
},
{
"code": null,
"e": 3698,
"s": 3686,
"text": "JComboBox()"
},
{
"code": null,
"e": 3745,
"s": 3698,
"text": "Creates a JComboBox with a default data model."
},
{
"code": null,
"e": 3777,
"s": 3745,
"text": "JComboBox(ComboBoxModel aModel)"
},
{
"code": null,
"e": 3850,
"s": 3777,
"text": "Creates a JComboBox that takes its items from an existing ComboBoxModel."
},
{
"code": null,
"e": 3876,
"s": 3850,
"text": "JComboBox(Object[] items)"
},
{
"code": null,
"e": 3947,
"s": 3876,
"text": "Creates a JComboBox that contains the elements in the specified array."
},
{
"code": null,
"e": 3974,
"s": 3947,
"text": "JComboBox(Vector<?> items)"
},
{
"code": null,
"e": 4046,
"s": 3974,
"text": "Creates a JComboBox that contains the elements in the specified Vector."
},
{
"code": null,
"e": 4082,
"s": 4046,
"text": "void actionPerformed(ActionEvent e)"
},
{
"code": null,
"e": 4138,
"s": 4082,
"text": "This method is public as an implementation side effect."
},
{
"code": null,
"e": 4211,
"s": 4138,
"text": "protected void actionPropertyChanged(Action action, String propertyName)"
},
{
"code": null,
"e": 4294,
"s": 4211,
"text": "Updates the ComboBox's state in response to property changes in associated action."
},
{
"code": null,
"e": 4335,
"s": 4294,
"text": "void addActionListener(ActionListener l)"
},
{
"code": null,
"e": 4359,
"s": 4335,
"text": "Adds an ActionListener."
},
{
"code": null,
"e": 4389,
"s": 4359,
"text": "void addItem(Object anObject)"
},
{
"code": null,
"e": 4420,
"s": 4389,
"text": "Adds an item to the item list."
},
{
"code": null,
"e": 4465,
"s": 4420,
"text": "void addItemListener(ItemListener aListener)"
},
{
"code": null,
"e": 4487,
"s": 4465,
"text": "Adds an ItemListener."
},
{
"code": null,
"e": 4534,
"s": 4487,
"text": "void addPopupMenuListener(PopupMenuListener l)"
},
{
"code": null,
"e": 4643,
"s": 4534,
"text": "Adds a PopupMenu listener which will listen to notification messages from the popup portion of the ComboBox."
},
{
"code": null,
"e": 4704,
"s": 4643,
"text": "void configureEditor(ComboBoxEditor anEditor, Object anItem)"
},
{
"code": null,
"e": 4752,
"s": 4704,
"text": "Initializes the editor with the specified item."
},
{
"code": null,
"e": 4807,
"s": 4752,
"text": "protected void configurePropertiesFromAction(Action a)"
},
{
"code": null,
"e": 4884,
"s": 4807,
"text": "Sets the properties on this ComboBox to match those in the specified Action."
},
{
"code": null,
"e": 4922,
"s": 4884,
"text": "void contentsChanged(ListDataEvent e)"
},
{
"code": null,
"e": 4978,
"s": 4922,
"text": "This method is public as an implementation side effect."
},
{
"code": null,
"e": 5056,
"s": 4978,
"text": "protected PropertyChangeListener\tcreateActionPropertyChangeListener(Action a)"
},
{
"code": null,
"e": 5205,
"s": 5056,
"text": "Creates and returns a PropertyChangeListener responsible for listening to changes from the specified Action and updating the appropriate properties."
},
{
"code": null,
"e": 5280,
"s": 5205,
"text": "protected JComboBox.KeySelectionManager createDefaultKeySelectionManager()"
},
{
"code": null,
"e": 5338,
"s": 5280,
"text": "Returns an instance of the default key-selection manager."
},
{
"code": null,
"e": 5371,
"s": 5338,
"text": "protected void fireActionEvent()"
},
{
"code": null,
"e": 5460,
"s": 5371,
"text": "Notifies all listeners who have registered interest for notification on this event type."
},
{
"code": null,
"e": 5509,
"s": 5460,
"text": "protected void fireItemStateChanged(ItemEvent e)"
},
{
"code": null,
"e": 5598,
"s": 5509,
"text": "Notifies all listeners who have registered interest for notification on this event type."
},
{
"code": null,
"e": 5627,
"s": 5598,
"text": "void firePopupMenuCanceled()"
},
{
"code": null,
"e": 5713,
"s": 5627,
"text": "Notifies PopupMenuListeners that the popup portion of the ComboBox has been canceled."
},
{
"code": null,
"e": 5753,
"s": 5713,
"text": "void firePopupMenuWillBecomeInvisible()"
},
{
"code": null,
"e": 5842,
"s": 5753,
"text": "Notifies PopupMenuListeners that the popup portion of the ComboBox has become invisible."
},
{
"code": null,
"e": 5880,
"s": 5842,
"text": "void firePopupMenuWillBecomeVisible()"
},
{
"code": null,
"e": 5968,
"s": 5880,
"text": "Notifies PopupMenuListeners that the popup portion of the ComboBox will become visible."
},
{
"code": null,
"e": 6009,
"s": 5968,
"text": "AccessibleContext\tgetAccessibleContext()"
},
{
"code": null,
"e": 6068,
"s": 6009,
"text": "Gets the AccessibleContext associated with this JComboBox."
},
{
"code": null,
"e": 6087,
"s": 6068,
"text": "Action getAction()"
},
{
"code": null,
"e": 6178,
"s": 6087,
"text": "Returns the currently set Action for this ActionEvent source, or null if no Action is set."
},
{
"code": null,
"e": 6204,
"s": 6178,
"text": "String getActionCommand()"
},
{
"code": null,
"e": 6287,
"s": 6204,
"text": "Returns the action command that is included in the event sent to action listeners."
},
{
"code": null,
"e": 6325,
"s": 6287,
"text": "ActionListener[] getActionListeners()"
},
{
"code": null,
"e": 6419,
"s": 6325,
"text": "Returns an array of all the ActionListeners added to this JComboBox with addActionListener()."
},
{
"code": null,
"e": 6446,
"s": 6419,
"text": "ComboBoxEditor getEditor()"
},
{
"code": null,
"e": 6530,
"s": 6446,
"text": "Returns the editor used to paint and edit the selected item in the JComboBox field."
},
{
"code": null,
"e": 6558,
"s": 6530,
"text": "Object getItemAt(int index)"
},
{
"code": null,
"e": 6604,
"s": 6558,
"text": "Returns the list item at the specified index."
},
{
"code": null,
"e": 6623,
"s": 6604,
"text": "int getItemCount()"
},
{
"code": null,
"e": 6664,
"s": 6623,
"text": "Returns the number of items in the list."
},
{
"code": null,
"e": 6698,
"s": 6664,
"text": "ItemListener[] getItemListeners()"
},
{
"code": null,
"e": 6788,
"s": 6698,
"text": "Returns an array of all the ItemListeners added to this JComboBox with addItemListener()."
},
{
"code": null,
"e": 6843,
"s": 6788,
"text": "JComboBox.KeySelectionManager getKeySelectionManager()"
},
{
"code": null,
"e": 6885,
"s": 6843,
"text": "Returns the list's key-selection manager."
},
{
"code": null,
"e": 6910,
"s": 6885,
"text": "int getMaximumRowCount()"
},
{
"code": null,
"e": 6993,
"s": 6910,
"text": "Returns the maximum number of items the combo box can display without a scrollbar."
},
{
"code": null,
"e": 7017,
"s": 6993,
"text": "ComboBoxMode getModel()"
},
{
"code": null,
"e": 7073,
"s": 7017,
"text": "Returns the data model currently used by the JComboBox."
},
{
"code": null,
"e": 7116,
"s": 7073,
"text": "PopupMenuListener[]getPopupMenuListeners()"
},
{
"code": null,
"e": 7216,
"s": 7116,
"text": "Returns an array of all the PopupMenuListeners added to this JComboBox with addPopupMenuListener()."
},
{
"code": null,
"e": 7250,
"s": 7216,
"text": "Object getPrototypeDisplayValue()"
},
{
"code": null,
"e": 7361,
"s": 7250,
"text": "Returns the \"prototypical display\" value - an Object used for the calculation of the display height and width."
},
{
"code": null,
"e": 7392,
"s": 7361,
"text": "ListCellRenderer\tgetRenderer()"
},
{
"code": null,
"e": 7471,
"s": 7392,
"text": "Returns the renderer used to display the selected item in the JComboBox field."
},
{
"code": null,
"e": 7494,
"s": 7471,
"text": "int getSelectedIndex()"
},
{
"code": null,
"e": 7558,
"s": 7494,
"text": "Returns the first item in the list that matches the given item."
},
{
"code": null,
"e": 7583,
"s": 7558,
"text": "Object getSelectedItem()"
},
{
"code": null,
"e": 7618,
"s": 7583,
"text": "Returns the current selected item."
},
{
"code": null,
"e": 7648,
"s": 7618,
"text": "Object[] getSelectedObjects()"
},
{
"code": null,
"e": 7695,
"s": 7648,
"text": "Returns an array containing the selected item."
},
{
"code": null,
"e": 7714,
"s": 7695,
"text": "ComboBoxUI getUI()"
},
{
"code": null,
"e": 7766,
"s": 7714,
"text": "Returns the L&F object that renders this component."
},
{
"code": null,
"e": 7788,
"s": 7766,
"text": "String getUIClassID()"
},
{
"code": null,
"e": 7851,
"s": 7788,
"text": "Returns the name of the L&F class that renders this component."
},
{
"code": null,
"e": 7868,
"s": 7851,
"text": "void hidePopup()"
},
{
"code": null,
"e": 7915,
"s": 7868,
"text": "Causes the ComboBox to close its popup window."
},
{
"code": null,
"e": 7961,
"s": 7915,
"text": "void insertItemAt(Object anObject, int index)"
},
{
"code": null,
"e": 8014,
"s": 7961,
"text": "Inserts an item into the item list at a given index."
},
{
"code": null,
"e": 8055,
"s": 8014,
"text": "protected void installAncestorListener()"
},
{
"code": null,
"e": 8091,
"s": 8055,
"text": "void intervalAdded(ListDataEvent e)"
},
{
"code": null,
"e": 8147,
"s": 8091,
"text": "This method is public as an implementation side effect."
},
{
"code": null,
"e": 8185,
"s": 8147,
"text": "void intervalRemoved(ListDataEvent e)"
},
{
"code": null,
"e": 8241,
"s": 8185,
"text": "This method is public as an implementation side effect."
},
{
"code": null,
"e": 8262,
"s": 8241,
"text": "boolean isEditable()"
},
{
"code": null,
"e": 8305,
"s": 8262,
"text": "Returns true if the JComboBox is editable."
},
{
"code": null,
"e": 8341,
"s": 8305,
"text": "boolean isLightWeightPopupEnabled()"
},
{
"code": null,
"e": 8397,
"s": 8341,
"text": "Gets the value of the lightWeightPopupEnabled property."
},
{
"code": null,
"e": 8422,
"s": 8397,
"text": "boolean isPopupVisible()"
},
{
"code": null,
"e": 8462,
"s": 8422,
"text": "Determines the visibility of the popup."
},
{
"code": null,
"e": 8493,
"s": 8462,
"text": "protected String\tparamString()"
},
{
"code": null,
"e": 8544,
"s": 8493,
"text": "Returns a string representation of this JComboBox."
},
{
"code": null,
"e": 8577,
"s": 8544,
"text": "void processKeyEvent(KeyEvent e)"
},
{
"code": null,
"e": 8621,
"s": 8577,
"text": "Handles KeyEvents, looking for the Tab key."
},
{
"code": null,
"e": 8665,
"s": 8621,
"text": "void removeActionListener(ActionListener l)"
},
{
"code": null,
"e": 8692,
"s": 8665,
"text": "Removes an ActionListener."
},
{
"code": null,
"e": 8714,
"s": 8692,
"text": "void removeAllItems()"
},
{
"code": null,
"e": 8752,
"s": 8714,
"text": "Removes all items from the item list."
},
{
"code": null,
"e": 8785,
"s": 8752,
"text": "void removeItem(Object anObject)"
},
{
"code": null,
"e": 8821,
"s": 8785,
"text": "Removes an item from the item list."
},
{
"code": null,
"e": 8852,
"s": 8821,
"text": "void removeItemAt(int anIndex)"
},
{
"code": null,
"e": 8947,
"s": 8852,
"text": "Removes the item at anIndex This method works only if the JComboBox uses a mutable data model."
},
{
"code": null,
"e": 8995,
"s": 8947,
"text": "void removeItemListener(ItemListener aListener)"
},
{
"code": null,
"e": 9020,
"s": 8995,
"text": "Removes an ItemListener."
},
{
"code": null,
"e": 9070,
"s": 9020,
"text": "void removePopupMenuListener(PopupMenuListener l)"
},
{
"code": null,
"e": 9099,
"s": 9070,
"text": "Removes a PopupMenuListener."
},
{
"code": null,
"e": 9136,
"s": 9099,
"text": "protected void selectedItemChanged()"
},
{
"code": null,
"e": 9186,
"s": 9136,
"text": "This protected method is implementation specific."
},
{
"code": null,
"e": 9226,
"s": 9186,
"text": "boolean selectWithKeyChar(char keyChar)"
},
{
"code": null,
"e": 9372,
"s": 9226,
"text": "Selects the list item that corresponds to the specified keyboard character and returns true, if there is an item corresponding to that character."
},
{
"code": null,
"e": 9397,
"s": 9372,
"text": "void setAction(Action a)"
},
{
"code": null,
"e": 9441,
"s": 9397,
"text": "Sets the Action for the ActionEvent source."
},
{
"code": null,
"e": 9480,
"s": 9441,
"text": "void setActionCommand(String aCommand)"
},
{
"code": null,
"e": 9567,
"s": 9480,
"text": "Sets the action command that should be included in the event sent to action listeners."
},
{
"code": null,
"e": 9599,
"s": 9567,
"text": "void setEditable(boolean aFlag)"
},
{
"code": null,
"e": 9651,
"s": 9599,
"text": "Determines whether the JComboBox field is editable."
},
{
"code": null,
"e": 9691,
"s": 9651,
"text": "void setEditor(ComboBoxEditor anEditor)"
},
{
"code": null,
"e": 9772,
"s": 9691,
"text": "Sets the editor used to paint and edit the selected item in the JComboBox field."
},
{
"code": null,
"e": 9799,
"s": 9772,
"text": "void setEnabled(boolean b)"
},
{
"code": null,
"e": 9851,
"s": 9799,
"text": "Enables the ComboBox so that items can be selected."
},
{
"code": null,
"e": 9919,
"s": 9851,
"text": "void setKeySelectionManager(JComboBox.KeySelectionManager aManager)"
},
{
"code": null,
"e": 9990,
"s": 9919,
"text": "Sets the object translates a keyboard character into a list selection."
},
{
"code": null,
"e": 10037,
"s": 9990,
"text": "void setLightWeightPopupEnabled(boolean aFlag)"
},
{
"code": null,
"e": 10248,
"s": 10037,
"text": "Sets the lightWeightPopupEnabled property, which provides a hint as to whether or not a lightweight Component should be used to contain the JComboBox, versus a heavyweight Component such as a Panel or a Window."
},
{
"code": null,
"e": 10283,
"s": 10248,
"text": "void setMaximumRowCount(int count)"
},
{
"code": null,
"e": 10339,
"s": 10283,
"text": "Sets the maximum number of rows the JComboBox displays."
},
{
"code": null,
"e": 10375,
"s": 10339,
"text": "void setModel(ComboBoxModel aModel)"
},
{
"code": null,
"e": 10448,
"s": 10375,
"text": "Sets the data model that the JComboBox uses to obtain the list of items."
},
{
"code": null,
"e": 10480,
"s": 10448,
"text": "void setPopupVisible(boolean v)"
},
{
"code": null,
"e": 10514,
"s": 10480,
"text": "Sets the visibility of the popup."
},
{
"code": null,
"e": 10574,
"s": 10514,
"text": "void setPrototypeDisplayValue(Object prototypeDisplayValue)"
},
{
"code": null,
"e": 10669,
"s": 10574,
"text": "Sets the prototype display value used to calculate the size of the display for the UI portion."
},
{
"code": null,
"e": 10714,
"s": 10669,
"text": "void setRenderer(ListCellRenderer aRenderer)"
},
{
"code": null,
"e": 10819,
"s": 10714,
"text": "Sets the renderer that paints the list items and the item selected from the list in the JComboBox field."
},
{
"code": null,
"e": 10854,
"s": 10819,
"text": "void setSelectedIndex(int anIndex)"
},
{
"code": null,
"e": 10889,
"s": 10854,
"text": "Selects the item at index anIndex."
},
{
"code": null,
"e": 10927,
"s": 10889,
"text": "void setSelectedItem(Object anObject)"
},
{
"code": null,
"e": 11010,
"s": 10927,
"text": "Sets the selected item in the ComboBox display area to the object in the argument."
},
{
"code": null,
"e": 11036,
"s": 11010,
"text": "void setUI(ComboBoxUI ui)"
},
{
"code": null,
"e": 11085,
"s": 11036,
"text": "Sets the L&F object that renders this component."
},
{
"code": null,
"e": 11102,
"s": 11085,
"text": "void showPopup()"
},
{
"code": null,
"e": 11151,
"s": 11102,
"text": "Causes the ComboBox to display its popup window."
},
{
"code": null,
"e": 11167,
"s": 11151,
"text": "void updateUI()"
},
{
"code": null,
"e": 11233,
"s": 11167,
"text": "Resets the UI property to a value from the current look and feel."
},
{
"code": null,
"e": 11290,
"s": 11233,
"text": "This class inherits methods from the following classes −"
},
{
"code": null,
"e": 11313,
"s": 11290,
"text": "javax.swing.JComponent"
},
{
"code": null,
"e": 11332,
"s": 11313,
"text": "java.awt.Container"
},
{
"code": null,
"e": 11351,
"s": 11332,
"text": "java.awt.Component"
},
{
"code": null,
"e": 11368,
"s": 11351,
"text": "java.lang.Object"
},
{
"code": null,
"e": 11484,
"s": 11368,
"text": "Create the following Java program using any editor of your choice in say D:/ > SWING > com > tutorialspoint > gui >"
},
{
"code": null,
"e": 11506,
"s": 11484,
"text": "SwingControlDemo.java"
},
{
"code": null,
"e": 13904,
"s": 11506,
"text": "package com.tutorialspoint.gui;\n \nimport java.awt.*;\nimport java.awt.event.*;\nimport javax.swing.*;\n \npublic class SwingControlDemo {\n private JFrame mainFrame;\n private JLabel headerLabel;\n private JLabel statusLabel;\n private JPanel controlPanel;\n\n public SwingControlDemo(){\n prepareGUI();\n }\n public static void main(String[] args){\n SwingControlDemo swingControlDemo = new SwingControlDemo(); \n swingControlDemo.showComboboxDemo();\n }\n private void prepareGUI(){\n mainFrame = new JFrame(\"Java Swing Examples\");\n mainFrame.setSize(400,400);\n mainFrame.setLayout(new GridLayout(3, 1));\n \n mainFrame.addWindowListener(new WindowAdapter() {\n public void windowClosing(WindowEvent windowEvent){\n System.exit(0);\n } \n }); \n headerLabel = new JLabel(\"\", JLabel.CENTER); \n statusLabel = new JLabel(\"\",JLabel.CENTER); \n statusLabel.setSize(350,100);\n\n controlPanel = new JPanel();\n controlPanel.setLayout(new FlowLayout());\n\n mainFrame.add(headerLabel);\n mainFrame.add(controlPanel);\n mainFrame.add(statusLabel);\n mainFrame.setVisible(true); \n }\n private void showComboboxDemo(){ \n headerLabel.setText(\"Control in action: JComboBox\"); \n final DefaultComboBoxModel fruitsName = new DefaultComboBoxModel();\n\n fruitsName.addElement(\"Apple\");\n fruitsName.addElement(\"Grapes\");\n fruitsName.addElement(\"Mango\");\n fruitsName.addElement(\"Peer\");\n\n final JComboBox fruitCombo = new JComboBox(fruitsName); \n fruitCombo.setSelectedIndex(0);\n\n JScrollPane fruitListScrollPane = new JScrollPane(fruitCombo); \n JButton showButton = new JButton(\"Show\");\n\n showButton.addActionListener(new ActionListener() {\n public void actionPerformed(ActionEvent e) { \n String data = \"\";\n if (fruitCombo.getSelectedIndex() != -1) { \n data = \"Fruits Selected: \" \n + fruitCombo.getItemAt\n (fruitCombo.getSelectedIndex()); \n } \n statusLabel.setText(data);\n }\n }); \n controlPanel.add(fruitListScrollPane); \n controlPanel.add(showButton); \n mainFrame.setVisible(true); \n }\n}"
},
{
"code": null,
"e": 14000,
"s": 13904,
"text": "Compile the program using the command prompt. Go to D:/ > SWING and type the following command."
},
{
"code": null,
"e": 14061,
"s": 14000,
"text": "D:\\SWING>javac com\\tutorialspoint\\gui\\SwingControlDemo.java\n"
},
{
"code": null,
"e": 14166,
"s": 14061,
"text": "If no error occurs, it means the compilation is successful. Run the program using the following command."
},
{
"code": null,
"e": 14221,
"s": 14166,
"text": "D:\\SWING>java com.tutorialspoint.gui.SwingControlDemo\n"
},
{
"code": null,
"e": 14250,
"s": 14221,
"text": "Verify the following output."
}
] |
openat() - Unix, Linux System Call |
#include <fcntl.h>

int openat(int dirfd, const char *pathname, int flags);
int openat(int dirfd, const char *pathname, int flags, mode_t mode);
If the pathname given in pathname is relative, then it is interpreted relative to the directory referred to by the file descriptor dirfd (rather than relative to the current working directory of the calling process, as is done by open(2) for a relative pathname).
If the pathname given in pathname is relative and dirfd is the special value AT_FDCWD, then pathname is interpreted relative to the current working directory of the calling process (like open(2)).
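As a one-line illustration (this example is ours, not from the man page): openat(AT_FDCWD, "notes.txt", O_RDONLY) resolves notes.txt against the current working directory, exactly as open("notes.txt", O_RDONLY) would.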
If the pathname given in pathname is absolute, then dirfd is ignored.
First, openat() allows an application to avoid race conditions that could occur when using open(2) to open files in directories other than the current working directory. These race conditions result from the fact that some component of the directory prefix given to open() could be changed in parallel with the call to open(). Such races can be avoided by opening a file descriptor for the target directory, and then specifying that file descriptor as the dirfd argument of openat().
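A minimal C sketch of this pattern follows; the directory and file names are illustrative, and error handling is kept to the bare minimum.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
   /* Pin down the target directory once by opening a descriptor for it. */
   int dirfd = open("/some/target/dir", O_RDONLY | O_DIRECTORY);
   if (dirfd == -1) {
      perror("open");
      return 1;
   }

   /* Resolve "data.txt" relative to that descriptor; renaming any
      component of the directory prefix afterwards can no longer
      redirect this lookup. */
   int fd = openat(dirfd, "data.txt", O_RDONLY);
   if (fd == -1)
      perror("openat");
   else
      close(fd);

   close(dirfd);
   return 0;
}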
Second, openat() allows the implementation of a per-thread "current working directory", via file descriptor(s) maintained by the application. (This functionality can also be obtained by tricks based on the use of /proc/self/fd/dirfd, but less efficiently.)
faccessat (2)
fchmodat (2)
fchownat (2)
fstatat (2)
futimesat (2)
linkat (2)
mkdirat (2)
mknodat (2)
open (2)
path_resolution (2)
readlinkat (2)
renameat (2)
symlinkat (2)
unlinkat (2)
[
{
"code": null,
"e": 2473,
"s": 2319,
"text": "#include <fcntl.h> \n\nint openat(int dirfd, const char *pathname, int flags); \nint openat(int dirfd, const char *pathname, int flags \n\", mode_t \" mode );\n"
},
{
"code": null,
"e": 2607,
"s": 2473,
"text": "\nint openat(int dirfd, const char *pathname, int flags); \nint openat(int dirfd, const char *pathname, int flags \n\", mode_t \" mode );\n"
},
{
"code": null,
"e": 2873,
"s": 2607,
"text": "\nIf the pathname given in\npathname is relative, then it is interpreted relative to the directory\nreferred to by the file descriptor\ndirfd (rather than relative to the current working directory of\nthe calling process, as is done by\nopen(2)\nfor a relative pathname).\n"
},
{
"code": null,
"e": 3072,
"s": 2873,
"text": "\nIf the pathname given in\npathname is relative and\ndirfd is the special value\nAT_FDCWD, then\npathname is interpreted relative to the current working\ndirectory of the calling process (like\nopen(2)).\n"
},
{
"code": null,
"e": 3144,
"s": 3072,
"text": "\nIf the pathname given in\npathname is absolute, then\ndirfd is ignored.\n"
},
{
"code": null,
"e": 3630,
"s": 3144,
"text": "\nFirst,\nopenat() allows an application to avoid race conditions that could\noccur when using\nopen(2)\nto open files in directories other than the current working directory.\nThese race conditions result from the fact that some component\nof the directory prefix given to\nopen() could be changed in parallel with the call to\nopen(). Such races can be avoided by\nopening a file descriptor for the target directory,\nand then specifying that file descriptor as the\ndirfd argument of\nopenat(). "
},
{
"code": null,
"e": 3889,
"s": 3630,
"text": "\nSecond,\nopenat() allows the implementation of a per-thread \"current working\ndirectory\", via file descriptor(s) maintained by the application.\n(This functionality can also be obtained by tricks based\non the use of\n/proc/self/fd/dirfd, but less efficiently.)\n"
},
{
"code": null,
"e": 3903,
"s": 3889,
"text": "faccessat (2)"
},
{
"code": null,
"e": 3917,
"s": 3903,
"text": "faccessat (2)"
},
{
"code": null,
"e": 3930,
"s": 3917,
"text": "fchmodat (2)"
},
{
"code": null,
"e": 3943,
"s": 3930,
"text": "fchmodat (2)"
},
{
"code": null,
"e": 3956,
"s": 3943,
"text": "fchownat (2)"
},
{
"code": null,
"e": 3969,
"s": 3956,
"text": "fchownat (2)"
},
{
"code": null,
"e": 3981,
"s": 3969,
"text": "fstatat (2)"
},
{
"code": null,
"e": 3993,
"s": 3981,
"text": "fstatat (2)"
},
{
"code": null,
"e": 4007,
"s": 3993,
"text": "futimesat (2)"
},
{
"code": null,
"e": 4021,
"s": 4007,
"text": "futimesat (2)"
},
{
"code": null,
"e": 4032,
"s": 4021,
"text": "linkat (2)"
},
{
"code": null,
"e": 4043,
"s": 4032,
"text": "linkat (2)"
},
{
"code": null,
"e": 4055,
"s": 4043,
"text": "mkdirat (2)"
},
{
"code": null,
"e": 4067,
"s": 4055,
"text": "mkdirat (2)"
},
{
"code": null,
"e": 4079,
"s": 4067,
"text": "mknodat (2)"
},
{
"code": null,
"e": 4091,
"s": 4079,
"text": "mknodat (2)"
},
{
"code": null,
"e": 4100,
"s": 4091,
"text": "open (2)"
},
{
"code": null,
"e": 4109,
"s": 4100,
"text": "open (2)"
},
{
"code": null,
"e": 4129,
"s": 4109,
"text": "path_resolution (2)"
},
{
"code": null,
"e": 4149,
"s": 4129,
"text": "path_resolution (2)"
},
{
"code": null,
"e": 4164,
"s": 4149,
"text": "readlinkat (2)"
},
{
"code": null,
"e": 4179,
"s": 4164,
"text": "readlinkat (2)"
},
{
"code": null,
"e": 4192,
"s": 4179,
"text": "renameat (2)"
},
{
"code": null,
"e": 4205,
"s": 4192,
"text": "renameat (2)"
},
{
"code": null,
"e": 4219,
"s": 4205,
"text": "symlinkat (2)"
},
{
"code": null,
"e": 4233,
"s": 4219,
"text": "symlinkat (2)"
},
{
"code": null,
"e": 4246,
"s": 4233,
"text": "unlinkat (2)"
},
{
"code": null,
"e": 4259,
"s": 4246,
"text": "unlinkat (2)"
}
] |
Double Pointer (Pointer to Pointer) in C | A pointer is used to store the address of a variable. So, when we define a pointer to a pointer, the second pointer stores the address of the first pointer, which in turn points to the variable. Thus it is known as a double pointer.
Begin
Declare v of the integer datatype.
Initialize v = 76.
Declare a pointer p1 of the integer datatype.
Declare another double pointer p2 of the integer datatype.
Initialize p1 as the pointer to variable v.
Initialize p2 as the pointer to variable p1.
Print “Value of v”.
Print the value of variable v.
Print “Value of v using single pointer”.
Print the value of pointer p1.
Print “Value of v using double pointer”.
Print the value of double pointer p2.
End.
A simple program to understand double pointers:
#include <stdio.h>

int main() {
   int v = 76;
   int *p1;   /* pointer to an int */
   int **p2;  /* pointer to a pointer to an int */
   p1 = &v;   /* p1 holds the address of v */
   p2 = &p1;  /* p2 holds the address of p1 */
   printf("Value of v = %d\n", v);
   printf("Value of v using single pointer = %d\n", *p1);
   printf("Value of v using double pointer = %d\n", **p2);
   return 0;
}
Value of v = 76
Value of v using single pointer = 76
Value of v using double pointer = 76 | [
{
"code": null,
"e": 1263,
"s": 1062,
"text": "A pointer is used to store the address of variables. So, when we define a pointer to pointer, the first pointer is used to store the address of the second pointer. Thus it is known as double pointers."
},
{
"code": null,
"e": 1772,
"s": 1263,
"text": "Begin\n Declare v of the integer datatype.\n Initialize v = 76.\n Declare a pointer p1 of the integer datatype.\n Declare another double pointer p2 of the integer datatype.\n Initialize p1 as the pointer to variable v.\n Initialize p2 as the pointer to variable p1.\n Print “Value of v”.\n Print the value of variable v.\n Print “Value of v using single pointer”.\n Print the value of pointer p1.\n Print “Value of v using double pointer”.\n Print the value of double pointer p2.\nEnd."
},
{
"code": null,
"e": 1819,
"s": 1772,
"text": "A simple program to understand double pointer:"
},
{
"code": null,
"e": 2065,
"s": 1819,
"text": "int main() {\n int v = 76;\n int *p1;\n int **p2;\n p1 = &v;\n p2 = &p1;\n printf(\"Value of v = %d\\n\", v);\n printf(\"Value of v using single pointer = %d\\n\", *p1 );\n printf(\"Value of v using double pointer = %d\\n\", **p2);\n return 0;\n}"
},
{
"code": null,
"e": 2155,
"s": 2065,
"text": "Value of v = 76\nValue of v using single pointer = 76\nValue of v using double pointer = 76"
}
] |
div() function in C++ - GeeksforGeeks | 28 Sep, 2021
Given a numerator and a denominator, we often need both the quotient and the remainder. Rather than applying the division and modulo operators separately, the div() function lets us compute both in a single call, easily and efficiently.
div() function: Returns the integral quotient and remainder of the division of number by denom (number/denom) as a structure of type div_t, ldiv_t, or lldiv_t, which has two members: quot and rem.
Syntax:
div_t div(int numerator, int denominator);
ldiv_t div(long numerator, long denominator);
lldiv_t div(long long numerator, long long denominator);
When we use div() function, it returns a structure that contains the quotient and remainder of the parameters. The first parameter passed in a div() function is taken as numerator and the 2nd parameter is taken as the denominator.
For int values, the structure returned is div_t. This structure looks like this:
C++
typedef struct {
    int quot;    /* Quotient.  */
    int rem;     /* Remainder. */
} div_t;
Similarly, for long values, structure ldiv_t is returned and for long long values, structure lldiv_t is returned.
C++
struct ldiv_t {
    long quot;
    long rem;
};

struct lldiv_t {
    long long quot;
    long long rem;
};
Where is it useful ?
The question is: since we already have the % and / operators, why should we use the div() function? In a program that requires both the quotient and the remainder, div() is the best choice, as it calculates both values at once and requires less time than applying / and % one by one. The remainder returned by div() matches the % operator exactly: if % yields a negative remainder, div() yields the same negative remainder. For example, div(-40, 3) gives a remainder of -1. So, div() can be used efficiently according to one's requirement.
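The following small C++ sketch (added here for illustration; it is not part of the original example) verifies that claim for a negative numerator:

#include <cstdlib>
#include <iostream>
using namespace std;

int main() {
   div_t r = div(-40, 3);
   // Both lines print the same quotient and remainder: -13 and -1
   cout << "div(): quot = " << r.quot << ", rem = " << r.rem << endl;
   cout << "operators: quot = " << (-40 / 3) << ", rem = " << (-40 % 3) << endl;
   return 0;
}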
What happens when the denominator is 0?
If either part of the result — the quotient or the remainder — cannot be represented, the behavior of the whole structure is undefined.
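Because this is undefined behavior rather than a catchable error, guarding the denominator is the caller's job. One possible sketch:

#include <cstdlib>
#include <iostream>
using namespace std;

int main() {
   int num = 100, den = 0;
   if (den == 0) {
      cout << "Denominator is zero; div() was not called." << endl;
      return 1;
   }
   div_t result = div(num, den);
   cout << result.quot << " remainder " << result.rem << endl;
   return 0;
}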
NOTE: While using the div() function, remember to include the <cstdlib> header in your program.
Examples:
Input : div(40, 5)
Output :quot = 8 rem = 0
Input :div(53, 8)
Output :quot = 6 rem = 5
Input : div(-40,3)
Output : quot = -13 , rem = -1
Implementation:
C++
// CPP program to illustrate// div() function#include <iostream>#include <cstdlib>using namespace std; int main(){ div_t result1 = div(100, 6); cout << "Quotient of 100/6 = " << result1.quot << endl; cout << "Remainder of 100/6 = " << result1.rem << endl; ldiv_t result2 = div(19237012L,251L); cout << "Quotient of 19237012L/251L = " << result2.quot << endl; cout << "Remainder of 19237012L/251L = " << result2.rem << endl; return 0;}
Output:
Quotient of 100/6 = 16
Remainder of 100/6 = 4
Quotient of 19237012L/251L = 76641
Remainder of 19237012L/251L = 121
[
{
"code": null,
"e": 24096,
"s": 24068,
"text": "\n28 Sep, 2021"
},
{
"code": null,
"e": 24292,
"s": 24096,
"text": "Given a numerator and denominator, we have to find their quotient and remainder without using the modulo or division operator. div() function allows us to do the same task easily and efficiently."
},
{
"code": null,
"e": 24492,
"s": 24292,
"text": "div() function : Returns the integral quotient and remainder of the division of number by denom ( number/denom ) as a structure of type div_t, ldiv_t or lldiv_t, which has two members: quot and rem. "
},
{
"code": null,
"e": 24501,
"s": 24492,
"text": "Syntax: "
},
{
"code": null,
"e": 24647,
"s": 24501,
"text": "div_t div(int numerator, int denominator);\nldiv_t div(long numerator, long denominator);\nlldiv_t div(long long numerator, long long denominator);"
},
{
"code": null,
"e": 24879,
"s": 24647,
"text": "When we use div() function, it returns a structure that contains the quotient and remainder of the parameters. The first parameter passed in a div() function is taken as numerator and the 2nd parameter is taken as the denominator. "
},
{
"code": null,
"e": 24961,
"s": 24879,
"text": "For int values, the structure returned is div_t. This structure looks like this: "
},
{
"code": null,
"e": 24965,
"s": 24961,
"text": "C++"
},
{
"code": "typedef struct { int quot; /* Quotient. */ int rem; /* Remainder. */} div_t;",
"e": 25053,
"s": 24965,
"text": null
},
{
"code": null,
"e": 25167,
"s": 25053,
"text": "Similarly, for long values, structure ldiv_t is returned and for long long values, structure lldiv_t is returned."
},
{
"code": null,
"e": 25171,
"s": 25167,
"text": "C++"
},
{
"code": "ldiv_t:struct ldiv_t { long quot; long rem;}; lldiv_t:struct lldiv_t { long long quot; long long rem;};",
"e": 25287,
"s": 25171,
"text": null
},
{
"code": null,
"e": 25308,
"s": 25287,
"text": "Where is it useful ?"
},
{
"code": null,
"e": 26005,
"s": 25308,
"text": "The question is, since we have both % and / operators, why should we use div() function?. Well, in a program where we require both – quotient and remainder, using div() function would be the best choice as it calculates both the values for you at once, moreover, it requires less time as compared to using % and / functions one by one. While using div() function, both %operator and using div() will return the same value of remainder, i.e if we’re getting negative value of remainder by using %operator then we will get negative value of remainder using div() function too. Eg, div(-40,3) will give remainder of ‘-1’. . So, div() function can be efficiently used according to one’s requirement. "
},
{
"code": null,
"e": 26045,
"s": 26005,
"text": "What happens when the denominator is 0?"
},
{
"code": null,
"e": 26223,
"s": 26045,
"text": "If any one of the part of this function, i.e. the remainder or the quotient cannot be represented or cannot find a result, then the whole structure shows an undefined behavior. "
},
{
"code": null,
"e": 26313,
"s": 26223,
"text": "NOTE: While using div() function, remember to include cstdlib.h library in your program. "
},
{
"code": null,
"e": 26324,
"s": 26313,
"text": "Examples: "
},
{
"code": null,
"e": 26463,
"s": 26324,
"text": "Input : div(40, 5)\nOutput :quot = 8 rem = 0\n\nInput :div(53, 8)\nOutput :quot = 6 rem = 5\n\nInput : div(-40,3)\nOutput : quot = -13 , rem = -1"
},
{
"code": null,
"e": 26479,
"s": 26463,
"text": "Implementation:"
},
{
"code": null,
"e": 26483,
"s": 26479,
"text": "C++"
},
{
"code": "// CPP program to illustrate// div() function#include <iostream>#include <cstdlib>using namespace std; int main(){ div_t result1 = div(100, 6); cout << \"Quotient of 100/6 = \" << result1.quot << endl; cout << \"Remainder of 100/6 = \" << result1.rem << endl; ldiv_t result2 = div(19237012L,251L); cout << \"Quotient of 19237012L/251L = \" << result2.quot << endl; cout << \"Remainder of 19237012L/251L = \" << result2.rem << endl; return 0;}",
"e": 27038,
"s": 26483,
"text": null
},
{
"code": null,
"e": 27047,
"s": 27038,
"text": "Output: "
},
{
"code": null,
"e": 27162,
"s": 27047,
"text": "Quotient of 100/6 = 16\nRemainder of 100/6 = 4\nQuotient of 19237012L/251L = 76641\nRemainder of 19237012L/251L = 121"
}
] |
PHP - Syntax Overview | This chapter will give you an idea of very basic syntax of PHP and very important to make your PHP foundation strong.
The PHP parsing engine needs a way to differentiate PHP code from other elements in the page. The mechanism for doing so is known as 'escaping to PHP'. There are four ways to do this −
The most universally effective PHP tag style is −
<?php...?>
If you use this style, you can be positive that your tags will always be correctly interpreted.
Short or short-open tags look like this −
<?...?>
Short tags are, as one might expect, the shortest option. You must do one of two things to enable PHP to recognize the tags −
Choose the --enable-short-tags configuration option when you're building PHP.

Set the short_open_tag setting in your php.ini file to on (a sample line is shown after this list). This option must be disabled to parse XML with PHP because the same syntax is used for XML tags.
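For the second option, the relevant php.ini line looks like this (the directive name short_open_tag is standard; where it sits in your php.ini is up to you):

short_open_tag = On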
ASP-style tags mimic the tags used by Active Server Pages to delineate code blocks. ASP-style tags look like this −
<%...%>
To use ASP-style tags, you will need to set the asp_tags configuration option in your php.ini file.
HTML script tags look like this −
<script language = "PHP">...</script>
A comment is the portion of a program that exists only for the human reader and stripped out before displaying the programs result. There are two commenting formats in PHP −
Single-line comments − They are generally used for short explanations or notes relevant to the local code. Here are the examples of single line comments.
<?
# This is a comment, and
# This is the second line of the comment
// This is a comment too. Each style comments only to the end of the line
print "An example with single line comments";
?>
Multi-lines printing − Here are the examples to print multiple lines in a single print statement −
<?
# First Example
print <<<END
This uses the "here document" syntax to output
multiple lines with $variable interpolation. Note
that the here document terminator must appear on a
line with just a semicolon no extra whitespace!
END;
# Second Example
print "This spans
multiple lines. The newlines will be
output as well";
?>
Multi-lines comments − They are generally used to provide pseudocode algorithms and more detailed explanations when necessary. The multiline style of commenting is the same as in C. Here are the example of multi lines comments.
<?
/* This is a comment with multiline
Author : Mohammad Mohtashim
Purpose: Multiline Comments Demo
Subject: PHP
*/
print "An example with multi line comments";
?>
Whitespace is the stuff you type that is typically invisible on the screen, including spaces, tabs, and carriage returns (end-of-line characters).
PHP being whitespace insensitive means that it almost never matters how many whitespace characters you have in a row; one whitespace character is the same as many such characters.
For example, each of the following PHP statements that assigns the sum of 2 + 2 to the variable $four is equivalent −
$four = 2 + 2; // single spaces
$four <tab>=<tab>2<tab>+<tab>2 ; // spaces and tabs
$four =
2+
2; // multiple lines
Yes, it is true that PHP is a case-sensitive language. Try out the following example −
<html>
<body>
<?php
$capital = 67;
print("Variable capital is $capital<br>");
print("Variable CaPiTaL is $CaPiTaL<br>");
?>
</body>
</html>
This will produce the following result −
Variable capital is 67
Variable CaPiTaL is
A statement in PHP is any expression that is followed by a semicolon (;).Any sequence of valid PHP statements that is enclosed by the PHP tags is a valid PHP program. Here is a typical statement in PHP, which in this case assigns a string of characters to a variable called $greeting −
$greeting = "Welcome to PHP!";
The smallest building blocks of PHP are the indivisible tokens, such as numbers (3.14159), strings ('two'), variables ($two), constants (TRUE), and the special words that make up the syntax of PHP itself, like if, else, while, for and so forth.
Although statements cannot be combined like expressions, you can always put a sequence of statements anywhere a statement can go by enclosing them in a set of curly braces.
Here both statements are equivalent −
if (3 == 2 + 1)
print("Good - I haven't totally lost my mind.<br>");
if (3 == 2 + 1) {
print("Good - I haven't totally");
print("lost my mind.<br>");
}
Yes you can run your PHP script on your command prompt. Assuming you have following content in test.php file
<?php
echo "Hello PHP!!!!!";
?>
Now run this script as command prompt as follows −
$ php test.php
It will produce the following result −
Hello PHP!!!!!
Hope now you have basic knowledge of PHP Syntax.
[
{
"code": null,
"e": 2875,
"s": 2757,
"text": "This chapter will give you an idea of very basic syntax of PHP and very important to make your PHP foundation strong."
},
{
"code": null,
"e": 3060,
"s": 2875,
"text": "The PHP parsing engine needs a way to differentiate PHP code from other elements in the page. The mechanism for doing so is known as 'escaping to PHP'. There are four ways to do this −"
},
{
"code": null,
"e": 3110,
"s": 3060,
"text": "The most universally effective PHP tag style is −"
},
{
"code": null,
"e": 3122,
"s": 3110,
"text": "<?php...?>\n"
},
{
"code": null,
"e": 3218,
"s": 3122,
"text": "If you use this style, you can be positive that your tags will always be correctly interpreted."
},
{
"code": null,
"e": 3260,
"s": 3218,
"text": "Short or short-open tags look like this −"
},
{
"code": null,
"e": 3269,
"s": 3260,
"text": "<?...?>\n"
},
{
"code": null,
"e": 3394,
"s": 3269,
"text": "Short tags are, as one might expect, the shortest option You must do one of two things to enable PHP to recognize the tags −"
},
{
"code": null,
"e": 3472,
"s": 3394,
"text": "Choose the --enable-short-tags configuration option when you're building PHP."
},
{
"code": null,
"e": 3550,
"s": 3472,
"text": "Choose the --enable-short-tags configuration option when you're building PHP."
},
{
"code": null,
"e": 3706,
"s": 3550,
"text": "Set the short_open_tag setting in your php.ini file to on. This option must be disabled to parse XML with PHP because the same syntax is used for XML tags."
},
{
"code": null,
"e": 3862,
"s": 3706,
"text": "Set the short_open_tag setting in your php.ini file to on. This option must be disabled to parse XML with PHP because the same syntax is used for XML tags."
},
{
"code": null,
"e": 3978,
"s": 3862,
"text": "ASP-style tags mimic the tags used by Active Server Pages to delineate code blocks. ASP-style tags look like this −"
},
{
"code": null,
"e": 3987,
"s": 3978,
"text": "<%...%>\n"
},
{
"code": null,
"e": 4078,
"s": 3987,
"text": "To use ASP-style tags, you will need to set the configuration option in your php.ini file."
},
{
"code": null,
"e": 4112,
"s": 4078,
"text": "HTML script tags look like this −"
},
{
"code": null,
"e": 4151,
"s": 4112,
"text": "<script language = \"PHP\">...</script>\n"
},
{
"code": null,
"e": 4325,
"s": 4151,
"text": "A comment is the portion of a program that exists only for the human reader and stripped out before displaying the programs result. There are two commenting formats in PHP −"
},
{
"code": null,
"e": 4479,
"s": 4325,
"text": "Single-line comments − They are generally used for short explanations or notes relevant to the local code. Here are the examples of single line comments."
},
{
"code": null,
"e": 4664,
"s": 4479,
"text": "<?\n # This is a comment, and\n # This is the second line of the comment\n \n // This is a comment too. Each style comments only\n print \"An example with single line comments\";\n?>"
},
{
"code": null,
"e": 4763,
"s": 4664,
"text": "Multi-lines printing − Here are the examples to print multiple lines in a single print statement −"
},
{
"code": null,
"e": 5125,
"s": 4763,
"text": "<?\n # First Example\n print <<<END\n This uses the \"here document\" syntax to output\n multiple lines with $variable interpolation. Note\n that the here document terminator must appear on a\n line with just a semicolon no extra whitespace!\n END;\n \n # Second Example\n print \"This spans\n multiple lines. The newlines will be\n output as well\";\n?>"
},
{
"code": null,
"e": 5353,
"s": 5125,
"text": "Multi-lines comments − They are generally used to provide pseudocode algorithms and more detailed explanations when necessary. The multiline style of commenting is the same as in C. Here are the example of multi lines comments."
},
{
"code": null,
"e": 5548,
"s": 5353,
"text": "<?\n /* This is a comment with multiline\n Author : Mohammad Mohtashim\n Purpose: Multiline Comments Demo\n Subject: PHP\n */\n \n print \"An example with multi line comments\";\n?>"
},
{
"code": null,
"e": 5695,
"s": 5548,
"text": "Whitespace is the stuff you type that is typically invisible on the screen, including spaces, tabs, and carriage returns (end-of-line characters)."
},
{
"code": null,
"e": 5868,
"s": 5695,
"text": "PHP whitespace insensitive means that it almost never matters how many whitespace characters you have in a row.one whitespace character is the same as many such characters."
},
{
"code": null,
"e": 5986,
"s": 5868,
"text": "For example, each of the following PHP statements that assigns the sum of 2 + 2 to the variable $four is equivalent −"
},
{
"code": null,
"e": 6101,
"s": 5986,
"text": "$four = 2 + 2; // single spaces\n$four <tab>=<tab2<tab>+<tab>2 ; // spaces and tabs\n$four =\n2+\n2; // multiple lines"
},
{
"code": null,
"e": 6184,
"s": 6101,
"text": "Yeah it is true that PHP is a case sensitive language. Try out following example −"
},
{
"code": null,
"e": 6383,
"s": 6184,
"text": "<html>\n <body>\n \n <?php\n $capital = 67;\n print(\"Variable capital is $capital<br>\");\n print(\"Variable CaPiTaL is $CaPiTaL<br>\");\n ?>\n \n </body>\n</html>"
},
{
"code": null,
"e": 6424,
"s": 6383,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 6468,
"s": 6424,
"text": "Variable capital is 67\nVariable CaPiTaL is\n"
},
{
"code": null,
"e": 6754,
"s": 6468,
"text": "A statement in PHP is any expression that is followed by a semicolon (;).Any sequence of valid PHP statements that is enclosed by the PHP tags is a valid PHP program. Here is a typical statement in PHP, which in this case assigns a string of characters to a variable called $greeting −"
},
{
"code": null,
"e": 6786,
"s": 6754,
"text": "$greeting = \"Welcome to PHP!\";\n"
},
{
"code": null,
"e": 7029,
"s": 6786,
"text": "The smallest building blocks of PHP are the indivisible tokens, such as numbers (3.14159), strings (.two.), variables ($two), constants (TRUE), and the special words that make up the syntax of PHP itself like if, else, while, for and so forth"
},
{
"code": null,
"e": 7202,
"s": 7029,
"text": "Although statements cannot be combined like expressions, you can always put a sequence of statements anywhere a statement can go by enclosing them in a set of curly braces."
},
{
"code": null,
"e": 7240,
"s": 7202,
"text": "Here both statements are equivalent −"
},
{
"code": null,
"e": 7405,
"s": 7240,
"text": "if (3 == 2 + 1)\n print(\"Good - I haven't totally lost my mind.<br>\");\n \nif (3 == 2 + 1) {\n print(\"Good - I haven't totally\");\n print(\"lost my mind.<br>\");\n}"
},
{
"code": null,
"e": 7514,
"s": 7405,
"text": "Yes you can run your PHP script on your command prompt. Assuming you have following content in test.php file"
},
{
"code": null,
"e": 7549,
"s": 7514,
"text": "<?php\n echo \"Hello PHP!!!!!\";\n?>"
},
{
"code": null,
"e": 7600,
"s": 7549,
"text": "Now run this script as command prompt as follows −"
},
{
"code": null,
"e": 7616,
"s": 7600,
"text": "$ php test.php\n"
},
{
"code": null,
"e": 7655,
"s": 7616,
"text": "It will produce the following result −"
},
{
"code": null,
"e": 7671,
"s": 7655,
"text": "Hello PHP!!!!!\n"
},
{
"code": null,
"e": 7720,
"s": 7671,
"text": "Hope now you have basic knowledge of PHP Syntax."
}
] |
Apache Pig - Running Scripts | Here in this chapter, we will see how to run Apache Pig scripts in batch mode.
While writing a script in a file, we can include comments in it as shown below.
We will begin the multi-line comments with '/*', end them with '*/'.
/* These are the multi-line comments
In the pig script */
We will begin the single-line comments with '--'.
--we can write single line comments like this.
While executing Apache Pig statements in batch mode, follow the steps given below.
Write all the required Pig Latin statements in a single file. We can write all the Pig Latin statements and commands in a single file and save it as .pig file.
Execute the Apache Pig script. You can execute the Pig script from the shell (Linux) as shown below.
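For example, if the script from the previous step is saved locally as /sample_script.pig, a local-mode run looks like this (an illustration; the MapReduce-mode form appears later in this chapter):

$ pig -x local /sample_script.pig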
You can execute it from the Grunt shell as well using the exec command as shown below.
grunt> exec /sample_script.pig
We can also execute a Pig script that resides in the HDFS. Suppose there is a Pig script with the name Sample_script.pig in the HDFS directory named /pig_data/. We can execute it as shown below.
$ pig -x mapreduce hdfs://localhost:9000/pig_data/Sample_script.pig
Assume we have a file student_details.txt in HDFS with the following content.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
We also have a sample script with the name sample_script.pig, in the same HDFS directory. This file contains statements performing operations and transformations on the student relation, as shown below.
student = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
   as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
student_order = ORDER student BY age DESC;
student_limit = LIMIT student_order 4;
Dump student_limit;
The first statement of the script will load the data in the file named student_details.txt as a relation named student.

The second statement of the script will arrange the tuples of the relation in descending order, based on age, and store it as student_order.

The third statement of the script will store the first 4 tuples of student_order as student_limit.

Finally the fourth statement will dump the content of the relation student_limit.
Let us now execute the sample_script.pig as shown below.
$./pig -x mapreduce hdfs://localhost:9000/pig_data/sample_script.pig
Apache Pig gets executed and gives you the output with the following content.
(7,Komal,Nayak,24,9848022334,trivendram)
(8,Bharathi,Nambiayar,24,9848022333,Chennai)
(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,23,9848022335,Chennai)
2015-10-19 10:31:27,446 [main] INFO org.apache.pig.Main - Pig script completed in 12
minutes, 32 seconds and 751 milliseconds (752751 ms)
[
{
"code": null,
"e": 2767,
"s": 2684,
"text": "Here in this chapter, we will see how how to run Apache Pig scripts in batch mode."
},
{
"code": null,
"e": 2847,
"s": 2767,
"text": "While writing a script in a file, we can include comments in it as shown below."
},
{
"code": null,
"e": 2916,
"s": 2847,
"text": "We will begin the multi-line comments with '/*', end them with '*/'."
},
{
"code": null,
"e": 2979,
"s": 2916,
"text": "/* These are the multi-line comments \n In the pig script */ \n"
},
{
"code": null,
"e": 3029,
"s": 2979,
"text": "We will begin the single-line comments with '--'."
},
{
"code": null,
"e": 3077,
"s": 3029,
"text": "--we can write single line comments like this.\n"
},
{
"code": null,
"e": 3160,
"s": 3077,
"text": "While executing Apache Pig statements in batch mode, follow the steps given below."
},
{
"code": null,
"e": 3320,
"s": 3160,
"text": "Write all the required Pig Latin statements in a single file. We can write all the Pig Latin statements and commands in a single file and save it as .pig file."
},
{
"code": null,
"e": 3421,
"s": 3320,
"text": "Execute the Apache Pig script. You can execute the Pig script from the shell (Linux) as shown below."
},
{
"code": null,
"e": 3508,
"s": 3421,
"text": "You can execute it from the Grunt shell as well using the exec command as shown below."
},
{
"code": null,
"e": 3539,
"s": 3508,
"text": "grunt> exec /sample_script.pig"
},
{
"code": null,
"e": 3734,
"s": 3539,
"text": "We can also execute a Pig script that resides in the HDFS. Suppose there is a Pig script with the name Sample_script.pig in the HDFS directory named /pig_data/. We can execute it as shown below."
},
{
"code": null,
"e": 3803,
"s": 3734,
"text": "$ pig -x mapreduce hdfs://localhost:9000/pig_data/Sample_script.pig "
},
{
"code": null,
"e": 3881,
"s": 3803,
"text": "Assume we have a file student_details.txt in HDFS with the following content."
},
{
"code": null,
"e": 3901,
"s": 3881,
"text": "student_details.txt"
},
{
"code": null,
"e": 4247,
"s": 3901,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad \n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 4450,
"s": 4247,
"text": "We also have a sample script with the name sample_script.pig, in the same HDFS directory. This file contains statements performing operations and transformations on the student relation, as shown below."
},
{
"code": null,
"e": 4740,
"s": 4450,
"text": "student = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);\n\t\nstudent_order = ORDER student BY age DESC;\n \nstudent_limit = LIMIT student_order 4;\n \nDump student_limit;"
},
{
"code": null,
"e": 4860,
"s": 4740,
"text": "The first statement of the script will load the data in the file named student_details.txt as a relation named student."
},
{
"code": null,
"e": 4980,
"s": 4860,
"text": "The first statement of the script will load the data in the file named student_details.txt as a relation named student."
},
{
"code": null,
"e": 5121,
"s": 4980,
"text": "The second statement of the script will arrange the tuples of the relation in descending order, based on age, and store it as student_order."
},
{
"code": null,
"e": 5262,
"s": 5121,
"text": "The second statement of the script will arrange the tuples of the relation in descending order, based on age, and store it as student_order."
},
{
"code": null,
"e": 5361,
"s": 5262,
"text": "The third statement of the script will store the first 4 tuples of student_order as student_limit."
},
{
"code": null,
"e": 5460,
"s": 5361,
"text": "The third statement of the script will store the first 4 tuples of student_order as student_limit."
},
{
"code": null,
"e": 5542,
"s": 5460,
"text": "Finally the fourth statement will dump the content of the relation student_limit."
},
{
"code": null,
"e": 5624,
"s": 5542,
"text": "Finally the fourth statement will dump the content of the relation student_limit."
},
{
"code": null,
"e": 5681,
"s": 5624,
"text": "Let us now execute the sample_script.pig as shown below."
},
{
"code": null,
"e": 5750,
"s": 5681,
"text": "$./pig -x mapreduce hdfs://localhost:9000/pig_data/sample_script.pig"
},
{
"code": null,
"e": 5828,
"s": 5750,
"text": "Apache Pig gets executed and gives you the output with the following content."
},
{
"code": null,
"e": 6145,
"s": 5828,
"text": "(7,Komal,Nayak,24,9848022334,trivendram)\n(8,Bharathi,Nambiayar,24,9848022333,Chennai) \n(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar) \n(6,Archana,Mishra,23,9848022335,Chennai)\n2015-10-19 10:31:27,446 [main] INFO org.apache.pig.Main - Pig script completed in 12\nminutes, 32 seconds and 751 milliseconds (752751 ms)\n"
}
] |
MySQL query to replace backslash from a varchar column with preceding backslash string values | Let us first create a table −
mysql> create table DemoTable
(
Title varchar(100)
);
Query OK, 0 rows affected (0.57 sec)
Insert some records in the table using insert command −
mysql> insert into DemoTable values('\\"MySQL');
Query OK, 1 row affected (0.14 sec)
mysql> insert into DemoTable values('MongoDB\\"');
Query OK, 1 row affected (0.09 sec)
mysql> insert into DemoTable values('\\"Java\\"');
Query OK, 1 row affected (0.19 sec)
mysql> insert into DemoTable values('\\"C\"');
Query OK, 1 row affected (0.14 sec)
Display all records from the table using select statement −
mysql> select *from DemoTable;
This will produce the following output −
+-----------+
| Title |
+-----------+
| \"MySQL |
| MongoDB\" |
| \"Java\" |
| \"C" |
+-----------+
4 rows in set (0.00 sec)
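Before updating, you can preview the result of the replacement with a SELECT that uses the same replace() call (the Preview column alias here is just illustrative) −
mysql> select Title, replace(Title,'\\"','"') as Preview from DemoTable;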
Following is the query that replaces each backslash-escaped quote (\") in the varchar column with a plain double quote −
mysql> update DemoTable set Title=replace(Title,'\\"','"');
Query OK, 4 rows affected (0.14 sec)
Rows matched: 4 Changed: 4 Warnings: 0
Let us check table records once again −
mysql> select *from DemoTable;
This will produce the following output −
+----------+
| Title |
+----------+
| "MySQL |
| MongoDB" |
| "Java" |
| "C" |
+----------+
4 rows in set (0.00 sec) | [
{
"code": null,
"e": 1092,
"s": 1062,
"text": "Let us first create a table −"
},
{
"code": null,
"e": 1186,
"s": 1092,
"text": "mysql> create table DemoTable\n(\n Title varchar(100)\n);\nQuery OK, 0 rows affected (0.57 sec)"
},
{
"code": null,
"e": 1242,
"s": 1186,
"text": "Insert some records in the table using insert command −"
},
{
"code": null,
"e": 1584,
"s": 1242,
"text": "mysql> insert into DemoTable values('\\\\\"MySQL');\nQuery OK, 1 row affected (0.14 sec)\nmysql> insert into DemoTable values('MongoDB\\\\\"');\nQuery OK, 1 row affected (0.09 sec)\nmysql> insert into DemoTable values('\\\\\"Java\\\\\"');\nQuery OK, 1 row affected (0.19 sec)\nmysql> insert into DemoTable values('\\\\\"C\\\"');\nQuery OK, 1 row affected (0.14 sec)"
},
{
"code": null,
"e": 1644,
"s": 1584,
"text": "Display all records from the table using select statement −"
},
{
"code": null,
"e": 1675,
"s": 1644,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 1716,
"s": 1675,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 1853,
"s": 1716,
"text": "+-----------+\n| Title |\n+-----------+\n| \\\"MySQL |\n| MongoDB\\\" |\n| \\\"Java\\\" |\n| \\\"C\" |\n+-----------+\n4 rows in set (0.00 sec)"
},
{
"code": null,
"e": 1960,
"s": 1853,
"text": "Following is the query to replace backslash from a varchar column with preceding backslash string values −"
},
{
"code": null,
"e": 2096,
"s": 1960,
"text": "mysql> update DemoTable set Title=replace(Title,'\\\\\"','\"');\nQuery OK, 4 rows affected (0.14 sec)\nRows matched: 4 Changed: 4 Warnings: 0"
},
{
"code": null,
"e": 2136,
"s": 2096,
"text": "Let us check table records once again −"
},
{
"code": null,
"e": 2167,
"s": 2136,
"text": "mysql> select *from DemoTable;"
},
{
"code": null,
"e": 2208,
"s": 2167,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2337,
"s": 2208,
"text": "+----------+\n| Title |\n+----------+\n| \"MySQL |\n| MongoDB\" |\n| \"Java\" |\n| \"C\" |\n+----------+\n4 rows in set (0.00 sec)"
}
] |
Spring JDBC - Batch Operation | Following example will demonstrate how to make a batch update using Spring JDBC. We'll update the available records in Student table in a single batch operation.
String SQL = "update Student set age = ? where id = ?";
int[] updateCounts = jdbcTemplateObject.batchUpdate(SQL, new BatchPreparedStatementSetter() {
public void setValues(PreparedStatement ps, int i) throws SQLException {
ps.setInt(1, students.get(i).getAge());
ps.setInt(2, students.get(i).getId());
}
public int getBatchSize() {
return students.size();
}
});
Where,
SQL − Update query to update student's age.
jdbcTemplateObject − StudentJDBCTemplate object to update student objects in the database.
BatchPreparedStatementSetter − Batch executor that sets values in the PreparedStatement for each item, identified by the students list and index i. getBatchSize() returns the size of the batch.
updateCounts − Int array containing the updated row count per update query.
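For example, the returned counts can be logged after the batch completes (a minimal sketch; updateCounts is the array returned by batchUpdate() above) −
// updateCounts[i] holds the number of rows affected by the i-th batch item
for (int i = 0; i < updateCounts.length; i++) {
   System.out.println("Batch item " + i + " updated " + updateCounts[i] + " row(s)");
}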
To understand the above-mentioned concepts related to Spring JDBC, let us write an example which performs a batch update. To write our example, let us have a working Eclipse IDE in place and use the following steps to create a Spring application.
Following is the content of the Data Access Object interface file StudentDAO.java.
package com.tutorialspoint;
import java.util.List;
import javax.sql.DataSource;
public interface StudentDAO {
/**
* This is the method to be used to initialize
* database resources ie. connection.
*/
public void setDataSource(DataSource ds);
/**
* This is the method to be used to list down
* all the records from the Student table.
*/
public List<Student> listStudents();
public void batchUpdate(final List<Student> students);
}
Following is the content of the Student.java file.
package com.tutorialspoint;
public class Student {
private Integer age;
private String name;
private Integer id;
public void setAge(Integer age) {
this.age = age;
}
public Integer getAge() {
return age;
}
public void setName(String name) {
this.name = name;
}
public String getName() {
return name;
}
public void setId(Integer id) {
this.id = id;
}
public Integer getId() {
return id;
}
}
Following is the content of the StudentMapper.java file.
package com.tutorialspoint;
import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;
public class StudentMapper implements RowMapper<Student> {
public Student mapRow(ResultSet rs, int rowNum) throws SQLException {
Student student = new Student();
student.setId(rs.getInt("id"));
student.setName(rs.getString("name"));
student.setAge(rs.getInt("age"));
return student;
}
}
Following is the implementation class file StudentJDBCTemplate.java for the defined DAO interface StudentDAO.
package com.tutorialspoint;
import java.sql.PreparedStatement;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.BatchPreparedStatementSetter;
import java.sql.SQLException;
public class StudentJDBCTemplate implements StudentDAO {
private DataSource dataSource;
private JdbcTemplate jdbcTemplateObject;
public void setDataSource(DataSource dataSource) {
this.dataSource = dataSource;
this.jdbcTemplateObject = new JdbcTemplate(dataSource);
}
public List<Student> listStudents() {
String SQL = "select * from Student";
List <Student> students = jdbcTemplateObject.query(SQL, new StudentMapper());
return students;
}
public void batchUpdate(final List<Student> students){
String SQL = "update Student set age = ? where id = ?";
int[] updateCounts = jdbcTemplateObject.batchUpdate(SQL,
new BatchPreparedStatementSetter() {
public void setValues(PreparedStatement ps, int i) throws SQLException {
ps.setInt(1, students.get(i).getAge());
ps.setInt(2, students.get(i).getId());
}
public int getBatchSize() {
return students.size();
}
});
System.out.println("Records updated!");
}
}
Following is the content of the MainApp.java file.
package com.tutorialspoint;
import java.util.ArrayList;
import java.util.List;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
public class MainApp {
public static void main(String[] args) {
ApplicationContext context = new ClassPathXmlApplicationContext("Beans.xml");
StudentJDBCTemplate studentJDBCTemplate = (StudentJDBCTemplate)context.getBean("studentJDBCTemplate");
List<Student> initialStudents = studentJDBCTemplate.listStudents();
System.out.println("Initial Students");
for(Student student2: initialStudents){
System.out.print("ID : " + student2.getId() );
System.out.println(", Age : " + student2.getAge());
}
Student student = new Student();
student.setId(1);
student.setAge(10);
Student student1 = new Student();
student1.setId(3);
student1.setAge(10);
List<Student> students = new ArrayList<Student>();
students.add(student);
students.add(student1);
studentJDBCTemplate.batchUpdate(students);
List<Student> updatedStudents = studentJDBCTemplate.listStudents();
System.out.println("Updated Students");
for(Student student3: updatedStudents){
System.out.print("ID : " + student3.getId() );
System.out.println(", Age : " + student3.getAge());
}
}
}
Following is the configuration file Beans.xml.
<?xml version = "1.0" encoding = "UTF-8"?>
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd ">
<!-- Initialization for data source -->
<bean id = "dataSource"
class = "org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name = "driverClassName" value = "com.mysql.cj.jdbc.Driver"/>
<property name = "url" value = "jdbc:mysql://localhost:3306/TEST"/>
<property name = "username" value = "root"/>
<property name = "password" value = "admin"/>
</bean>
<!-- Definition for studentJDBCTemplate bean -->
<bean id = "studentJDBCTemplate"
class = "com.tutorialspoint.StudentJDBCTemplate">
<property name = "dataSource" ref = "dataSource" />
</bean>
</beans>
Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message.
Initial Students
ID : 1, Age : 11
ID : 3, Age : 15
Records updated!
Updated Students
ID : 1, Age : 10
ID : 3, Age : 10 | [
{
"code": null,
"e": 2558,
"s": 2396,
"text": "Following example will demonstrate how to make a batch update using Spring JDBC. We'll update the available records in Student table in a single batch operation."
},
{
"code": null,
"e": 2963,
"s": 2558,
"text": "String SQL = \"update Student set age = ? where id = ?\";\nint[] updateCounts = jdbcTemplateObject.batchUpdate(SQL, new BatchPreparedStatementSetter() {\n \n public void setValues(PreparedStatement ps, int i) throws SQLException {\n ps.setInt(1, students.get(i).getAge());\t\t\t\t\t\t\n ps.setInt(2, students.get(i).getId());\t\n }\n public int getBatchSize() {\n return students.size();\n }\n}); \n"
},
{
"code": null,
"e": 2970,
"s": 2963,
"text": "Where,"
},
{
"code": null,
"e": 3014,
"s": 2970,
"text": "SQL − Update query to update student's age."
},
{
"code": null,
"e": 3144,
"s": 3058,
"text": "jdbcTemplateObject − StudentJDBCTemplate object to update student object in database."
},
{
"code": null,
"e": 3415,
"s": 3230,
"text": "BatchPreparedStatementSetter − Batch executor that sets values in the PreparedStatement for each item, identified by the students list and index i. getBatchSize() returns the size of the batch."
},
{
"code": null,
"e": 3672,
"s": 3600,
"text": "updateCounts − Int array containing updated row count per update query."
},
{
"code": null,
"e": 3997,
"s": 3744,
"text": "To understand the above-mentioned concepts related to Spring JDBC, let us write an example which will update a batch operation. To write our example, let us have a working Eclipse IDE in place and use the following steps to create a Spring application."
},
{
"code": null,
"e": 4080,
"s": 3997,
"text": "Following is the content of the Data Access Object interface file StudentDAO.java."
},
{
"code": null,
"e": 4565,
"s": 4080,
"text": "package com.tutorialspoint;\n\nimport java.util.List;\nimport javax.sql.DataSource;\n\npublic interface StudentDAO {\n /** \n * This is the method to be used to initialize\n * database resources ie. connection.\n */\n public void setDataSource(DataSource ds);\n \n /** \n * This is the method to be used to list down\n * all the records from the Student table.\n */\n public List<Student> listStudents(); \n public void batchUpdate(final List<Student> students);\n}"
},
{
"code": null,
"e": 4616,
"s": 4565,
"text": "Following is the content of the Student.java file."
},
{
"code": null,
"e": 5088,
"s": 4616,
"text": "package com.tutorialspoint;\n\npublic class Student {\n private Integer age;\n private String name;\n private Integer id;\n\n public void setAge(Integer age) {\n this.age = age;\n }\n public Integer getAge() {\n return age;\n }\n public void setName(String name) {\n this.name = name;\n }\n public String getName() {\n return name;\n }\n public void setId(Integer id) {\n this.id = id;\n }\n public Integer getId() {\n return id;\n }\n}"
},
{
"code": null,
"e": 5145,
"s": 5088,
"text": "Following is the content of the StudentMapper.java file."
},
{
"code": null,
"e": 5603,
"s": 5145,
"text": "package com.tutorialspoint;\n\nimport java.sql.ResultSet;\nimport java.sql.SQLException;\nimport org.springframework.jdbc.core.RowMapper;\n\npublic class StudentMapper implements RowMapper<Student> {\n public Student mapRow(ResultSet rs, int rowNum) throws SQLException {\n Student student = new Student();\n student.setId(rs.getInt(\"id\"));\n student.setName(rs.getString(\"name\"));\n student.setAge(rs.getInt(\"age\"));\n return student;\n }\n}"
},
{
"code": null,
"e": 5713,
"s": 5603,
"text": "Following is the implementation class file StudentJDBCTemplate.java for the defined DAO interface StudentDAO."
},
{
"code": null,
"e": 7061,
"s": 5713,
"text": "package com.tutorialspoint;\n\nimport java.sql.PreparedStatement;\nimport java.util.List;\nimport javax.sql.DataSource;\nimport org.springframework.jdbc.core.JdbcTemplate;\nimport org.springframework.jdbc.core.BatchPreparedStatementSetter;\nimport java.sql.SQLException;\n\npublic class StudentJDBCTemplate implements StudentDAO {\n private DataSource dataSource;\n private JdbcTemplate jdbcTemplateObject;\n \n public void setDataSource(DataSource dataSource) {\n this.dataSource = dataSource;\n this.jdbcTemplateObject = new JdbcTemplate(dataSource);\n }\n public List<Student> listStudents() {\n String SQL = \"select * from Student\";\n List <Student> students = jdbcTemplateObject.query(SQL, new StudentMapper());\n return students;\n }\n public void batchUpdate(final List<Student> students){\n String SQL = \"update Student set age = ? where id = ?\";\n int[] updateCounts = jdbcTemplateObject.batchUpdate(SQL,\n new BatchPreparedStatementSetter() {\n \n public void setValues(PreparedStatement ps, int i) throws SQLException {\n ps.setInt(1, students.get(i).getAge());\t\t\t\t\t\t\n ps.setInt(2, students.get(i).getId());\t\n }\n public int getBatchSize() {\n return students.size();\n }\n }); \n System.out.println(\"Records updated!\");\n }\n}"
},
{
"code": null,
"e": 7112,
"s": 7061,
"text": "Following is the content of the MainApp.java file."
},
{
"code": null,
"e": 8540,
"s": 7112,
"text": "package com.tutorialspoint;\n\nimport java.util.ArrayList;\nimport java.util.List;\nimport org.springframework.context.ApplicationContext;\nimport org.springframework.context.support.ClassPathXmlApplicationContext;\n\npublic class MainApp {\n public static void main(String[] args) {\n ApplicationContext context = new ClassPathXmlApplicationContext(\"Beans.xml\");\n\n StudentJDBCTemplate studentJDBCTemplate = (StudentJDBCTemplate)context.getBean(\"studentJDBCTemplate\");\n\n List<Student> initialStudents = studentJDBCTemplate.listStudents();\n System.out.println(\"Initial Students\");\n \n for(Student student2: initialStudents){\n System.out.print(\"ID : \" + student2.getId() );\n System.out.println(\", Age : \" + student2.getAge()); \n }\n Student student = new Student();\n student.setId(1);\n student.setAge(10);\n\n Student student1 = new Student();\n student1.setId(3);\n student1.setAge(10);\n\n List<Student> students = new ArrayList<Student>();\n students.add(student);\n students.add(student1);\n\n studentJDBCTemplate.batchUpdate(students);\n\n List<Student> updatedStudents = studentJDBCTemplate.listStudents();\n System.out.println(\"Updated Students\");\n \n for(Student student3: updatedStudents){\n System.out.print(\"ID : \" + student3.getId() );\n System.out.println(\", Age : \" + student3.getAge()); \n }\n }\n}"
},
{
"code": null,
"e": 8587,
"s": 8540,
"text": "Following is the configuration file Beans.xml."
},
{
"code": null,
"e": 9538,
"s": 8587,
"text": "<?xml version = \"1.0\" encoding = \"UTF-8\"?>\n<beans xmlns = \"http://www.springframework.org/schema/beans\"\n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\" \n xsi:schemaLocation=\"http://www.springframework.org/schema/beans\n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd \">\n\n <!-- Initialization for data source -->\n <bean id = \"dataSource\" \n class = \"org.springframework.jdbc.datasource.DriverManagerDataSource\">\n <property name = \"driverClassName\" value = \"com.mysql.cj.jdbc.Driver\"/>\n <property name = \"url\" value = \"jdbc:mysql://localhost:3306/TEST\"/>\n <property name = \"username\" value = \"root\"/>\n <property name = \"password\" value = \"admin\"/>\n </bean>\n\n <!-- Definition for studentJDBCTemplate bean -->\n <bean id = \"studentJDBCTemplate\" \n class = \"com.tutorialspoint.StudentJDBCTemplate\">\n <property name = \"dataSource\" ref = \"dataSource\" /> \n </bean> \n</beans>"
},
{
"code": null,
"e": 9716,
"s": 9538,
"text": "Once you are done creating the source and bean configuration files, let us run the application. If everything is fine with your application, it will print the following message."
},
{
"code": null,
"e": 9836,
"s": 9716,
"text": "Initial Students\nID : 1, Age : 11\nID : 3, Age : 15\nRecords updated!\nUpdated Students\nID : 1, Age : 10\nID : 3, Age : 10\n"
}
] |
Go - Basic Syntax | We discussed the basic structure of a Go program in the previous chapter. Now it will be easy to understand the other basic building blocks of the Go programming language.
A Go program consists of various tokens. A token is either a keyword, an identifier, a constant, a string literal, or a symbol. For example, the following Go statement consists of six tokens −
fmt.Println("Hello, World!")
The individual tokens are −
fmt
.
Println
(
"Hello, World!"
)
In a Go program, the newline character acts as a statement terminator. That is, individual statements don't need an explicit separator like “;” in C. The Go compiler internally inserts “;” as the statement terminator to indicate the end of one logical entity.
For example, take a look at the following statements −
fmt.Println("Hello, World!")
fmt.Println("I am in Go Programming World!")
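Wrapped in a complete program (the package clause and import are added here only so the snippet compiles), this becomes −
package main

import "fmt"

func main() {
   fmt.Println("Hello, World!")
   fmt.Println("I am in Go Programming World!")
}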
Comments are like helping texts in your Go program and they are ignored by the compiler. They start with /* and terminates with the characters */ as shown below −
/* my first program in Go */
You cannot have comments within comments and they do not occur within a string or character literals.
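Go also supports single-line comments that start with // and run to the end of the line −
// my first program in Go
fmt.Println("Hello, World!") // a comment may also follow a statement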
A Go identifier is a name used to identify a variable, function, or any other user-defined item. An identifier starts with a letter A to Z or a to z or an underscore _ followed by zero or more letters, underscores, and digits (0 to 9).
identifier = letter { letter | unicode_digit }.
Go does not allow punctuation characters such as @, $, and % within identifiers. Go is a case-sensitive programming language. Thus, Manpower and manpower are two different identifiers in Go. Here are some examples of acceptable identifiers −
mahesh kumar abc move_name a_123
myname50 _temp j a23b9 retVal
The following list shows the reserved words in Go. These reserved words may not be used as constant or variable or any other identifier names.
Whitespace is the term used in Go to describe blanks, tabs, newline characters, and comments. A line containing only whitespace, possibly with a comment, is known as a blank line, and a Go compiler totally ignores it.
Whitespaces separate one part of a statement from another and enables the compiler to identify where one element in a statement, such as int, ends and the next element begins. Therefore, in the following statement −
var age int;
There must be at least one whitespace character (usually a space) between int and age for the compiler to be able to distinguish them. On the other hand, in the following statement −
fruit = apples + oranges; // get the total fruit
No whitespace characters are necessary between fruit and =, or between = and apples, although you are free to include some if you wish for readability purpose. | [
{
"code": null,
"e": 2109,
"s": 1937,
"text": "We discussed the basic structure of a Go program in the previous chapter. Now it will be easy to understand the other basic building blocks of the Go programming language."
},
{
"code": null,
"e": 2302,
"s": 2109,
"text": "A Go program consists of various tokens. A token is either a keyword, an identifier, a constant, a string literal, or a symbol. For example, the following Go statement consists of six tokens −"
},
{
"code": null,
"e": 2332,
"s": 2302,
"text": "fmt.Println(\"Hello, World!\")\n"
},
{
"code": null,
"e": 2360,
"s": 2332,
"text": "The individual tokens are −"
},
{
"code": null,
"e": 2397,
"s": 2360,
"text": "fmt\n.\nPrintln\n(\n \"Hello, World!\"\n)"
},
{
"code": null,
"e": 2650,
"s": 2397,
"text": "In a Go program, the line separator key is a statement terminator. That is, individual statements don't need a special separator like “;” in C. The Go compiler internally places “;” as the statement terminator to indicate the end of one logical entity."
},
{
"code": null,
"e": 2705,
"s": 2650,
"text": "For example, take a look at the following statements −"
},
{
"code": null,
"e": 2780,
"s": 2705,
"text": "fmt.Println(\"Hello, World!\")\nfmt.Println(\"I am in Go Programming World!\")\n"
},
{
"code": null,
"e": 2943,
"s": 2780,
"text": "Comments are like helping texts in your Go program and they are ignored by the compiler. They start with /* and terminates with the characters */ as shown below −"
},
{
"code": null,
"e": 2973,
"s": 2943,
"text": "/* my first program in Go */\n"
},
{
"code": null,
"e": 3075,
"s": 2973,
"text": "You cannot have comments within comments and they do not occur within a string or character literals."
},
{
"code": null,
"e": 3311,
"s": 3075,
"text": "A Go identifier is a name used to identify a variable, function, or any other user-defined item. An identifier starts with a letter A to Z or a to z or an underscore _ followed by zero or more letters, underscores, and digits (0 to 9)."
},
{
"code": null,
"e": 3359,
"s": 3311,
"text": "identifier = letter { letter | unicode_digit }."
},
{
"code": null,
"e": 3601,
"s": 3359,
"text": "Go does not allow punctuation characters such as @, $, and % within identifiers. Go is a case-sensitive programming language. Thus, Manpower and manpower are two different identifiers in Go. Here are some examples of acceptable identifiers −"
},
{
"code": null,
"e": 3691,
"s": 3601,
"text": "mahesh kumar abc move_name a_123\nmyname50 _temp j a23b9 retVal\n"
},
{
"code": null,
"e": 3834,
"s": 3691,
"text": "The following list shows the reserved words in Go. These reserved words may not be used as constant or variable or any other identifier names."
},
{
"code": null,
"e": 4052,
"s": 3834,
"text": "Whitespace is the term used in Go to describe blanks, tabs, newline characters, and comments. A line containing only whitespace, possibly with a comment, is known as a blank line, and a Go compiler totally ignores it."
},
{
"code": null,
"e": 4268,
"s": 4052,
"text": "Whitespaces separate one part of a statement from another and enables the compiler to identify where one element in a statement, such as int, ends and the next element begins. Therefore, in the following statement −"
},
{
"code": null,
"e": 4282,
"s": 4268,
"text": "var age int;\n"
},
{
"code": null,
"e": 4465,
"s": 4282,
"text": "There must be at least one whitespace character (usually a space) between int and age for the compiler to be able to distinguish them. On the other hand, in the following statement −"
},
{
"code": null,
"e": 4517,
"s": 4465,
"text": "fruit = apples + oranges; // get the total fruit\n"
},
{
"code": null,
"e": 4677,
"s": 4517,
"text": "No whitespace characters are necessary between fruit and =, or between = and apples, although you are free to include some if you wish for readability purpose."
}
] |
How do I create a random alpha-numeric string using C++? | In this section we will see how to generate a random alphanumeric string using C++. Here we are providing lowercase letters, uppercase letters and numbers (0-9). This program takes the characters randomly, then creates the random string.
Input: Here we are giving the string length
Output: A random string of that length. Example “XSme6VAsvJ”
Step 1:Define array to hold all uppercase, lowercase letters and numbers
Step 2: Take length n from user
Step 3: Randomly choose a character n times and create a string of length n
Step 4: End
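The program below picks characters with rand() % len, which is simple but slightly biased when len does not evenly divide the range of rand(). On a C++11 compiler, the <random> header avoids this bias; a minimal alternative sketch (assuming C++11) −
#include <iostream>
#include <random>
#include <string>
using namespace std;
int main() {
   const string chars = "0123456789" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz";
   random_device rd; // non-deterministic seed source
   mt19937 gen(rd()); // Mersenne Twister engine
   uniform_int_distribution<size_t> dist(0, chars.size() - 1); // unbiased index range
   string result;
   for (int i = 0; i < 10; i++)
      result += chars[dist(gen)]; // pick one character uniformly
   cout << result << endl;
   return 0;
}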
Live Demo
#include <iostream>
#include <string>
#include <cstdlib>
#include <ctime>
using namespace std;
static const char alphanum[] = "0123456789" "ABCDEFGHIJKLMNOPQRSTUVWXYZ" "abcdefghijklmnopqrstuvwxyz";
int len = sizeof(alphanum) - 1;
char genRandom() { // Random string generator function.
return alphanum[rand() % len];
}
int main() {
srand(time(0));
int n;
cout << "Enter string length: ";
cin >> n;
for(int z = 0; z < n; z++) { //generate string of length n
cout << genRandom(); //get random character from the given list
}
return 0;
}
Enter string length: 10
XSme6VAsvJ | [
{
"code": null,
"e": 1300,
"s": 1062,
"text": "In this section we will see how to generate a random alphanumeric string using C++. Here we are providing lowercase letters, uppercase letters and numbers (0-9). This program takes the characters randomly, then creates the random string."
},
{
"code": null,
"e": 1405,
"s": 1300,
"text": "Input: Here we are giving the string length\nOutput: A random string of that length. Example “XSme6VAsvJ”"
},
{
"code": null,
"e": 1598,
"s": 1405,
"text": "Step 1:Define array to hold all uppercase, lowercase letters and numbers\nStep 2: Take length n from user\nStep 3: Randomly choose characters’ n times and create a string of length n\nStep 4: End"
},
{
"code": null,
"e": 1609,
"s": 1598,
"text": " Live Demo"
},
{
"code": null,
"e": 2177,
"s": 1609,
"text": "#include <iostream>\n#include <string>\n#include <cstdlib>\n#include <ctime>\nusing namespace std;\nstatic const char alphanum[] = \"0123456789\" \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\" \"abcdefghijklmnopqrstuvwxyz\";\nint len = sizeof(alphanum) - 1;\nchar genRandom() { // Random string generator function.\n return alphanum[rand() % len];\n}\nint main() {\n srand(time(0));\n int n;\n cout << \"Enter string length: \";\n cin >> n;\n for(int z = 0; z < n; z++) { //generate string of length n\n cout << genRandom(); //get random character from the given list\n }\n return 0;\n}"
},
{
"code": null,
"e": 2212,
"s": 2177,
"text": "Enter string length: 10\nXSme6VAsvJ"
}
] |
Entity Framework - Command Interception | In Entity Framework 6.0, there is another new feature known as Interceptor or Interception. The interception code is built around the concept of interception interfaces. For example, the IDbCommandInterceptor interface defines methods that are called before EF makes a call to ExecuteNonQuery, ExecuteScalar, ExecuteReader, and related methods.
Entity Framework can truly shine by using interception. Using this approach you can capture a lot more information transiently without cluttering your code.
To implement this, you need to create your own custom interceptor and register it accordingly.
Once a class that implements the IDbCommandInterceptor interface has been created, it can be registered with Entity Framework using the DbInterception class.
The IDbCommandInterceptor interface has six methods and you need to implement all of them. Following is a basic implementation of these methods.
Let’s take a look at the following code in which IDbCommandInterceptor interface is implemented.
public class MyCommandInterceptor : IDbCommandInterceptor {
public static void Log(string comm, string message) {
Console.WriteLine("Intercepted: {0}, Command Text: {1} ", comm, message);
}
public void NonQueryExecuted(DbCommand command,
DbCommandInterceptionContext<int> interceptionContext) {
Log("NonQueryExecuted: ", command.CommandText);
}
public void NonQueryExecuting(DbCommand command,
DbCommandInterceptionContext<int> interceptionContext) {
Log("NonQueryExecuting: ", command.CommandText);
}
public void ReaderExecuted(DbCommand command,
DbCommandInterceptionContext<DbDataReader> interceptionContext) {
Log("ReaderExecuted: ", command.CommandText);
}
public void ReaderExecuting(DbCommand command,
DbCommandInterceptionContext<DbDataReader> interceptionContext) {
Log("ReaderExecuting: ", command.CommandText);
}
public void ScalarExecuted(DbCommand command,
DbCommandInterceptionContext<object> interceptionContext) {
Log("ScalarExecuted: ", command.CommandText);
}
public void ScalarExecuting(DbCommand command,
DbCommandInterceptionContext<object> interceptionContext) {
Log("ScalarExecuting: ", command.CommandText);
}
}
Once a class that implements one or more of the interception interfaces has been created it can be registered with EF using the DbInterception class as shown in the following code.
DbInterception.Add(new MyCommandInterceptor());
Interceptors can also be registered at the app-domain level using the DbConfiguration code-based configuration as shown in the following code.
public class MyDBConfiguration : DbConfiguration {
public MyDBConfiguration() {
DbInterception.Add(new MyCommandInterceptor());
}
}
You can also register the interceptor in the application config file, using the following entry −
<entityFramework>
<interceptors>
<interceptor type = "EFInterceptDemo.MyCommandInterceptor, EFInterceptDemo"/>
</interceptors>
</entityFramework>
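Once registered, the interceptor fires for every command the context sends to the database. For example, any LINQ query will trigger the Reader callbacks shown above (MyContext and Students are hypothetical names here; substitute your own DbContext and DbSet) −
using (var context = new MyContext()) {
   // ReaderExecuting/ReaderExecuted in MyCommandInterceptor log the generated SQL
   var students = context.Students.ToList();
}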
| [
{
"code": null,
"e": 3377,
"s": 3032,
"text": "In Entity Framework 6.0, there is another new feature known as Interceptor or Interception. The interception code is built around the concept of interception interfaces. For example, the IDbCommandInterceptor interface defines methods that are called before EF makes a call to ExecuteNonQuery, ExecuteScalar, ExecuteReader, and related methods."
},
{
"code": null,
"e": 3540,
"s": 3377,
"text": "Entity Framework can truly shine by using interception. Using this approach you can capture a lot more information transiently without having to untidy your code."
},
{
"code": null,
"e": 3798,
"s": 3703,
"text": "To implement this, you need to create your own custom interceptor and register it accordingly."
},
{
"code": null,
"e": 4046,
"s": 3893,
"text": "Once a class that implements IDbCommandInterceptor interface has been created it can be registered with Entity Framework using the DbInterception class."
},
{
"code": null,
"e": 4349,
"s": 4199,
"text": "IDbCommandInterceptor interface has six methods and you need to implement all these methods. Following are the basic implementation of these methods."
},
{
"code": null,
"e": 4596,
"s": 4499,
"text": "Let’s take a look at the following code in which IDbCommandInterceptor interface is implemented."
},
{
"code": null,
"e": 5884,
"s": 4596,
"text": "public class MyCommandInterceptor : IDbCommandInterceptor {\n\n public static void Log(string comm, string message) {\n Console.WriteLine(\"Intercepted: {0}, Command Text: {1} \", comm, message);\n }\n\n public void NonQueryExecuted(DbCommand command, \n DbCommandInterceptionContext<int> interceptionContext) {\n Log(\"NonQueryExecuted: \", command.CommandText);\n }\n\n public void NonQueryExecuting(DbCommand command, \n DbCommandInterceptionContext<int> interceptionContext) {\n Log(\"NonQueryExecuting: \", command.CommandText);\n }\n\n public void ReaderExecuted(DbCommand command, \n DbCommandInterceptionContext<DbDataReader> interceptionContext) {\n Log(\"ReaderExecuted: \", command.CommandText);\n }\n\n public void ReaderExecuting(DbCommand command, \n DbCommandInterceptionContext<DbDataReader> interceptionContext) {\n Log(\"ReaderExecuting: \", command.CommandText);\n }\n\n public void ScalarExecuted(DbCommand command, \n DbCommandInterceptionContext<object> interceptionContext) {\n Log(\"ScalarExecuted: \", command.CommandText);\n }\n\n public void ScalarExecuting(DbCommand command, \n DbCommandInterceptionContext<object> interceptionContext) {\n Log(\"ScalarExecuting: \", command.CommandText);\n }\n\n}"
},
{
"code": null,
"e": 6065,
"s": 5884,
"text": "Once a class that implements one or more of the interception interfaces has been created it can be registered with EF using the DbInterception class as shown in the following code."
},
{
"code": null,
"e": 6114,
"s": 6065,
"text": "DbInterception.Add(new MyCommandInterceptor());\n"
},
{
"code": null,
"e": 6257,
"s": 6114,
"text": "Interceptors can also be registered at the app-domain level using the DbConfiguration code-based configuration as shown in the following code."
},
{
"code": null,
"e": 6402,
"s": 6257,
"text": "public class MyDBConfiguration : DbConfiguration {\n\n public MyDBConfiguration() {\n DbInterception.Add(new MyCommandInterceptor());\n }\n}"
},
{
"code": null,
"e": 6466,
"s": 6402,
"text": "You can also configure interceptor config file using the code −"
},
{
"code": null,
"e": 6624,
"s": 6466,
"text": "<entityFramework>\n <interceptors>\n <interceptor type = \"EFInterceptDemo.MyCommandInterceptor, EFInterceptDemo\"/>\n </interceptors>\n</entityFramework>"
}
] |
C++ Queue Library - back() Function | The C++ function std::queue::back() returns a reference to the last element of queue. This is the most recently enqueued element.
Following is the declaration for the std::queue::back() function from the <queue> header.
C++98
value_type& back();
const value_type& back() const;
C++11
reference back();
const_reference back() const;
Parameters − None
Return value − Reference to the last element of the queue.
Exceptions − No-throw guarantee for non-empty containers; calling back() on an empty queue is undefined behavior.
Time complexity − Constant i.e. O(1)
The following example shows the usage of std::queue::back() function.
#include <iostream>
#include <queue>
using namespace std;
int main(void) {
queue<int> q;
for (int i = 0; i < 5; ++i)
q.push(i + 1);
cout << "Last element of queue q is = " << q.back() << endl;
return 0;
}
Let us compile and run the above program, this will produce the following result −
Last element of queue q is = 5
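Since back() returns a reference, it can also be used to modify the last element in place; for example, continuing with the queue q from the program above −
q.back() = 10;            // overwrite the most recently pushed value
cout << q.back() << endl; // now prints 10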
| [
{
"code": null,
"e": 2733,
"s": 2603,
"text": "The C++ function std::queue::back() returns a reference to the last element of queue. This is the most recently enqueued element."
},
{
"code": null,
"e": 2818,
"s": 2733,
"text": "Following is the declaration for the std::queue::back() function from the <queue> header."
},
{
"code": null,
"e": 2871,
"s": 2818,
"text": "value_type& back();\nconst value_type& back() const;\n"
},
{
"code": null,
"e": 2922,
"s": 2871,
"text": "reference back();\nconst_reference back() const;\n"
},
{
"code": null,
"e": 2927,
"s": 2922,
"text": "None"
},
{
"code": null,
"e": 2979,
"s": 2927,
"text": "Returns reference to the last element of the queue."
},
{
"code": null,
"e": 3033,
"s": 2979,
"text": "No-throw guarantee for standard non-empty containers."
},
{
"code": null,
"e": 3052,
"s": 3033,
"text": "Constant i.e. O(1)"
},
{
"code": null,
"e": 3122,
"s": 3052,
"text": "The following example shows the usage of std::queue::back() function."
},
{
"code": null,
"e": 3350,
"s": 3122,
"text": "#include <iostream>\n#include <queue>\n\nusing namespace std;\n\nint main(void) {\n queue<int> q;\n\n for (int i = 0; i < 5; ++i)\n q.push(i + 1);\n\n cout << \"Last element of queue q is = \" << q.back() << endl;\n\n return 0;\n}"
},
{
"code": null,
"e": 3433,
"s": 3350,
"text": "Let us compile and run the above program, this will produce the following result −"
},
{
"code": null,
"e": 3465,
"s": 3433,
"text": "Last element of queue q is = 5\n"
}
] |
Java Swing JTable Example | Simple JTable Example in Java
In this tutorial, we are going to create a simple JTable with Swing in Java. The Swing JTable is used to display data as a regular two-dimensional table of cells.
package com.swing.examples;
import javax.swing.JFrame;
import javax.swing.JScrollPane;
import javax.swing.JTable;
public class JTableDemo extends JFrame {
JTable table;
JTableDemo() {
super("JTable Demo");
String headers[] = { "Name", "Address", "Phone", "Experiance" };
String data[][] = { { "Chandra", "Hyderabad", "4568569568", "true" },
{ "Srikanth", "Vijayawada", "8595652541", "true" },
{ "Rajesh", "Banglore", "8585656545", "false" },
{ "Charan", "Mumbai", "9858654852", "true" },
{ "Kumar", "Pune", "4568569568", "2" },
{ "Venu", "Chennai", "8451265923", "2" },
{ "Gopal", "Vizag", "7845956585", "2" } };
// Creating JTable with table data and headers
table = new JTable(data, headers);
// Adding table to content pane
getContentPane().add(new JScrollPane(table));
setSize(500, 150);
setVisible(true);
setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
}
public static void main(String[] args) {
new JTableDemo();
}
}
Output :
Happy Learning 🙂
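A JTable built directly from arrays is fixed in size; to add or remove rows at runtime, back the table with a DefaultTableModel instead. A minimal sketch (the row data here is illustrative) −
// requires: import javax.swing.table.DefaultTableModel;
DefaultTableModel model = new DefaultTableModel(headers, 0); // start with headers and zero rows
model.addRow(new Object[] { "Chandra", "Hyderabad", "4568569568", "true" });
JTable table = new JTable(model); // more rows can be appended later via model.addRow(...)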
Java JList Multiple Selection Example
How to change Look and Feel of Swing setLookAndFeel
Java Swing Advanced JTable Example
Java Swing JLabel Example
Java Swing Login Example
Java Swing JSplitPane Example
Java Swing ProgressBar Example
Java Swing JTabbedPane Example
Java Swing JMenu Example
Java Swing JToolBar Example
Java Swing JOptionPane Example
Java Swing JTree Example
How to create Java Smiley Swing
How to create Java Rainbow using Swing
Java Swing BorderFactory Example
| [
{
"code": null,
"e": 562,
"s": 398,
"text": "In this tutorial, we are going to create a simple JTable with Swing in Java. The Swing JTable is used to display data as a regular two-dimensional table of cells."
},
{
"code": null,
"e": 1767,
"s": 562,
"text": "package com.swing.examples;\n\nimport java.awt.event.WindowAdapter;\nimport java.awt.event.WindowEvent;\n\nimport javax.swing.JFrame;\nimport javax.swing.JScrollPane;\nimport javax.swing.JTable;\n\npublic class JTableDemo extends JFrame {\n JTable table;\n\n JTableDemo() {\n super(\"JTable Demo\");\n String headers[] = { \"Name\", \"Address\", \"Phone\", \"Experiance\" };\n String data[][] = { { \"Chandra\", \"Hyderabad\", \"4568569568\", \"true\" },\n { \"Srikanth\", \"Vijayawada\", \"8595652541\", \"true\" },\n { \"Rajesh\", \"Banglore\", \"8585656545\", \"false\" },\n { \"Charan\", \"Mumbai\", \"9858654852\", \"true\" },\n { \"Kumar\", \"Pune\", \"4568569568\", \"2\" },\n { \"Venu\", \"Chennai\", \"8451265923\", \"2\" },\n { \"Gopal\", \"Vizag\", \"7845956585\", \"2\" } };\n\n // Creating JTable with table data and headers\n table = new JTable(data, headers);\n // Adding table to content pane\n getContentPane().add(new JScrollPane(table));\n setSize(500, 150);\n setVisible(true);\n setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);\n }\n\n public static void main(String[] args) {\n new JTableDemo();\n }\n}"
},
{
"code": null,
"e": 1776,
"s": 1767,
"text": "Output :"
},
{
"code": null,
"e": 1793,
"s": 1776,
"text": "Happy Learning 🙂"
},
{
"code": null,
"e": 2276,
"s": 1793,
"text": "\nJava JList Multiple Selection Example\nHow to change Look and Feel of Swing setLookAndFeel\nJava Swing Advanced JTable Example\nJava Swing JLabel Example\nJava Swing Login Example\nJava Swing JSplitPane Example\nJava Swing ProgressBar Example\nJava Swing JTabbedPane Example\nJava Swing JMenu Example\nJava Swing JToolBar Example\nJava Swing JOptionPane Example\nJava Swing JTree Example\nHow to create Java Smiley Swing\nHow to create Java Rainbow using Swing\nJava Swing BorderFactory Example\n"
}
] |
Start Coding - Java | When learning a new language, we first learn to output some message. Here, we'll start with the famous Hello World message. Now, here you are given a function to complete. Don't worry about the ins and outs of functions, just add the command (System.out.print("Hello World")) to print Hello World.
Example:
Input:
No input
Output:
Hello World
Explanation:
Hello World is printed.
User Task:
Your task is to complete the function below to print hello world.
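For reference, the completed function needs only the single print statement −
class Geeks {
    // Function to print hello
    static void printHello() {
        System.out.print("Hello World");
    }
}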
0
nivruttidahake304 days ago
// { Driver Code Starts//Initial Template for Java
// } Driver Code Ends//User function Template for Java
class Geeks{ // Function to print hello static void printHello(){ // Your code here System.out.println("Hello World"); } }
// { Driver Code Starts.
class GfG{ public static void main(String args[]){ //Creating an Object of Class Geeks Geeks g = new Geeks(); //Calling printHello() function of the Class Geeks g.printHello(); } } // } Driver Code Ends
0
maquickstop1 week ago
class hello{
public void m1(){
System.out.println("Hello World");
}
}
0
vadaliyaravi21 week ago
class Geeks{ // Function to print hello static void printHello(){ System.out.println("Hello World"); } }
0
roxsonu92 weeks ago
class Geeks{ // Function to print hello static void printHello(){ System.out.print("Hello World"); } }
0
priyanshukumar235652 weeks ago
class Geeks{ // Function to print hello static void printHello(){ System.out.println("Hello World"); // Your code here } }
0
shubhamkumar0285s2 weeks ago
class Geeks{ // Function to print hello static void printHello(){ // Your code here System.out.println("Hello World"); } }
0
arpandutta5033 weeks ago
System.out.println("hello world");
0
arpandutta5033 weeks ago
System.out.println("Hello world");
0
ddarshu7713 weeks ago
Darshan K R
System.out.println("Hello World");
0
ashishreddy0333 weeks ago
System.out.println("Hello World");
| [
{
"code": null,
"e": 576,
"s": 278,
"text": "When learning a new language, we first learn to output some message. Here, we'll start with the famous Hello World message. Now, here you are given a function to complete. Don't worry about the ins and outs of functions, just add the command (System.out.print(\"Hello World\")) to print Hello World."
},
{
"code": null,
"e": 585,
"s": 576,
"text": "Example:"
},
{
"code": null,
"e": 660,
"s": 585,
"text": "Input:\nNo input\n\nOutput:\nHello World\n\nExplanation:\nHello World is printed."
},
{
"code": null,
"e": 737,
"s": 660,
"text": "User Task:\nYour task is to complete the function below to print hello world."
},
{
"code": null,
"e": 739,
"s": 737,
"text": "0"
},
{
"code": null,
"e": 766,
"s": 739,
"text": "nivruttidahake304 days ago"
},
{
"code": null,
"e": 817,
"s": 766,
"text": "// { Driver Code Starts//Initial Template for Java"
},
{
"code": null,
"e": 872,
"s": 817,
"text": "// } Driver Code Ends//User function Template for Java"
},
{
"code": null,
"e": 1025,
"s": 872,
"text": "class Geeks{ // Function to print hello static void printHello(){ // Your code here System.out.println(\"Hello World\"); } }"
},
{
"code": null,
"e": 1050,
"s": 1025,
"text": "// { Driver Code Starts."
},
{
"code": null,
"e": 1299,
"s": 1050,
"text": "class GfG{ public static void main(String args[]){ //Creating an Object of Class Geeks Geeks g = new Geeks(); //Calling printHello() function of the Class Geeks g.printHello(); } } // } Driver Code Ends"
},
{
"code": null,
"e": 1301,
"s": 1299,
"text": "0"
},
{
"code": null,
"e": 1323,
"s": 1301,
"text": "maquickstop1 week ago"
},
{
"code": null,
"e": 1336,
"s": 1323,
"text": "class hello{"
},
{
"code": null,
"e": 1354,
"s": 1336,
"text": "public void m1(){"
},
{
"code": null,
"e": 1389,
"s": 1354,
"text": "System.out.println(\"Hello World\");"
},
{
"code": null,
"e": 1391,
"s": 1389,
"text": "}"
},
{
"code": null,
"e": 1393,
"s": 1391,
"text": "}"
},
{
"code": null,
"e": 1395,
"s": 1393,
"text": "0"
},
{
"code": null,
"e": 1419,
"s": 1395,
"text": "vadaliyaravi21 week ago"
},
{
"code": null,
"e": 1553,
"s": 1419,
"text": "class Geeks{ // Function to print hello static void printHello(){ System.out.println(\"Hello World\") } }"
},
{
"code": null,
"e": 1555,
"s": 1553,
"text": "0"
},
{
"code": null,
"e": 1575,
"s": 1555,
"text": "roxsonu92 weeks ago"
},
{
"code": null,
"e": 1709,
"s": 1575,
"text": "class Geeks{ // Function to print hello static void printHello(){ System.out.print(\"Hello World\"); } }"
},
{
"code": null,
"e": 1711,
"s": 1709,
"text": "0"
},
{
"code": null,
"e": 1742,
"s": 1711,
"text": "priyanshukumar235652 weeks ago"
},
{
"code": null,
"e": 1902,
"s": 1742,
"text": "class Geeks{ // Function to print hello static void printHello(){ System.out.println(\"Hello World\"); // Your code here } }"
},
{
"code": null,
"e": 1904,
"s": 1902,
"text": "0"
},
{
"code": null,
"e": 1933,
"s": 1904,
"text": "shubhamkumar0285s2 weeks ago"
},
{
"code": null,
"e": 2086,
"s": 1933,
"text": "class Geeks{ // Function to print hello static void printHello(){ // Your code here System.out.println(\"Hello World\"); } }"
},
{
"code": null,
"e": 2088,
"s": 2086,
"text": "0"
},
{
"code": null,
"e": 2113,
"s": 2088,
"text": "arpandutta5033 weeks ago"
},
{
"code": null,
"e": 2145,
"s": 2113,
"text": "System.out.println(hello world)"
},
{
"code": null,
"e": 2147,
"s": 2145,
"text": "0"
},
{
"code": null,
"e": 2172,
"s": 2147,
"text": "arpandutta5033 weeks ago"
},
{
"code": null,
"e": 2205,
"s": 2172,
"text": "System.out.println(Hello world);"
},
{
"code": null,
"e": 2207,
"s": 2205,
"text": "0"
},
{
"code": null,
"e": 2229,
"s": 2207,
"text": "ddarshu7713 weeks ago"
},
{
"code": null,
"e": 2241,
"s": 2229,
"text": "Darshan K R"
},
{
"code": null,
"e": 2275,
"s": 2241,
"text": "System.out.println(\"Hello World);"
},
{
"code": null,
"e": 2277,
"s": 2275,
"text": "0"
},
{
"code": null,
"e": 2303,
"s": 2277,
"text": "ashishreddy0333 weeks ago"
},
{
"code": null,
"e": 2338,
"s": 2303,
"text": "System.out.println(\"Hello World\");"
},
{
"code": null,
"e": 2484,
"s": 2338,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 2520,
"s": 2484,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 2530,
"s": 2520,
"text": "\nProblem\n"
},
{
"code": null,
"e": 2540,
"s": 2530,
"text": "\nContest\n"
},
{
"code": null,
"e": 2603,
"s": 2540,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 2751,
"s": 2603,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 2959,
"s": 2751,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 3065,
"s": 2959,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to draw triangle shape in android? | This example demonstrates how to draw a triangle shape in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
android:gravity="center"
android:layout_marginTop="30dp"
tools:context=".MainActivity">
<ImageView
android:layout_width="100dp"
android:layout_height="100dp"
android:background="@drawable/background"/>
</LinearLayout>
In the above code, we have taken an ImageView and set its background to the drawable background.xml.
Step 3 − Add the following code to drawable/background.xml
<vector xmlns:android="http://schemas.android.com/apk/res/android"
android:height="100dp"
android:width="100dp"
android:viewportWidth="24"
android:viewportHeight="24">
<path android:fillColor="#000" android:pathData="M1,21H23L12,2" />
</vector>
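In the pathData string "M1,21H23L12,2", M1,21 moves to the point (1,21), H23 draws a horizontal line to (23,21), and L12,2 draws a line to (12,2); the fill closes the triangle. If you prefer to draw the triangle programmatically instead of with a vector drawable, the following is a minimal custom View sketch (the class name TriangleView is illustrative):
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

public class TriangleView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Path path = new Path();

    public TriangleView(Context context) {
        super(context);
        paint.setColor(Color.BLACK);
        paint.setStyle(Paint.Style.FILL);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        path.reset();
        // Apex at the top center, base along the bottom edge
        path.moveTo(getWidth() / 2f, 0);
        path.lineTo(getWidth(), getHeight());
        path.lineTo(0, getHeight());
        path.close();
        canvas.drawPath(path, paint);
    }
}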
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
{
"code": null,
"e": 1132,
"s": 1062,
"text": "This example demonstrate about How to draw triangle shape in android."
},
{
"code": null,
"e": 1261,
"s": 1132,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1326,
"s": 1261,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 1849,
"s": 1326,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:orientation=\"vertical\"\n android:gravity=\"center\"\n android:layout_marginTop=\"30dp\"\n tools:context=\".MainActivity\">\n <ImageView\n android:layout_width=\"100dp\"\n android:layout_height=\"100dp\"\n android:background=\"@drawable/background\"/>\n</LinearLayout>"
},
{
"code": null,
"e": 1932,
"s": 1849,
"text": "In the above code, we have taken imageview and added background as background.xml."
},
{
"code": null,
"e": 1992,
"s": 1932,
"text": "Step 3 − Add the following code to drawable/ background.xml"
},
{
"code": null,
"e": 2252,
"s": 1992,
"text": "<vector xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:height=\"100dp\"\n android:width=\"100dp\"\n android:viewportWidth=\"24\"\n android:viewportHeight=\"24\">\n <path android:fillColor=\"#000\" android:pathData=\"M1,21H23L12,2\" />\n</vector>"
},
{
"code": null,
"e": 2599,
"s": 2252,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –"
},
{
"code": null,
"e": 2639,
"s": 2599,
"text": "Click here to download the project code"
}
] |
JSP - Directives | In this chapter, we will discuss Directives in JSP. These directives provide directions and instructions to the container, telling it how to handle certain aspects of the JSP processing.
A JSP directive affects the overall structure of the servlet class. It usually has the following form −
<%@ directive attribute = "value" %>
Directives can have a number of attributes which you can list down as key-value pairs and separated by commas.
The blanks between the @ symbol and the directive name, and between the last attribute and the closing %>, are optional.
There are three types of directive tags −
<%@ page ... %>
Defines page-dependent attributes, such as scripting language, error page, and buffering requirements.
<%@ include ... %>
Includes a file during the translation phase.
<%@ taglib ... %>
Declares a tag library, containing custom actions, used in the page.
The page directive is used to provide instructions to the container. These instructions pertain to the current JSP page. You may code page directives anywhere in your JSP page. By convention, page directives are coded at the top of the JSP page.
Following is the basic syntax of the page directive −
<%@ page attribute = "value" %>
You can write the XML equivalent of the above syntax as follows −
<jsp:directive.page attribute = "value" />
Following table lists out the attributes associated with the page directive −
buffer
Specifies a buffering model for the output stream.
autoFlush
Controls the behavior of the servlet output buffer.
contentType
Defines the character encoding scheme.
errorPage
Defines the URL of another JSP that reports on Java unchecked runtime exceptions.
isErrorPage
Indicates if this JSP page is a URL specified by another JSP page's errorPage attribute.
extends
Specifies a superclass that the generated servlet must extend.
import
Specifies a list of packages or classes for use in the JSP as the Java import statement does for Java classes.
info
Defines a string that can be accessed with the servlet's getServletInfo() method.
isThreadSafe
Defines the threading model for the generated servlet.
language
Defines the programming language used in the JSP page.
session
Specifies whether or not the JSP page participates in HTTP sessions
isELIgnored
Specifies whether or not the EL expression within the JSP page will be ignored.
isScriptingEnabled
Determines if the scripting elements are allowed for use.
Check for more details related to all the above attributes at Page Directive.
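A representative page directive combining several of these attributes might look as follows (a sketch; error.jsp is an illustrative file name):
<%@ page language = "java" contentType = "text/html; charset=UTF-8" import = "java.util.Date" errorPage = "error.jsp" %>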
The include directive is used to include a file during the translation phase. This directive tells the container to merge the content of other external files with the current JSP during the translation phase. You may code the include directives anywhere in your JSP page.
The general usage form of this directive is as follows −
<%@ include file = "relative url" >
The filename in the include directive is actually a relative URL. If you just specify a filename with no associated path, the JSP compiler assumes that the file is in the same directory as your JSP.
You can write the XML equivalent of the above syntax as follows −
<jsp:directive.include file = "relative url" />
For more details related to include directive, check the Include Directive.
The JavaServer Pages API allows you to define custom JSP tags that look like HTML or XML tags, and a tag library is a set of user-defined tags that implement custom behavior.
The taglib directive declares that your JSP page uses a set of custom tags, identifies the location of the library, and provides means for identifying the custom tags in your JSP page.
The taglib directive follows the syntax given below −
<%@ taglib uri="uri" prefix = "prefixOfTag" >
Here, the uri attribute value resolves to a location the container understands and the prefix attribute informs a container what bits of markup are custom actions.
You can write the XML equivalent of the above syntax as follows −
<jsp:directive.taglib uri = "uri" prefix = "prefixOfTag" />
For more details related to the taglib directive, check the Taglib Directive.
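For instance, a page using the JSTL core library could declare and use it as follows (a sketch; the URI shown is the standard JSTL 1.2 core URI, and param.userName is an illustrative request parameter):
<%@ taglib uri = "http://java.sun.com/jsp/jstl/core" prefix = "c" %>
<c:out value = "${param.userName}" />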
{
"code": null,
"e": 2426,
"s": 2239,
"text": "In this chapter, we will discuss Directives in JSP. These directives provide directions and instructions to the container, telling it how to handle certain aspects of the JSP processing."
},
{
"code": null,
"e": 2530,
"s": 2426,
"text": "A JSP directive affects the overall structure of the servlet class. It usually has the following form −"
},
{
"code": null,
"e": 2568,
"s": 2530,
"text": "<%@ directive attribute = \"value\" %>\n"
},
{
"code": null,
"e": 2679,
"s": 2568,
"text": "Directives can have a number of attributes which you can list down as key-value pairs and separated by commas."
},
{
"code": null,
"e": 2800,
"s": 2679,
"text": "The blanks between the @ symbol and the directive name, and between the last attribute and the closing %>, are optional."
},
{
"code": null,
"e": 2841,
"s": 2800,
"text": "There are three types of directive tag −"
},
{
"code": null,
"e": 2857,
"s": 2841,
"text": "<%@ page ... %>"
},
{
"code": null,
"e": 2960,
"s": 2857,
"text": "Defines page-dependent attributes, such as scripting language, error page, and buffering requirements."
},
{
"code": null,
"e": 2979,
"s": 2960,
"text": "<%@ include ... %>"
},
{
"code": null,
"e": 3025,
"s": 2979,
"text": "Includes a file during the translation phase."
},
{
"code": null,
"e": 3043,
"s": 3025,
"text": "<%@ taglib ... %>"
},
{
"code": null,
"e": 3111,
"s": 3043,
"text": "Declares a tag library, containing custom actions, used in the page"
},
{
"code": null,
"e": 3357,
"s": 3111,
"text": "The page directive is used to provide instructions to the container. These instructions pertain to the current JSP page. You may code page directives anywhere in your JSP page. By convention, page directives are coded at the top of the JSP page."
},
{
"code": null,
"e": 3411,
"s": 3357,
"text": "Following is the basic syntax of the page directive −"
},
{
"code": null,
"e": 3444,
"s": 3411,
"text": "<%@ page attribute = \"value\" %>\n"
},
{
"code": null,
"e": 3510,
"s": 3444,
"text": "You can write the XML equivalent of the above syntax as follows −"
},
{
"code": null,
"e": 3554,
"s": 3510,
"text": "<jsp:directive.page attribute = \"value\" />\n"
},
{
"code": null,
"e": 3632,
"s": 3554,
"text": "Following table lists out the attributes associated with the page directive −"
},
{
"code": null,
"e": 3639,
"s": 3632,
"text": "buffer"
},
{
"code": null,
"e": 3690,
"s": 3639,
"text": "Specifies a buffering model for the output stream."
},
{
"code": null,
"e": 3700,
"s": 3690,
"text": "autoFlush"
},
{
"code": null,
"e": 3752,
"s": 3700,
"text": "Controls the behavior of the servlet output buffer."
},
{
"code": null,
"e": 3764,
"s": 3752,
"text": "contentType"
},
{
"code": null,
"e": 3803,
"s": 3764,
"text": "Defines the character encoding scheme."
},
{
"code": null,
"e": 3813,
"s": 3803,
"text": "errorPage"
},
{
"code": null,
"e": 3895,
"s": 3813,
"text": "Defines the URL of another JSP that reports on Java unchecked runtime exceptions."
},
{
"code": null,
"e": 3907,
"s": 3895,
"text": "isErrorPage"
},
{
"code": null,
"e": 3996,
"s": 3907,
"text": "Indicates if this JSP page is a URL specified by another JSP page's errorPage attribute."
},
{
"code": null,
"e": 4004,
"s": 3996,
"text": "extends"
},
{
"code": null,
"e": 4067,
"s": 4004,
"text": "Specifies a superclass that the generated servlet must extend."
},
{
"code": null,
"e": 4074,
"s": 4067,
"text": "import"
},
{
"code": null,
"e": 4185,
"s": 4074,
"text": "Specifies a list of packages or classes for use in the JSP as the Java import statement does for Java classes."
},
{
"code": null,
"e": 4190,
"s": 4185,
"text": "info"
},
{
"code": null,
"e": 4272,
"s": 4190,
"text": "Defines a string that can be accessed with the servlet's getServletInfo() method."
},
{
"code": null,
"e": 4285,
"s": 4272,
"text": "isThreadSafe"
},
{
"code": null,
"e": 4340,
"s": 4285,
"text": "Defines the threading model for the generated servlet."
},
{
"code": null,
"e": 4349,
"s": 4340,
"text": "language"
},
{
"code": null,
"e": 4404,
"s": 4349,
"text": "Defines the programming language used in the JSP page."
},
{
"code": null,
"e": 4412,
"s": 4404,
"text": "session"
},
{
"code": null,
"e": 4480,
"s": 4412,
"text": "Specifies whether or not the JSP page participates in HTTP sessions"
},
{
"code": null,
"e": 4492,
"s": 4480,
"text": "isELIgnored"
},
{
"code": null,
"e": 4572,
"s": 4492,
"text": "Specifies whether or not the EL expression within the JSP page will be ignored."
},
{
"code": null,
"e": 4591,
"s": 4572,
"text": "isScriptingEnabled"
},
{
"code": null,
"e": 4649,
"s": 4591,
"text": "Determines if the scripting elements are allowed for use."
},
{
"code": null,
"e": 4727,
"s": 4649,
"text": "Check for more details related to all the above attributes at Page Directive."
},
{
"code": null,
"e": 4999,
"s": 4727,
"text": "The include directive is used to include a file during the translation phase. This directive tells the container to merge the content of other external files with the current JSP during the translation phase. You may code the include directives anywhere in your JSP page."
},
{
"code": null,
"e": 5056,
"s": 4999,
"text": "The general usage form of this directive is as follows −"
},
{
"code": null,
"e": 5093,
"s": 5056,
"text": "<%@ include file = \"relative url\" >\n"
},
{
"code": null,
"e": 5292,
"s": 5093,
"text": "The filename in the include directive is actually a relative URL. If you just specify a filename with no associated path, the JSP compiler assumes that the file is in the same directory as your JSP."
},
{
"code": null,
"e": 5358,
"s": 5292,
"text": "You can write the XML equivalent of the above syntax as follows −"
},
{
"code": null,
"e": 5407,
"s": 5358,
"text": "<jsp:directive.include file = \"relative url\" />\n"
},
{
"code": null,
"e": 5483,
"s": 5407,
"text": "For more details related to include directive, check the Include Directive."
},
{
"code": null,
"e": 5656,
"s": 5483,
"text": "The JavaServer Pages API allow you to define custom JSP tags that look like HTML or XML tags and a tag library is a set of user-defined tags that implement custom behavior."
},
{
"code": null,
"e": 5841,
"s": 5656,
"text": "The taglib directive declares that your JSP page uses a set of custom tags, identifies the location of the library, and provides means for identifying the custom tags in your JSP page."
},
{
"code": null,
"e": 5895,
"s": 5841,
"text": "The taglib directive follows the syntax given below −"
},
{
"code": null,
"e": 5942,
"s": 5895,
"text": "<%@ taglib uri=\"uri\" prefix = \"prefixOfTag\" >\n"
},
{
"code": null,
"e": 6106,
"s": 5942,
"text": "Here, the uri attribute value resolves to a location the container understands and the prefix attribute informs a container what bits of markup are custom actions."
},
{
"code": null,
"e": 6172,
"s": 6106,
"text": "You can write the XML equivalent of the above syntax as follows −"
},
{
"code": null,
"e": 6233,
"s": 6172,
"text": "<jsp:directive.taglib uri = \"uri\" prefix = \"prefixOfTag\" />\n"
},
{
"code": null,
"e": 6311,
"s": 6233,
"text": "For more details related to the taglib directive, check the Taglib Directive."
},
{
"code": null,
"e": 6346,
"s": 6311,
"text": "\n 108 Lectures \n 11 hours \n"
},
{
"code": null,
"e": 6361,
"s": 6346,
"text": " Chaand Sheikh"
},
{
"code": null,
"e": 6396,
"s": 6361,
"text": "\n 517 Lectures \n 57 hours \n"
},
{
"code": null,
"e": 6411,
"s": 6396,
"text": " Chaand Sheikh"
},
{
"code": null,
"e": 6446,
"s": 6411,
"text": "\n 41 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 6460,
"s": 6446,
"text": " Karthikeya T"
},
{
"code": null,
"e": 6495,
"s": 6460,
"text": "\n 42 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 6511,
"s": 6495,
"text": " TELCOMA Global"
},
{
"code": null,
"e": 6544,
"s": 6511,
"text": "\n 15 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 6560,
"s": 6544,
"text": " TELCOMA Global"
},
{
"code": null,
"e": 6594,
"s": 6560,
"text": "\n 44 Lectures \n 15 hours \n"
},
{
"code": null,
"e": 6602,
"s": 6594,
"text": " Uplatz"
},
{
"code": null,
"e": 6609,
"s": 6602,
"text": " Print"
},
{
"code": null,
"e": 6620,
"s": 6609,
"text": " Add Notes"
}
] |
Angular 2 - Data Binding | Two-way binding was a functionality in Angular JS, but has been removed from Angular 2.x onwards. But now, since the event of classes in Angular 2, we can bind to properties in AngularJS class.
Suppose you had a class with a class name, and a property which has a type and a value.
export class className {
property: propertytype = value;
}
You could then bind the property of an html tag to the property of the class.
<html tag htmlproperty = 'property'>
The value of the property would then be assigned to the htmlproperty of the html.
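For instance, a minimal sketch (the class and property names here are illustrative):
export class AppComponent {
   logoUrl: string = 'app/Images/logo.png';
}

<img [src] = 'logoUrl'>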
Let’s look at an example of how we can achieve data binding. In our example, we will look at displaying images wherein the images source will come from the properties in our class. Following are the steps to achieve this.
Step 1 − Download any 2 images. For this example, we will use two simple images.
Step 2 − Store these images in a folder called Images in the app directory. If the images folder is not present, please create it.
Step 3 − Add the following content in app.component.ts as shown below.
import { Component } from '@angular/core';
@Component ({
selector: 'my-app',
templateUrl: 'app/app.component.html'
})
export class AppComponent {
appTitle: string = 'Welcome';
appList: any[] = [ {
"ID": "1",
"url": 'app/Images/One.jpg'
},
{
"ID": "2",
"url": 'app/Images/Two.jpg'
} ];
}
Step 4 − Add the following content in app.component.html as shown below.
<div *ngFor = 'let lst of appList'>
<ul>
<li>{{lst.ID}}</li>
<img [src] = 'lst.url'>
</ul>
</div>
In the above app.component.html file, we are accessing the images from the properties in our class.
The output of the above program should be like this −
{
"code": null,
"e": 2491,
"s": 2297,
"text": "Two-way binding was a functionality in Angular JS, but has been removed from Angular 2.x onwards. But now, since the event of classes in Angular 2, we can bind to properties in AngularJS class."
},
{
"code": null,
"e": 2576,
"s": 2491,
"text": "Suppose if you had a class with a class name, a property which had a type and value."
},
{
"code": null,
"e": 2638,
"s": 2576,
"text": "export class className {\n property: propertytype = value;\n}"
},
{
"code": null,
"e": 2716,
"s": 2638,
"text": "You could then bind the property of an html tag to the property of the class."
},
{
"code": null,
"e": 2754,
"s": 2716,
"text": "<html tag htmlproperty = 'property'>\n"
},
{
"code": null,
"e": 2836,
"s": 2754,
"text": "The value of the property would then be assigned to the htmlproperty of the html."
},
{
"code": null,
"e": 3058,
"s": 2836,
"text": "Let’s look at an example of how we can achieve data binding. In our example, we will look at displaying images wherein the images source will come from the properties in our class. Following are the steps to achieve this."
},
{
"code": null,
"e": 3157,
"s": 3058,
"text": "Step 1 − Download any 2 images. For this example, we will download some simple images shown below."
},
{
"code": null,
"e": 3288,
"s": 3157,
"text": "Step 2 − Store these images in a folder called Images in the app directory. If the images folder is not present, please create it."
},
{
"code": null,
"e": 3359,
"s": 3288,
"text": "Step 3 − Add the following content in app.component.ts as shown below."
},
{
"code": null,
"e": 3694,
"s": 3359,
"text": "import { Component } from '@angular/core';\n\n@Component ({\n selector: 'my-app',\n templateUrl: 'app/app.component.html'\n})\n\nexport class AppComponent {\n appTitle: string = 'Welcome';\n appList: any[] = [ {\n \"ID\": \"1\",\n \"url\": 'app/Images/One.jpg'\n },\n\n {\n \"ID\": \"2\",\n \"url\": 'app/Images/Two.jpg'\n } ];\n}"
},
{
"code": null,
"e": 3767,
"s": 3694,
"text": "Step 4 − Add the following content in app.component.html as shown below."
},
{
"code": null,
"e": 3883,
"s": 3767,
"text": "<div *ngFor = 'let lst of appList'>\n <ul>\n <li>{{lst.ID}}</li>\n <img [src] = 'lst.url'>\n </ul>\n</div>"
},
{
"code": null,
"e": 3983,
"s": 3883,
"text": "In the above app.component.html file, we are accessing the images from the properties in our class."
},
{
"code": null,
"e": 4037,
"s": 3983,
"text": "The output of the above program should be like this −"
},
{
"code": null,
"e": 4072,
"s": 4037,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 4086,
"s": 4072,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 4121,
"s": 4086,
"text": "\n 28 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 4135,
"s": 4121,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 4170,
"s": 4135,
"text": "\n 11 Lectures \n 7.5 hours \n"
},
{
"code": null,
"e": 4190,
"s": 4170,
"text": " SHIVPRASAD KOIRALA"
},
{
"code": null,
"e": 4225,
"s": 4190,
"text": "\n 16 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 4242,
"s": 4225,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 4275,
"s": 4242,
"text": "\n 69 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 4287,
"s": 4275,
"text": " Senol Atac"
},
{
"code": null,
"e": 4322,
"s": 4287,
"text": "\n 53 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 4334,
"s": 4322,
"text": " Senol Atac"
},
{
"code": null,
"e": 4341,
"s": 4334,
"text": " Print"
},
{
"code": null,
"e": 4352,
"s": 4341,
"text": " Add Notes"
}
] |
DAX Time Intelligence - DATESYTD function | Returns a table that contains a column of the dates for the year to date, in the current context.
DATESYTD (<dates>, [<year_end_date>])
dates
A column that contains dates.
year_end_date
Optional.
A literal string with a date that defines the year-end date.
If omitted, the default is December 31.
A table containing a single column of date values.
The dates parameter can be any of the following −
A reference to a date/time column.
A table expression that returns a single column of date/time values.
A Boolean expression that defines a single-column table of date/time values.
Constraints on Boolean expressions −
The expression cannot reference a calculated field.
The expression cannot use CALCULATE function.
The expression cannot use any function that scans a table or returns a table, including aggregation functions.
However, a Boolean expression can use any function that looks up a single value, or that calculates a scalar value.
= CALCULATE (
SUM (Sales [Sales Amount]), DATESYTD (Sales [Date])
)
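For a fiscal year that does not end on December 31, the optional year_end_date parameter can be supplied. The following is a sketch assuming a June 30 fiscal year-end (the "06-30" month-day literal is an assumption based on common usage):
= CALCULATE (
   SUM (Sales [Sales Amount]), DATESYTD (Sales [Date], "06-30")
)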
{
"code": null,
"e": 2099,
"s": 2001,
"text": "Returns a table that contains a column of the dates for the year to date, in the current context."
},
{
"code": null,
"e": 2139,
"s": 2099,
"text": "DATESYTD (<dates>, [<year_end_date>]) \n"
},
{
"code": null,
"e": 2145,
"s": 2139,
"text": "dates"
},
{
"code": null,
"e": 2175,
"s": 2145,
"text": "A column that contains dates."
},
{
"code": null,
"e": 2189,
"s": 2175,
"text": "year_end_date"
},
{
"code": null,
"e": 2199,
"s": 2189,
"text": "Optional."
},
{
"code": null,
"e": 2260,
"s": 2199,
"text": "A literal string with a date that defines the year-end date."
},
{
"code": null,
"e": 2300,
"s": 2260,
"text": "If omitted, the default is December 31."
},
{
"code": null,
"e": 2351,
"s": 2300,
"text": "A table containing a single column of date values."
},
{
"code": null,
"e": 2401,
"s": 2351,
"text": "The dates parameter can be any of the following −"
},
{
"code": null,
"e": 2436,
"s": 2401,
"text": "A reference to a date/time column."
},
{
"code": null,
"e": 2505,
"s": 2436,
"text": "A table expression that returns a single column of date/time values."
},
{
"code": null,
"e": 2582,
"s": 2505,
"text": "A Boolean expression that defines a single-column table of date/time values."
},
{
"code": null,
"e": 2619,
"s": 2582,
"text": "Constraints on Boolean expressions −"
},
{
"code": null,
"e": 2671,
"s": 2619,
"text": "The expression cannot reference a calculated field."
},
{
"code": null,
"e": 2723,
"s": 2671,
"text": "The expression cannot reference a calculated field."
},
{
"code": null,
"e": 2769,
"s": 2723,
"text": "The expression cannot use CALCULATE function."
},
{
"code": null,
"e": 2815,
"s": 2769,
"text": "The expression cannot use CALCULATE function."
},
{
"code": null,
"e": 2926,
"s": 2815,
"text": "The expression cannot use any function that scans a table or returns a table, including aggregation functions."
},
{
"code": null,
"e": 3037,
"s": 2926,
"text": "The expression cannot use any function that scans a table or returns a table, including aggregation functions."
},
{
"code": null,
"e": 3153,
"s": 3037,
"text": "However, a Boolean expression can use any function that looks up a single value, or that calculates a scalar value."
},
{
"code": null,
"e": 3226,
"s": 3153,
"text": "= CALCULATE ( \n SUM (Sales [Sales Amount]), DATESYTD (Sales [Date])\n) "
},
{
"code": null,
"e": 3261,
"s": 3226,
"text": "\n 53 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3275,
"s": 3261,
"text": " Abhay Gadiya"
},
{
"code": null,
"e": 3308,
"s": 3275,
"text": "\n 24 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3322,
"s": 3308,
"text": " Randy Minder"
},
{
"code": null,
"e": 3357,
"s": 3322,
"text": "\n 26 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 3371,
"s": 3357,
"text": " Randy Minder"
},
{
"code": null,
"e": 3378,
"s": 3371,
"text": " Print"
},
{
"code": null,
"e": 3389,
"s": 3378,
"text": " Add Notes"
}
] |
bounce - Unix, Linux Command | bounce [generic Postfix daemon options]
The bounce(8) daemon processes two types of service requests:
Append a delivery status record to a per-message log file.
Post a delivery status notification message, with a copy of a per-message log file.
Optionally, a bounce (defer, trace) client can request that the
per-message log file be deleted when the requested operation fails.
This is used by clients that cannot retry transactions by
themselves, and that depend on retry logic in their own client.
RFC 822 (ARPA Internet Text Messages)
RFC 2045 (Format of Internet Message Bodies)
RFC 2822 (ARPA Internet Text Messages)
RFC 3462 (Delivery Status Notifications)
RFC 3464 (Delivery Status Notifications)
RFC 3834 (Auto-Submitted: message header)
The text below provides only a parameter summary. See
postconf(5) for more details including examples.
/var/spool/postfix/bounce/* non-delivery records
/var/spool/postfix/defer/* non-delivery records
/var/spool/postfix/trace/* delivery status records
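The daemon is normally started on demand by the master(8) process manager. The stock Postfix master.cf defines bounce, defer and trace services that all run this program; a sketch of those entries (field values follow a stock configuration and may differ on your system):
# service type  private unpriv  chroot  wakeup  maxproc command
bounce    unix  -       -       y       -       0       bounce
defer     unix  -       -       y       -       0       bounce
trace     unix  -       -       y       -       0       bounce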
bounce(5), bounce message template format
qmgr(8), queue manager
postconf(5), configuration parameters
master(5), generic daemon options
master(8), process manager
syslogd(8), system logging
Wietse Venema
IBM T.J. Watson Research
P.O. Box 704
Yorktown Heights, NY 10598, USA
{
"code": null,
"e": 10618,
"s": 10577,
"text": "bounce [generic Postfix daemon options]\n"
},
{
"code": null,
"e": 10682,
"s": 10618,
"text": "\nThe bounce(8) daemon processes two types of service requests:\n"
},
{
"code": null,
"e": 10938,
"s": 10682,
"text": "\nOptionally, a bounce (defer, trace) client can request that the\nper-message log file be deleted when the requested operation fails.\nThis is used by clients that cannot retry transactions by\nthemselves, and that depend on retry logic in their own client.\n"
},
{
"code": null,
"e": 11185,
"s": 10938,
"text": "RFC 822 (ARPA Internet Text Messages)\nRFC 2045 (Format of Internet Message Bodies)\nRFC 2822 (ARPA Internet Text Messages)\nRFC 3462 (Delivery Status Notifications)\nRFC 3464 (Delivery Status Notifications)\nRFC 3834 (Auto-Submitted: message header)\n"
},
{
"code": null,
"e": 11292,
"s": 11187,
"text": "\nThe text below provides only a parameter summary. See\npostconf(5) for more details including examples.\n"
},
{
"code": null,
"e": 11441,
"s": 11292,
"text": "/var/spool/postfix/bounce/* non-delivery records\n/var/spool/postfix/defer/* non-delivery records\n/var/spool/postfix/trace/* delivery status records\n"
},
{
"code": null,
"e": 11633,
"s": 11441,
"text": "bounce(5), bounce message template format\nqmgr(8), queue manager\npostconf(5), configuration parameters\nmaster(5), generic daemon options\nmaster(8), process manager\nsyslogd(8), system logging\n"
},
{
"code": null,
"e": 11720,
"s": 11635,
"text": "Wietse Venema\nIBM T.J. Watson Research\nP.O. Box 704\nYorktown Heights, NY 10598, USA\n"
},
{
"code": null,
"e": 11737,
"s": 11720,
"text": "\nAdvertisements\n"
},
{
"code": null,
"e": 11772,
"s": 11737,
"text": "\n 129 Lectures \n 23 hours \n"
},
{
"code": null,
"e": 11800,
"s": 11772,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 11834,
"s": 11800,
"text": "\n 5 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 11851,
"s": 11834,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 11884,
"s": 11851,
"text": "\n 35 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 11895,
"s": 11884,
"text": " Pradeep D"
},
{
"code": null,
"e": 11930,
"s": 11895,
"text": "\n 41 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 11946,
"s": 11930,
"text": " Musab Zayadneh"
},
{
"code": null,
"e": 11979,
"s": 11946,
"text": "\n 46 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 11991,
"s": 11979,
"text": " GUHARAJANM"
},
{
"code": null,
"e": 12023,
"s": 11991,
"text": "\n 6 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 12031,
"s": 12023,
"text": " Uplatz"
},
{
"code": null,
"e": 12038,
"s": 12031,
"text": " Print"
},
{
"code": null,
"e": 12049,
"s": 12038,
"text": " Add Notes"
}
] |
MuleSoft - Mule Error Handling | The new Mule error handling is one of the biggest and major changes done in Mule 4. The new error handing may seem complex, but it is better and more efficient. In this chapter, we are going to discuss about components of Mule error, Error types, categories of Mule error and components for handling Mule errors.
A Mule error is the result of a Mule exception failure and has the following components −
It is an important component of a Mule error which gives a description of the problem. Its expression is as follows −
#[error.description]
The Type component of Mule error is used to characterize the problem. It also allows routing within an error handler. Its expression is as follows −
#[error.errorType]
The Cause component of Mule error gives the underlying java throwable that causes the failure. Its expression is as follows −
#[error.cause]
The Message component of Mule error shows an optional message regarding the error. Its expression is as follows −
#[error.errorMessage]
The Child Errors component of Mule error gives an optional collection of inner errors. These inner errors are mainly used by elements like Scatter-Gather to provide aggregated route errors. Its expression is as follows −
#[error.childErrors]
In case of failure of an HTTP request with a 401 status code, the Mule Error components are as follows −
Description: HTTP GET on resource ‘http://localhost:8181/TestApp’
failed: unauthorized (401)
Type: HTTP:UNAUTHORIZED
Cause: a ResponseValidatorTypedException instance
Error Message: { "message" : "Could not authorize the user." }
TRANSFORMATION
This Error Type indicates an error occurred while transforming a value. The transformation is Mule Runtime internal transformation and not the DataWeave transformations.
EXPRESSION
This kind of Error Type indicates an error occurred while evaluating an expression.
VALIDATION
This kind of Error Type indicates a validation error occurred.
DUPLICATE_MESSAGE
A kind of validation error which occurs when a message being processed twice.
REDELIVERY_EXHAUSTED
This kind of Error Type occurs when the maximum attempts to reprocess a message from a source have been exhausted.
CONNECTIVITY
This Error Type indicates a problem while establishing a connection.
ROUTING
This Error Type indicates an error occurred while routing a message.
SECURITY
This Error Type indicates a security error occurred. For example, invalid credentials received.
STREAM_MAXIMUM_SIZE_EXCEEDED
This Error Type occurs when the maximum size allowed for a stream is exhausted.
TIMEOUT
It indicates the timeout while processing a message.
UNKNOWN
This Error Type indicates an unexpected error occurred.
SOURCE
It represents the occurrence of an error in the source of the flow.
SOURCE_RESPONSE
It represents the occurrence of an error in the source of the flow while processing a successful response.
In the above example, you can see the Message component of the Mule error.
Let us understand the Error Types with the help of its characteristics −
The first characteristic of Mule Error Types is that they consist of both a namespace and an identifier. This allows us to distinguish the types according to their domain. In the above example, the Error Type is HTTP: UNAUTHORIZED.
The second and important characteristic is that an Error Type may have a parent type. For example, the Error Type HTTP: UNAUTHORIZED has MULE:CLIENT_SECURITY as the parent, which in turn has a parent named MULE:SECURITY. This characteristic establishes the Error Type as a specification of a more global item.
Following are the categories under which all the errors fall −
The errors under this category are the errors that may occur in a Flow. They are not so severe and can be handled easily.
The errors under this category are the severe errors that cannot be handled. Following is the list of Error Types under this category −
OVERLOAD
This Error Type indicates an error occurred due to problem of overloading. In this case, the execution will be rejected.
FATAL_JVM_ERROR
This kind of Error Type indicates the occurrence of a fatal error. For example, stack overflow.
The CUSTOM Error Types are the errors that are defined by us. They can be defined when mapping or when raising the errors. We must give a specific custom namespace to these Error Types for distinguishing them from the other existing Error Types within Mule application. For example, in Mule application using HTTP, we cannot use HTTP as the custom error type.
In broad sense, the errors in Mule can be divided into two categories namely, Messaging Errors and System Errors.
This category of Mule error is related to the Mule flow. Whenever a problem occurs within a Mule flow, Mule throws a messaging error. We can set up On Error component inside the error handler component to handle these Mule errors.
System error indicates an exception occurring at the system level. If there is no Mule event, the system error is handled by a system error handler. The following kind of exceptions handle by a system error handler −
Exception that occurs during an application start-up.
Exception that occurs when a connection to an external system fails.
In case a system error occurs, Mule sends an error notification to the registered listeners. It also logs the error. On the other hand, Mule executes a reconnection strategy if the error was caused by a connection failure.
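For example, a reconnection strategy can be configured on a connector's connection. The following is a sketch for the HTTP connector in Mule 4 (the configuration names are illustrative; here Mule retries the connection 3 times, every 2000 milliseconds):
<http:request-config name="httpRequestConfig">
   <http:request-connection host="localhost" port="8181">
      <reconnection>
         <reconnect frequency="2000" count="3"/>
      </reconnection>
   </http:request-connection>
</http:request-config>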
Mule has the following two Error Handlers for handling errors −
The first Mule error handler is the On-Error component, which defines the types of errors it can handle. As discussed earlier, we can configure On-Error components inside the scope-like Error Handler component. Each Mule flow contains only one error handler, but this error handler can contain as many On-Error scopes as needed. The steps for handling a Mule error inside the flow, with the help of the On-Error component, are as follows −
First, whenever a Mule flow raises an error, the normal flow execution stops.
Next, the process will be transferred to the Error Handler Component that already has On-Error components to match the error types and expressions.
At last, the Error Handler component routes the error to the first On-Error scope that matches the error.
Following are the two types of On-Error components supported by Mule −
On-Error Propagate component executes but propagates the error to the next level and breaks the owner’s execution. The transaction will be rolled back if it is handled by On Error Propagate component.
Like the On-Error Propagate component, the On-Error Continue component also executes; however, it uses the result of its own execution as the result of its owner, as if the owner had completed the execution successfully. The transaction will be committed if it is handled by the On-Error Continue component.
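A sketch of a flow-level Error Handler that combines both components is shown below (the flow content and configuration names are illustrative; element and error type names follow Mule 4 conventions):
<flow name="orderFlow">
   <http:listener config-ref="httpListenerConfig" path="/orders"/>
   <flow-ref name="processOrder"/>
   <error-handler>
      <on-error-continue type="HTTP:UNAUTHORIZED">
         <logger level="WARN" message="#[error.description]"/>
      </on-error-continue>
      <on-error-propagate type="ANY">
         <logger level="ERROR" message="#[error.errorType]"/>
      </on-error-propagate>
   </error-handler>
</flow>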
Try Scope is one of many new features available in Mule 4. It works similar to try block of JAVA in which we used to enclose the code having the possibility of being an exception, so that it can be handled without breaking the whole code.
We can wrap one or more Mule event processors in Try Scope and thereafter, try scope will catch and handle any exception thrown by these event processors. The main working of try scope revolves around its own error handling strategy which supports error handling on its inner component instead of whole flow. That is why we do not need to extract the flow into a separate flow.
Example
Following is an example of the use of try scope −
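A minimal configuration sketch is shown below (element names follow Mule 4 core XML; APP:DEMO_ERROR is an illustrative custom error type raised only to demonstrate the scope):
<flow name="tryScopeDemoFlow">
   <try>
      <raise-error type="APP:DEMO_ERROR" description="Simulated failure"/>
      <error-handler>
         <on-error-continue type="APP:DEMO_ERROR">
            <logger level="WARN" message="#[error.description]"/>
         </on-error-continue>
      </error-handler>
   </try>
   <logger level="INFO" message="Flow continues because the error was handled inside the try scope"/>
</flow>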
As we know, a transaction is a series of actions that should never be executed partially. All the operations within the scope of a transaction are executed in the same thread and if an error occurs, it should lead to a rollback or a commit. We can configure the try scope, in the following manner, so that it treats child operations as a transaction.
INDIFFERENT [Default] − If we choose this configuration on try block, then the child actions will not be treated as a transaction. In this case, error causes neither rollback nor commits.
ALWAYS_BEGIN − It indicates that a new transaction will be started every time the scope is executed.
BEGIN_OR_JOIN − It indicates that if the current processing of the flow has already started a transaction, join it. Otherwise, start a new one.
{
"code": null,
"e": 2381,
"s": 2068,
"text": "The new Mule error handling is one of the biggest and major changes done in Mule 4. The new error handing may seem complex, but it is better and more efficient. In this chapter, we are going to discuss about components of Mule error, Error types, categories of Mule error and components for handling Mule errors."
},
{
"code": null,
"e": 2463,
"s": 2381,
"text": "Mule error is the result of Mule exception failure has the following components −"
},
{
"code": null,
"e": 2584,
"s": 2463,
"text": "It is an important component of Mule error which will give description about the problem. Its expression is as follows −"
},
{
"code": null,
"e": 2606,
"s": 2584,
"text": "#[error.description]\n"
},
{
"code": null,
"e": 2755,
"s": 2606,
"text": "The Type component of Mule error is used to characterize the problem. It also allows routing within an error handler. Its expression is as follows −"
},
{
"code": null,
"e": 2775,
"s": 2755,
"text": "#[error.errorType]\n"
},
{
"code": null,
"e": 2901,
"s": 2775,
"text": "The Cause component of Mule error gives the underlying java throwable that causes the failure. Its expression is as follows −"
},
{
"code": null,
"e": 2917,
"s": 2901,
"text": "#[error.cause]\n"
},
{
"code": null,
"e": 3031,
"s": 2917,
"text": "The Message component of Mule error shows an optional message regarding the error. Its expression is as follows −"
},
{
"code": null,
"e": 3054,
"s": 3031,
"text": "#[error.errorMessage]\n"
},
{
"code": null,
"e": 3275,
"s": 3054,
"text": "The Child Errors component of Mule error gives an optional collection of inner errors. These inner errors are mainly used by elements like Scatter-Gather to provide aggregated route errors. Its expression is as follows −"
},
{
"code": null,
"e": 3297,
"s": 3275,
"text": "#[error.childErrors]\n"
},
{
"code": null,
"e": 3389,
"s": 3297,
"text": "In case of failure of HTTP request with a 401 status code, the Mule Errors are as follows −"
},
{
"code": null,
"e": 3620,
"s": 3389,
"text": "Description: HTTP GET on resource ‘http://localhost:8181/TestApp’ \nfailed: unauthorized (401)\nType: HTTP:UNAUTHORIZED\nCause: a ResponseValidatorTypedException instance\nError Message: { \"message\" : \"Could not authorize the user.\" }"
},
{
"code": null,
"e": 3635,
"s": 3620,
"text": "TRANSFORMATION"
},
{
"code": null,
"e": 3805,
"s": 3635,
"text": "This Error Type indicates an error occurred while transforming a value. The transformation is Mule Runtime internal transformation and not the DataWeave transformations."
},
{
"code": null,
"e": 3816,
"s": 3805,
"text": "EXPRESSION"
},
{
"code": null,
"e": 3900,
"s": 3816,
"text": "This kind of Error Type indicates an error occurred while evaluating an expression."
},
{
"code": null,
"e": 3911,
"s": 3900,
"text": "VALIDATION"
},
{
"code": null,
"e": 3974,
"s": 3911,
"text": "This kind of Error Type indicates a validation error occurred."
},
{
"code": null,
"e": 3992,
"s": 3974,
"text": "DUPLICATE_MESSAGE"
},
{
"code": null,
"e": 4070,
"s": 3992,
"text": "A kind of validation error which occurs when a message being processed twice."
},
{
"code": null,
"e": 4091,
"s": 4070,
"text": "REDELIVERY_EXHAUSTED"
},
{
"code": null,
"e": 4201,
"s": 4091,
"text": "This kind of Error Type occurs when maximum attempts to reprocess a message from a source has been exhausted."
},
{
"code": null,
"e": 4214,
"s": 4201,
"text": "CONNECTIVITY"
},
{
"code": null,
"e": 4283,
"s": 4214,
"text": "This Error Type indicates a problem while establishing a connection."
},
{
"code": null,
"e": 4291,
"s": 4283,
"text": "ROUTING"
},
{
"code": null,
"e": 4360,
"s": 4291,
"text": "This Error Type indicates an error occurred while routing a message."
},
{
"code": null,
"e": 4369,
"s": 4360,
"text": "SECURITY"
},
{
"code": null,
"e": 4465,
"s": 4369,
"text": "This Error Type indicates a security error occurred. For example, invalid credentials received."
},
{
"code": null,
"e": 4494,
"s": 4465,
"text": "STREAM_MAXIMUM_SIZE_EXCEEDED"
},
{
"code": null,
"e": 4571,
"s": 4494,
"text": "This Error Type occurs when the maximum size allowed for a stream exhausted."
},
{
"code": null,
"e": 4579,
"s": 4571,
"text": "TIMEOUT"
},
{
"code": null,
"e": 4632,
"s": 4579,
"text": "It indicates the timeout while processing a message."
},
{
"code": null,
"e": 4640,
"s": 4632,
"text": "UNKNOWN"
},
{
"code": null,
"e": 4696,
"s": 4640,
"text": "This Error Type indicates an unexpected error occurred."
},
{
"code": null,
"e": 4703,
"s": 4696,
"text": "SOURCE"
},
{
"code": null,
"e": 4771,
"s": 4703,
"text": "It represents the occurrence of an error in the source of the flow."
},
{
"code": null,
"e": 4787,
"s": 4771,
"text": "SOURCE_RESPONSE"
},
{
"code": null,
"e": 4894,
"s": 4787,
"text": "It represents the occurrence of an error in the source of the flow while processing a successful response."
},
{
"code": null,
"e": 4965,
"s": 4894,
"text": "In the above example, you can see the message component of mule error."
},
{
"code": null,
"e": 5038,
"s": 4965,
"text": "Let us understand the Error Types with the help of its characteristics −"
},
{
"code": null,
"e": 5271,
"s": 5038,
"text": "The first characteristics of Mule Error Types is that it consists of both, a namespace and an identifier. This allows us to distinguish the types according to their domain. In the above example, the Error Type is HTTP: UNAUTHORIZED."
},
{
"code": null,
"e": 5504,
"s": 5271,
"text": "The first characteristics of Mule Error Types is that it consists of both, a namespace and an identifier. This allows us to distinguish the types according to their domain. In the above example, the Error Type is HTTP: UNAUTHORIZED."
},
{
"code": null,
"e": 5815,
"s": 5504,
"text": "The second and important characteristic is that the Error Type may have a parent type. For example, the Error Type HTTP: UNAUTHORIZED has MULE:CLIENT_SECURITY as the parent which in turn also has a parent named MULE:SECURITY. This characteristic establishes the Error Type as specification of more global item."
},
{
"code": null,
"e": 6126,
"s": 5815,
"text": "The second and important characteristic is that the Error Type may have a parent type. For example, the Error Type HTTP: UNAUTHORIZED has MULE:CLIENT_SECURITY as the parent which in turn also has a parent named MULE:SECURITY. This characteristic establishes the Error Type as specification of more global item."
},
{
"code": null,
"e": 6189,
"s": 6126,
"text": "Following are the categories under which all the errors fall −"
},
{
"code": null,
"e": 6311,
"s": 6189,
"text": "The errors under this category are the errors that may occur in a Flow. They are not so severe and can be handled easily."
},
{
"code": null,
"e": 6447,
"s": 6311,
"text": "The errors under this category are the severe errors that cannot be handled. Following is the list of Error Types under this category −"
},
{
"code": null,
"e": 6456,
"s": 6447,
"text": "OVERLOAD"
},
{
"code": null,
"e": 6577,
"s": 6456,
"text": "This Error Type indicates an error occurred due to problem of overloading. In this case, the execution will be rejected."
},
{
"code": null,
"e": 6593,
"s": 6577,
"text": "FATAL_JVM_ERROR"
},
{
"code": null,
"e": 6689,
"s": 6593,
"text": "This kind of Error Type indicates the occurrence of a fatal error. For example, stack overflow."
},
{
"code": null,
"e": 7049,
"s": 6689,
"text": "The CUSTOM Error Types are the errors that are defined by us. They can be defined when mapping or when raising the errors. We must give a specific custom namespace to these Error Types for distinguishing them from the other existing Error Types within Mule application. For example, in Mule application using HTTP, we cannot use HTTP as the custom error type."
},
{
"code": null,
"e": 7163,
"s": 7049,
"text": "In broad sense, the errors in Mule can be divided into two categories namely, Messaging Errors and System Errors."
},
{
"code": null,
"e": 7394,
"s": 7163,
"text": "This category of Mule error is related to the Mule flow. Whenever a problem occurs within a Mule flow, Mule throws a messaging error. We can set up On Error component inside the error handler component to handle these Mule errors."
},
{
"code": null,
"e": 7611,
"s": 7394,
"text": "System error indicates an exception occurring at the system level. If there is no Mule event, the system error is handled by a system error handler. The following kind of exceptions handle by a system error handler −"
},
{
"code": null,
"e": 7665,
"s": 7611,
"text": "Exception that occurs during an application start-up."
},
{
"code": null,
"e": 7734,
"s": 7665,
"text": "Exception that occurs when a connection to an external system fails."
},
{
"code": null,
"e": 7957,
"s": 7734,
"text": "In case a system error occurs, Mule sends an error notification to the registered listeners. It also logs the error. On the other hand, Mule executes a reconnection strategy if the error was caused by a connection failure."
},
{
"code": null,
"e": 8021,
"s": 7957,
"text": "Mule has following two Error Handlers for handling the errors −"
},
{
"code": null,
"e": 8457,
"s": 8021,
"text": "The first Mule error handler is On-Error component, that defines the types of errors they can handle. As discussed earlier, we can configure On-Error components inside the scope-like Error Handler component. Each Mule flow contain only one error handler, but this error handler can contain as many On-Error scope as we needed. The steps for handling the Mule error inside the flow, with the help of On-Error component, are as follows −"
},
{
"code": null,
"e": 8535,
"s": 8457,
"text": "First, whenever a Mule flow raises an error, the normal flow execution stops."
},
{
"code": null,
"e": 8613,
"s": 8535,
"text": "First, whenever a Mule flow raises an error, the normal flow execution stops."
},
{
"code": null,
"e": 8761,
"s": 8613,
"text": "Next, the process will be transferred to the Error Handler Component that already have On Error component to match the error types and expressions."
},
{
"code": null,
"e": 8909,
"s": 8761,
"text": "Next, the process will be transferred to the Error Handler Component that already have On Error component to match the error types and expressions."
},
{
"code": null,
"e": 9015,
"s": 8909,
"text": "At last, the Error Handler component routes the error to the first On Error scope that matches the error."
},
{
"code": null,
"e": 9121,
"s": 9015,
"text": "At last, the Error Handler component routes the error to the first On Error scope that matches the error."
},
{
"code": null,
"e": 9192,
"s": 9121,
"text": "Following are the two types of On-Error components supported by Mule −"
},
{
"code": null,
"e": 9393,
"s": 9192,
"text": "On-Error Propagate component executes but propagates the error to the next level and breaks the owner’s execution. The transaction will be rolled back if it is handled by On Error Propagate component."
},
{
"code": null,
"e": 9732,
"s": 9393,
"text": "Like On-Error Propagate component, On-Error Continue component also executes the transaction. The only condition is, if the owner had completed the execution successfully then this component will use the result of the execution as the result of its owner. The transaction will be committed if it is handled by On-Error Continue component."
},
{
"code": null,
"e": 9971,
"s": 9732,
"text": "Try Scope is one of many new features available in Mule 4. It works similar to try block of JAVA in which we used to enclose the code having the possibility of being an exception, so that it can be handled without breaking the whole code."
},
{
"code": null,
"e": 10349,
"s": 9971,
"text": "We can wrap one or more Mule event processors in Try Scope and thereafter, try scope will catch and handle any exception thrown by these event processors. The main working of try scope revolves around its own error handling strategy which supports error handling on its inner component instead of whole flow. That is why we do not need to extract the flow into a separate flow."
},
{
"code": null,
"e": 10357,
"s": 10349,
"text": "Example"
},
{
"code": null,
"e": 10407,
"s": 10357,
"text": "Following is an example of the use of try scope −"
},
{
"code": null,
"e": 10758,
"s": 10407,
"text": "As we know, a transaction is a series of actions that should never be executed partially. All the operations within the scope of a transaction are executed in the same thread and if an error occurs, it should lead to a rollback or a commit. We can configure the try scope, in the following manner, so that it treats child operations as a transaction."
},
{
"code": null,
"e": 10946,
"s": 10758,
"text": "INDIFFERENT [Default] − If we choose this configuration on try block, then the child actions will not be treated as a transaction. In this case, error causes neither rollback nor commits."
},
{
"code": null,
"e": 11134,
"s": 10946,
"text": "INDIFFERENT [Default] − If we choose this configuration on try block, then the child actions will not be treated as a transaction. In this case, error causes neither rollback nor commits."
},
{
"code": null,
"e": 11235,
"s": 11134,
"text": "ALWAYS_BEGIN − It indicates that a new transaction will be started every time the scope is executed."
},
{
"code": null,
"e": 11336,
"s": 11235,
"text": "ALWAYS_BEGIN − It indicates that a new transaction will be started every time the scope is executed."
},
{
"code": null,
"e": 11480,
"s": 11336,
"text": "BEGIN_OR_JOIN − It indicates that if the current processing of the flow has already started a transaction, join it. Otherwise, start a new one."
},
{
"code": null,
"e": 11624,
"s": 11480,
"text": "BEGIN_OR_JOIN − It indicates that if the current processing of the flow has already started a transaction, join it. Otherwise, start a new one."
},
{
"code": null,
"e": 11658,
"s": 11624,
"text": "\n 54 Lectures \n 19 hours \n"
},
{
"code": null,
"e": 11684,
"s": 11658,
"text": " Arulchristhuraj Alphonse"
},
{
"code": null,
"e": 11717,
"s": 11684,
"text": "\n 47 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 11730,
"s": 11717,
"text": " Nelson Dias"
},
{
"code": null,
"e": 11737,
"s": 11730,
"text": " Print"
},
{
"code": null,
"e": 11748,
"s": 11737,
"text": " Add Notes"
}
] |
Make all the elements of array even with given operations - GeeksforGeeks

31 May, 2021
Given an array arr[] of positive integers, find the minimum number of operations required to make all the array elements even where:
If there is an odd number, then increment that element and the next adjacent element by 1.
Each increment costs one operation.
Note: If there is any number in arr[] which is odd after all operations, then, print -1.
Examples:
Input: arr[] = {2, 3, 4, 5, 6} Output: 4 Explanation: Add 1 to 3 (at 1st index) and add 1 to its adjacent element 4 (at 2nd index). Now the array becomes {2, 4, 5, 5, 6}. Add 1 to 5 (at 2nd index) and add 1 to its adjacent element 5 (at 3rd index). Now the array becomes {2, 4, 6, 6, 6}. The resultant array has all even numbers. The total number of operations for 4 increments is 4.
Input: arr[] = {5, 6} Output: -1 Explanation: Add 1 to 5 (at 0th index); then we have to add 1 to its adjacent element 6 (at 1st index). Now the array becomes {6, 7}, and we have 1 odd number left after all possible increments. Therefore, we can't make all array elements even.
Approach: This problem can be solved using Greedy Approach. The following are the steps:
Traverse the given array arr[].
If an odd element occurs, then increment that element by 1 to make it even and the next adjacent element by 1.
Repeat the above step for all the odd elements for the given array arr[].
If all the elements in arr[] are even, then print the number of operations.
Else print -1.
Below is the implementation of the above approach:
C++

// C++ program to make all array
// element even
#include "bits/stdc++.h"
using namespace std;

// Function to count the total
// number of operations needed to make
// all array element even
int countOperations(int arr[], int n)
{
    int count = 0;

    // Traverse the given array
    for (int i = 0; i < n - 1; i++) {

        // If an odd element occurs
        // then increment that element
        // and next adjacent element
        // by 1
        if (arr[i] & 1) {
            arr[i]++;
            arr[i + 1]++;
            count += 2;
        }
    }

    // Traverse the array if any odd
    // element occurs then return -1
    for (int i = 0; i < n; i++) {
        if (arr[i] & 1)
            return -1;
    }

    // Returns the count of operations
    return count;
}

int main()
{
    int arr[] = { 2, 3, 4, 5, 6 };
    int n = sizeof(arr) / sizeof(int);
    cout << countOperations(arr, n);
    return 0;
}

Java

// Java program to make all array
// element even
class GFG{

// Function to count the total
// number of operations needed to make
// all array element even
static int countOperations(int arr[], int n)
{
    int count = 0;

    // Traverse the given array
    for (int i = 0; i < n - 1; i++) {

        // If an odd element occurs
        // then increment that element
        // and next adjacent element
        // by 1
        if (arr[i] % 2 == 1) {
            arr[i]++;
            arr[i + 1]++;
            count += 2;
        }
    }

    // Traverse the array if any odd
    // element occurs then return -1
    for (int i = 0; i < n; i++) {
        if (arr[i] % 2 == 1)
            return -1;
    }

    // Returns the count of operations
    return count;
}

// Driver code
public static void main(String[] args)
{
    int arr[] = { 2, 3, 4, 5, 6 };
    int n = arr.length;
    System.out.print(countOperations(arr, n));
}
}

// This code is contributed by 29AjayKumar

Python3

# Python3 program to make all array
# element even

# Function to count the total
# number of operations needed to make
# all array element even
def countOperations(arr, n):

    count = 0

    # Traverse the given array
    for i in range(n - 1):

        # If an odd element occurs
        # then increment that element
        # and next adjacent element
        # by 1
        if (arr[i] & 1):
            arr[i] += 1
            arr[i + 1] += 1
            count += 2

    # Traverse the array if any odd
    # element occurs then return -1
    for i in range(n):
        if (arr[i] & 1):
            return -1

    # Returns the count of operations
    return count

if __name__ == "__main__":

    arr = [ 2, 3, 4, 5, 6 ]
    n = len(arr)
    print(countOperations(arr, n))

# This code is contributed by AnkitRai01

C#

// C# program to make all array
// element even
using System;

class GFG{

// Function to count the total
// number of operations needed to make
// all array element even
static int countOperations(int []arr, int n)
{
    int count = 0;

    // Traverse the given array
    for (int i = 0; i < n - 1; i++) {

        // If an odd element occurs
        // then increment that element
        // and next adjacent element
        // by 1
        if (arr[i] % 2 == 1) {
            arr[i]++;
            arr[i + 1]++;
            count += 2;
        }
    }

    // Traverse the array if any odd
    // element occurs then return -1
    for (int i = 0; i < n; i++) {
        if (arr[i] % 2 == 1)
            return -1;
    }

    // Returns the count of operations
    return count;
}

// Driver code
public static void Main()
{
    int []arr = { 2, 3, 4, 5, 6 };
    int n = arr.Length;
    Console.Write(countOperations(arr, n));
}
}

// This code is contributed by AnkitRai01

Javascript

<script>
// Javascript program to make all array
// element even

// Function to count the total
// number of operations needed to make
// all array element even
function countOperations(arr, n)
{
    let count = 0;

    // Traverse the given array
    for (let i = 0; i < n - 1; i++) {

        // If an odd element occurs
        // then increment that element
        // and next adjacent element
        // by 1
        if (arr[i] & 1) {
            arr[i]++;
            arr[i + 1]++;
            count += 2;
        }
    }

    // Traverse the array if any odd
    // element occurs then return -1
    for (let i = 0; i < n; i++) {
        if (arr[i] & 1)
            return -1;
    }

    // Returns the count of operations
    return count;
}

let arr = [ 2, 3, 4, 5, 6 ];
let n = arr.length;
document.write(countOperations(arr, n));

// This code is contributed by _saurabh_jaiswal
</script>

Output:

4
Time Complexity: O(N) where N is the number of elements in the array.
Hands-on Guide to Plotting a Decision Surface for ML in Python
by Salma El Shahawy | Towards Data Science

Lately, I have been struggling to visualize the model generated by a classifier. I relied only on the classification report and the confusion matrix to weigh the model performance.
However, visualizing the results of a classification has its charm and makes them easier to make sense of. So, I built a decision surface, and when I succeeded, I decided to write about it as a learning process and for anyone who might be stuck on the same issue.
In this tutorial, I will start with the built-in dataset package within the Sklearn library to focus on the implementation steps. After that, I will use a pre-processed data (without missing data or outliers) to plot the decision surface after applying the standard scaler.
Decision Surface
Importing important libraries
Dataset generation
Generating decision surface
Applying for real data
Classification in machine learning means training a model on your data so that it can assign labels to input examples.
Each input feature defines an axis of the feature space. A plane is characterized by a minimum of two input features, with dots representing input coordinates in the input space. If there were three input variables, the feature space would be a three-dimensional volume.
The ultimate goal of classification is to separate the feature space so that labels are assigned to points in the feature space as correctly as possible.
This method is called a decision surface or decision boundary, and it works as a demonstrative tool for explaining a model on a classification predictive modeling task. We can create a decision surface for each pair of input features if you have more than two input features.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
I will use the make_blobs() function within the datasets class from the Sklearn library to generate a custom dataset. Doing so lets us focus on the implementation rather than on cleaning the data. However, the steps are the same and follow a typical pattern. Let's start by defining the dataset variables with 1000 samples, only two features and a standard deviation of 3 for simplicity's sake.
X, y = datasets.make_blobs(n_samples = 1000, centers = 2,
                           n_features = 2, random_state = 1,
                           cluster_std = 3)
Once the dataset is generated, we can plot a scatter plot to see the variability between the variables.
# create scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = np.where(y == class_value)
    # create scatter of these samples
    plt.scatter(X[row_ix, 0], X[row_ix, 1])
# show the plot
plt.show()
Here we looped over the dataset and plotted the points of X and y colored by class label. In the next step, we need to build a predictive classification model to predict the class of unseen points. Logistic regression could be used in this case since we have only two categories.
regressor = LogisticRegression()
# fit the regressor into X and y
regressor.fit(X, y)
# apply the predict method
y_pred = regressor.predict(X)
All of y_pred can be evaluated using the accuracy_score class from the sklearn library.
accuracy = accuracy_score(y, y_pred)
print('Accuracy: %.3f' % accuracy)

## Accuracy: 0.972
matplotlib provides a handy function called contour(), which can insert the colors between points. However, as the documentation suggests, we need to define a grid of points across X and y in the feature space. The starting point is to find the maximum and minimum value of each feature, then extend the range by one to make sure that the whole space is covered.
min1, max1 = X[:, 0].min() - 1, X[:, 0].max() + 1   # 1st feature
min2, max2 = X[:, 1].min() - 1, X[:, 1].max() + 1   # 2nd feature
Then we can define the scale of the coordinates using the arange() function from the numpy library with a 0.1 resolution to get the scale range.
x1_scale = np.arange(min1, max1, 0.1)
x2_scale = np.arange(min2, max2, 0.1)
The next step would be converting x1_scale and x2_scale into a grid. The function meshgrid() within the numpy library is what we need.
x_grid, y_grid = np.meshgrid(x1_scale, x2_scale)
The generated x_grid is a 2-D array. To be able to use it, we need to reduce it to a one-dimensional array using the flatten() method of the numpy array.
# flatten each grid to a vector
x_g, y_g = x_grid.flatten(), y_grid.flatten()
x_g, y_g = x_g.reshape((len(x_g), 1)), y_g.reshape((len(y_g), 1))
Finally, stacking the vectors side-by-side as columns in an input dataset, like the original dataset, but at a much higher resolution.
grid = np.hstack((x_g, y_g))
Now, we can fit into the model to predict values.
# make predictions for the grid
y_pred_2 = regressor.predict(grid)
# predict the probability
p_pred = regressor.predict_proba(grid)
# keep just the probabilities for class 0
p_pred = p_pred[:, 0]
# reshape the results to match the grid
pp_grid = p_pred.reshape(x_grid.shape)
Now, a grid of values and the predicted class label across the feature space have been generated.
Subsequently, we will plot those grids as a contour plot using contourf(). The contourf() function needs separate grids per axis. To achieve that, we can utilize x_grid and y_grid together with the predictions reshaped to the same shape (pp_grid from the previous step).
# plot the grid of x, y and z values as a surface
surface = plt.contourf(x_grid, y_grid, pp_grid, cmap='Pastel1')
plt.colorbar(surface)
# create scatter plot for samples from each class
for class_value in range(2):
    # get row indexes for samples with this class
    row_ix = np.where(y == class_value)
    # create scatter of these samples
    plt.scatter(X[row_ix, 0], X[row_ix, 1], cmap='Pastel1')
# show the plot
plt.show()
Now it is time to apply the previous steps to real data to connect everything. As I mentioned earlier, this dataset is already cleaned with no missing points. The dataset represents car purchase history for a sample of people according to their age and salary per year.
dataset = pd.read_csv('../input/logistic-reg-visual/Social_Network_Ads.csv')
dataset.head()
The dataset has two features, Age and EstimatedSalary, and one dependent variable, purchased, as a binary column. Value 0 means that a person with that age and salary didn't buy the car, while 1 means that the person did purchase the car. The next step is to separate the dependent variable from the features as X and y.
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
# splitting the dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size = 0.25, random_state = 0)
We need this step because Age and salary are not on the same scale.
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
classifier = LogisticRegression(random_state = 0)
# fit the classifier into train data
classifier.fit(X_train, y_train)
# predicting the value of y
y_pred = classifier.predict(X_test)
#1. reverse the standard scaler on the X_train
X_set, y_set = sc.inverse_transform(X_train), y_train
#2. Generate decision surface boundaries
min1, max1 = X_set[:, 0].min() - 10, X_set[:, 0].max() + 10       # for Age
min2, max2 = X_set[:, 1].min() - 1000, X_set[:, 1].max() + 1000   # for salary
#3. Set coordinates scale accuracy
x_scale, y_scale = np.arange(min1, max1, 0.25), np.arange(min2, max2, 0.25)
#4. Convert into vector
X1, X2 = np.meshgrid(x_scale, y_scale)
#5. Flatten X1 and X2 and return the output as a numpy array
X_flatten = np.array([X1.ravel(), X2.ravel()])
#6. Transform the results into their original form before scaling
X_transformed = sc.transform(X_flatten.T)
#7. Generate the prediction and reshape it to have the same shape as X1
Z_pred = classifier.predict(X_transformed).reshape(X1.shape)
#8. set the plot size
plt.figure(figsize=(20,10))
#9. plot the contour function
plt.contourf(X1, X2, Z_pred, alpha = 0.75,
             cmap = ListedColormap(('#386cb0', '#f0027f')))
#10. setting the axes limit
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
#11. plot the points ([age, salary] vs. predicted classification based on the training set)
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green'))(i), label = j)
#12. plot labels and adjustments
plt.title('Logistic Regression (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
Plotting the test set works exactly the same way as the previous code; just use the test set instead of the training set.
Finally, I hope this boilerplate could help in visualizing the classification model results. I recommend applying the same steps using another classification model, for example, SVM with more than two features. Thanks for reading, I am looking forward to any constructive comments.
Sklearn.datasets API
Utilizing pandas to transform data
matplotlib.contour() API
numpy.meshgrid() API
Plot the decision surface of a decision tree on the iris dataset — sklearn example
Full working Kaggle notebook
GitHub repo
Binary to BCD conversion in 8051

In this problem, we will see how to convert an 8-bit binary number to its BCD equivalent. The binary number is stored at location 20H. After converting, the results will be stored at 30H and 31H: 30H will hold the MS portion, and 31H will hold the LS portion.
So let us assume the data is D5H. The program converts the binary value of D5H to BCD value 213D.
MOV R1,#20H   ; Take the address 20H into R1
MOV A,@R1     ; Take the data into Acc
MOV B,#0AH    ; Load B with 0AH = 10D
DIV AB        ; Divide A by B
MOV R5,B      ; Store the remainder
MOV B,#0AH    ; Load B with 0AH = 10D
DIV AB        ; Divide A by B
MOV R1,#30H   ; Load destination address
MOV @R1,A     ; Store the MS portion
MOV A,B       ; Load B content to A
SWAP A        ; Swap the nibbles
ADD A,R5      ; Add stored remainder with A
INC R1        ; Increase the address
MOV @R1,A
HALT: SJMP HALT
Here we are just taking the binary number into the accumulator, then dividing the content of the accumulator by 0AH (10D). The remainder part is stored in a separate register; this will be added later. Then we again divide the quotient by 0AH to generate the MS bits. After storing the MS bits, get the number from register B into the accumulator. Swap the nibbles of the accumulator to generate four zeros at the LS bits. Then add the previously stored remainder to generate the result.
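For readers more comfortable with a high-level language, the following is a minimal Java sketch of the same divide-by-ten algorithm. The class and method names are illustrative and not part of the 8051 program:

import java.util.Arrays;

public class BinaryToBcd {
   // Convert an 8-bit binary value (0-255) into two packed BCD bytes:
   // ms holds the hundreds digit, ls holds tens and units as 0xTU.
   public static int[] toBcd(int value) {
      int remainder = value % 10;        // units digit (kept in R5 on the 8051)
      int quotient = value / 10;         // first DIV AB
      int ms = quotient / 10;            // hundreds digit, goes to 30H
      int tens = quotient % 10;          // second remainder (register B)
      int ls = (tens << 4) | remainder;  // SWAP A then ADD A,R5, goes to 31H
      return new int[] { ms, ls };
   }

   public static void main(String[] args) {
      int[] bcd = toBcd(0xD5); // 213 decimal
      System.out.printf("MS = %02XH, LS = %02XH%n", bcd[0], bcd[1]); // MS = 02H, LS = 13H
   }
}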
Can we use “IF NOT IN” in a MySQL procedure?

Let us first see the syntax of IF NOT IN in MySQL −
if (yourVariableName NOT IN (yourValue1, yourValue2, ........N)) then
   statement1
else
   statement2
end if
Let us implement the above syntax to use IF NOT IN −
mysql> DELIMITER //
mysql> CREATE PROCEDURE IF_NOT_INDemo(IN value int)
-> BEGIN
-> if(value NOT IN (10,20,30) ) then
-> select "Value Not Found";
-> else
-> select "Value Found";
-> end if;
-> END
-> //
Query OK, 0 rows affected (0.25 sec)
mysql> DELIMITER ;
Now call the stored procedure using CALL command.
Case 1 − When a value is found −
mysql> call IF_NOT_INDemo(10);
+-------------+
| Value Found |
+-------------+
| Value Found |
+-------------+
1 row in set (0.00 sec)
Query OK, 0 rows affected (0.01 sec)
Case 2 − When a value isn’t found −
mysql> call IF_NOT_INDemo(100);
+-----------------+
| Value Not Found |
+-----------------+
| Value Not Found |
+-----------------+
1 row in set (0.05 sec)
Query OK, 0 rows affected (0.07 sec)
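To call this procedure from application code rather than the mysql client, a minimal JDBC sketch would look like the following. The connection URL, user and password are placeholders that you would replace with your own:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class CallIfNotInDemo {
   public static void main(String[] args) throws Exception {
      // Placeholder URL and credentials - replace with your own
      try (Connection conn = DriverManager.getConnection(
              "jdbc:mysql://localhost:3306/test", "user", "password");
           CallableStatement cs = conn.prepareCall("{call IF_NOT_INDemo(?)}")) {
         cs.setInt(1, 100); // the IN parameter of the procedure
         try (ResultSet rs = cs.executeQuery()) {
            while (rs.next()) {
               System.out.println(rs.getString(1)); // prints "Value Not Found"
            }
         }
      }
   }
}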
How to set the maximum width of an element with JavaScript?

Use the maxWidth property in JavaScript to set the maximum width.
You can try to run the following code to set the maximum width of an element with JavaScript −
Live Demo
<!DOCTYPE html>
<html>
<body>
<p>Click below to set Maximum width.</p>
<button type="button" onclick="display()">Max Width</button>
<div id="box">
<p>This is a div. This is a div. This is a div.</p>
<p>This is a div. This is a div. This is a div.</p>
<p>This is a div. This is a div. This is a div.</p>
<p>This is a div. This is a div. This is a div.</p>
<p>This is a div. This is a div. This is a div.</p>
</div>
<br>
<script>
function display() {
document.getElementById("box").style.maxWidth = "80px";
}
</script>
</body>
</html>
java.lang.reflect - Method Class

The java.lang.reflect.Method class provides information about, and access to, a single method on a class or interface. The reflected method may be a class method or an instance method (including an abstract method). A Method permits widening conversions to occur when matching the actual parameters to invoke with the underlying method's formal parameters, but it throws an IllegalArgumentException if a narrowing conversion would occur.
Following is the declaration for java.lang.reflect.Method class −
public final class Method
extends AccessibleObject
implements GenericDeclaration, Member
Following are the important methods of the Method class −

equals(Object obj) − Compares this Method against the specified object.

getAnnotation(Class<T> annotationClass) − Returns this element's annotation for the specified type if such an annotation is present, else null.

getDeclaredAnnotations() − Returns all annotations that are directly present on this element.

getDeclaringClass() − Returns the Class object representing the class that declares the method represented by this Method object.

getDefaultValue() − Returns the default value for the annotation member represented by this Method instance.

getExceptionTypes() − Returns an array of Class objects that represent the types of exceptions declared to be thrown by the underlying method represented by this Method object.

getGenericExceptionTypes() − Returns an array of Type objects that represent the exceptions declared to be thrown by this Method object.

getGenericParameterTypes() − Returns an array of Type objects that represent the formal parameter types, in declaration order, of the method represented by this Method object.

getGenericReturnType() − Returns a Type object that represents the formal return type of the method represented by this Method object.

getModifiers() − Returns the Java language modifiers for the method represented by this Method object, as an integer.

getName() − Returns the name of this method, as a string.

getParameterAnnotations() − Returns an array of arrays that represent the annotations on the formal parameters, in declaration order, of the method represented by this Method object.

getParameterTypes() − Returns an array of Class objects that represent the formal parameter types, in declaration order, of the method represented by this Method object.

getReturnType() − Returns a Class object that represents the formal return type of the method represented by this Method object.

hashCode() − Returns a hashcode for this Method.

invoke(Object obj, Object... args) − Invokes the underlying method represented by this Method object, on the specified object with the specified parameters.

isBridge() − Returns true if this method is a bridge method; returns false otherwise.

isSynthetic() − Returns true if this method is a synthetic method; returns false otherwise.

isVarArgs() − Returns true if this method was declared to take a variable number of arguments; returns false otherwise.

toGenericString() − Returns a string describing this Method, including type parameters.

toString() − Returns a string describing this Method.
This class inherits methods from the following classes −
java.lang.reflect.AccessibleObject
java.lang.Object
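Since the reference above is abstract, here is a minimal, self-contained sketch of how a Method object is typically obtained and used. The choice of Integer.parseInt is purely illustrative:

import java.lang.reflect.Method;

public class MethodDemo {
   public static void main(String[] args) throws Exception {
      // Look up the public static method parseInt(String) on Integer
      Method m = Integer.class.getMethod("parseInt", String.class);

      System.out.println("Name: " + m.getName());               // parseInt
      System.out.println("Return type: " + m.getReturnType());  // int
      System.out.println("Parameter count: " + m.getParameterTypes().length); // 1

      // Invoke a static method: the target object argument is null
      Object result = m.invoke(null, "42");
      System.out.println("Result: " + result); // 42
   }
}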
Remove the first entry of the TreeMap in Java

To remove the first entry of the TreeMap, use the pollFirstEntry() method.
Let us first create a TreeMap and add elements −
TreeMap<Integer,String> m = new TreeMap<Integer,String>();
m.put(1,"India");
m.put(2,"US");
m.put(3,"Australia");
m.put(4,"Netherlands");
m.put(5,"Canada");
Remove the first entry now −
m.pollFirstEntry()
The following is an example to remove the first entry of the TreeMap.
Live Demo
import java.util.*;
public class Demo {
public static void main(String args[]) {
TreeMap<Integer,String> m = new TreeMap<Integer,String>();
m.put(1,"India");
m.put(2,"US");
m.put(3,"Australia");
m.put(4,"Netherlands");
m.put(5,"Canada");
System.out.println("TreeMap Elements = "+m);
System.out.println("Removing First Entry : "+m.pollFirstEntry());
System.out.println("Updated TreeMap Elements = "+m);
}
}
TreeMap Elements = {1=India, 2=US, 3=Australia, 4=Netherlands, 5=Canada}
Removing First Entry : 1=India
Updated TreeMap Elements = {2=US, 3=Australia, 4=Netherlands, 5=Canada}
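Note that pollFirstEntry() also returns the removed mapping as a Map.Entry (or null if the map is empty), so the removed key and value can be inspected. A small sketch, assuming the same map m and imports as above:

Map.Entry<Integer,String> first = m.pollFirstEntry();
if (first != null) {
   System.out.println("Removed key = " + first.getKey());     // 1
   System.out.println("Removed value = " + first.getValue()); // India
}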
Sentiment Analysis of Tweets: How to analyse the sentiment of tweets...
by Alan Jones | Towards Data Science

Sentiment Analysis, or Opinion Mining, is often used by marketing departments to monitor customer satisfaction with a service, product or brand when a large volume of feedback is obtained through social media.
Gone are the days of reading individual letters sent by post. Today’s customers produce vast numbers of comments on Twitter or other social media.
Such a large amount of data cannot be reasonably analysed individually, so what is produced electronically has to be analysed electronically.
There are two fundamental Sentiment Analysis solutions: first, there are rule based systems that use a lexicon of words and rules to classify a particular piece of text and, second, there are systems that use machine learning techniques that analyse a set of texts that are already labelled with a particular classification (typically, positive or negative) and predict a classification of a new text based upon this.
Machine learning techniques are used by the well-known Python library NLTK (Natural Language Toolkit), and another NLP library, Textblob, provides both types. But, for our purposes, we are going to use a rule-based system that is particularly aimed at social media texts and can classify not only text but also embedded emoticons and shorthand such as OMG.
The Python library that we will use is called VADER and, while it is now incorporated into NLTK, for simplicity we will use the standalone version.
To quote the README file from their Github account: “VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media.” And since our aim is to analyse Tweets, this seems like a good choice.
So the first thing to do is to install it, for example:
pip3 install vaderSentiment
or
conda install vaderSentiment
VADER is very easy to use — here is how to create an analyzer:
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
The first line imports the sentiment analyser and the second one creates an analyser object that we can use.
Next we define three strings. They portray attitudes that might have been expressed during Britain’s Brexit debate. The first is likely to have come from someone who is against Brexit, the second is an objective statement of fact, and the third could have come from someone who is a Brexit enthusiast.
s1 = "Britain’s trade will be much worse if it doesn’t have a good trade deal."
s2 = "Britain will have to abide by WTO rules if it doesn’t have a trade deal."
s3 = "Britain will be very successful whether or not it has a trade deal."
Now we are going to run the sentiment analyzer on the three sentences by passing each string to the polarity_scores method of the analyser
vs = analyzer.polarity_scores(s1)
print("{}... {}".format(s1[:30], str(vs)))
vs = analyzer.polarity_scores(s2)
print("{}... {}".format(s2[:30], str(vs)))
vs = analyzer.polarity_scores(s3)
print("{}... {}".format(s3[:30], str(vs)))
Printing out the result, we get this:
Britain’s trade will be much w... {'neg': 0.315, 'neu': 0.685, 'pos': 0.0, 'compound': -0.6711}
Britain will have to abide by ... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
Britain will be very successfu... {'neg': 0.0, 'neu': 0.746, 'pos': 0.254, 'compound': 0.624}
The first part of the string is displayed followed by the result of the analysis. This result is a dictionary with four elements: neg, neu, pos, and compound.
These represent the negative, neutral and positive sentiment as measured by the analyser plus a combination of the scores to produce an overall sentiment value. The first three parts are in the range 0 to 1, whereas the compound result is between -1 and +1: a negative value represents a negative sentiment value and positive value means a positive sentiment. A value around zero obviously means that the sentiment expressed is fairly neutral.
Looking at the compound results for our three strings we can see that they are what we might expect, the first is negative, the second neutral and the last one is positive. Well done, VADER!
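If you want a discrete label rather than a raw score, the VADER documentation suggests thresholds of roughly ±0.05 on the compound value. Here is a minimal helper along those lines (the exact threshold is a tunable assumption):

def classify(compound, threshold=0.05):
    # Map VADER's compound score to a label using the
    # +/-0.05 convention suggested in the VADER docs
    if compound >= threshold:
        return "positive"
    if compound <= -threshold:
        return "negative"
    return "neutral"

print(classify(0.624))    # positive
print(classify(-0.6711))  # negative
print(classify(0.0))      # neutral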
Now we need to get some tweets. What you want to look for obviously depends on who you are but the solution below allows you to specify your own search term and look at the tweets that mention that. For this example we will search for “vaccine” but you might want to search for a product name or brand.
I’m going to use the Twitter library but there are other libraries that will do a similar job. You’ll need to sign up for a Twitter developer account to use this and when you do, you will be given the various codes that will need to be inserted below.
import twitter

CONSUMER_KEY = '####################'
CONSUMER_SECRET = '###############################################'
OAUTH_TOKEN = '###############################################'
OAUTH_TOKEN_SECRET = '###########################################'

auth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,
                           CONSUMER_KEY, CONSUMER_SECRET)
twitter_api = twitter.Twitter(auth=auth)
Now we are ready to download some tweets.
count = 20
query = "vaccine"
tweets = twitter_api.search.tweets(q=query, count=count, lang='en',
                                   tweet_mode="extended")
As you can see, I’ve asked for 20 tweets that contain the search term “vaccine”. The returned tweets are in JSON format, with the text of each tweet in the 'full_text' field of the elements of tweets['statuses']. In the code below I iterate through the tweets, analyse the text from each one and construct a list of dictionaries containing the text of the tweet and the overall sentiment value (the compound value returned from the analyzer).
tweetsWithSent = []
for t in tweets['statuses']:
    text = (t['full_text'])
    ps = analyzer.polarity_scores(text)
    tweetsWithSent.append({'text': text, 'compound': ps['compound']})
Next I convert the dictionary to a Pandas dataframe and plot the result as a bar graph
import pandas as pd

tweetdf = pd.DataFrame(tweetsWithSent)
tweetdf.plot.bar(figsize=(15,5), width=1)
You can see that sentiment is fairly evenly distributed — where bars do not appear the value is zero, meaning neutral sentiment.
You will want to use your own search term in order to judge the sentiment of whatever interest you but to give you an idea of the results that I got, here is a screenshot:
(Note that I have anonymised and shortened the text of the tweets above, normally they would contain Twitter handles along with the text.)
If these tweets were comments on your product or service you might be happy to read the positive ones but you would be better spending your time looking at those with a negative sentiment to find out what the problem is.
It would be simple enough to filter out the tweets that are particularly negative for special attention.
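For example, with the dataframe built above, a one-line pandas filter does the job; the -0.3 cut-off here is an arbitrary assumption you would tune for your own data:

# Keep only the tweets whose compound score is strongly negative
negative_tweets = tweetdf[tweetdf['compound'] < -0.3]
print(negative_tweets['text'])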
Of course, it is not impossible that all of your feedback will be positive — but in the real world that is unlikely. Sentiment Analysis allows you to get an overview of how your customers feel and can allow you to spot problems before they get out of hand.
And that is about it, so I’ll just sign off with the following:
ps = analyzer.polarity_scores("Have a safe 2021")
print(ps)

{'neg': 0.0, 'neu': 0.508, 'pos': 0.492, 'compound': 0.4404}
See a demo of Sentiment Analysis here.
To learn more about the academic background to VADER, you can read this paper:
Hutto, C.J. & Gilbert, E.E. (2014). VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. Eighth International Conference on Weblogs and Social Media (ICWSM-14). Ann Arbor, MI, June 2014. | [
{
"code": null,
"e": 382,
"s": 172,
"text": "Sentiment Analysis, or Opinion Mining, is often used by marketing departments to monitor customer satisfaction with a service, product or brand when a large volume of feedback is obtained through social media."
},
{
"code": null,
"e": 529,
"s": 382,
"text": "Gone are the days of reading individual letters sent by post. Today’s customers produce vast numbers of comments on Twitter or other social media."
},
{
"code": null,
"e": 671,
"s": 529,
"text": "Such a large amount of data cannot be reasonably analysed individually, so what is produced electronically has to be analysed electronically."
},
{
"code": null,
"e": 1089,
"s": 671,
"text": "There are two fundamental Sentiment Analysis solutions: first, there are rule based systems that use a lexicon of words and rules to classify a particular piece of text and, second, there are systems that use machine learning techniques that analyse a set of texts that are already labelled with a particular classification (typically, positive or negative) and predict a classification of a new text based upon this."
},
{
"code": null,
"e": 1446,
"s": 1089,
"text": "Machine learning techniques are used by the well-known Python library NLTK (Natural Language Toolkit) and, another NLP library, Textblob, provides both types. But, for our purposes, we are going to use a rule based system that is particularly aimed at social media texts and can not only classify text but also embedded emoticons and shorthand such as OMG."
},
{
"code": null,
"e": 1594,
"s": 1446,
"text": "The Python library that we will use is called VADER and, while it is now incorporated into NLTK, for simplicity we will use the standalone version."
},
{
"code": null,
"e": 1897,
"s": 1594,
"text": "To quote the README file from their Github account: “VADER (Valence Aware Dictionary and sEntiment Reasoner) is a lexicon and rule-based sentiment analysis tool that is specifically attuned to sentiments expressed in social media.” And since our aim is to analyse Tweets, this seems like a good choice."
},
{
"code": null,
"e": 1953,
"s": 1897,
"text": "So the first thing to do is to install it, for example:"
},
{
"code": null,
"e": 1981,
"s": 1953,
"text": "pip3 install vaderSentiment"
},
{
"code": null,
"e": 1984,
"s": 1981,
"text": "or"
},
{
"code": null,
"e": 2013,
"s": 1984,
"text": "conda install vaderSentiment"
},
{
"code": null,
"e": 2076,
"s": 2013,
"text": "VADER is very easy to use — here is how to create an analyzer:"
},
{
"code": null,
"e": 2184,
"s": 2076,
"text": "from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzeranalyzer = SentimentIntensityAnalyzer()"
},
{
"code": null,
"e": 2293,
"s": 2184,
"text": "The first line imports the sentiment analyser and the second one creates an analyser object that we can use."
},
{
"code": null,
"e": 2595,
"s": 2293,
"text": "Next we define three strings. They portray attitudes that might have been expressed during Britain’s Brexit debate. The first is likely to have come from someone who is against Brexit, the second is an objective statement of fact, and the third could have come from someone who is a Brexit enthusiast."
},
{
"code": null,
"e": 2828,
"s": 2595,
"text": "s1 = \"Britain’s trade will be much worse if it doesn’t have a good trade deal.\"s2 = \"Britain will have to abide by WTO rules if it doesn’t have a trade deal.\"s3 = \"Britain will be very successful whether or not it has a trade deal.\""
},
{
"code": null,
"e": 2967,
"s": 2828,
"text": "Now we are going to run the sentiment analyzer on the three sentences by passing each string to the polarity_scores method of the analyser"
},
{
"code": null,
"e": 3193,
"s": 2967,
"text": "vs = analyzer.polarity_scores(s1)print(\"{}... {}\".format(s1[:30], str(vs)))vs = analyzer.polarity_scores(s2)print(\"{}... {}\".format(s2[:30], str(vs)))vs = analyzer.polarity_scores(s3)print(\"{}... {}\".format(s3[:30], str(vs)))"
},
{
"code": null,
"e": 3231,
"s": 3193,
"text": "Printing out the result, we get this:"
},
{
"code": null,
"e": 3507,
"s": 3231,
"text": "Britain’s trade will be much w... {'neg': 0.315, 'neu': 0.685, 'pos': 0.0, 'compound': -0.6711}Britain will have to abide by ... {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}Britain will be very successfu... {'neg': 0.0, 'neu': 0.746, 'pos': 0.254, 'compound': 0.624}"
},
{
"code": null,
"e": 3666,
"s": 3507,
"text": "The first part of the string is displayed followed by the result of the analysis. This result is a dictionary with four elements: neg, neu, pos, and compound."
},
{
"code": null,
"e": 4110,
"s": 3666,
"text": "These represent the negative, neutral and positive sentiment as measured by the analyser plus a combination of the scores to produce an overall sentiment value. The first three parts are in the range 0 to 1, whereas the compound result is between -1 and +1: a negative value represents a negative sentiment value and positive value means a positive sentiment. A value around zero obviously means that the sentiment expressed is fairly neutral."
},
{
"code": null,
"e": 4301,
"s": 4110,
"text": "Looking at the compound results for our three strings we can see that they are what we might expect, the first is negative, the second neutral and the last one is positive. Well done, VADER!"
},
{
"code": null,
"e": 4604,
"s": 4301,
"text": "Now we need to get some tweets. What you want to look for obviously depends on who you are but the solution below allows you to specify your own search term and look at the tweets that mention that. For this example we will search for “vaccine” but you might want to search for a product name or brand."
},
{
"code": null,
"e": 4856,
"s": 4604,
"text": "I’m going to use the Twitter library but there are other libraries that will do a similar job. You’ll need to sign up for a Twitter developer account to use this and when you do, you will be given the various codes that will need to be inserted below."
},
{
"code": null,
"e": 5241,
"s": 4856,
"text": "import twitterCONSUMER_KEY = '####################'CONSUMER_SECRET = '###############################################'OAUTH_TOKEN = '###############################################'OAUTH_TOKEN_SECRET = '###########################################'auth = twitter.oauth.OAuth(OAUTH_TOKEN,OAUTH_TOKEN_SECRET,CONSUMER_KEY, CONSUMER_SECRET)twitter_api = twitter.Twitter(auth=auth)"
},
{
"code": null,
"e": 5283,
"s": 5241,
"text": "Now we are ready to download some tweets."
},
{
"code": null,
"e": 5403,
"s": 5283,
"text": "count = 20query = \"vaccine\"tweets = twitter_api.search.tweets(q=query, count=count, lang='en',tweet_mode=\"extended\")"
},
{
"code": null,
"e": 5817,
"s": 5403,
"text": "As you can see, I’ve asked for 20 tweets that contain the search term “vaccine” and the return tweets are in JSON format with the actual tweet text in tweets[‘statuses’][‘full_text’]. In the code below I iterate through the tweets, analyse the text from each one and construct a list of dictionaries containing the text of the tweet and the overall sentiment value (the compound value returned from the analyzer)."
},
{
"code": null,
"e": 5995,
"s": 5817,
"text": "tweetsWithSent = []for t in tweets['statuses']: text = (t['full_text']) ps = analyzer.polarity_scores(text) tweetsWithSent.append({'text':text, 'compound':ps['compound']})"
},
{
"code": null,
"e": 6082,
"s": 5995,
"text": "Next I convert the dictionary to a Pandas dataframe and plot the result as a bar graph"
},
{
"code": null,
"e": 6180,
"s": 6082,
"text": "import pandas as pdtweetdf = pd.DataFrame(tweetsWithSent)tweetdf.plot.bar(figsize=(15,5),width=1)"
},
{
"code": null,
"e": 6309,
"s": 6180,
"text": "You can see that sentiment is fairly evenly distributed — where bars do not appear the value is zero, meaning neutral sentiment."
},
{
"code": null,
"e": 6481,
"s": 6309,
"text": "You will want to use your own search term in order to judge the sentiment of whatever interest you but to give you an idea of the results that I got, here is a screenshot:"
},
{
"code": null,
"e": 6620,
"s": 6481,
"text": "(Note that I have anonymised and shortened the text of the tweets above, normally they would contain Twitter handles along with the text.)"
},
{
"code": null,
"e": 6841,
"s": 6620,
"text": "If these tweets were comments on your product or service you might be happy to read the positive ones but you would be better spending your time looking at those with a negative sentiment to find out what the problem is."
},
{
"code": null,
"e": 6946,
"s": 6841,
"text": "It would be simple enough to filter out the tweets that are particularly negative for special attention."
},
{
"code": null,
"e": 7203,
"s": 6946,
"text": "Of course, it is not impossible that all of your feedback will be positive — but in the real world that is unlikely. Sentiment Analysis allows you to get an overview of how your customers feel and can allow you to spot problems before they get out of hand."
},
{
"code": null,
"e": 7267,
"s": 7203,
"text": "And that is about it, so I’ll just sign off with the following:"
},
{
"code": null,
"e": 7386,
"s": 7267,
"text": "ps = analyzer.polarity_scores(\"Have a safe 2021\")print(ps){'neg': 0.0, 'neu': 0.508, 'pos': 0.492, 'compound': 0.4404}"
},
{
"code": null,
"e": 7425,
"s": 7386,
"text": "See a demo of Sentiment Analysis here."
},
{
"code": null,
"e": 7504,
"s": 7425,
"text": "To learn more about the academic background to VADER, you can read this paper:"
}
] |
Minimum indexed character | Practice | GeeksforGeeks | Given a string str and another string patt. Find the first position (considering 0-based indexing) of the character in patt that is present at the minimum index in str.
Example 1:
Input:
str = geeksforgeeks
patt = set
Output: 1
Explanation: e is the character which is
present in given str "geeksforgeeks"
and is first found in patt "set". First Position
of e in str is 1.
Example 2:
Input:
str = adcffaet
patt = onkl
Output: -1
Explanation: There are none of the
characters which is common in patt
and str.
Your Task:
You only need to complete the function minIndexChar() that returns the index of answer in str or returns -1 in case no character of patt is present in str.
Expected Time Complexity: O(N).
Expected Auxiliary Space: O(Number of distinct characters).
Constraints:
1 ≤ |str|, |patt| ≤ 10^5
'a' <= str[i], patt[i] <= 'z'
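Before the community solutions below, here is a minimal sketch of the expected O(N) idea (ours, not an official editorial): mark which characters occur in patt, then return the first index of str whose character is marked.

// O(N) time, O(26) extra space for the lowercase alphabet
int minIndexChar(string str, string patt) {
    bool inPatt[26] = {false};
    for (char c : patt) inPatt[c - 'a'] = true;   // mark characters of patt
    for (int i = 0; i < (int)str.size(); i++)     // first marked character of str
        if (inPatt[str[i] - 'a']) return i;
    return -1;
}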
tanashah8 minutes ago
unordered_map<char,int> mp;
// mark every character that occurs in patt
for(int i=0;i<patt.size();i++){
    mp[patt[i]] = 1;
}
// return the first index of str whose character appears in patt
for(int i=0;i<str.size();i++){
    if(mp.find(str[i])!=mp.end()){
        return i;
    }
}
return -1; // TC O(m+n)
rohanyadav49834 days ago
Time Limit Exceeded, Can any1 suggest for optimization?
//Function to find the minimum indexed character.
public static int minIndexChar(String str, String patt) {
    char x[] = str.toCharArray();
    char y[] = patt.toCharArray();
    int a = 0;
    for(int i = 0; i < x.length; i++){
        for(int j = 0; j < y.length; j++){
            if(x[i] == y[j]){
                a = i;
                return a;
            }
        }
    }
    return -1;
}
lindan1235 days ago
int minIndexChar(string str, string patt)
{
    unordered_map<int,int> mp;
    int n = str.length();
    for(int i=0;i<n;i++)
    {
        if(mp.find(str[i])==mp.end())
        {
            mp[str[i]]=i;
        }
    }
    int maxx=INT_MAX;
    int m = patt.length();
    for(int i=0;i<m;i++)
    {
        if(mp.find(patt[i])!=mp.end())
        {
            if(maxx>mp[patt[i]])
            {
                maxx=mp[patt[i]];
            }
        }
    }
    if(maxx==INT_MAX)
    {
        return -1;
    }
    return maxx;
}
Time Taken : 0.4sec
Cpp
jayesh296 days ago
Java 0.6 sec
public static int minIndexChar(String str, String patt){
HashMap<Character,Integer> hm = new HashMap<>();
for(int i=0;i<str.length();i++){
if(!hm.containsKey(str.charAt(i))){
hm.put(str.charAt(i),i);
}
}
int res = Integer.MAX_VALUE;
for(int i=0;i<patt.length();i++){
if(hm.containsKey(patt.charAt(i))){
res = Math.min(res,hm.get(patt.charAt(i)));
}
}
if(res==Integer.MAX_VALUE) return -1;
else return res;
}
}
abhishekvicky123452 weeks ago
User function template for JAVA
class Solution{
    //Function to find the minimum indexed character.
    public static int minIndexChar(String str, String patt)
    {
        HashMap<Character,Integer> ary = new HashMap<>();
        for(int i=0;i<str.length();i++)
        {
            if(!ary.containsKey(str.charAt(i)))
            {
                ary.put(str.charAt(i),i);
            }
        }
        int min=100001;
        int flag=0;
        for(int i=0;i<patt.length();i++)
        {
            if(ary.containsKey(patt.charAt(i)))
            {
                if(ary.get(patt.charAt(i))<min)
                {
                    min=ary.get(patt.charAt(i));
                    flag=1;
                }
            }
        }
        if(flag==1)
            return min;
        return -1;
    }
}
abera25831 month ago
class Solution{
  public:
    //Function to find the minimum indexed character.
    int minIndexChar(string str, string patt)
    {
        set<char> s;
        for(auto value : patt){
            s.insert(value);
        }
        for(int i=0;i<str.size();i++){
            if(s.find(str[i])!=s.end()){
                return i;
            }
        }
        return -1;
    }
};
vrajeshmodi991 month ago
int minIndexChar(string str, string patt)
{
    unordered_map<char,int> mp;
    // mark every character of patt
    for(int i=0;i<patt.size();i++){
        mp[patt[i]] = 1;
    }
    // first index of str whose character appears in patt
    for(int i=0;i<str.size();i++){
        if(mp.find(str[i])!=mp.end()){
            return i;
        }
    }
    return -1;
}
singhdipranjan671 month ago
public static int minIndexChar(String str, String patt)
{
HashSet<Character> set=new HashSet<>();
for(int i=0; i<patt.length(); i++)
{
set.add(patt.charAt(i));
}
for(int i=0; i<str.length(); i++)
{
if(set.contains(str.charAt(i)))
return i;
}
return -1;
}
}
imabhishek021 month ago
A simple C++ approach: count each character of patt in a frequency array, then scan str. The first character of str whose count is greater than 0 also occurs in patt, so we return its index.
int minIndexChar(string str, string patt)
{
    int fr[26] = {0};
    // count each character of patt
    for(int i=0;i<patt.size();i++)
    {
        char ch = patt[i];
        fr[ch-'a']++;
    }
    // first character of str that also occurs in patt
    for(int i=0;i<str.size();i++)
    {
        char ch = str[i];
        if(fr[ch-'a']>0)
        {
            return i;
        }
    }
    return -1;
}
roopsaisurampudi1 month ago
HashMap<Character, Integer> hm = new HashMap<>();
int len1 = str.length();
int len2 = patt.length();
for(int i = 0; i < len1; i++) {
    if (hm.containsKey(str.charAt(i)))
        ; // skip: keep the first (smallest) index recorded for each character
    else
        hm.put(str.charAt(i), i);
}
int minimum = Integer.MAX_VALUE;
for(int i = 0; i < len2; i++) {
    if (hm.containsKey(patt.charAt(i)))
        minimum = Math.min(minimum, hm.get(patt.charAt(i)));
}
return minimum == Integer.MAX_VALUE ? -1 : minimum;
[
{
"code": null,
"e": 407,
"s": 238,
"text": "Given a string str and another string patt. Find the first position (considering 0-based indexing) of the character in patt that is present at the minimum index in str."
},
{
"code": null,
"e": 419,
"s": 407,
"text": "\nExample 1:"
},
{
"code": null,
"e": 614,
"s": 419,
"text": "Input:\nstr = geeksforgeeks\npatt = set\nOutput: 1\nExplanation: e is the character which is\npresent in given str \"geeksforgeeks\"\nand is first found in patt \"set\". First Position\nof e in str is 1. \n"
},
{
"code": null,
"e": 625,
"s": 614,
"text": "Example 2:"
},
{
"code": null,
"e": 749,
"s": 625,
"text": "Input:\nstr = adcffaet\npatt = onkl\nOutput: -1\nExplanation: There are none of the\ncharacters which is common in patt\nand str."
},
{
"code": null,
"e": 917,
"s": 749,
"text": "\nYour Task:\nYou only need to complete the function minIndexChar() that returns the index of answer in str or returns -1 in case no character of patt is present in str."
},
{
"code": null,
"e": 1010,
"s": 917,
"text": "\nExpected Time Complexity: O(N).\nExpected Auxiliary Space: O(Number of distinct characters)."
},
{
"code": null,
"e": 1074,
"s": 1010,
"text": "\nConstraints:\n1 ≤ |str|,|patt| ≤ 105 \n'a' <= stri, patti <= 'z'"
},
{
"code": null,
"e": 1076,
"s": 1074,
"text": "0"
},
{
"code": null,
"e": 1098,
"s": 1076,
"text": "tanashah8 minutes ago"
},
{
"code": null,
"e": 1366,
"s": 1098,
"text": " unordered_map<char,int>mp;\n for(int i=0;i<patt.size();i++){\n mp[patt[i]]==1;\n }\n for(int i=0;i<str.size();i++){\n if(mp.find(str[i])!=mp.end()){\n return i;\n }\n }\n return -1;//TC O(m+n)"
},
{
"code": null,
"e": 1368,
"s": 1366,
"text": "0"
},
{
"code": null,
"e": 1393,
"s": 1368,
"text": "rohanyadav49834 days ago"
},
{
"code": null,
"e": 1449,
"s": 1393,
"text": "Time Limit Exceeded, Can any1 suggest for optimization?"
},
{
"code": null,
"e": 1956,
"s": 1449,
"text": " //Function to find the minimum indexed character. public static int minIndexChar(String str, String patt) { char x[]=str.toCharArray(); char y[]=patt.toCharArray(); int a=0; for(int i=0;i<x.length;i++){ for(int j=0;j<y.length;j++){ if(x[i]==y[j]){ a=i; return a; } } } return -1; }}"
},
{
"code": null,
"e": 1958,
"s": 1956,
"text": "0"
},
{
"code": null,
"e": 1978,
"s": 1958,
"text": "lindan1235 days ago"
},
{
"code": null,
"e": 2589,
"s": 1978,
"text": "int minIndexChar(string str, string patt) { unordered_map<int,int> mp; int n = str.length(); for(int i=0;i<n;i++) { if(mp.find(str[i])==mp.end()) { mp[str[i]]=i; } } int maxx=INT_MAX; int m = patt.length(); for(int i=0;i<m;i++) { if(mp.find(patt[i])!=mp.end()) { if(maxx>mp[patt[i]]) { maxx=mp[patt[i]]; } } } if(maxx==INT_MAX) { return -1; } return maxx; }"
},
{
"code": null,
"e": 2611,
"s": 2591,
"text": "Time Taken : 0.4sec"
},
{
"code": null,
"e": 2615,
"s": 2611,
"text": "Cpp"
},
{
"code": null,
"e": 2617,
"s": 2615,
"text": "0"
},
{
"code": null,
"e": 2636,
"s": 2617,
"text": "jayesh296 days ago"
},
{
"code": null,
"e": 2649,
"s": 2636,
"text": "Java 0.6 sec"
},
{
"code": null,
"e": 3215,
"s": 2649,
"text": " public static int minIndexChar(String str, String patt){\n HashMap<Character,Integer> hm = new HashMap<>();\n for(int i=0;i<str.length();i++){\n if(!hm.containsKey(str.charAt(i))){\n hm.put(str.charAt(i),i);\n }\n }\n int res = Integer.MAX_VALUE;\n for(int i=0;i<patt.length();i++){\n if(hm.containsKey(patt.charAt(i))){\n res = Math.min(res,hm.get(patt.charAt(i)));\n }\n }\n if(res==Integer.MAX_VALUE) return -1;\n else return res;\n }\n}"
},
{
"code": null,
"e": 3217,
"s": 3215,
"text": "0"
},
{
"code": null,
"e": 3247,
"s": 3217,
"text": "abhishekvicky123452 weeks ago"
},
{
"code": null,
"e": 3277,
"s": 3247,
"text": "er function template for JAVA"
},
{
"code": null,
"e": 3984,
"s": 3277,
"text": "class Solution{ //Function to find the minimum indexed character. public static int minIndexChar(String str, String patt) { HashMap<Character,Integer> ary=new HashMap<>(); for(int i=0;i<str.length();i++) { if(!ary.containsKey(str.charAt(i))) { ary.put(str.charAt(i),i); } } int min=100001; int flag=0; for(int i=0;i<patt.length();i++) { if(ary.containsKey(patt.charAt(i))) { if(ary.get(patt.charAt(i))<min) { min=ary.get(patt.charAt(i)); flag=1; } } } if(flag==1) return min; return -1;}}"
},
{
"code": null,
"e": 3986,
"s": 3984,
"text": "0"
},
{
"code": null,
"e": 4007,
"s": 3986,
"text": "abera25831 month ago"
},
{
"code": null,
"e": 4382,
"s": 4007,
"text": "class Solution{ public: //Function to find the minimum indexed character. int minIndexChar(string str, string patt) { // Your code here set<char>s; for(auto value:patt){ s.insert(value); } for(int i=0;i<str.size();i++){ if(s.find(str[i])!=s.end()){ return i; } } return -1; }};"
},
{
"code": null,
"e": 4384,
"s": 4382,
"text": "0"
},
{
"code": null,
"e": 4409,
"s": 4384,
"text": "vrajeshmodi991 month ago"
},
{
"code": null,
"e": 4727,
"s": 4409,
"text": "int minIndexChar(string str, string patt)\n {\n unordered_map<char,int>mp;\n for(int i=0;i<patt.size();i++){\n mp[patt[i]]==1;\n }\n for(int i=0;i<str.size();i++){\n if(mp.find(str[i])!=mp.end()){\n return i;\n }\n }\n return -1;\n }"
},
{
"code": null,
"e": 4729,
"s": 4727,
"text": "0"
},
{
"code": null,
"e": 4757,
"s": 4729,
"text": "singhdipranjan671 month ago"
},
{
"code": null,
"e": 5124,
"s": 4757,
"text": " public static int minIndexChar(String str, String patt)\n {\n HashSet<Character> set=new HashSet<>();\n for(int i=0; i<patt.length(); i++)\n {\n set.add(patt.charAt(i));\n }\n for(int i=0; i<str.length(); i++)\n {\n if(set.contains(str.charAt(i)))\n return i;\n }\n return -1; \n }\n}\n"
},
{
"code": null,
"e": 5126,
"s": 5124,
"text": "0"
},
{
"code": null,
"e": 5150,
"s": 5126,
"text": "imabhishek021 month ago"
},
{
"code": null,
"e": 5446,
"s": 5150,
"text": "Simple cpp program we just counted the each character. like if any character is present in patt then ++ that charcter after that we just compared str characters is present in patt it would have already increased by so if that character count >0 then we just print ith of character of str string;"
},
{
"code": null,
"e": 5843,
"s": 5446,
"text": "int minIndexChar(string str, string patt)\n {\n int fr[26]={0};\n \n for(int i=0;i<patt.size();i++)\n {\n char ch=patt[i];\n fr[ch-'a']++;\n }\n for(int i=0;i<str.size();i++)\n {\n char ch=str[i];\n \n if(fr[ch-'a']>0)\n {\n return i;\n }\n \n }return -1;"
},
{
"code": null,
"e": 5847,
"s": 5845,
"text": "0"
},
{
"code": null,
"e": 5875,
"s": 5847,
"text": "roopsaisurampudi1 month ago"
},
{
"code": null,
"e": 6304,
"s": 5875,
"text": "HashMap<Character, Integer> hm = new HashMap<>(); int len1 = str.length(); int len2 = patt.length(); for(int i = 0; i < len1; i++) { if (hm.containsKey(str.charAt(i))); //skip else hm.put(str.charAt(i), i); } int minimum = Integer.MAX_VALUE; for(int i = 0; i < len2; i++) { if (hm.containsKey(patt.charAt(i))) minimum = Math.min(minimum, hm.get(patt.charAt(i)));"
},
{
"code": null,
"e": 6370,
"s": 6304,
"text": " } return minimum == Integer.MAX_VALUE ? -1: minimum;"
},
{
"code": null,
"e": 6516,
"s": 6370,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 6552,
"s": 6516,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 6562,
"s": 6552,
"text": "\nProblem\n"
},
{
"code": null,
"e": 6572,
"s": 6562,
"text": "\nContest\n"
},
{
"code": null,
"e": 6635,
"s": 6572,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 6783,
"s": 6635,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 6991,
"s": 6783,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 7097,
"s": 6991,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to create instance Objects using __init__ in Python? | The instantiation or calling-a-class-object operation creates an empty object. Many classes like to create objects with instances with a specific initial state. Therefore a class may define a special method named __init__(), as follows −
def __init__(self):
    self.data = []
When a class defines an __init__() method, class instantiation automatically invokes __init__() on the newly-created class instance, which is obtained by −
x = MyClass()
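Putting those pieces together, here is a minimal sketch of such a class (the name MyClass is just a placeholder):

class MyClass:
    def __init__(self):
        # runs automatically at instantiation, so every
        # new object starts with its own empty list
        self.data = []

x = MyClass()
print(x.data)   # []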
The __init__() method may have arguments. In such a case, arguments given to the class instantiation operator are passed on to __init__(). For example,
>>> class Complex:
... def __init__(self, realpart, imagpart):
... self.r = realpart
... self.i = imagpart
...
>>> x = Complex(4.0, -6.5)
>>> x.r, x.i
(4.0, -6.5) | [
{
"code": null,
"e": 1300,
"s": 1062,
"text": "The instantiation or calling-a-class-object operation creates an empty object. Many classes like to create objects with instances with a specific initial state. Therefore a class may define a special method named __init__(), as follows −"
},
{
"code": null,
"e": 1321,
"s": 1300,
"text": "def __init__(self) −"
},
{
"code": null,
"e": 1341,
"s": 1321,
"text": " self.data = [ ]"
},
{
"code": null,
"e": 1482,
"s": 1341,
"text": "When a class defines an __init__() method, class instantiation automatically invokes the newly-created class instance which is obtained by −"
},
{
"code": null,
"e": 1496,
"s": 1482,
"text": "x = MyClass()"
},
{
"code": null,
"e": 1648,
"s": 1496,
"text": "The __init__() method may have arguments. In such a case, arguments given to the class instantiation operator are passed on to __init__(). For example,"
},
{
"code": null,
"e": 1832,
"s": 1648,
"text": ">>> class Complex:\n... def __init__(self, realpart, imagpart):\n... self.r = realpart\n... self.i = imagpart\n...\n>>> x = Complex(4.0, -6.5)\n>>> x.r, x.i\n(4.0, -6.5)\n"
}
] |
How to unpack using star expression in Python? | One of the basic limitations of unpacking is that you must know the length of the sequences you are unpacking in advance.
random_numbers = [0, 1, 5, 9, 17, 12, 7, 10, 3, 2]
random_numbers_descending = sorted(random_numbers, reverse=True)
print(f"Output \n*** {random_numbers_descending}")
*** [17, 12, 10, 9, 7, 5, 3, 2, 1, 0]
If I now want to pull out just the largest and second largest of the numbers by unpacking into two variables, we will get a "too many values to unpack" exception.
print(f"Output \n*** Getting the largest and second largest")
largest, second_largest = random_numbers_descending
*** Getting the largest and second largest
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
in <module>
      1 print(f"Output \n*** Getting the largest and second largest")
----> 2 largest, second_largest = random_numbers_descending

ValueError: too many values to unpack (expected 2)
Python programmers often rely on indexing and slicing. For example, when I want to extract the largest and second largest from a list of items, below is how we can do it.
largest = random_numbers_descending[0]
print(f"Output \n*** Getting the largest - {largest}")
*** Getting the largest - 17
second_largest = random_numbers_descending[1]
print(f"Output \n*** Getting the second largest - {second_largest}")
*** Getting the second largest - 12
rest_of_numbers = random_numbers_descending[2:]
print(f"Output \n*** Getting the rest of numbers - {rest_of_numbers}")
*** Getting the rest of numbers - [10, 9, 7, 5, 3, 2, 1, 0]
While this works, all of the indexing and slicing is visually noisy. In practice, it is error prone to divide the members of a sequence into various subsets this way.
To do it better, Python supports catch-all unpacking through a starred expression.
This starred syntax allows one part of the unpacking assignment to receive all values that do not match any other part of the unpacking pattern.
largest,second_largest, *rest_of_numbers = random_numbers_descending
print(f"Output \n largest: {largest} \n second_largest:{second_largest} \n rest_of_numbers:{rest_of_numbers}")
largest: 17
second_largest:12
rest_of_numbers:[10, 9, 7, 5, 3, 2, 1, 0]
How does the above code look? In a single line, which is also easier to read, we are able to achieve the output. A starred expression may appear in any position, so you can get the benefits of catch-all unpacking anytime you need to extract one slice. For example, with the star in the middle:

largest, *rest_of_numbers, smallest = random_numbers_descending
print(f"Output \n largest: {largest} \n rest_of_numbers:{rest_of_numbers} \n smallest:{smallest}")

largest: 17
rest_of_numbers:[12, 10, 9, 7, 5, 3, 2, 1]
smallest:0
*rest_of_numbers, second_smallest, smallest = random_numbers_descending
print(f"Output \n rest_of_numbers:{rest_of_numbers} \n second_smallest: {second_smallest} \n smallest:{smallest}")
rest_of_numbers:[17, 12, 10, 9, 7, 5, 3, 2]
second_smallest: 1
smallest:0
However, to unpack assignments that contain a starred expression, you must have at least one required part, or else you’ll get a SyntaxError. We can’t use a catch-all expression on its own.
*rest_of_numbers = random_numbers_descending
  File "<stdin>", line 1
    *rest_of_numbers = random_numbers_descending
    ^
SyntaxError: starred assignment target must be in a list or tuple
We also cannot use multiple catch-all expressions in a single-level unpacking pattern. This is another important note to consider.
*rest_of_numbers, *more_smallest, smallest = random_numbers_descending
  File "<stdin>", line 1
    *rest_of_numbers, *more_smallest, smallest = random_numbers_descending
    ^
SyntaxError: two starred expressions in assignment
But it is possible to use multiple starred expressions in an unpacking assignment statement, as long as they’re catch-alls for different parts of the multilevel structure being unpacked.
player_grandslame_and_atptitles = {
    'Federer': (20, 103),
    'Nadal': (20, 84),
}

((player1, (grandslam1, *atptitles1)), (player2, (grandslam2, *atptitles2))) = player_grandslame_and_atptitles.items()

print(f'Output \nPlayer - {player1} has achieved {grandslam1} grandslams and {atptitles1} atp tour titles')
print(f'Player - {player2} has achieved {grandslam2} grandslams and {atptitles2} atp tour titles')
Player - Federer has achieved 20 grandslams and [103] atp tour titles
Player - Nadal has achieved 20 grandslams and [84] atp tour titles
Starred expressions become list instances in all cases. If there are no leftover items from the sequence being unpacked, the catch-all part will be an empty list. This is especially useful when you are processing a sequence that you know in advance has at least N elements.
random_numbers = [0, 1]
first, second, *rest = random_numbers
print(f"Output \n{first, second, rest}")
(0, 1, []) | [
{
"code": null,
"e": 1183,
"s": 1062,
"text": "One of the basic limitation of unpacking is that you must know the length of the sequences you are unpacking in advance."
},
{
"code": null,
"e": 1350,
"s": 1183,
"text": "random_numbers = [0, 1, 5, 9, 17, 12, 7, 10, 3, 2]\nrandom_numbers_descending = sorted(random_numbers, reverse=True)\nprint(f\"Output \\n*** {random_numbers_descending}\")"
},
{
"code": null,
"e": 1388,
"s": 1350,
"text": "*** [17, 12, 10, 9, 7, 5, 3, 2, 1, 0]"
},
{
"code": null,
"e": 1519,
"s": 1388,
"text": "If I now wanted to find out the largest and second largest from the numbers, we will get an exception \"too many values to unpack\"."
},
{
"code": null,
"e": 1633,
"s": 1519,
"text": "print(f\"Output \\n*** Getting the largest and second largest\")\nlargest, second_largest = random_numbers_descending"
},
{
"code": null,
"e": 1676,
"s": 1633,
"text": "*** Getting the largest and second largest"
},
{
"code": null,
"e": 1976,
"s": 1676,
"text": "---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\nin\n1 print(f\"Output \\n*** Getting the largest and second largest\")\n----> 2 largest, second_largest = random_numbers_descending\n\nValueError: too many values to unpack (expected 2)"
},
{
"code": null,
"e": 2128,
"s": 1976,
"text": "Python often rely on indexing and slicing. For example, when I want to extract the largest, second largest from a list of items below is how we can do."
},
{
"code": null,
"e": 2222,
"s": 2128,
"text": "largest = random_numbers_descending[0]\nprint(f\"Output \\n*** Getting the largest - {largest}\")"
},
{
"code": null,
"e": 2251,
"s": 2222,
"text": "*** Getting the largest - 17"
},
{
"code": null,
"e": 2366,
"s": 2251,
"text": "second_largest = random_numbers_descending[1]\nprint(f\"Output \\n*** Getting the second largest - {second_largest}\")"
},
{
"code": null,
"e": 2402,
"s": 2366,
"text": "*** Getting the second largest - 12"
},
{
"code": null,
"e": 2521,
"s": 2402,
"text": "rest_of_numbers = random_numbers_descending[2:]\nprint(f\"Output \\n*** Getting the rest of numbers - {rest_of_numbers}\")"
},
{
"code": null,
"e": 2581,
"s": 2521,
"text": "*** Getting the rest of numbers - [10, 9, 7, 5, 3, 2, 1, 0]"
},
{
"code": null,
"e": 2748,
"s": 2581,
"text": "While this works, all of the indexing and slicing is visually noisy. In practice, it is error prone to divide the members of a sequence into various subsets this way."
},
{
"code": null,
"e": 2831,
"s": 2748,
"text": "To do it better, Python supports catch-all unpacking through a starred expression."
},
{
"code": null,
"e": 2976,
"s": 2831,
"text": "This starred syntax allows one part of the unpacking assignment to receive all values that do not match any other part of the unpacking pattern."
},
{
"code": null,
"e": 3156,
"s": 2976,
"text": "largest,second_largest, *rest_of_numbers = random_numbers_descending\nprint(f\"Output \\n largest: {largest} \\n second_largest:{second_largest} \\n rest_of_numbers:{rest_of_numbers}\")"
},
{
"code": null,
"e": 3228,
"s": 3156,
"text": "largest: 17\nsecond_largest:12\nrest_of_numbers:[10, 9, 7, 5, 3, 2, 1, 0]"
},
{
"code": null,
"e": 3477,
"s": 3228,
"text": "How does the above code look? In a single line which is also easier to read we are able to acheive the output. A starred expression may appear in any position, so you can get the benefits of catch-all unpacking anytime you need to extract one slice"
},
{
"code": null,
"e": 3543,
"s": 3477,
"text": "largest: 17\nrest_of_numbers:[12, 10, 9, 7, 5, 3, 2, 1]\nsmallest:0"
},
{
"code": null,
"e": 3730,
"s": 3543,
"text": "*rest_of_numbers, second_smallest, smallest = random_numbers_descending\nprint(f\"Output \\n rest_of_numbers:{rest_of_numbers} \\n second_smallest: {second_smallest} \\n smallest:{smallest}\")"
},
{
"code": null,
"e": 3804,
"s": 3730,
"text": "rest_of_numbers:[17, 12, 10, 9, 7, 5, 3, 2]\nsecond_smallest: 1\nsmallest:0"
},
{
"code": null,
"e": 3994,
"s": 3804,
"text": "However, to unpack assignments that contain a starred expression, you must have at least one required part, or else you’ll get a SyntaxError. We can’t use a catch-all expression on its own."
},
{
"code": null,
"e": 4039,
"s": 3994,
"text": "*rest_of_numbers = random_numbers_descending"
},
{
"code": null,
"e": 4168,
"s": 4039,
"text": "File \"\", line 1\n*rest_of_numbers = random_numbers_descending\n^\nSyntaxError: starred assignment target must be in a list or tuple"
},
{
"code": null,
"e": 4299,
"s": 4168,
"text": "We also cannot use multiple catch-all expressions in a single-level unpacking pattern. This is another important note to consider."
},
{
"code": null,
"e": 4370,
"s": 4299,
"text": "*rest_of_numbers, *more_smallest, smallest = random_numbers_descending"
},
{
"code": null,
"e": 4510,
"s": 4370,
"text": "File \"\", line 1\n*rest_of_numbers, *more_smallest, smallest = random_numbers_descending\n^\nSyntaxError: two starred expressions in assignment"
},
{
"code": null,
"e": 4697,
"s": 4510,
"text": "But it is possible to use multiple starred expressions in an unpacking assignment statement, as long as they’re catch-alls for different parts of the multilevel structure being unpacked."
},
{
"code": null,
"e": 5108,
"s": 4697,
"text": "player_grandslame_and_atptitles = {\n'Federer': (20, 103),\n'Nadal': (20,84),}\n\n((player1, (grandslam1, *atptitles1)), (player2, (grandslam2, *atptitles2))) = player_grandslame_and_atptitles.items()\n\nprint(f'Output \\nPlayer - {player1} Have acheived {grandslam1} grandslams and , {atptitles1} atp tour titles')\nprint(f'Player - {player2} Have acheived {grandslam2} grandslams and , {atptitles2} atp tour titles')"
},
{
"code": null,
"e": 5251,
"s": 5108,
"text": "Player - Federer Have acheived 20 grandslams and , [103] atp tour titles\nPlayer - Nadal Have acheived 20 grandslams and , [84] atp tour titles"
},
{
"code": null,
"e": 5525,
"s": 5251,
"text": "Starred expressions become list instances in all cases. If there are no leftover items from the sequence being unpacked, the catch-all part will be an empty list. This is especially useful when you are processing a sequence that you know in advance has at least N elements."
},
{
"code": null,
"e": 5628,
"s": 5525,
"text": "random_numbers = [0, 1]\nfirst, second, *rest = random_numbers\nprint(f\"Output \\n{first, second, rest}\")"
},
{
"code": null,
"e": 5639,
"s": 5628,
"text": "(0, 1, [])"
}
] |
Best 5 free stock market APIs in 2020 | by Shen Huang | Towards Data Science | The financial APIs market grows so quickly that last year’s post or platform is not a good choice this year. So in this story, I will show you the best 5 stock market APIs that I use in 2019.
Stock market data APIs offer real-time or historical data on financial assets that are currently being traded in the markets. These APIs usually offer prices of public stocks, ETFs, ETNs.
These data can be used for generating technical indicators which are the foundation to build trading strategies and monitor the market.
In this story, I mainly care about price information. For other data, there are some other APIs mainly for that use cases which will not be covered here.
I will talk about the following APIs and where they can be used:
Yahoo Finance
Google Finance in Google Sheets
IEX Cloud
AlphaVantage
World trading data
Other APIs (Polygon.io, Intrinio, Quandl)
Docs: yfinance
Yahoo Finance API was shut down in 2017. So you can see a lot of posts about alternatives for Yahoo Finance. However, it went back sometime in 2019. So you can still use Yahoo Finance to get free stock market data. Yahoo’s API was the gold standard for stock-data APIs employed by both individual and enterprise-level users.
Yahoo Finance provides access to more than 5 years of daily OHLC price data. And it’s free and reliable.
There’s a new python module yfinance that wraps the new Yahoo Finance API, and you can just use it.
# To install yfinance before you use it.
> pip install yfinance
Below is an example of how to use the API. Check out the Github link above to see the full document, and you are good to go.
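A minimal sketch with yfinance (the ticker and date range are arbitrary choices for illustration):

import yfinance as yf

# daily OHLC prices for a sample ticker
data = yf.download("AAPL", start="2015-01-01", end="2019-12-31")
print(data.head())

# or go through the Ticker object
aapl = yf.Ticker("AAPL")
hist = aapl.history(period="5y")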
Google Finance is deprecated in 2012. However, it doesn’t shut down all the features. There’s a feature in Google Sheets that support you get stock marketing data. And it’s called GOOGLEFINANCE in Google Sheets.
The way it works is to type something like below and you will get the last stock price.
GOOGLEFINANCE("GOOG", "price")
Syntax is:
GOOGLEFINANCE(ticker, [attribute], [start_date], [end_date|num_days], [interval])
ticker: The ticker symbol for the security to consider.
attribute(Optional,"price" by default ): The attribute to fetch about ticker from Google Finance.
start_date(Optional): The start date when fetching historical data.
end_date|num_days(Optional): The end date when fetching historical data, or the number of days from start_date for which to return data.
interval(Optional): The frequency of returned data; either "DAILY" or "WEEKLY".
An example of use is attached.
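For instance, to pull a year of daily prices for Google's own stock, a formula along these lines (the dates are arbitrary) can be entered into a cell:

=GOOGLEFINANCE("GOOG", "price", DATE(2019,1,1), DATE(2019,12,31), "DAILY")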
Website: https://iexcloud.io/
IEX Cloud is a new financial service just released this year. It’s an independent business separate from IEX Group’s flagship stock exchange: a high-performance financial data platform that connects developers and financial data creators.
It’s very cheap compared to other subscription services. $9/month you almost can get all the data you need. Also, the basic free trial, you already get 500,000 core message free for each month.
There’s a python module to wrap their APIs. You can easily check it out: iexfinance
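If you prefer plain HTTP, IEX Cloud also exposes a REST API. A rough sketch with requests (the token is a placeholder you must replace with your own key):

import requests

token = "pk_your_publishable_token"   # placeholder: use your own IEX Cloud token
url = f"https://cloud.iexapis.com/stable/stock/aapl/quote?token={token}"
quote = requests.get(url).json()
print(quote["latestPrice"])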
Website: https://www.alphavantage.co/
Alpha Vantage Inc. is a leading provider of various free APIs. It provides APIs to gain access to historical and real-time stock data, FX-data, and cryptocurrency data.
With Alphavantage you can perform up to 5 API-requests per minute and 500 API requests per day. 30 API requests per minute with $29.9/month.
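Alpha Vantage is a plain REST API, so a couple of lines of requests are enough; note the 'demo' key below only works for MSFT, so substitute your own:

import requests

url = ("https://www.alphavantage.co/query"
       "?function=TIME_SERIES_DAILY&symbol=MSFT&apikey=demo")
data = requests.get(url).json()
print(data["Time Series (Daily)"])   # maps each date to its OHLCV values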
Website: https://www.worldtradingdata.com/
Right now there are four different plans available. For free access, you can get up to 5 stocks per request (real-time API) and up to 250 total requests per day. The subscription plans are not that expensive: for those who need more data points, plans from $8 per month to $32 per month are available, and full intraday data API and currency API access are also given.
They provide plain URLs and your response will be in JSON format. There’s currently no Python module available to wrap their API yet, so you have to use requests or other web modules to wrap their APIs.
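A sketch of what that wrapping might look like: the endpoint below follows World Trading Data's documented pattern at the time of writing, but treat both the URL and the response fields as assumptions to check against their docs.

import requests

params = {
    "symbol": "AAPL,MSFT",           # up to 5 stocks per request on the free plan
    "api_token": "your_api_token",   # placeholder
}
resp = requests.get("https://api.worldtradingdata.com/api/v1/stock", params=params)
print(resp.json())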
Website: https://polygon.io
It’s $199/month only for the US stock market. This is might be not a good choice for beginners.
Website: https://intrinio.com
It’s $75/month just for the realtime stock market feed, and EOD price data is another $40/month, while you can get EOD price data almost for free from the other APIs I suggest. Even though they have 206 pricing feeds, ten financial data feeds and tons of other data to subscribe to, the price is not that friendly for independent traders.
Website: https://www.quandl.com/
Quandl is an aggregated marketplace for financial, economic and other related APIs. Quandl aggregates APIs from third-party marketplaces as services for users to purchase whatever APIs they want to use.
So you need to subscribe to different marketplaces to get different financial data. And different APIs will have different price systems. Some are free and others are subscription-based or one-time-purchase based.
Also, Quandl has an analysis tool inside its website.
Quandl is a good platform if you don’t care about money.
Learning and building a trading system is not easy. But the financial data is the foundation of all. If you have any questions, please ask them below. | [
{
"code": null,
"e": 364,
"s": 172,
"text": "The financial APIs market grows so quickly that last year’s post or platform is not a good choice this year. So in this story, I will show you the best 5 stock market APIs that I use in 2019."
},
{
"code": null,
"e": 552,
"s": 364,
"text": "Stock market data APIs offer real-time or historical data on financial assets that are currently being traded in the markets. These APIs usually offer prices of public stocks, ETFs, ETNs."
},
{
"code": null,
"e": 688,
"s": 552,
"text": "These data can be used for generating technical indicators which are the foundation to build trading strategies and monitor the market."
},
{
"code": null,
"e": 842,
"s": 688,
"text": "In this story, I mainly care about price information. For other data, there are some other APIs mainly for that use cases which will not be covered here."
},
{
"code": null,
"e": 907,
"s": 842,
"text": "I will talk about the following APIs and where they can be used:"
},
{
"code": null,
"e": 921,
"s": 907,
"text": "Yahoo Finance"
},
{
"code": null,
"e": 953,
"s": 921,
"text": "Google Finance in Google Sheets"
},
{
"code": null,
"e": 963,
"s": 953,
"text": "IEX Cloud"
},
{
"code": null,
"e": 976,
"s": 963,
"text": "AlphaVantage"
},
{
"code": null,
"e": 995,
"s": 976,
"text": "World trading data"
},
{
"code": null,
"e": 1037,
"s": 995,
"text": "Other APIs (Polygon.io, Intrinio, Quandl)"
},
{
"code": null,
"e": 1052,
"s": 1037,
"text": "Docs: yfinance"
},
{
"code": null,
"e": 1377,
"s": 1052,
"text": "Yahoo Finance API was shut down in 2017. So you can see a lot of posts about alternatives for Yahoo Finance. However, it went back sometime in 2019. So you can still use Yahoo Finance to get free stock market data. Yahoo’s API was the gold standard for stock-data APIs employed by both individual and enterprise-level users."
},
{
"code": null,
"e": 1482,
"s": 1377,
"text": "Yahoo Finance provides access to more than 5 years of daily OHLC price data. And it’s free and reliable."
},
{
"code": null,
"e": 1582,
"s": 1482,
"text": "There’s a new python module yfinance that wraps the new Yahoo Finance API, and you can just use it."
},
{
"code": null,
"e": 1645,
"s": 1582,
"text": "# To install yfinance before you use it.> pip install yfinance"
},
{
"code": null,
"e": 1770,
"s": 1645,
"text": "Below is an example of how to use the API. Check out the Github link above to see the full document, and you are good to go."
},
{
"code": null,
"e": 1982,
"s": 1770,
"text": "Google Finance is deprecated in 2012. However, it doesn’t shut down all the features. There’s a feature in Google Sheets that support you get stock marketing data. And it’s called GOOGLEFINANCE in Google Sheets."
},
{
"code": null,
"e": 2070,
"s": 1982,
"text": "The way it works is to type something like below and you will get the last stock price."
},
{
"code": null,
"e": 2101,
"s": 2070,
"text": "GOOGLEFINANCE(\"GOOG\", \"price\")"
},
{
"code": null,
"e": 2112,
"s": 2101,
"text": "Syntax is:"
},
{
"code": null,
"e": 2194,
"s": 2112,
"text": "GOOGLEFINANCE(ticker, [attribute], [start_date], [end_date|num_days], [interval])"
},
{
"code": null,
"e": 2250,
"s": 2194,
"text": "ticker: The ticker symbol for the security to consider."
},
{
"code": null,
"e": 2348,
"s": 2250,
"text": "attribute(Optional,\"price\" by default ): The attribute to fetch about ticker from Google Finance."
},
{
"code": null,
"e": 2416,
"s": 2348,
"text": "start_date(Optional): The start date when fetching historical data."
},
{
"code": null,
"e": 2553,
"s": 2416,
"text": "end_date|num_days(Optional): The end date when fetching historical data, or the number of days from start_date for which to return data."
},
{
"code": null,
"e": 2633,
"s": 2553,
"text": "interval(Optional): The frequency of returned data; either \"DAILY\" or \"WEEKLY\"."
},
{
"code": null,
"e": 2664,
"s": 2633,
"text": "An example of use is attached."
},
{
"code": null,
"e": 2694,
"s": 2664,
"text": "Website: https://iexcloud.io/"
},
{
"code": null,
"e": 2937,
"s": 2694,
"text": "IEX Cloud is a new financial service just released this year. It’s an independent business separate from IEX Group’s flagship stock exchange, is a high-performance, financial data platform that connects developers and financial data creators."
},
{
"code": null,
"e": 3131,
"s": 2937,
"text": "It’s very cheap compared to other subscription services. $9/month you almost can get all the data you need. Also, the basic free trial, you already get 500,000 core message free for each month."
},
{
"code": null,
"e": 3215,
"s": 3131,
"text": "There’s a python module to wrap their APIs. You can easily check it out: iexfinance"
},
{
"code": null,
"e": 3253,
"s": 3215,
"text": "Website: https://www.alphavantage.co/"
},
{
"code": null,
"e": 3422,
"s": 3253,
"text": "Alpha Vantage Inc. is a leading provider of various free APIs. It provides APIs to gain access to historical and real-time stock data, FX-data, and cryptocurrency data."
},
{
"code": null,
"e": 3563,
"s": 3422,
"text": "With Alphavantage you can perform up to 5 API-requests per minute and 500 API requests per day. 30 API requests per minute with $29.9/month."
},
{
"code": null,
"e": 3606,
"s": 3563,
"text": "Website: https://www.worldtradingdata.com/"
},
{
"code": null,
"e": 3764,
"s": 3606,
"text": "Also, full intraday data API and currency API access are given. For those who need more data points, plans from $8 per month to $ 32 per month are available."
},
{
"code": null,
"e": 3986,
"s": 3764,
"text": "Right now there are four different plans available. For free access, you can get up to 5 stocks per request (real-time API). Up to 250 total requests per day. The subscription plan is not that expensive, and you can get a"
},
{
"code": null,
"e": 4179,
"s": 3986,
"text": "They provide URL and your response will be JSON format. There’s currently no available python module to wrap their API yet. So you have to use requests or other web modules to wrap their APIs."
},
{
"code": null,
"e": 4207,
"s": 4179,
"text": "Website: https://polygon.io"
},
{
"code": null,
"e": 4303,
"s": 4207,
"text": "It’s $199/month only for the US stock market. This is might be not a good choice for beginners."
},
{
"code": null,
"e": 4333,
"s": 4303,
"text": "Website: https://intrinio.com"
},
{
"code": null,
"e": 4651,
"s": 4333,
"text": "It’s $75/month only for the realtime stock market. Also, for EOD price data, it’s $40/month. You can get EOD price data almost free from other APIs I suggest. Even though they have 206 pricing feeds, ten financial data feeds and tons of other data to subscribe. The price is not that friendly for independent traders."
},
{
"code": null,
"e": 4684,
"s": 4651,
"text": "Website: https://www.quandl.com/"
},
{
"code": null,
"e": 4887,
"s": 4684,
"text": "Quandl is an aggregated marketplace for financial, economic and other related APIs. Quandl aggregates APIs from third-party marketplaces as services for users to purchase whatever APIs they want to use."
},
{
"code": null,
"e": 5104,
"s": 4887,
"text": "So you need to subscribe to the different marketplace to get different financial data. And different APIs will have different price systems. Some are free and others are subscription-based or one-time-purchase based."
},
{
"code": null,
"e": 5158,
"s": 5104,
"text": "Also, Quandl has an analysis tool inside its website."
},
{
"code": null,
"e": 5215,
"s": 5158,
"text": "Quandl is a good platform if you don’t care about money."
}
] |
What is RowSet? How to retrieve contents of a table using RowSet? Explain? | A RowSet object is similar to ResultSet, it also stores tabular data, in addition to the features of a ResultSet a RowSet follows JavaBeans component model. This can be used as a JavaBeans component in a visual Bean development environment, i.e. in environments like IDE’s you can visually manipulate these properties.
The RowSet interface provides methods to set Java bean properties to connect it to the required database −
void setUrl(String url) − specifies the URL of the database you need to connect to.
void setUsername(String name) − specifies the user name of the database.
void setPassword(String password) − specifies the password of the user.
A RowSet object contains properties, and each property has setter and getter methods. Using these, you can set and get the values of properties such as the command property.
To set and get values of different datatypes, the RowSet interface provides various setter and getter methods such as setInt(), getInt(), setFloat(), getFloat(), setTimestamp(), getTimestamp(), etc.
Since RowSet objects follow the JavaBeans event model, whenever events such as cursor/pointer movement, insertion/deletion/updation of a row, or a change in the RowSet contents occur, all the registered components (components that implement the RowSetListener interface) are notified.
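A minimal sketch of such a listener (the class name and the messages are ours):

import javax.sql.RowSetEvent;
import javax.sql.RowSetListener;

public class DispatchListener implements RowSetListener {
   public void cursorMoved(RowSetEvent event) {
      System.out.println("Cursor moved");
   }
   public void rowChanged(RowSetEvent event) {
      System.out.println("A row was inserted, updated or deleted");
   }
   public void rowSetChanged(RowSetEvent event) {
      System.out.println("The whole RowSet changed");
   }
}

It can then be registered on a RowSet object with rowSet.addRowSetListener(new DispatchListener());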
Assume we have a table named Dispatches in the database with 5 records as shown below −
+----+-------------+--------------+--------------+--------------+-------+----------------+
| ID | ProductName | CustomerName | DispatchDate | DeliveryTime | Price | Location |
+----+-------------+--------------+--------------+--------------+-------+----------------+
| 1 | Key-Board | Raja | 2019-09-01 | 05:30:00 | 7000 | Hyderabad |
| 2 | Earphones | Roja | 2019-05-01 | 05:30:00 | 2000 | Vishakhapatnam |
| 3 | Mouse | Puja | 2019-03-01 | 05:29:59 | 3000 | Vijayawada |
| 4 | Mobile | Vanaja | 2019-03-01 | 04:40:52 | 9000 | Chennai |
| 5 | Headset | Jalaja | 2019-04-06 | 18:38:59 | 6000 | Goa |
+----+-------------+--------------+--------------+--------------+-------+----------------+
The following JDBC program retrieves the Product Name, Customer Name, Price and Location of the delivery for the records with price greater than 2000 and displays the result.
import java.sql.DriverManager;
import javax.sql.RowSet;
import javax.sql.rowset.RowSetProvider;
public class RowSetExample {
public static void main(String args[]) throws Exception {
//Registering the Driver
DriverManager.registerDriver(new com.mysql.jdbc.Driver());
//Creating the RowSet object
RowSet rowSet = RowSetProvider.newFactory().createJdbcRowSet();
//Setting the URL
String mysqlUrl = "jdbc:mysql://localhost/SampleDB";
rowSet.setUrl(mysqlUrl);
//Setting the user name
rowSet.setUsername("root");
//Setting the password
rowSet.setPassword("password");
      //Setting the query/command
      rowSet.setCommand("SELECT ProductName, CustomerName, Price, Location from Dispatches where price > ?");
rowSet.setInt(1, 2000);
rowSet.execute();
System.out.println("Contents of the table");
while(rowSet.next()) {
System.out.print("Product Name: "+rowSet.getString("ProductName")+", ");
System.out.print("Customer Name: "+rowSet.getString("CustomerName")+", ");
System.out.print("Price: "+rowSet.getString("Price")+", ");
System.out.print("Location: "+rowSet.getString("Location"));
System.out.println("");
}
}
}
Product Name: Key-Board, Customer Name: Raja, Price: 7000, Location: Hyderabad
Product Name: Mouse, Customer Name: Puja, Price: 3000, Location: Vijayawada
Product Name: Mobile, Customer Name: Vanaja, Price: 9000, Location: Chennai
Product Name: Headset, Customer Name: Jalaja, Price: 6000, Location: Goa | [
{
"code": null,
"e": 1381,
"s": 1062,
"text": "A RowSet object is similar to ResultSet, it also stores tabular data, in addition to the features of a ResultSet a RowSet follows JavaBeans component model. This can be used as a JavaBeans component in a visual Bean development environment, i.e. in environments like IDE’s you can visually manipulate these properties."
},
{
"code": null,
"e": 1488,
"s": 1381,
"text": "The RowSet interface provides methods to set Java bean properties to connect it to the required database −"
},
{
"code": null,
"e": 1513,
"s": 1488,
"text": "void setURL(String url):"
},
{
"code": null,
"e": 1549,
"s": 1513,
"text": "void setUserName(String user_name):"
},
{
"code": null,
"e": 1584,
"s": 1549,
"text": "void setPassword(String password):"
},
{
"code": null,
"e": 1734,
"s": 1584,
"text": "A RowSet object contains properties and each property have Setter and getter methods. Using these you can set and get values from a command property."
},
{
"code": null,
"e": 1931,
"s": 1734,
"text": "To set and get values of different datatypes the RowSet interface provides various setter and getter methods such as setInt(), getInt(), setFloat(), getFloat(), setTimestamp, getTimeStamp() etc..."
},
{
"code": null,
"e": 2203,
"s": 1931,
"text": "Since RowSet object follow JavaBeans event model. Whenever events like cursor/pointer movement, insertion/deletion/updation of a row, change in the RowSet contents occur. All the registered components (components that implemented the RowSetListener methods) are notified."
},
{
"code": null,
"e": 2291,
"s": 2203,
"text": "Assume we have a table named Dispatches in the database with 5 records as shown below −"
},
{
"code": null,
"e": 3110,
"s": 2291,
"text": "+----+-------------+--------------+--------------+--------------+-------+----------------+\n| ID | ProductName | CustomerName | DispatchDate | DeliveryTime | Price | Location |\n+----+-------------+--------------+--------------+--------------+-------+----------------+\n| 1 | Key-Board | Raja | 2019-09-01 | 05:30:00 | 7000 | Hyderabad |\n| 2 | Earphones | Roja | 2019-05-01 | 05:30:00 | 2000 | Vishakhapatnam |\n| 3 | Mouse | Puja | 2019-03-01 | 05:29:59 | 3000 | Vijayawada |\n| 4 | Mobile | Vanaja | 2019-03-01 | 04:40:52 | 9000 | Chennai |\n| 5 | Headset | Jalaja | 2019-04-06 | 18:38:59 | 6000 | Goa |\n+----+-------------+--------------+--------------+--------------+-------+----------------+"
},
{
"code": null,
"e": 3273,
"s": 3110,
"text": "Following JDBC programs retrieves the Product Name, Customer Name, Location of the delivery, of the records with price greater than 5000 and, displays the result."
},
{
"code": null,
"e": 4589,
"s": 3273,
"text": "import java.sql.DriverManager;\nimport javax.sql.RowSet;\nimport javax.sql.rowset.RowSetProvider;\npublic class RowSetExample {\n public static void main(String args[]) throws Exception {\n //Registering the Driver\n DriverManager.registerDriver(new com.mysql.jdbc.Driver());\n //Creating the RowSet object\n RowSet rowSet = RowSetProvider.newFactory().createJdbcRowSet();\n //Setting the URL\n String mysqlUrl = \"jdbc:mysql://localhost/SampleDB\";\n rowSet.setUrl(mysqlUrl);\n //Setting the user name\n rowSet.setUsername(\"root\");\n //Setting the password\n rowSet.setPassword(\"password\");\n //Setting the query/command\n rowSet.setCommand(\"select * from Dispatches\");\n rowSet.setCommand(\"SELECT ProductName, CustomerName, Price, Location from Dispatches where price > ?\");\n rowSet.setInt(1, 2000);\n rowSet.execute();\n System.out.println(\"Contents of the table\");\n while(rowSet.next()) {\n System.out.print(\"Product Name: \"+rowSet.getString(\"ProductName\")+\", \");\n System.out.print(\"Customer Name: \"+rowSet.getString(\"CustomerName\")+\", \");\n System.out.print(\"Price: \"+rowSet.getString(\"Price\")+\", \");\n System.out.print(\"Location: \"+rowSet.getString(\"Location\"));\n System.out.println(\"\");\n }\n }\n}"
},
{
"code": null,
"e": 4903,
"s": 4589,
"text": "Product Name: Key-Board, Customer Name: Raja, Price: 7000, Location: Hyderabad\nProduct Name: Mouse, Customer Name: Puja, Price: 3000, Location: Vijayawada\nProduct Name: Mobile, Customer Name: Vanaja, Price: 9000, Location: Vijayawada\nProduct Name: Headset, Customer Name: Jalaja, Price: 6000, Location: Vijayawada"
}
] |
Recognising Traffic Signs With 98% Accuracy Using Deep Learning | by Eddie Forson | Towards Data Science | This is project 2 of Term 1 of the Udacity Self-Driving Car Engineer Nanodegree. You can find all code related to this project on github. You can also read my post on project 1: Detecting Lane Lines Using Computer Vision by simply clicking on the link.
Traffic signs are an integral part of our road infrastructure. They provide critical information, sometimes compelling recommendations, for road users, which in turn requires them to adjust their driving behaviour to make sure they adhere to whatever road regulation is currently enforced. Without such useful signs, we would most likely be faced with more accidents, as drivers would not be given critical feedback on how fast they could safely go, or informed about road works, sharp turns, or school crossings ahead. In our modern age, around 1.3M people die on roads each year. This number would be much higher without our road signs. Naturally, autonomous vehicles must also abide by road legislation and therefore recognize and understand traffic signs.
Traditionally, standard computer vision methods were employed to detect and classify traffic signs, but these required considerable and time-consuming manual work to handcraft important features in images. Instead, by applying deep learning to this problem, we create a model that reliably classifies traffic signs, learning to identify the most appropriate features for this problem by itself. In this post, I show how we can create a deep learning architecture that can identify traffic signs with close to 98% accuracy on the test set.
The dataset is split into training, test and validation sets, with the following characteristics:
Images are 32 (width) x 32 (height) x 3 (RGB color channels)
Training set is composed of 34799 images
Validation set is composed of 4410 images
Test set is composed of 12630 images
There are 43 classes (e.g. Speed Limit 20km/h, No entry, Bumpy road, etc.)
Moreover, we will be using Python 3.5 with Tensorflow to write our code.
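For reference, the dataset is typically distributed as pickled files. A loading sketch could look as follows (the file and key names are assumptions based on the usual format of this dataset):
import pickle

with open("train.p", "rb") as f:
    train = pickle.load(f)

X_train, y_train = train["features"], train["labels"]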
You can see below a sample of the images from the dataset, with labels displayed above the row of corresponding images. Some of them are quite dark so we will look to improve contrast a bit later.
There is also a significant imbalance across classes in the training set, as shown in the histogram below. Some classes have less than 200 images, while others have over 2000. This means that our model could be biased towards over-represented classes, especially when it is unsure in its predictions. We will see later how we can mitigate this discrepancy using data augmentation.
We initially apply two pre-processing steps to our images:
Grayscale: We convert our 3 channel image to a single grayscale image (we do the same thing in project 1 — Lane Line Detection — you can read my blog post about it HERE).
Image Normalisation: We center the distribution of the image dataset by subtracting each image by the dataset mean and dividing by its standard deviation. This helps our model treat images uniformly. The resulting images look as follows:
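As a rough sketch, these two steps could be implemented as follows (assuming imgs is a numpy array of RGB images):
import cv2
import numpy as np

# convert each RGB image to a single grayscale channel
gray = np.array([cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) for img in imgs])
gray = gray[..., np.newaxis].astype(np.float32)

# center on the dataset mean and scale by its standard deviation
normalized = (gray - gray.mean()) / gray.std()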
The architecture proposed is inspired by Yann Le Cun’s paper on classification of traffic signs. We added a few tweaks and created a modular codebase which allows us to try out different filter sizes, depth, and number of convolution layers, as well as the dimensions of fully connected layers. In homage to Le Cun, and with a touch of cheekiness, we called such network EdLeNet :).
We mainly tried 5x5 and 3x3 filter (aka kernel) sizes, and start with depth of 32 for our first convolutional layer. EdLeNet’s 3x3 architecture is shown below:
The network is composed of 3 convolutional layers — kernel size is 3x3, with depth doubling at next layer — using ReLU as the activation function, each followed by a 2x2 max pooling operation. The last 3 layers are fully connected, with the final layer producing 43 results (the total number of possible labels) computed using the SoftMax activation function. The network is trained using mini-batch stochastic gradient descent with the Adam optimizer. We build a highly modular coding infrastructure that enables us to dynamically create our models like in the following snippets:
mc_3x3 = ModelConfig(EdLeNet, "EdLeNet_Norm_Grayscale_3x3_Dropout_0.50", [32, 32, 1], [3, 32, 3], [120, 84], n_classes, [0.75, 0.5])
mc_5x5 = ModelConfig(EdLeNet, "EdLeNet_Norm_Grayscale_5x5_Dropout_0.50", [32, 32, 1], [5, 32, 2], [120, 84], n_classes, [0.75, 0.5])
me_g_norm_drpt_0_50_3x3 = ModelExecutor(mc_3x3)
me_g_norm_drpt_0_50_5x5 = ModelExecutor(mc_5x5)
The ModelConfig contains information about the model such as:
The model function (e.g. EdLeNet)
the model name
input format (e.g. [32, 32, 1] for grayscale),
convolutional layers config [filter size, start depth, number of layers],
fully connected layers dimensions (e.g. [120, 84])
number of classes
dropout keep percentage values [p-conv, p-fc]
The ModelExecutor is responsible for training, evaluating, predicting, and producing visualizations of our activation maps.
To better isolate our models and make sure they do not all exist under the same Tensorflow graph, we use the following useful construct:
self.graph = tf.Graph()
with self.graph.as_default() as g:
    with g.name_scope(self.model_config.name) as scope:
        ...
with tf.Session(graph=self.graph) as sess:
    ...
This way, we create separate graphs for every model, making sure there is no mixing of our variables, placeholders etc. It’s saved me a lot of headaches.
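To make the architecture itself concrete, here is a minimal Keras-style sketch of the 3x3 EdLeNet described above. It is only an illustration of the layer stack, not the project's actual TensorFlow code, and dropout is omitted for brevity:
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 1)),
    layers.MaxPooling2D(2),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(43, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])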
We actually started with a convolutional depth of 16, but obtained better results with 32 so settled on this value. We also compared color vs grayscale, standard and normalised images, and saw that grayscale tended to outperform color. Unfortunately, we barely scratched 93% test set accuracy on 3x3 or 5x5 models, not consistently reaching this milestone. Moreover, we observed some erratic loss behaviour on the validation set after a given number of epochs, which actually meant our model was overfitting on the training set and not generalising. You can see below some of our metric graphs for different model configurations.
In order to improve the model reliability, we turned to dropout, which is a form of regularisation where weights are kept with a probability p: the unkept weights are thus “dropped”. This prevents the model from overfitting. Dropout was introduced by Geoffrey Hinton, a pioneer in the deep learning space. His group’s paper on this topic is a must-read to better understand the authors’ motivations. There’s also a fascinating parallel with biology and evolution. In the paper, the authors apply varying degrees of dropout, depending on the type of layer. I therefore decided to adopt a similar approach, defining two levels of dropout, one for convolutional layers, the other for fully connected layers:
p-conv: probability of keeping weight in convolutional layer
p-fc: probability of keeping weight in fully connected layer
Moreover, the authors gradually adopted more aggressive (i.e. lower) values of dropout as they go deeper in the network. Therefore I also decided:
p-conv >= p-fc
that is, we will keep weights with a greater than or equal probability in the convolutional than fully connected layers. The way to reason about this is that we treat the network as a funnel and therefore want to gradually tighten it as we move deeper into the layers: we don’t want to discard too much information at the start as some of it would be extremely valuable. Besides, as we apply MaxPooling in the convolutional layers, we are already losing a bit of information.
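In TF 1.x terms, this amounts to something like the sketch below (the placeholder and tensor names are illustrative; the keep probabilities are fed as 1.0 at evaluation time):
p_conv = tf.placeholder(tf.float32, name="p_conv")
p_fc = tf.placeholder(tf.float32, name="p_fc")

# after a convolutional block
conv_out = tf.nn.dropout(conv_out, keep_prob=p_conv)
# after a fully connected layer
fc_out = tf.nn.dropout(fc_out, keep_prob=p_fc)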
We tried different parameters but ultimately settled on p-conv=0.75 and p-fc=0.5, which enabled us to achieve a test set accuracy of 97.55% on normalised grayscale images with the 3x3 model. Interestingly, we achieved over 98.3% accuracy on the validation set:
Training EdLeNet_Norm_Grayscale_3x3_Dropout_0.50 [epochs=100, batch_size=512]
...
[1]   total=5.222s | train: time=3.139s, loss=3.4993, acc=0.1047 | val: time=2.083s, loss=3.5613, acc=0.1007
[10]  total=5.190s | train: time=3.122s, loss=0.2589, acc=0.9360 | val: time=2.067s, loss=0.3260, acc=0.8973
...
[90]  total=5.193s | train: time=3.120s, loss=0.0006, acc=0.9999 | val: time=2.074s, loss=0.0747, acc=0.9841
[100] total=5.191s | train: time=3.123s, loss=0.0004, acc=1.0000 | val: time=2.068s, loss=0.0849, acc=0.9832
Model ./models/EdLeNet_Norm_Grayscale_3x3_Dropout_0.50.chkpt saved
[EdLeNet_Norm_Grayscale_3x3_Dropout_0.50 - Test Set] time=0.686s, loss=0.1119, acc=0.9755
The graphs above show that the model is smooth, unlike some of the graphs higher up. We have already achieved the objective of scoring over 93% accuracy on the test set, but can we do better? Remember that some of the images were blurry and the distribution of images per class was very uneven. We explore below additional techniques we used to tackle each point.
Histogram Equalization is a computer vision technique used to increase the contrast in images. As some of our images suffer from low contrast (blurry, dark), we will improve visibility by applying OpenCV’s Contrast Limiting Adaptive Histogram Equalization (aka CLAHE) function.
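A minimal sketch of this step (the clip limit and tile grid size values are illustrative):
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(4, 4))
img_eq = clahe.apply(gray_img)  # gray_img: a single-channel uint8 image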
We once again try various configurations, and find the best results, with test accuracy of 97.75%, on the 3x3 model using the following dropout values: p-conv=0.6, p-fc=0.5 .
Training EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50 [epochs=500, batch_size=512]
...
[1]   total=5.194s | train: time=3.137s, loss=3.6254, acc=0.0662 | val: time=2.058s, loss=3.6405, acc=0.0655
[10]  total=5.155s | train: time=3.115s, loss=0.8645, acc=0.7121 | val: time=2.040s, loss=0.9159, acc=0.6819
...
[480] total=5.149s | train: time=3.106s, loss=0.0009, acc=0.9998 | val: time=2.042s, loss=0.0355, acc=0.9884
[490] total=5.148s | train: time=3.106s, loss=0.0007, acc=0.9998 | val: time=2.042s, loss=0.0390, acc=0.9884
[500] total=5.148s | train: time=3.104s, loss=0.0006, acc=0.9999 | val: time=2.044s, loss=0.0420, acc=0.9862
Model ./models/EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50.chkpt saved
[EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50 - Test Set] time=0.675s, loss=0.0890, acc=0.9775
We show below graphs of previous runs where we tested the 5x5 model as well, over 220 epochs. We can see a much smoother curve here, reinforcing our intuition that the model we have is more stable.
We identified 269 images that our model could not identify correctly. We display 10 of them below, chosen randomly, to conjecture why the model was wrong.
Some of the images are very blurry, despite our histogram equalization, while others seem distorted. We probably don’t have enough examples of such images in our test set for our model’s predictions to improve. Additionally, while 97.75% test accuracy is very good, we still have one more ace up our sleeve: data augmentation.
We observed earlier that the data presented glaring imbalance across the 43 classes. Yet it does not seem to be a crippling problem as we are able to reach very high accuracy despite the class imbalance. We also noticed that some images in the test set are distorted. We are therefore going to use data augmentation techniques in an attempt to:
Extend dataset and provide additional pictures in different lighting settings and orientations
Improve model’s ability to become more generic
Improve test and validation accuracy, especially on distorted images
We use a nifty library called imgaug to create our augmentations. We mainly apply affine transformations to augment the images. Our code looks as follows:
def augment_imgs(imgs, p):
    """
    Performs a set of augmentations with a probability p
    """
    augs = iaa.SomeOf((2, 4), [
        iaa.Crop(px=(0, 4)),  # crop images from each side by 0 to 4px (randomly chosen)
        iaa.Affine(scale={"x": (0.8, 1.2), "y": (0.8, 1.2)}),
        iaa.Affine(translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)}),
        iaa.Affine(rotate=(-45, 45)),  # rotate by -45 to +45 degrees
        iaa.Affine(shear=(-10, 10))  # shear by -10 to +10 degrees
    ])
    seq = iaa.Sequential([iaa.Sometimes(p, augs)])
    return seq.augment_images(imgs)
While the class imbalance probably causes some bias in the model, we have decided not to address it at this stage as it would cause our dataset to swell significantly and lengthen our training time (we don’t have a lot of time to spend on training at this stage). Instead, we decided to augment each class by 10%. Our new dataset looks as follows.
The distribution of images does not change significantly of course, but we do apply grayscale, histogram equalization and normalisation pre-processing steps to our images. We train for 2000 epochs with dropout (p-conv=0.6, p-fc=0.5) and achieve 97.86% accuracy on the test set:
[EdLeNet] Building neural network [conv layers=3, conv filter size=3, conv start depth=32, fc layers=2]
Training EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50 [epochs=2000, batch_size=512]
...
[1]    total=5.824s | train: time=3.594s, loss=3.6283, acc=0.0797 | val: time=2.231s, loss=3.6463, acc=0.0687
...
[1970] total=5.627s | train: time=3.408s, loss=0.0525, acc=0.9870 | val: time=2.219s, loss=0.0315, acc=0.9914
[1980] total=5.627s | train: time=3.409s, loss=0.0530, acc=0.9862 | val: time=2.218s, loss=0.0309, acc=0.9902
[1990] total=5.628s | train: time=3.412s, loss=0.0521, acc=0.9869 | val: time=2.216s, loss=0.0302, acc=0.9900
[2000] total=5.632s | train: time=3.415s, loss=0.0521, acc=0.9869 | val: time=2.217s, loss=0.0311, acc=0.9902
Model ./models/EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50.chkpt saved
[EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50 - Test Set] time=0.678s, loss=0.0842, acc=0.9786
But... if you look at the loss metric on the training set, you can see that at 0.0521, we most likely still have some wiggle room. We are planning to train for more epochs and will report on our new results in the future.
We decided to test our model on new images as well, to make sure that it’s indeed generalised to more than the traffic signs in our original dataset. We therefore downloaded five new images and submitted them to our model for predictions.
The ground truth for the images is as follows:
['Speed limit (120km/h)', 'Priority road', 'No vehicles', 'Road work', 'Vehicles over 3.5 metric tons prohibited']
The images were chosen because of the following:
They represent different traffic signs that we currently classify
They vary in shape and color
They are under different lighting conditions (the 4th one has sunlight reflection)
They are under different orientations (the 3rd one is slanted)
They have different backgrounds
The last image is actually a design, not a real picture, and we wanted to test the model against it
Some of them are in under-represented classes
The first step we took was to apply the same CLAHE to those new images, resulting in the following:
We achieve perfect accuracy of 100% on the new images. On the original test set, we achieved 97.86% accuracy. We could explore blurring/distorting our new images or modifying contrast to see how the model handles those changes in the future.
new_img_grayscale_norm_pred_acc = np.sum(new_img_lbs == preds) / len(preds)
print("[Grayscale Normalised] Prediction accuracy on new images: {0}%".format(new_img_grayscale_norm_pred_acc * 100))
...
[Grayscale Normalised] Prediction accuracy on new images: 100.0%
We also show the top 5 SoftMax probabilities computed for each image, with the green bar showing the ground truth. We can clearly see that our model is quite confident in its predictions. In the worst case (last image), the 2nd most likely prediction has a probability of around 0.1%. In fact, our model struggles most on the last image, which I believe is actually a design and not even a real picture. Overall, we have developed a strong model!
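For reference, the top 5 probabilities can be computed with tf.nn.top_k, as in the sketch below (logits is assumed to hold the network's raw outputs):
probs = tf.nn.softmax(logits)
top5 = tf.nn.top_k(probs, k=5)
# top5.values holds the 5 highest probabilities, top5.indices the matching class ids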
We show below the results produced by each convolutional layer (before max pooling), resulting in 3 activation maps.
We can see that the network is focusing a lot on the edges of the circle and somehow on the truck. The background is mostly ignored.
It is rather hard to determine what the network is focusing on in layer 2, but it seems to “activate” around the edges of the circle and in the middle, where the truck appears.
Layer 3
This activation map is also hard to decipher... But it seems the network reacts to stimuli on the edges and in the middle once again.
We covered how deep learning can be used to classify traffic signs with high accuracy, employing a variety of pre-processing and regularization techniques (e.g. dropout), and trying different model architectures. We built highly configurable code and developed a flexible way of evaluating multiple architectures. Our model reached close to 98% accuracy on the test set, achieving 99% on the validation set.
Personally, I thoroughly enjoyed this project and gained practical experience using Tensorflow, matplotlib and investigating artificial neural network architectures. Moreover, I delved into some seminal papers in this field, which reinforced my understanding and more importantly refined my intuition about deep learning.
In the future, I believe higher accuracy can be achieved by applying further regularization techniques such as batch normalization and also by adopting more modern architectures such as GoogLeNet’s Inception Module, ResNet, or Xception.
Thanks for reading this post. I hope you found it useful. I’m now building a new startup called EnVsion! At EnVsion, we’re creating the central repository for UX researchers and product teams to unlock the insights from their user interview videos. And of course we use AI for this ;).
If you’re a UX researcher or product manager feeling overwhelmed with all your video calls with users and customers, then EnVsion is for you!
You can also follow me on Twitter. | [
{
"code": null,
"e": 425,
"s": 172,
"text": "This is project 2 of Term 1 of the Udacity Self-Driving Car Engineer Nanodegree. You can find all code related to this project on github. You can also read my post on project 1: Detecting Lane Lines Using Computer Vision by simply clicking on the link."
},
{
"code": null,
"e": 1183,
"s": 425,
"text": "Traffic signs are an integral part of our road infrastructure. They provide critical information, sometimes compelling recommendations, for road users, which in turn requires them to adjust their driving behaviour to make sure they adhere with whatever road regulation currently enforced. Without such useful signs, we would most likely be faced with more accidents, as drivers would not be given critical feedback on how fast they could safely go, or informed about road works, sharp turn, or school crossings ahead. In our modern age, around 1.3M people die on roads each year. This number would be much higher without our road signs. Naturally, autonomous vehicles must also abide by road legislation and therefore recognize and understand traffic signs."
},
{
"code": null,
"e": 1722,
"s": 1183,
"text": "Traditionally, standard computer vision methods were employed to detect and classify traffic signs, but these required considerable and time-consuming manual work to handcraft important features in images. Instead, by applying deep learning to this problem, we create a model that reliably classifies traffic signs, learning to identify the most appropriate features for this problem by itself. In this post, I show how we can create a deep learning architecture that can identify traffic signs with close to 98% accuracy on the test set."
},
{
"code": null,
"e": 1819,
"s": 1722,
"text": "The dataset is plit into training, test and validation sets, with the following characteristics:"
},
{
"code": null,
"e": 1880,
"s": 1819,
"text": "Images are 32 (width) x 32 (height) x 3 (RGB color channels)"
},
{
"code": null,
"e": 1921,
"s": 1880,
"text": "Training set is composed of 34799 images"
},
{
"code": null,
"e": 1963,
"s": 1921,
"text": "Validation set is composed of 4410 images"
},
{
"code": null,
"e": 2000,
"s": 1963,
"text": "Test set is composed of 12630 images"
},
{
"code": null,
"e": 2075,
"s": 2000,
"text": "There are 43 classes (e.g. Speed Limit 20km/h, No entry, Bumpy road, etc.)"
},
{
"code": null,
"e": 2148,
"s": 2075,
"text": "Moreover, we will be using Python 3.5 with Tensorflow to write our code."
},
{
"code": null,
"e": 2345,
"s": 2148,
"text": "You can see below a sample of the images from the dataset, with labels displayed above the row of corresponding images. Some of them are quite dark so we will look to improve contrast a bit later."
},
{
"code": null,
"e": 2726,
"s": 2345,
"text": "There is also a significant imbalance across classes in the training set, as shown in the histogram below. Some classes have less than 200 images, while others have over 2000. This means that our model could be biased towards over-represented classes, especially when it is unsure in its predictions. We will see later how we can mitigate this discrepancy using data augmentation."
},
{
"code": null,
"e": 2785,
"s": 2726,
"text": "We initially apply two pre-processing steps to our images:"
},
{
"code": null,
"e": 2954,
"s": 2785,
"text": "GrayscaleWe convert our 3 channel image to a single grayscale image (we do the same thing in project 1 — Lane Line Detection — you can read my blog post about it HERE)."
},
{
"code": null,
"e": 3191,
"s": 2954,
"text": "Image NormalisationWe center the distribution of the image dataset by subtracting each image by the dataset mean and divide by its standard deviation. This helps our model treating images uniformly. The resulting images look as follows:"
},
{
"code": null,
"e": 3576,
"s": 3191,
"text": "The architecture proposed is inspired from Yann Le Cun’s paper on classification of traffic signs. We added a few tweaks and created a modular codebase which allows us to try out different filter sizes, depth, and number of convolution layers, as well as the dimensions of fully connected layers. In homage to Le Cun, and with a touch of cheekiness, we called such network EdLeNet :)."
},
{
"code": null,
"e": 3736,
"s": 3576,
"text": "We mainly tried 5x5 and 3x3 filter (aka kernel) sizes, and start with depth of 32 for our first convolutional layer. EdLeNet’s 3x3 architecture is shown below:"
},
{
"code": null,
"e": 4318,
"s": 3736,
"text": "The network is composed of 3 convolutional layers — kernel size is 3x3, with depth doubling at next layer — using ReLU as the activation function, each followed by a 2x2 max pooling operation. The last 3 layers are fully connected, with the final layer producing 43 results (the total number of possible labels) computed using the SoftMax activation function. The network is trained using mini-batch stochastic gradient descent with the Adam optimizer. We build a highly modular coding infrastructure that enables us to dynamically create our models like in the following snippets:"
},
{
"code": null,
"e": 4677,
"s": 4318,
"text": "mc_3x3 = ModelConfig(EdLeNet, \"EdLeNet_Norm_Grayscale_3x3_Dropout_0.50\", [32, 32, 1], [3, 32, 3], [120, 84], n_classes, [0.75, 0.5])mc_5x5 = ModelConfig(EdLeNet, \"EdLeNet_Norm_Grayscale_5x5_Dropout_0.50\", [32, 32, 1], [5, 32, 2], [120, 84], n_classes, [0.75, 0.5])me_g_norm_drpt_0_50_3x3 = ModelExecutor(mc_3x3)me_g_norm_drpt_0_50_5x5 = ModelExecutor(mc_5x5)"
},
{
"code": null,
"e": 4739,
"s": 4677,
"text": "The ModelConfig contains information about the model such as:"
},
{
"code": null,
"e": 4773,
"s": 4739,
"text": "The model function (e.g. EdLeNet)"
},
{
"code": null,
"e": 4788,
"s": 4773,
"text": "the model name"
},
{
"code": null,
"e": 4835,
"s": 4788,
"text": "input format (e.g. [32, 32, 1] for grayscale),"
},
{
"code": null,
"e": 4909,
"s": 4835,
"text": "convolutional layers config [filter size, start depth, number of layers],"
},
{
"code": null,
"e": 4960,
"s": 4909,
"text": "fully connected layers dimensions (e.g. [120, 84])"
},
{
"code": null,
"e": 4978,
"s": 4960,
"text": "number of classes"
},
{
"code": null,
"e": 5024,
"s": 4978,
"text": "dropout keep percentage values [p-conv, p-fc]"
},
{
"code": null,
"e": 5147,
"s": 5024,
"text": "The ModelExecutor is reponsible for training, evaluating, predicting, and producing visualizations of our activation maps."
},
{
"code": null,
"e": 5284,
"s": 5147,
"text": "To better isolate our models and make sure they do not all exist under the same Tensorflow graph, we use the following useful construct:"
},
{
"code": null,
"e": 5446,
"s": 5284,
"text": "self.graph = tf.Graph()with self.graph.as_default() as g: with g.name_scope( self.model_config.name ) as scope:...with tf.Session(graph = self.graph) as sess:"
},
{
"code": null,
"e": 5600,
"s": 5446,
"text": "This way, we create separate graphs for every model, making sure there is no mixing of our variables, placeholders etc. It’s saved me a lot of headaches."
},
{
"code": null,
"e": 6230,
"s": 5600,
"text": "We actually started with a convolutional depth of 16, but obtained better results with 32 so settled on this value. We also compared color vs grayscale, standard and normalised images, and saw that grayscale tended to outperform color. Unfortunately, we barely scratched 93% test set accuracy on 3x3 or 5x5 models, not consistently reaching this milestone. Moreover, we observed some erratic loss behaviour on the validation set after a given number of epochs, which actually meant our model was overfitting on the training set and not generalising. You can see below some of our metric graphs for different model configurations."
},
{
"code": null,
"e": 6944,
"s": 6230,
"text": "In order to improve the model reliability, we turned to dropout, which is a form of regularisation where weights are kept with a probability p: the unkept weights are thus “dropped”. This prevents the model from overfitting. Dropout was introduced by Geoffrey Hinton, a pioneer in the deep learning space. His group’s paper on this topic is a must read to better understand the motivations behind the authors. There’s also a fascinating parallel with biology and evolution.In the paper, the authors apply varying degrees of dropout, depending on the type of layer. I therefore decided to adopt a similar approach, defining two levels of dropout, one for convolutional layers, the other for fully connected layers:"
},
{
"code": null,
"e": 7065,
"s": 6944,
"text": "p-conv: probability of keeping weight in convolutional layerp-fc: probability of keeping weight in fully connected layer"
},
{
"code": null,
"e": 7212,
"s": 7065,
"text": "Moreover, the authors gradually adopted more aggressive (i.e. lower) values of dropout as they go deeper in the network. Therefore I also decided:"
},
{
"code": null,
"e": 7227,
"s": 7212,
"text": "p-conv >= p-fc"
},
{
"code": null,
"e": 7703,
"s": 7227,
"text": "that is, we will keep weights with a greater than or equal probability in the convolutional than fully connected layers. The way to reason about this is that we treat the network as a funnel and therefore want to gradually tighten it as we move deeper into the layers: we don’t want to discard too much information at the start as some of it would be extremely valuable. Besides, as we apply MaxPooling in the convolutional layers, we are already losing a bit of information."
},
{
"code": null,
"e": 7964,
"s": 7703,
"text": "We tried different paratemers but ultimately settled on p-conv=0.75 and p-fc=0.5, which enabled us to achieve a test set accuracy of 97.55% on normalised grayscale images with the 3x3 model. Interestingly, we achieved over 98.3% accuracy on the validation set:"
},
{
"code": null,
"e": 8631,
"s": 7964,
"text": "Training EdLeNet_Norm_Grayscale_3x3_Dropout_0.50 [epochs=100, batch_size=512]...[1]\ttotal=5.222s | train: time=3.139s, loss=3.4993, acc=0.1047 | val: time=2.083s, loss=3.5613, acc=0.1007[10]\ttotal=5.190s | train: time=3.122s, loss=0.2589, acc=0.9360 | val: time=2.067s, loss=0.3260, acc=0.8973...[90]\ttotal=5.193s | train: time=3.120s, loss=0.0006, acc=0.9999 | val: time=2.074s, loss=0.0747, acc=0.9841[100]\ttotal=5.191s | train: time=3.123s, loss=0.0004, acc=1.0000 | val: time=2.068s, loss=0.0849, acc=0.9832Model ./models/EdLeNet_Norm_Grayscale_3x3_Dropout_0.50.chkpt saved[EdLeNet_Norm_Grayscale_3x3_Dropout_0.50 - Test Set]\ttime=0.686s, loss=0.1119, acc=0.9755"
},
{
"code": null,
"e": 8995,
"s": 8631,
"text": "The graphs above show that the model is smooth, unlike some of the graphs higher up. We have already achieved the objective of scoring over 93% accuracy on the test set, but can we do better? Remember that some of the images were blurry and the distribution of images per class was very uneven. We explore below additional techniques we used to tackle each point."
},
{
"code": null,
"e": 9273,
"s": 8995,
"text": "Histogram Equalization is a computer vision technique used to increase the contrast in images. As some of our images suffer from low contrast (blurry, dark), we will improve visibility by applying OpenCV’s Contrast Limiting Adaptive Histogram Equalization (aka CLAHE) function."
},
{
"code": null,
"e": 9448,
"s": 9273,
"text": "We once again try various configurations, and find the best results, with test accuracy of 97.75%, on the 3x3 model using the following dropout values: p-conv=0.6, p-fc=0.5 ."
},
{
"code": null,
"e": 10263,
"s": 9448,
"text": "Training EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50 [epochs=500, batch_size=512]...[1]\ttotal=5.194s | train: time=3.137s, loss=3.6254, acc=0.0662 | val: time=2.058s, loss=3.6405, acc=0.0655[10]\ttotal=5.155s | train: time=3.115s, loss=0.8645, acc=0.7121 | val: time=2.040s, loss=0.9159, acc=0.6819...[480]\ttotal=5.149s | train: time=3.106s, loss=0.0009, acc=0.9998 | val: time=2.042s, loss=0.0355, acc=0.9884[490]\ttotal=5.148s | train: time=3.106s, loss=0.0007, acc=0.9998 | val: time=2.042s, loss=0.0390, acc=0.9884[500]\ttotal=5.148s | train: time=3.104s, loss=0.0006, acc=0.9999 | val: time=2.044s, loss=0.0420, acc=0.9862Model ./models/EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50.chkpt saved[EdLeNet_Grayscale_CLAHE_Norm_Take-2_3x3_Dropout_0.50 - Test Set]\ttime=0.675s, loss=0.0890, acc=0.9775"
},
{
"code": null,
"e": 10461,
"s": 10263,
"text": "We show below graphs of previous runs where we tested the 5x5 model as well, over 220 epochs. We can see a much smoother curve here, reinforcing our intuition that the model we have is more stable."
},
{
"code": null,
"e": 10616,
"s": 10461,
"text": "We identified 269 images that are model could not identify correctly. We display 10 of them below, chosen randomly, to conjecture why the model was wrong."
},
{
"code": null,
"e": 10938,
"s": 10616,
"text": "Some of the images are very blurry, despite our histogram equalization, while others seem distorted. We probably don’t have enough examples of such images in our test set for our model’s predictions to improve. Additionally, while 97.75% test accuracy is very good, we still one more ace up our sleeve: data augmentation."
},
{
"code": null,
"e": 11283,
"s": 10938,
"text": "We observed earlier that the data presented glaring imbalance across the 43 classes. Yet it does not seem to be a crippling problem as we are able to reach very high accuracy despite the class imbalance. We also noticed that some images in the test set are distorted. We are therefore going to use data augmentation techniques in an attempt to:"
},
{
"code": null,
"e": 11587,
"s": 11492,
"text": "Extend dataset and provide additional pictures in different lighting settings and orientations"
},
{
"code": null,
"e": 11634,
"s": 11587,
"text": "Improve model’s ability to become more generic"
},
{
"code": null,
"e": 11703,
"s": 11634,
"text": "Improve test and validation accuracy, especially on distorted images"
},
{
"code": null,
"e": 11858,
"s": 11703,
"text": "We use a nifty library called imgaug to create our augmentations. We mainly apply affine transformations to augment the images. Our code looks as follows:"
},
{
"code": null,
"e": 12493,
"s": 11858,
"text": "def augment_imgs(imgs, p): \"\"\" Performs a set of augmentations with with a probability p \"\"\" augs = iaa.SomeOf((2, 4), [ iaa.Crop(px=(0, 4)), # crop images from each side by 0 to 4px (randomly chosen) iaa.Affine(scale={\"x\": (0.8, 1.2), \"y\": (0.8, 1.2)}), iaa.Affine(translate_percent={\"x\": (-0.2, 0.2), \"y\": (-0.2, 0.2)}), iaa.Affine(rotate=(-45, 45)), # rotate by -45 to +45 degrees) iaa.Affine(shear=(-10, 10)) # shear by -10 to +10 degrees ]) seq = iaa.Sequential([iaa.Sometimes(p, augs)]) return seq.augment_images(imgs)"
},
{
"code": null,
"e": 12841,
"s": 12493,
"text": "While the class imbalance probably causes some bias in the model, we have decided not to address it at this stage as it would cause our dataset to swell significantly and lengthen our training time (we don’t have a lot of time to spend on training at this stage). Instead, we decided to augment each class by 10%. Our new dataset looks as follows."
},
{
"code": null,
"e": 13119,
"s": 12841,
"text": "The distribution of images does not change significantly of course, but we do apply grayscale, histogram equalization and normalisation pre-processing steps to our images. We train for 2000 epochs with dropout (p-conv=0.6, p-fc=0.5) and achieve 97.86% accuracy on the test set:"
},
{
"code": null,
"e": 14067,
"s": 13119,
"text": "[EdLeNet] Building neural network [conv layers=3, conv filter size=3, conv start depth=32, fc layers=2]Training EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50 [epochs=2000, batch_size=512]...[1]\ttotal=5.824s | train: time=3.594s, loss=3.6283, acc=0.0797 | val: time=2.231s, loss=3.6463, acc=0.0687...[1970]\ttotal=5.627s | train: time=3.408s, loss=0.0525, acc=0.9870 | val: time=2.219s, loss=0.0315, acc=0.9914[1980]\ttotal=5.627s | train: time=3.409s, loss=0.0530, acc=0.9862 | val: time=2.218s, loss=0.0309, acc=0.9902[1990]\ttotal=5.628s | train: time=3.412s, loss=0.0521, acc=0.9869 | val: time=2.216s, loss=0.0302, acc=0.9900[2000]\ttotal=5.632s | train: time=3.415s, loss=0.0521, acc=0.9869 | val: time=2.217s, loss=0.0311, acc=0.9902Model ./models/EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50.chkpt saved[EdLeNet_Augs_Grayscale_CLAHE_Norm_Take4_Bis_3x3_Dropout_0.50 - Test Set]\ttime=0.678s, loss=0.0842, acc=0.9786"
},
{
"code": null,
"e": 14289,
"s": 14067,
"text": "But... if you look at the loss metric on the training set, you can see that at 0.0521, we most likely still have some wiggle room. We are planning to train for more epochs and will report on our new results in the future."
},
{
"code": null,
"e": 14528,
"s": 14289,
"text": "We decided to test our model on new images as well, to make sure that it’s indeed generalised to more than the traffic signs in our original dataset. We therefore downloaded five new images and submitted them to our model for predictions."
},
{
"code": null,
"e": 14575,
"s": 14528,
"text": "The ground truth for the images is as follows:"
},
{
"code": null,
"e": 14690,
"s": 14575,
"text": "['Speed limit (120km/h)', 'Priority road', 'No vehicles', 'Road work', 'Vehicles over 3.5 metric tons prohibited']"
},
{
"code": null,
"e": 14739,
"s": 14690,
"text": "The Images were chosen because of the following:"
},
{
"code": null,
"e": 14805,
"s": 14739,
"text": "They represent different traffic signs that we currently classify"
},
{
"code": null,
"e": 14834,
"s": 14805,
"text": "They vary in shape and color"
},
{
"code": null,
"e": 14917,
"s": 14834,
"text": "They are under different lighting conditions (the 4th one has sunlight reflection)"
},
{
"code": null,
"e": 14980,
"s": 14917,
"text": "They are under different orientations (the 3rd one is slanted)"
},
{
"code": null,
"e": 15011,
"s": 14980,
"text": "They have different background"
},
{
"code": null,
"e": 15111,
"s": 15011,
"text": "The last image is actually a design, not a real picture, and we wanted to test the model against it"
},
{
"code": null,
"e": 15157,
"s": 15111,
"text": "Some of them are in under-represented classes"
},
{
"code": null,
"e": 15257,
"s": 15157,
"text": "The first step we took was to apply the same CLAHE to those new images, resulting in the following:"
},
{
"code": null,
"e": 15499,
"s": 15257,
"text": "We achieve perfect accuracy of 100% on the new images. On the original test set, we achieved 97.86% accuracy. We could explore blurring/distorting our new images or modifying contrast to see how the model handles those changes in the future."
},
{
"code": null,
"e": 15763,
"s": 15499,
"text": "new_img_grayscale_norm_pred_acc = np.sum(new_img_lbs == preds) / len(preds)print(\"[Grayscale Normalised] Predictional accuracy on new images: {0}%\".format(new_img_grayscale_norm_pred_acc * 100))...[Grayscale Normalised] Predictional accuracy on new images: 100.0%"
},
{
"code": null,
"e": 16210,
"s": 15763,
"text": "We also show the top 5 SoftMax probabilities computed for each image, with the green bar showing the ground truth. We can clearly see that our model is quite confident in its predictions. In the worst case (last image), the 2nd most likely prediction has a probability of around 0.1% . In fact our model struggles most on the last image, which I believe is actually a design and not even a real picture. Overall, we have developed a strong model!"
},
{
"code": null,
"e": 16327,
"s": 16210,
"text": "We show below the results produced by each convolutional layer (before max pooling), resulting in 3 activation maps."
},
{
"code": null,
"e": 16460,
"s": 16327,
"text": "We can see that the network is focusing a lot on the edges of the circle and somehow on the truck. The background is mostly ignored."
},
{
"code": null,
"e": 16637,
"s": 16460,
"text": "It is rather hard to determine what the network is focusing on in layer 2, but it seems to “activate” around the edges of the circle and in the middle, where the truck appears."
},
{
"code": null,
"e": 16645,
"s": 16637,
"text": "Layer 3"
},
{
"code": null,
"e": 16779,
"s": 16645,
"text": "This activation map is also hard to decipher... But it seems the network reacts to stimuli on the edges and in the middle once again."
},
{
"code": null,
"e": 17196,
"s": 16779,
"text": "We covered how deep learning can be used to classify traffic signs with high accuracy, employing a variety of pre-processing and regularization techniques (e.g. dropout), and trying different model architectures. We built highly configurable code and developed a flexible way of evaluating multiple architectures. Our model reached close to close to 98% accuracy on the test set, achieving 99% on the validation set."
},
{
"code": null,
"e": 17518,
"s": 17196,
"text": "Personally, I thoroughly enjoyed this project and gained practical experience using Tensorflow, matplotlib and investigating artificial neural network architectures. Moreover, I delved into some seminal papers in this field, which reinforced my understanding and more importantly refined my intuition about deep learning."
},
{
"code": null,
"e": 17755,
"s": 17518,
"text": "In the future, I believe higher accuracy can be achieved by applying further regularization techniques such as batch normalization and also by adopting more modern architectures such as GoogLeNet’s Inception Module, ResNet, or Xception."
},
{
"code": null,
"e": 18041,
"s": 17755,
"text": "Thanks for reading this post. I hope you found it useful. I’m now building a new startup called EnVsion! At EnVsion, we’re creating the central repository for UX researchers and product teams to unlock the insights from their user interview videos. And of course we use AI for this ;)."
},
{
"code": null,
"e": 18183,
"s": 18041,
"text": "If you’re a UX researcher or product manager feeling overwhelmed with all your video calls with users and customers, then EnVsion is for you!"
}
] |
Battling out the GPU Frustration with Google Colab | by sriganesh lokesh | Towards Data Science | With a momentous increase in the size of datasets for machine learning and deep learning models to fit, there has been an escalation in the demand for GPUs to train models faster. As students of data science, we all know how long our underpowered laptops take to run a machine learning or deep learning model on a dataset containing millions of records.
Generally, basic Laptops and PCs take a huge chunk of the user’s time in fitting the data to the constructed model. So the process of HyperTuning the model will take a hit with such a constraint.
An alternative to this situation would be to buy an external GPU, which will cost 250–300$ on average for a basic model. For a student, this is a steep price to pay for a college project which he/she is working hard for.
I KNOW IT IS FRUSTRATING!!!
Thanks to Google, students can stop worrying about such a situation, as Google provides a free GPU on the cloud for running large scale machine learning projects.
This article will get you started with Google Colab, a free GPU cloud service with an editor based on Jupyter Notebook.
Users can run their Machine Learning and Deep Learning models built on the most popular libraries currently available — Keras, Pytorch, Tensorflow and OpenCV.
To get started, you will need some knowledge of Python and Jupyter notebook along with basic machine learning concepts.
Google Colab is a perfect tool for Python’s popular libraries such as Keras, Tensorflow, Pytorch. You have the power to create a new Python 3 or Python 2 Notebook, Open Notebooks stored in the respective google accounts used to create the notebooks and can upload notebooks from the local computer. In the case of uploading our dataset on to the platform, there is a drawback, wherein the dataset is removed when the session is restarted. We can Mount Google Drive to the Notebook, which is a cool feature instead of uploading the file each time we open the notebook.
We can launch the platform by clicking on the link below
colab.research.google.com
Once you click on the link, you are asked to log in with your Google account wherein all the notebooks which you create will be automatically stored for future use.
The Examples tab provides some code which has to be properly reviewed in order to use most of the features of Google Colab.
Recent tab provides the last few notebooks which you had created or worked on.
Google Drive tab provides all the notebooks linked to your google account.
Github tab lets you link your github account to google colab.
Upload tab provides a link to upload file from the local computer. Most of the hidden features can be known by extensively using the platform.
This is the basic layout that is generated when a new notebook is created. Most of the features are similar to Jupyter Notebook. You can change the name of the notebook by double-clicking on Untitled.ipynb. +Code and + Text are used to add a new code cell or markdown cell respectively. The notebook can be shared with other users by adding their Gmail address so that everyone can make the appropriate changes in one place. You can start a new session by clicking the Connect button on the top right corner of the screen.
Before starting the session, it is a good practice to change the runtime type to GPU( As it is the main purpose behind the usage of Google Colab) which can be accessed in the Runtime tab.
Google Colab is a very simple yet powerful platform to run your Machine Learning and Deep Learning models without worrying about GPU power. If you are used to working in Jupyter Notebook, then this will be a piece of cake.
I would like to stress on Markdowns as they add a lot of weight while developing a Jupyter Notebook. It helps you to tell a story through your code, which is a must if you are aspiring to be a Data Scientist. The management mainly requires the output rather than the code involved and this eases that process. Google Colab provides a live preview of the markdown written so that you don’t have to make minor changes by running the cell every time.
This window appears to the left of the screen with an arrow associated with it. Here, you can upload a file from the local computer or Mount your Google Drive (i will explain in the latter part of the post).
from google.colab import drive
drive.mount('/content/gdrive')
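Once mounted, files in your Drive can be read through the mount point. The path below is an assumption based on the default mount location:
import pandas as pd

df = pd.read_csv('/content/gdrive/My Drive/dataset.csv')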
!pip install keras
!git clone https://github.com/pytorch/pytorch
!python setup.py
TPU (Tensor Processing Unit) was developed by Google to provide high computational power to train machine learning and deep learning models. Even though an NVIDIA Tesla K80 is at your disposal, a TPU provides much more in terms of power. As per the information provided by Google’s Colab documentation, a GPU provides 1.8TFlops and has 12GB of RAM while a TPU delivers 180TFlops and provides 64GB of RAM.
Google Colab is a great alternative for Jupyter Notebook for running high computational deep learning and machine learning models. You can share your code with other developers on the go so that they can help you optimize your code. Most of the python libraries come pre-installed with Google Colab so it reduces the burden of installing each and every library. And the main feature would be FREE GPU!!!.
There are some limitations with free products as usual (insignificant restrictions, but they can make a difference in certain scenarios). You can only use Python 2.7 (deprecated from Jan 2020) or 3.6 as the programming language, with no support for R. GPU allocation per user is restricted to 12 hours at a time. The GPU used is the NVIDIA Tesla K80, and once the session is complete, the user can continue using the resource by connecting to a different VM.
I would recommend you to refer Your One-Stop Guide to Google Colab which provides a deeper understanding of Google Colab with more tips and tricks. | [
{
"code": null,
"e": 529,
"s": 172,
"text": "With a momentous increase in the size of datasets for machine learning and deep learning models to fit, there has been an escalation in the demand for GPUs to faster train the model. As students of data science, we all know how long our challenged laptops take to run a machine learning or deep learning model with a dataset containing millions of records."
},
{
"code": null,
"e": 725,
"s": 529,
"text": "Generally, basic Laptops and PCs take a huge chunk of the user’s time in fitting the data to the constructed model. So the process of HyperTuning the model will take a hit with such a constraint."
},
{
"code": null,
"e": 946,
"s": 725,
"text": "An alternative to this situation would be to buy an external GPU, which will cost 250–300$ on average for a basic model. For a student, this is a steep price to pay for a college project which he/she is working hard for."
},
{
"code": null,
"e": 974,
"s": 946,
"text": "I KNOW IT IS FRUSTRATING!!!"
},
{
"code": null,
"e": 1137,
"s": 974,
"text": "Thanks to Google, students can stop worrying about such a situation, as Google provides a free GPU on the cloud for running large scale machine learning projects."
},
{
"code": null,
"e": 1257,
"s": 1137,
"text": "This article will get you started with Google Colab, a free GPU cloud service with an editor based on Jupyter Notebook."
},
{
"code": null,
"e": 1416,
"s": 1257,
"text": "Users can run their Machine Learning and Deep Learning models built on the most popular libraries currently available — Keras, Pytorch, Tensorflow and OpenCV."
},
{
"code": null,
"e": 1536,
"s": 1416,
"text": "To get started, you will need some knowledge of Python and Jupyter notebook along with basic machine learning concepts."
},
{
"code": null,
"e": 2104,
"s": 1536,
"text": "Google Colab is a perfect tool for Python’s popular libraries such as Keras, Tensorflow, Pytorch. You have the power to create a new Python 3 or Python 2 Notebook, Open Notebooks stored in the respective google accounts used to create the notebooks and can upload notebooks from the local computer. In the case of uploading our dataset on to the platform, there is a drawback, wherein the dataset is removed when the session is restarted. We can Mount Google Drive to the Notebook, which is a cool feature instead of uploading the file each time we open the notebook."
},
{
"code": null,
"e": 2161,
"s": 2104,
"text": "We can launch the platform by clicking on the link below"
},
{
"code": null,
"e": 2187,
"s": 2161,
"text": "colab.research.google.com"
},
{
"code": null,
"e": 2352,
"s": 2187,
"text": "Once you click on the link, you are asked to log in with your Google account wherein all the notebooks which you create will be automatically stored for future use."
},
{
"code": null,
"e": 2476,
"s": 2352,
"text": "The Examples tab provides some code which has to be properly reviewed in order to use most of the features of Google Colab."
},
{
"code": null,
"e": 2555,
"s": 2476,
"text": "Recent tab provides the last few notebooks which you had created or worked on."
},
{
"code": null,
"e": 2630,
"s": 2555,
"text": "Google Drive tab provides all the notebooks linked to your google account."
},
{
"code": null,
"e": 2692,
"s": 2630,
"text": "Github tab lets you link your github account to google colab."
},
{
"code": null,
"e": 2835,
"s": 2692,
"text": "Upload tab provides a link to upload file from the local computer. Most of the hidden features can be known by extensively using the platform."
},
{
"code": null,
"e": 3358,
"s": 2835,
"text": "This is the basic layout that is generated when a new notebook is created. Most of the features are similar to Jupyter Notebook. You can change the name of the notebook by double-clicking on Untitled.ipynb. +Code and + Text are used to add a new code cell or markdown cell respectively. The notebook can be shared with other users by adding their Gmail address so that everyone can make the appropriate changes in one place. You can start a new session by clicking the Connect button on the top right corner of the screen."
},
{
"code": null,
"e": 3546,
"s": 3358,
"text": "Before starting the session, it is a good practice to change the runtime type to GPU( As it is the main purpose behind the usage of Google Colab) which can be accessed in the Runtime tab."
},
{
"code": null,
"e": 3769,
"s": 3546,
"text": "Google Colab is a very simple yet powerful platform to run your Machine Learning and Deep Learning models without worrying about GPU power. If you are used to working in Jupyter Notebook, then this will be a piece of cake."
},
{
"code": null,
"e": 4217,
"s": 3769,
"text": "I would like to stress on Markdowns as they add a lot of weight while developing a Jupyter Notebook. It helps you to tell a story through your code, which is a must if you are aspiring to be a Data Scientist. The management mainly requires the output rather than the code involved and this eases that process. Google Colab provides a live preview of the markdown written so that you don’t have to make minor changes by running the cell every time."
},
{
"code": null,
"e": 4425,
"s": 4217,
"text": "This window appears to the left of the screen with an arrow associated with it. Here, you can upload a file from the local computer or Mount your Google Drive (i will explain in the latter part of the post)."
},
{
"code": null,
"e": 4485,
"s": 4425,
"text": "from google.drive import drive drive.mount(/content/gdrive)"
},
{
"code": null,
"e": 4504,
"s": 4485,
"text": "!pip install keras"
},
{
"code": null,
"e": 4550,
"s": 4504,
"text": "!git clone https://github.com/pytorch/pytorch"
},
{
"code": null,
"e": 4567,
"s": 4550,
"text": "!python setup.py"
},
{
"code": null,
"e": 4975,
"s": 4567,
"text": "TPU(Tensor Processing Unit) was developed by Google to provide high computational power to train machine learning and deep learning models. Even though an NVIDIA Tesla K80 is present at your disposable, TPU provides much more in terms of power. As per the information provided by Google’s Colab documentation, A GPU provides 1.8TFlops and has a 12GB RAM while TPU delivers 180TFlops and provides a 64GB RAM."
},
{
"code": null,
"e": 5380,
"s": 4975,
"text": "Google Colab is a great alternative for Jupyter Notebook for running high computational deep learning and machine learning models. You can share your code with other developers on the go so that they can help you optimize your code. Most of the python libraries come pre-installed with Google Colab so it reduces the burden of installing each and every library. And the main feature would be FREE GPU!!!."
},
{
"code": null,
"e": 5833,
"s": 5380,
"text": "There are some limitations with free products as usual. (insignificant restrictions but can make a difference in certain scenarios). You can only use Python 2.7(Deprecated from Jan 2020) or 3.6 as the programming language, with no support for R. GPU allocation per user is restricted to 12 hours at a time. The GPU used is the NVIDIA Tesla K80, and once the session is complete, the user can continue using the resource by connecting to a different VM."
}
] |
The Power of Lambda Expressions in Python | by Soner Yıldırım | Towards Data Science | A function is a block of code that takes zero or more inputs, performs some operations, and returns a value. Functions are essential tools to create efficient and powerful programs.
In this article, we will cover a special form of functions in Python: lambda expressions. The first and foremost point we need to emphasize is that a lambda expression is a function.
square = lambda x: x**2
type(square)
function
square(5)
25
The square is a function that returns the square of a number. In the traditional form of defining functions in Python, the square function would look as below.
def square(x):
    return x**2
Why do we need a different way of defining a function? The main motivation behind the lambda expressions is the simplicity and practicality.
Consider an operation that needs to be done once or very few times. Furthermore, we may have many variations of this operation that are slightly different from the original one. In such a case, it is not ideal to define a separate function for each operation. Instead, lambda expressions provide a much more efficient way of accomplishing the tasks.
A key characteristic of lambda expressions is that they are nameless functions. You might argue that we have actually assigned a name to the lambda expression (square) but it was for demonstration purposes. In general, lambda expressions are used without a name.
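As a quick illustration of this anonymous style (my own example, not from the original article), a lambda is often written inline wherever a function object is expected, for instance as the key of the sorted function:
# A lambda passed inline as the sorting key; it is never given a name.
pairs = [('b', 3), ('a', 1), ('c', 2)]
sorted(pairs, key=lambda pair: pair[1])
[('a', 1), ('c', 2), ('b', 3)]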
One common use case for lambda expressions is that they can be passed as argument to another function. The map, reduce, and filter functions in Python are higher-order functions that can accept other functions as arguments.
Let’s do an example for each and see how lambda expressions come in handy. We have the following list a.
a = [1, 3, 2, 6, 7, 4]
The reduce function reduces the list by applying a function to its elements. We can write a reduce function that multiplies elements in the list.
from functools import reduce
reduce(lambda x, y: x*y, a)
1008
The map function creates a mapping to transform each element in the list. For instance, the following map function squares each element in the list based on the given lambda expression.
list(map(lambda x: x*x, a))
[1, 9, 4, 36, 49, 16]
The filter function operates similarly to the map function. Instead of transforming elements, it filters them based on the given condition.
list(filter(lambda x: x > 4, a))
[6, 7]
The lambda expression in the filter function keeps only the elements greater than 4.
Lambda expressions are also used with Pandas data manipulation and transformation functions. Consider the following dataframe.
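The dataframe itself was shown as an image in the original post. Judging from the outputs below, a hypothetical stand-in that is consistent with them could be built like this:
import pandas as pd

# Hypothetical data chosen to match the outputs shown below:
# row maxima 15, 19, 15, 19, 12 and column maxima cola=9, colb=19, colc=0.76.
df = pd.DataFrame({'cola': [9, 2, 5, 7, 3],
                   'colb': [15, 19, 15, 19, 12],
                   'colc': [0.50, 0.76, 0.10, 0.30, 0.20]})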
For instance, we can take the log of the columns using the apply function and a lambda expression.
import numpy as np
import pandas as pd
df.apply(lambda x: np.log(x))
We can find the maximum value in each row using the following lambda expression and apply function.
df.apply(lambda x: x.max(), axis=1)
0    15.0
1    19.0
2    15.0
3    19.0
4    12.0
dtype: float64
If we change the axis parameter to 0, the function will return the maximum value in each column.
df.apply(lambda x: x.max(), axis=0)
cola     9.00
colb    19.00
colc     0.76
dtype: float64
The apply function of pandas performs the given operation column-wise or row-wise depending on the specified axis.
Pandas also provides the applymap function, which allows applying a given operation to all the elements in a dataframe. We can also pass a lambda expression to the applymap function.
Consider the following dataframe.
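This dataframe was also an image in the original post; any small dataframe of strings will do for the demonstration, for example:
# Hypothetical stand-in: a dataframe where every cell is a string.
df = pd.DataFrame({'city': ['Ankara', 'Berlin', 'Madrid'],
                   'country': ['Turkey', 'Germany', 'Spain']})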
We can find the length of strings in each cell by using a lambda expression with the applymap function.
df.applymap(lambda x: len(x))
Lambda expressions are great to use as a replacement for simple functions. They also simplify the syntax. However, it is not ideal to use lambda expressions all the time.
Lambda expressions should not be used to perform complex operations. Otherwise, we would be defeating the purpose.
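As a hedged illustration (my own example), compare an overly dense lambda with an explicit function:
# Hard to read: too much logic squeezed into a single lambda.
normalize = lambda s: s.strip().lower().replace('-', '_') if isinstance(s, str) else s

# Clearer as an explicit function, especially if it is reused.
def normalize(s):
    if not isinstance(s, str):
        return s
    return s.strip().lower().replace('-', '_')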
Lambda expressions are a great fit for tasks that are done once or very few times. If we need to do an operation several times throughout the code, it is better to explicitly define a function.
Thank you for reading. Please let me know if you have any feedback. | [
{
"code": null,
"e": 354,
"s": 172,
"text": "A function is a block of code that takes zero or more inputs, performs some operations, and returns a value. Functions are essential tools to create efficient and powerful programs."
},
{
"code": null,
"e": 537,
"s": 354,
"text": "In this article, we will cover a special form of functions in Python: lambda expressions. The first and foremost point we need to emphasize is that a lambda expression is a function."
},
{
"code": null,
"e": 592,
"s": 537,
"text": "square = lambda x: x**2type(square)functionsquare(5)25"
},
{
"code": null,
"e": 752,
"s": 592,
"text": "The square is a function that returns the square of a number. In the traditional form of defining functions in Python, the square function would look as below."
},
{
"code": null,
"e": 781,
"s": 752,
"text": "def square(x): return x**2"
},
{
"code": null,
"e": 922,
"s": 781,
"text": "Why do we need a different way of defining a function? The main motivation behind the lambda expressions is the simplicity and practicality."
},
{
"code": null,
"e": 1267,
"s": 922,
"text": "Consider an operation that needs to be done once or very few times. Furthermore, we have many variations of this operation which are slightly different than the original one. In such case, it is not ideal to define a separate function for each operation. Instead, lambda expressions provide a much more efficient way of accomplishing the tasks."
},
{
"code": null,
"e": 1530,
"s": 1267,
"text": "A key characteristic of lambda expressions is that they are nameless functions. You might argue that we have actually assigned a name to the lambda expression (square) but it was for demonstration purposes. In general, lambda expressions are used without a name."
},
{
"code": null,
"e": 1754,
"s": 1530,
"text": "One common use case for lambda expressions is that they can be passed as argument to another function. The map, reduce, and filter functions in Python are higher-order functions that can accept other functions as arguments."
},
{
"code": null,
"e": 1859,
"s": 1754,
"text": "Let’s do an example for each and see how lambda expressions come in handy. We have the following list a."
},
{
"code": null,
"e": 1882,
"s": 1859,
"text": "a = [1, 3, 2, 6, 7, 4]"
},
{
"code": null,
"e": 2028,
"s": 1882,
"text": "The reduce function reduces the list by applying a function to its elements. We can write a reduce function that multiplies elements in the list."
},
{
"code": null,
"e": 2088,
"s": 2028,
"text": "from functools import reducereduce(lambda x, y: x*y, a)1008"
},
{
"code": null,
"e": 2274,
"s": 2088,
"text": "The map function creates a mapping to transform each element in the list. For instance, the following map function squares each element in the list based on the given lambda expression."
},
{
"code": null,
"e": 2323,
"s": 2274,
"text": "list(map(lambda x: x*x, a))[1, 9, 4, 36, 49, 16]"
},
{
"code": null,
"e": 2461,
"s": 2323,
"text": "The filter function operates similar to the map function. Instead of transforming elements, it filters them based on the given condition."
},
{
"code": null,
"e": 2500,
"s": 2461,
"text": "list(filter(lambda x: x > 4, a))[6, 7]"
},
{
"code": null,
"e": 2593,
"s": 2500,
"text": "The lambda expression in the filter function serves as a filter the elements greater than 4."
},
{
"code": null,
"e": 2720,
"s": 2593,
"text": "Lambda expressions are also used with Pandas data manipulation and transformation functions. Consider the following dataframe."
},
{
"code": null,
"e": 2819,
"s": 2720,
"text": "For instance, we can take the log of the columns using the apply function and a lambda expression."
},
{
"code": null,
"e": 2886,
"s": 2819,
"text": "import numpy as npimport pandas as pddf.apply(lambda x: np.log(x))"
},
{
"code": null,
"e": 2986,
"s": 2886,
"text": "We can find the maximum value in each row using the following lambda expression and apply function."
},
{
"code": null,
"e": 3081,
"s": 2986,
"text": "df.apply(lambda x: x.max(), axis=1)0 15.01 19.02 15.03 19.04 12.0dtype: float64"
},
{
"code": null,
"e": 3178,
"s": 3081,
"text": "If we change the axis parameter to 0, the function will return the maximum value in each column."
},
{
"code": null,
"e": 3267,
"s": 3178,
"text": "df.apply(lambda x: x.max(), axis=0)cola 9.00colb 19.00colc 0.76dtype: float64"
},
{
"code": null,
"e": 3383,
"s": 3267,
"text": "The apply function of pandas performs the given operation columns-wise or row-wise depending on the specified axis."
},
{
"code": null,
"e": 3565,
"s": 3383,
"text": "Pandas also provides the applymap function which allows to apply a given operation to all the elements in a dataframe. We can also pass a lambda expression to the applymap function."
},
{
"code": null,
"e": 3599,
"s": 3565,
"text": "Consider the following dataframe."
},
{
"code": null,
"e": 3703,
"s": 3599,
"text": "We can find the length of strings in each cell by using a lambda expression with the applymap function."
},
{
"code": null,
"e": 3733,
"s": 3703,
"text": "df.applymap(lambda x: len(x))"
},
{
"code": null,
"e": 3904,
"s": 3733,
"text": "Lambda expressions are great to use as a replacement for simple functions. They also simplify the syntax. However, it is not ideal to use lambda expressions all the time."
},
{
"code": null,
"e": 4016,
"s": 3904,
"text": "Lambda expressions should not be used to perform complex operations. Otherwise, we would violating the purpose."
},
{
"code": null,
"e": 4207,
"s": 4016,
"text": "Lambda expression are great fit for tasks that are done once or very few times. If we need to do an operation several times throughout the code, it is better to explicitly define a function."
}
] |
Interfaces in C++ (Abstract Classes) | An interface describes the behavior or capabilities of a C++ class without committing to a particular implementation of that class.
The C++ interfaces are implemented using abstract classes and these abstract classes should not be confused with data abstraction which is a concept of keeping implementation details separate from associated data.
A class is made abstract by declaring at least one of its functions as pure virtual function. A pure virtual function is specified by placing "= 0" in its declaration as follows −
class Box {
public:
// pure virtual function
virtual double getVolume() = 0;
private:
double length; // Length of a box
double breadth; // Breadth of a box
double height; // Height of a box
};
The purpose of an abstract class (often referred to as an ABC) is to provide an appropriate base class from which other classes can inherit. Abstract classes cannot be used to instantiate objects and serve only as an interface. Attempting to instantiate an object of an abstract class causes a compilation error.
Thus, if a subclass of an ABC needs to be instantiated, it has to implement each of the virtual functions, which means that it supports the interface declared by the ABC. Failure to override a pure virtual function in a derived class, then attempting to instantiate objects of that class, is a compilation error.
Classes that can be used to instantiate objects are called concrete classes.
Consider the following example where the parent class provides an interface that the derived classes implement through a function called getArea() −
#include <iostream>
using namespace std;
// Base class
class Shape {
public:
// pure virtual function providing interface framework.
virtual int getArea() = 0;
void setWidth(int w) {
width = w;
}
void setHeight(int h) {
height = h;
}
protected:
int width;
int height;
};
// Derived classes
class Rectangle: public Shape {
public:
int getArea() {
return (width * height);
}
};
class Triangle: public Shape {
public:
int getArea() {
return (width * height)/2;
}
};
int main(void) {
Rectangle Rect;
Triangle Tri;
Rect.setWidth(5);
Rect.setHeight(7);
// Print the area of the object.
cout << "Total Rectangle area: " << Rect.getArea() << endl;
Tri.setWidth(5);
Tri.setHeight(7);
// Print the area of the object.
cout << "Total Triangle area: " << Tri.getArea() << endl;
return 0;
}
When the above code is compiled and executed, it produces the following result −
Total Rectangle area: 35
Total Triangle area: 17
You can see how an abstract class defined an interface in terms of getArea(), and two other classes implemented the same function, each with a different algorithm to calculate the area specific to its shape.
An object-oriented system might use an abstract base class to provide a common and standardized interface appropriate for all the external applications. Then, through inheritance from that abstract base class, derived classes are formed that operate similarly.
The capabilities (i.e., the public functions) offered by the external applications are provided as pure virtual functions in the abstract base class. The implementations of these pure virtual functions are provided in the derived classes that correspond to the specific types of the application.
This architecture also allows new applications to be added to a system easily, even after the system has been defined. | [
{
"code": null,
"e": 2450,
"s": 2318,
"text": "An interface describes the behavior or capabilities of a C++ class without committing to a particular implementation of that class."
},
{
"code": null,
"e": 2664,
"s": 2450,
"text": "The C++ interfaces are implemented using abstract classes and these abstract classes should not be confused with data abstraction which is a concept of keeping implementation details separate from associated data."
},
{
"code": null,
"e": 2844,
"s": 2664,
"text": "A class is made abstract by declaring at least one of its functions as pure virtual function. A pure virtual function is specified by placing \"= 0\" in its declaration as follows −"
},
{
"code": null,
"e": 3095,
"s": 2844,
"text": "class Box {\n public:\n // pure virtual function\n virtual double getVolume() = 0;\n \n private:\n double length; // Length of a box\n double breadth; // Breadth of a box\n double height; // Height of a box\n};\n"
},
{
"code": null,
"e": 3409,
"s": 3095,
"text": "The purpose of an abstract class (often referred to as an ABC) is to provide an appropriate base class from which other classes can inherit. Abstract classes cannot be used to instantiate objects and serves only as an interface. Attempting to instantiate an object of an abstract class causes a compilation error."
},
{
"code": null,
"e": 3723,
"s": 3409,
"text": "Thus, if a subclass of an ABC needs to be instantiated, it has to implement each of the virtual functions, which means that it supports the interface declared by the ABC. Failure to override a pure virtual function in a derived class, then attempting to instantiate objects of that class, is a compilation error."
},
{
"code": null,
"e": 3800,
"s": 3723,
"text": "Classes that can be used to instantiate objects are called concrete classes."
},
{
"code": null,
"e": 3934,
"s": 3800,
"text": "Consider the following example where parent class provides an interface to the base class to implement a function called getArea() −"
},
{
"code": null,
"e": 4900,
"s": 3934,
"text": "#include <iostream>\n \nusing namespace std;\n \n// Base class\nclass Shape {\n public:\n // pure virtual function providing interface framework.\n virtual int getArea() = 0;\n void setWidth(int w) {\n width = w;\n }\n \n void setHeight(int h) {\n height = h;\n }\n \n protected:\n int width;\n int height;\n};\n \n// Derived classes\nclass Rectangle: public Shape {\n public:\n int getArea() { \n return (width * height); \n }\n};\n\nclass Triangle: public Shape {\n public:\n int getArea() { \n return (width * height)/2; \n }\n};\n \nint main(void) {\n Rectangle Rect;\n Triangle Tri;\n \n Rect.setWidth(5);\n Rect.setHeight(7);\n \n // Print the area of the object.\n cout << \"Total Rectangle area: \" << Rect.getArea() << endl;\n\n Tri.setWidth(5);\n Tri.setHeight(7);\n \n // Print the area of the object.\n cout << \"Total Triangle area: \" << Tri.getArea() << endl; \n\n return 0;\n}"
},
{
"code": null,
"e": 4981,
"s": 4900,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 5031,
"s": 4981,
"text": "Total Rectangle area: 35\nTotal Triangle area: 17\n"
},
{
"code": null,
"e": 5230,
"s": 5031,
"text": "You can see how an abstract class defined an interface in terms of getArea() and two other classes implemented same function but with different algorithm to calculate the area specific to the shape."
},
{
"code": null,
"e": 5491,
"s": 5230,
"text": "An object-oriented system might use an abstract base class to provide a common and standardized interface appropriate for all the external applications. Then, through inheritance from that abstract base class, derived classes are formed that operate similarly."
},
{
"code": null,
"e": 5787,
"s": 5491,
"text": "The capabilities (i.e., the public functions) offered by the external applications are provided as pure virtual functions in the abstract base class. The implementations of these pure virtual functions are provided in the derived classes that correspond to the specific types of the application."
   }
] |
How to Predict Severe Traffic Jams with Python and Recurrent Neural Networks? | by Shuyi Wang | Towards Data Science | In this tutorial, I will show you how to use an RNN deep learning model to find patterns in the Waze Traffic Open Data of incident reports, and predict whether severe traffic jams will happen shortly, so that interventions can be taken effectively.
On December 1st, 2018, I participated in a hackathon programming competition at UNT Inspire Park in Frisco, TX. The hackathon was called “HackNTX”.
The hackathon began at 8 am and finished after 9 pm. Participants could eat snacks and fruits freely, and take walks in the yard to get some fresh air and sunshine. Also, people could talk, discuss, and form teams (no more than 5 people each) freely.
The host provided the competitors with various kinds of open data sets, along with some pre-defined demo problems. You could also choose a new problem of your own and solve it in the competition.
Waze data was one of the datasets provided.
In China, I had never used the Waze app for navigation. It was new to me.
I googled and found the features of Waze. It not only provides users with the ordinary navigation function, but also lets users report traffic events, so that other users can adjust their routes accordingly.
To me, the most important feature is that when you are trapped in a traffic jam, you can know exactly what happened ahead, such as a road under construction or a car accident. In that case, you can make a better estimation and calm down. It will be good for your health.
Several years ago, Waze began cooperating with governments and sharing data. Governments can benefit from the instant traffic condition reports and respond to events in time. The Waze platform, in turn, can integrate the government’s open data, such as road planning, to improve its routing function and make users happier.
The Waze data provided by the host of HackNTX covers traffic incidents in the DFW area from Nov 1st to Nov 29th.
The raw data was in TSV format, and the total size is about 300MB.
Each row represents a single incident report, containing the coordinates and time stamp.
During the Exploratory Data Analysis phase, I made several visualizations based on the data.
It was the first time I heard about QGIS, and the software is very powerful. Thank you Jesse for teaching me how to use it!
In the screenshot above, you can see each dot represents a single event. There are too many of them!
As I am not so familiar with QGIS, and Jesse left the competition early in the afternoon, I went back to using Python for detailed visualization.
The figure above was made with Geopandas package of Python. It shows three different types of traffic events, namely traffic jam (in red color), accidents (in yellow) and “stopped car on the shoulder” (in blue). Note that they were only extracted from the first 3000 rows in the data set.
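As a rough sketch of how such a map can be produced with Geopandas (the file name and the column names 'lon', 'lat' and 'type' below are my assumptions, not the actual schema):
import pandas as pd
import geopandas as gpd
from shapely.geometry import Point

# 'waze_incidents.tsv' and the column names are hypothetical placeholders.
reports = pd.read_csv('waze_incidents.tsv', sep='\t')
sample = reports.head(3000)
geometry = [Point(xy) for xy in zip(sample['lon'], sample['lat'])]
gdf = gpd.GeoDataFrame(sample, geometry=geometry)
gdf.plot(column='type', categorical=True, legend=True, markersize=2)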
As you may have already figured out, the number of red dots (meaning traffic jams) is huge. Traffic jam is indeed a big problem.
I extracted all the unique event types from the data, and got the following list:
As we may learn, traffic jams can be divided into several different levels. The most severe ones are “large traffic jam” and “huge traffic jam”.
I merged these two types of events into a set A, while the other events can be regarded as set B.
For each event in set A, I back-traced 30 minutes and accumulated all events reported on the same road into a sequence. In total, there are 987 of them. However, some of them are empty lists, meaning nothing was reported before the severe jam, and it happened all of a sudden. Nobody can predict these kinds of traffic jams, so I decided to remove them and keep the 861 non-empty sequences.
Similarly, I extracted 861 non-empty sequences at random by back-tracing the events coming from set B. Note that these sequences did not lead to severe traffic jams.
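Here is a minimal sketch of the back-tracing step, assuming the raw reports sit in a dataframe with hypothetical columns 'road', 'ts' (timestamp) and 'event':
import pandas as pd

def back_trace(reports, jam_row, minutes=30):
    # Collect the events reported on the same road within the 30 minutes
    # before the given event, ordered by time.
    window_start = jam_row['ts'] - pd.Timedelta(minutes=minutes)
    mask = ((reports['road'] == jam_row['road']) &
            (reports['ts'] >= window_start) &
            (reports['ts'] < jam_row['ts']))
    return reports.loc[mask].sort_values('ts')['event'].tolist()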
Now that we have got the sequences as input data, we mark sequences generated from set A as 1, while the others are labeled 0.
Our research question is: can we use a model to classify these sequences?
The result turned out to be a success. We won the first prize!
Our team, nicknamed “watch-dumplings”, was formed by Chunying Wang, a visiting PhD student from Wuhan University, and me, representing the UNT IIA lab. Each of us got $100. Look how happy we were!
The website of HackNTX soon made a report of the Hackathon competition. Here is the link.
And several days later, UNT reported the news, too.
To me, winning the first prize is almost pure luck. However, I think the model has got potential practical value.
In the latter parts of this article, I will try to show you how to implement the RNN model to classify Waze incident event sequences, using Python and Keras.
To do deep learning, you need a GPU or TPU, or your laptop will be tortured. I went to the competition with a fan-less Macbook, and there was no way I could use it to train deep neural networks.
I used Google Colab. It’s quite cool, and the Google people are generous enough to provide users with free GPUs and TPUs.
Please download and install Google Chrome first, and then click on this link, to install a plugin called Colaboratory.
And you will see the yellow icon of it in the extension bar of Chrome.
I have already uploaded all the code and data of this tutorial on to this github repo.
Please follow the link above, and click on the demo.ipynb.
Now you can click on the icon of Colaboratory, and Chrome will open Google Colab for you automatically, and load this ipynb file into it.
Click on the “COPY TO DRIVE” button I circled in red in the screenshot above, and Google will create a copy of the Jupyter Notebook file for you on your Google Drive.
Go to the menu for “Runtime”, and click on “Change runtime type”.
Double check that the options are set as below:
Save it, and then you are all set.
To run a chunk of code, just hit the run button on the left side.
Let us do it step by step, and I will give you necessary explanations.
We need to load the Pandas package, to handle the tabular data.
import pandas as pd
We will need another package to load data saved by the preprocessing phase. It is called pickle.
import pickle
It can save one or more Python objects into an external file. When you load the file back into the program, it restores the data exactly as it was saved. In this case, it’s easier and more efficient than exchanging data through CSV files.
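Here is a minimal example of the round trip (the file name is mine, for illustration only):
# Save several objects at once, then restore them in the same order.
with open('data.pickle', 'wb') as f:
    pickle.dump([event_dict, df], f)

with open('data.pickle', 'rb') as f:
    [event_dict, df] = pickle.load(f)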
Let’s clone the github repo for this tutorial, and get the data into Google Colab workspace.
!git clone https://github.com/wshuyi/demo_traffic_jam_prediction.git
The data is not so big, and will be downloaded in a short time.
Cloning into 'demo_traffic_jam_prediction'...
remote: Enumerating objects: 6, done.
remote: Counting objects: 100% (6/6), done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 6 (delta 0), reused 3 (delta 0), pack-reused 0
Unpacking objects: 100% (6/6), done.
Let us tell Jupyter Notebook the path of data folder.
from pathlib import Path
data_dir = Path('demo_traffic_jam_prediction')
We will open the data file and load two different data variables with pickle.
with open(data_dir / 'data.pickle', 'rb') as f:
    [event_dict, df] = pickle.load(f)
First, let us look into the dictionary of events called event_dict:
event_dict
Here are all the event types.
{1: 'road closed due to construction', 2: 'traffic jam', 3: 'stopped car on the shoulder', 4: 'road closed', 5: 'other', 6: 'object on roadway', 7: 'major event', 8: 'pothole', 9: 'traffic heavier than normal', 10: 'road construction', 11: 'fog', 12: 'accident', 13: 'slowdown', 14: 'stopped car', 15: 'small traffic jam', 16: 'stopped traffic', 17: 'heavy traffic', 18: 'minor accident', 19: 'medium traffic jam', 20: 'malfunctioning traffic light', 21: 'missing sign on the shoulder', 22: 'animal on the shoulder', 23: 'animal struck', 24: 'large traffic jam', 25: 'hazard on the shoulder', 26: 'hazard on road', 27: 'ice on roadway', 28: 'weather hazard', 29: 'flooding', 30: 'road closed due to hazard', 31: 'hail', 32: 'huge traffic jam'}
Then, let’s see what is in the Dataframe called df.
It’s quite a long table. Let us just display the first 10 rows.
df.head(10)
In each row, there is a corresponding label showing if the sequence of data followed with a severe traffic jam event.
Then we will ask Pandas to show us the last 10 rows.
df.tail(10)
Now that we have loaded the data correctly, we will see which row contains the longest sequence.
We will use a function called idxmax() from Pandas, and it will help us to get the index of the largest value.
max_len_event_id = df.events.apply(len).idxmax()
max_len_event_id
Here is the result:
105
Let us dig into the sequence this row has:
max_len_event = df.iloc[max_len_event_id]
max_len_event.events
The result is quite a long list.
['stopped car on the shoulder', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'slowdown', 'stopped traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'traffic heavier than normal', 'stopped car on the shoulder', 'traffic jam', 'heavy traffic', 'stopped traffic', 'stopped traffic', 'stopped traffic', 'heavy traffic', 'traffic jam', 'stopped car on the shoulder', 'stopped traffic', 'stopped traffic', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'traffic heavier than normal', 'traffic heavier than normal', 'traffic heavier than normal', 'heavy traffic', 'stopped traffic', 'traffic heavier than normal', 'pothole', 'stopped car on the shoulder', 'traffic jam', 'slowdown', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'traffic jam', 'traffic jam', 'stopped car on the shoulder', 'major event', 'traffic jam', 'traffic jam', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'stopped car on the shoulder', 'slowdown', 'heavy traffic', 'heavy traffic', 'stopped car on the shoulder', 'traffic jam', 'slowdown', 'slowdown', 'heavy traffic', 'stopped car on the shoulder', 'heavy traffic', 'minor accident', 'stopped car on the shoulder', 'heavy traffic', 'stopped car on the shoulder', 'heavy traffic', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'heavy traffic', 'stopped car on the shoulder', 'traffic heavier than normal', 'stopped traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'stopped car on the shoulder', 'slowdown', 'stopped traffic', 'heavy traffic', 'stopped car on the shoulder', 'traffic heavier than normal', 'heavy traffic', 'minor accident', 'major event', 'stopped car on the shoulder', 'stopped car on the shoulder']
If you examine the sequence carefully, you will notice there are hints for a severe traffic jam on this road. However, you need to let the machine get the “feeling” and classify the sequences automatically.
How long is the longest sequence?
maxlen = len(max_len_event.events)
maxlen
Here is the answer:
84
Wow! That is a long list of events!
The computer is not good at reading event names. Let us try to convert the names into numbers, so that the computer can handle the problem better.
To do it efficiently, we need to reverse the dictionary loaded before, that is, to convert the “index: event type” format into “event type: index”.
reversed_dict = {}
for k, v in event_dict.items():
    reversed_dict[v] = k
Let us examine the reversed dictionary.
reversed_dict
Here is the result.
{'accident': 12, 'animal on the shoulder': 22, 'animal struck': 23, 'flooding': 29, 'fog': 11, 'hail': 31, 'hazard on road': 26, 'hazard on the shoulder': 25, 'heavy traffic': 17, 'huge traffic jam': 32, 'ice on roadway': 27, 'large traffic jam': 24, 'major event': 7, 'malfunctioning traffic light': 20, 'medium traffic jam': 19, 'minor accident': 18, 'missing sign on the shoulder': 21, 'object on roadway': 6, 'other': 5, 'pothole': 8, 'road closed': 4, 'road closed due to construction': 1, 'road closed due to hazard': 30, 'road construction': 10, 'slowdown': 13, 'small traffic jam': 15, 'stopped car': 14, 'stopped car on the shoulder': 3, 'stopped traffic': 16, 'traffic heavier than normal': 9, 'traffic jam': 2, 'weather hazard': 28}
We made it.
Now we need to compose a function that converts a list of events and returns a list of numbers.
def map_event_list_to_idxs(event_list):
    list_idxs = []
    for event in event_list:
        idx = reversed_dict[event]
        list_idxs.append(idx)
    return list_idxs
Let us try the function on our longest list.
map_event_list_to_idxs(max_len_event.events)
The result is:
[3, 17, 17, 17, 13, 16, 17, 17, 17, 17, 9, 3, 2, 17, 16, 16, 16, 17, 2, 3, 16, 16, 16, 17, 9, 9, 9, 9, 17, 16, 9, 8, 3, 2, 13, 16, 17, 9, 2, 2, 3, 7, 2, 2, 16, 17, 9, 3, 13, 17, 17, 3, 2, 13, 13, 17, 3, 17, 18, 3, 17, 3, 17, 16, 17, 9, 17, 3, 9, 16, 17, 17, 17, 3, 13, 16, 17, 3, 9, 17, 18, 7, 3, 3]
Now we load numpy and some utility functions from Keras.
import numpy as np
from keras.utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
We need to figure out how many different event types we have got.
len(event_dict)
Here is the result:
32
Let us convert all the sequences of events into lists of numbers.
df.events.apply(map_event_list_to_idxs)
The result is:
0      [9, 17, 18, 14, 13, 17, 3, 13, 16, 3, 17, 17, ...
1      [2, 10, 3]
2      [2]
3      [2]
4      [2, 2, 2, 2, 2, 2, 2, 9]
5      [3, 2, 17]
6      [3, 2, 17]
7      [2, 15, 2, 17, 2, 2, 13, 17, 2]
8      [17, 2, 2, 16, 17, 2]
9      [17, 2, 2, 16, 17, 2]
10     [17, 16, 17, 2, 17, 3, 17, 17, 16, 17, 16, 18,...
11     [17]
12     [17]
13     [24, 24]
14     [24, 2, 24, 24, 2]
15     [24, 2, 24, 24, 2]
16     [2, 10, 2, 2, 2, 18, 16, 16, 7, 2, 16, 2, 2, 9...
17     [2, 10, 2, 2, 2, 18, 16, 16, 7, 2, 16, 2, 2, 9...
18     [24, 24, 24, 16, 2, 16]
19     [24, 24, 24, 16, 2, 16]
20     [2, 2]
21     [2, 16, 2]
22     [2, 16, 2]
23     [2, 2]
24     [2, 2]
25     [24, 24]
26     [2, 2]
27     [2, 2, 2, 17]
28     [2, 19, 2]
29     [24]
       ...
831    [9, 9, 9, 2, 9, 9, 17, 2, 9, 17]
832    [3, 3, 3]
833    [2, 9, 2, 17, 17, 2]
834    [3, 3, 17, 3, 13, 3, 3, 23, 9, 3, 3, 25, 3, 3]
835    [3, 17, 9, 14, 9, 17, 14, 9, 2, 9, 3, 2, 2, 17]
836    [2]
837    [17, 2, 16, 3, 9, 17, 17, 17, 13, 17, 9, 17]
838    [13, 17, 17, 3, 3, 16, 17, 16, 17, 16, 3, 9, 1...
839    [2]
840    [3]
841    [2]
842    [17, 17, 17, 3, 17, 23, 16, 17, 17, 3, 2, 13, ...
843    [3, 3]
844    [2]
845    [2, 17, 2, 2, 2, 2, 2, 17, 2, 2]
846    [7, 17, 3, 18, 17]
847    [3, 3, 3]
848    [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...
849    [2, 2]
850    [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 13, 3, 2]
851    [2, 2, 2]
852    [16, 2, 16]
853    [3, 16, 5, 3, 17, 3, 16, 9, 3, 2, 17]
854    [16]
855    [3, 3, 3, 3, 3, 3, 3, 3, 2, 13, 3, 6, 3, 6, 3,...
856    [17, 17, 17, 2, 3, 2, 2, 2, 2, 2]
857    [2, 2]
858    [2, 2, 9, 17, 2, 2]
859    [17, 3, 2, 2, 2, 2, 2, 2]
860    [17, 3, 3, 17, 3, 17, 2, 3, 18, 14, 3, 3, 16, ...
Name: events, Length: 1722, dtype: object
As human beings, it is hard for us to recognize what each number represents. However, for the computer, it is much easier.
We name the result list as sequences, and display for first five rows.
sequences = df.events.apply(map_event_list_to_idxs).tolist()
sequences[:5]
Here is the result:
[[9, 17, 18, 14, 13, 17, 3, 13, 16, 3, 17, 17, 16, 3, 16, 17, 9, 17, 2, 17, 2, 7, 16, 17, 17, 17, 17, 13, 5, 17, 9, 9, 16, 16, 3],
 [2, 10, 3],
 [2],
 [2],
 [2, 2, 2, 2, 2, 2, 2, 9]]
Note the first row is much longer than the following ones.
However, to apply a sequence model on the data, we need to make sure all the input sequences share the same length. Hence, we use the length of the longest sequence as the max length, and pad the shorter sequences with 0s from the beginning.
data = pad_sequences(sequences, maxlen=maxlen)
data
Here are the padded sequences:
array([[ 0,  0,  0, ..., 16, 16,  3],
       [ 0,  0,  0, ...,  2, 10,  3],
       [ 0,  0,  0, ...,  0,  0,  2],
       ...,
       [ 0,  0,  0, ..., 17,  2,  2],
       [ 0,  0,  0, ...,  2,  2,  2],
       [ 0,  0,  0, ...,  3,  3,  2]], dtype=int32)
Now all the sequences have the same length.
We will need to get the label column, and save it into a variable called labels.
labels = np.array(df.label)
As we need to use several random functions, to keep your running result identical to mine, we will specify the random seed value as 12.
np.random.seed(12)
Once you have finished the first run of the code, feel free to modify it.
We shuffle the sequences along with their corresponding labels.
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
The training set will contain 80% of the data, while the other 20% goes into the validation set.
training_samples = int(len(indices) * .8)
validation_samples = len(indices) - training_samples
The following code divides the data into training and validation sets, along with the labels.
X_train = data[:training_samples]
y_train = labels[:training_samples]
X_valid = data[training_samples: training_samples + validation_samples]
y_valid = labels[training_samples: training_samples + validation_samples]
Let us show the content of the training data:
X_train
Here is the result.
array([[ 0,  0,  0, ..., 15, 15,  3],
       [ 0,  0,  0, ...,  0,  2,  2],
       [ 0,  0,  0, ...,  0,  0, 16],
       ...,
       [ 0,  0,  0, ...,  2, 15, 16],
       [ 0,  0,  0, ...,  2,  2,  2],
       [ 0,  0,  0, ...,  0,  0,  2]], dtype=int32)
Please note that, as we filled the sequences with 0 as the padding value, we now have 33 event types instead of 32.
So the number of event types will be set to 33.
num_events = len(event_dict) + 1
If we simply fed the numbers into the classification model, it would regard each number as a continuous value. However, they are not. So we will let the numbers go through an Embedding layer, which converts each number (representing a certain type of event) into a vector. Each vector will contain 20 scalars.
embedding_dim = 20
The initial embedding matrix will be generated randomly.
embedding_matrix = np.random.rand(num_events, embedding_dim)
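To make the idea concrete (an illustration of mine, not part of the original pipeline), each event index simply selects one row of this matrix as its vector:
# Event type 3 ('stopped car on the shoulder') is represented
# by row 3 of the matrix: a vector of 20 scalars.
embedding_matrix[3].shape    # (20,)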
Finally, we can build a model now.
We use the Sequential model in Keras, and stack the different layers one by one, as if playing with Legos.
The first layer is an Embedding layer, followed by an LSTM layer. The last layer is a dense one whose activation function is sigmoid, for binary classification.
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, LSTM

units = 32
model = Sequential()
model.add(Embedding(num_events, embedding_dim))
model.add(LSTM(units))
model.add(Dense(1, activation='sigmoid'))
If you are not familiar with Keras, I recommend you read “Deep Learning with Python” by François Chollet, the creator of Keras.
The next step is to handle the parameters in the Embedding layer. For now, we just load the randomly generated initial embedding matrix, and won’t let the training process change the weights of the Embedding layer.
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
Then, we train the model and save it into an h5 file.
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(X_train, y_train,
                    epochs=50,
                    batch_size=32,
                    validation_data=(X_valid, y_valid))
model.save("mymodel_embedding_untrainable.h5")
With the strong support of TPU, the training is quite fast.
After the model is trained, let us visualize the curves of accuracy and loss with matplotlib.
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
This is the curve of accuracy.
As you can see, it is not bad. If we used a dummy model to predict everything as label 0 (or all as 1), the accuracy would stay at 0.50. So our model has apparently captured some pattern and outperformed the dummy one.
However, it is very unstable.
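As a quick sanity check of that baseline (my own addition, not in the original post), a dummy model that always predicts the majority class scores about 0.50 on this balanced dataset:
# Accuracy of always predicting the majority class on the validation set.
majority_acc = max(np.mean(y_valid), 1 - np.mean(y_valid))
majority_acc    # close to 0.5, since the two classes are balanced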
Then let us look into the curve of loss.
As you may find out, it is not good. While the training loss went down, the loss on the validation set bounced around, and there is no significant trend of convergence.
It is more important to find out the reason.
Note that we used a randomly initialized Embedding Matrix which stayed static during the training phase. It may lead us into trouble.
So as the next step, we can run an experiment that allows the Embedding layer to be trained and adjusted.
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, LSTM

units = 32
model = Sequential()
model.add(Embedding(num_events, embedding_dim))
model.add(LSTM(units))
model.add(Dense(1, activation='sigmoid'))
The only difference in the code is that the parameter trainable is set to True.
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = True
Let us compile the model and run it over again.
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(X_train, y_train,
                    epochs=50,
                    batch_size=32,
                    validation_data=(X_valid, y_valid))
model.save("mymodel_embedding_trainable.h5")
And we draw the curves of accuracy and loss, too.
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
The curve of accuracy is shown as below.
As you can see, it got better. The fluctuation of the validation accuracy curve decreased, and the validation accuracy rose above 0.75.
This model is, to some extent, more valuable.
However, we should not draw a conclusion so soon. If you look into the curve of loss, you cannot be so optimistic.
Halfway through, the loss trends on the two sets went in different directions.
It is a hint of over-fitting.
Over-fitting usually indicates that the training data is not enough for the complexity of the model.
You can either add more data for training, or bring down the complexity of the model.
The first approach is not applicable for now, because we have only got a dataset covering 29 days in November.
However, bringing down the complexity of the model can be done easily with Dropout.
When you use Dropout, the model randomly selects a certain proportion (you decide how much) of neurons and sets their weights to zero during the training phase, so that they can be regarded as “removed” from the network and the complexity becomes lower.
Note that in the validation phase, the model uses all neurons without any Dropout.
We will add two parameters related to Dropout. To do this, we use dropout=0.2, recurrent_dropout=0.2 when defining the LSTM layer.
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, LSTM

units = 32
model = Sequential()
model.add(Embedding(num_events, embedding_dim))
model.add(LSTM(units, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
We will keep the trainable parameter of the Embedding layer set to True.
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = True
Let us run the training again.
model.compile(optimizer='rmsprop',
              loss='binary_crossentropy',
              metrics=['acc'])
history = model.fit(X_train, y_train,
                    epochs=50,
                    batch_size=32,
                    validation_data=(X_valid, y_valid))
model.save("mymodel_embedding_trainable_with_dropout.h5")
There is no modification of the visualization part.
import matplotlib.pyplot as plt

acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)

plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()

plt.show()
From the accuracy curve, you may see nothing exciting.
However, when you look into the curve of loss, you’ll see significant improvement.
The curve of validation loss is smoother, and much closer to the trend of training loss.
Over-fitting has been taken care of, and the model is now more stable and generalizable to unseen data.
The traffic administration can then use the model to predict severe traffic jams from the Waze open data of incident reports. The expected model accuracy is about 75%.
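Here is a sketch of how such a prediction could be made with the saved model (the new sequence below is made up):
from keras.models import load_model

model = load_model("mymodel_embedding_trainable_with_dropout.h5")
# Hypothetical event indices reported on one road during the last 30 minutes.
new_seq = [[3, 17, 16, 2]]
x = pad_sequences(new_seq, maxlen=maxlen)
prob = model.predict(x)[0][0]    # estimated probability of a severe jam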
Maybe in this way, some severe traffic jams will not happen at all, thanks to the preventive interventions.
Hope you can get the following points from this tutorial.
Sequence models, such as RNN and LSTM, can be used not only on texts, but on other sequential data as well.
You can use an Embedding layer in these kinds of tasks, even though there are no pre-trained word-embedding models, such as word2vec, GloVe, or fastText. Make sure you set the weights of the embedding layer to be trainable.
You can try to beat over-fitting with several different methods. Dropout is one of them. In our case, it is effective.
Hope you can handle your own classification task with sequential data now.
Happy Deep Learning!
My other Tutorials on Deep Learning:
Deep Learning with Python and fast.ai, Part 1: Image classification with pre-trained model
Deep Learning with Python and fast.ai, Part 2: NLP Classification with Transfer Learning
Deep Learning with Python, Part 0: Setup Fast.ai 1.0 on Google Cloud
How to Accelerate Your Python Deep Learning with Cloud GPU? | [
{
"code": null,
"e": 407,
"s": 172,
"text": "In this tutorial, I will show you how to use RNN deep learning model to find patterns from Waze Traffic Open Data of Incidents Report, and predict if severe traffic jams will happen shortly. Interventions can be taken out effectively."
},
{
"code": null,
"e": 554,
"s": 407,
"text": "On December 1st, 2018, I participated in a Hackathon Programming competition in UNT Inspire Park at Frisco, TX. The Hackathon is called “HackNTX”."
},
{
"code": null,
"e": 809,
"s": 554,
"text": "The Hackathon began at 8 am and finished after 9 pm. The participants can eat snacks and fruits freely, and take walks in the yard to get some fresh air and sunshine. Also, people can talk, discuss and form teams (no more than 5 people each team) freely."
},
{
"code": null,
"e": 982,
"s": 809,
"text": "The host provided the competitors various kinds of open data sets, with some pre-defined demo problems. You can choose your own new problem and solve it in the competition."
},
{
"code": null,
"e": 1023,
"s": 982,
"text": "Waze data, was one of datasets provided."
},
{
"code": null,
"e": 1087,
"s": 1023,
"text": "In China, I never used Waze App for navigation. It’s new to me."
},
{
"code": null,
"e": 1299,
"s": 1087,
"text": "I googled and found the features of Waze. It can not only provide users the ordinary navigation function, but also let the users to report traffic events, so that other users can adjust their routes accordingly."
},
{
"code": null,
"e": 1579,
"s": 1299,
"text": "To me, the most import feature is, when you are trapped in a traffic jam, you can know exactly what happened in the front, such as road under construction, or car accident. In this case, you will be able to have a better estimation and calm down. It will be good for your health."
},
{
"code": null,
"e": 1927,
"s": 1579,
"text": "Since several years ago, Waze cooperated with the government and began to share data. For the government, they can benefit from the instant traffic condition report, and respond to events in time. For the Waze platform, it can integrate the government’s open data, such as road planning, to improve its routing function and make the users happier."
},
{
"code": null,
"e": 2051,
"s": 1927,
"text": "The Waze data provided by the host of HackNTX, is about Traffic Incidents ranging from Nov, 1st till Nov 29th, in DFW area."
},
{
"code": null,
"e": 2118,
"s": 2051,
"text": "The raw data was in TSV format, and the total size is about 300MB."
},
{
"code": null,
"e": 2207,
"s": 2118,
"text": "Each row represents a single incident report, containing the coordinates and time stamp."
},
{
"code": null,
"e": 2300,
"s": 2207,
"text": "During the Exploratory Data Analysis phase, I made several visualizations based on the data."
},
{
"code": null,
"e": 2424,
"s": 2300,
"text": "It was the first time I heard about QGIS, and the software is very powerful. Thank you Jesse for teaching me how to use it!"
},
{
"code": null,
"e": 2525,
"s": 2424,
"text": "In the screenshot above, you can see each dot represents a single event. There are too many of them!"
},
{
"code": null,
"e": 2674,
"s": 2525,
"text": "As I am not so familiar with QGIS, and Jesse left the competition early in the afternoon, I had to go back to use Python for detailed visualization."
},
{
"code": null,
"e": 2963,
"s": 2674,
"text": "The figure above was made with Geopandas package of Python. It shows three different types of traffic events, namely traffic jam (in red color), accidents (in yellow) and “stopped car on the shoulder” (in blue). Note that they were only extracted from the first 3000 rows in the data set."
},
{
"code": null,
"e": 3086,
"s": 2963,
"text": "As you may already figured out, the amount of red dots (meaning traffic jam) is huge. Traffic jam is indeed a big problem."
},
{
"code": null,
"e": 3168,
"s": 3086,
"text": "I extracted all the unique event types from the data, and got the following list:"
},
{
"code": null,
"e": 3314,
"s": 3168,
"text": "As we may learn, traffic jams can be divided into several different levels. The most severe ones, are “large traffic jam” and “huge traffic jam”."
},
{
"code": null,
"e": 3410,
"s": 3314,
"text": "I merged these two types of events to a set A. While the other events can be regarded as set B."
},
{
"code": null,
"e": 3807,
"s": 3410,
"text": "For each event in set A, I back traced 30 minutes, and accumulated every events reported on the same road into a sequence. In total, there are 987 of them. However, some of them are empty lists, meaning nothing was reported before the severe jam, and it happened all suddenly. For these kinds of traffic jam, nobody can predict them, so I decided to remove them, and kept 861 non-empty sequences."
},
{
"code": null,
"e": 3967,
"s": 3807,
"text": "Similarly, I extracted 861 non-empty sequences randomly by back tracing the events coming from set B. Note these sequences did not lead to severe traffic jams."
},
{
"code": null,
"e": 4094,
"s": 3967,
"text": "Now that we have got the sequences as input data, we mark sequences generated from set A as 1, while the others are labeled 0."
},
{
"code": null,
"e": 4172,
"s": 4094,
"text": "Our research question is, can we use a model to classify these sequence data?"
},
{
"code": null,
"e": 4235,
"s": 4172,
"text": "The result turned out to be a success. We won the first prize!"
},
{
"code": null,
"e": 4432,
"s": 4235,
"text": "Our team, nick named “watch-dumplings”, are formed by Chunying Wang, a visiting PhD student from Wuhan University and me, representing the UNT IIA lab. Each of us got $100. Look how happy we were!"
},
{
"code": null,
"e": 4522,
"s": 4432,
"text": "The website of HackNTX soon made a report of the Hackathon competition. Here is the link."
},
{
"code": null,
"e": 4574,
"s": 4522,
"text": "And several days later, UNT reported the news, too."
},
{
"code": null,
"e": 4688,
"s": 4574,
"text": "To me, winning the first prize is almost pure luck. However, I think the model has got potential practical value."
},
{
"code": null,
"e": 4846,
"s": 4688,
"text": "In the latter parts of this article, I will try to show you how to implement the RNN model to classify Waze incident event sequences, using Python and Keras."
},
{
"code": null,
"e": 5037,
"s": 4846,
"text": "To do deep learning, you need GPU or TPU, or your laptop will be tortured. I went to the competition with a fan-less Macbook, and there was no way I can use it to train deep neural networks."
},
{
"code": null,
"e": 5159,
"s": 5037,
"text": "I used Google Colab. It’s quite cool, and the Google people are so generous by providing the users with free GPU and TPU."
},
{
"code": null,
"e": 5278,
"s": 5159,
"text": "Please download and install Google Chrome first, and then click on this link, to install a plugin called Colaboratory."
},
{
"code": null,
"e": 5349,
"s": 5278,
"text": "And you will see the yellow icon of it in the extension bar of Chrome."
},
{
"code": null,
"e": 5436,
"s": 5349,
"text": "I have already uploaded all the code and data of this tutorial on to this github repo."
},
{
"code": null,
"e": 5495,
"s": 5436,
"text": "Please follow the link above, and click on the demo.ipynb."
},
{
"code": null,
"e": 5633,
"s": 5495,
"text": "Now you can click on the icon of Colaboratory, and Chrome will open Google Colab for you automatically, and load this ipynb file into it."
},
{
"code": null,
"e": 5804,
"s": 5633,
"text": "Click on the “COPY TO DRIVE” button I circled with red color in the above screenshot, and Google will create you a copy of the Jupyter Notebook file on your Google Drive."
},
{
"code": null,
"e": 5870,
"s": 5804,
"text": "Go to the menu for “Runtime”, and click on “Change runtime type”."
},
{
"code": null,
"e": 5918,
"s": 5870,
"text": "Double check that the options are set as below:"
},
{
"code": null,
"e": 5953,
"s": 5918,
"text": "Save it, and then you are all set."
},
{
"code": null,
"e": 6019,
"s": 5953,
"text": "To run a chunk of code, just hit the run button on the left side."
},
{
"code": null,
"e": 6090,
"s": 6019,
"text": "Let us do it step by step, and I will give you necessary explanations."
},
{
"code": null,
"e": 6154,
"s": 6090,
"text": "We need to load the Pandas package, to handle the tabular data."
},
{
"code": null,
"e": 6174,
"s": 6154,
"text": "import pandas as pd"
},
{
"code": null,
"e": 6271,
"s": 6174,
"text": "We will need another package to load data saved by the preprocessing phase. It is called pickle."
},
{
"code": null,
"e": 6285,
"s": 6271,
"text": "import pickle"
},
{
"code": null,
"e": 6510,
"s": 6285,
"text": "It can save single or multiple Python data into external file. When you load it back into the program, it will restore the data as it was saved. In this case, it’s easier and more efficient than exchange data with CSV files."
},
{
"code": null,
"e": 6603,
"s": 6510,
"text": "Let’s clone the github repo for this tutorial, and get the data into Google Colab workspace."
},
{
"code": null,
"e": 6672,
"s": 6603,
"text": "!git clone https://github.com/wshuyi/demo_traffic_jam_prediction.git"
},
{
"code": null,
"e": 6736,
"s": 6672,
"text": "The data is not so big, and will be downloaded in a short time."
},
{
"code": null,
"e": 7012,
"s": 6736,
"text": "Cloning into 'demo_traffic_jam_prediction'...remote: Enumerating objects: 6, done.[Kremote: Counting objects: 100% (6/6), done.[Kremote: Compressing objects: 100% (4/4), done.[Kremote: Total 6 (delta 0), reused 3 (delta 0), pack-reused 0[KUnpacking objects: 100% (6/6), done."
},
{
"code": null,
"e": 7066,
"s": 7012,
"text": "Let us tell Jupyter Notebook the path of data folder."
},
{
"code": null,
"e": 7137,
"s": 7066,
"text": "from pathlib import Pathdata_dir = Path('demo_traffic_jam_prediction')"
},
{
"code": null,
"e": 7215,
"s": 7137,
"text": "We will open the data file and load two different data variables with pickle."
},
{
"code": null,
"e": 7300,
"s": 7215,
"text": "with open(data_dir / 'data.pickle', 'rb') as f: [event_dict, df] = pickle.load(f)"
},
{
"code": null,
"e": 7369,
"s": 7300,
"text": "First, let us look into the dictionary of events called event_dict :"
},
{
"code": null,
"e": 7380,
"s": 7369,
"text": "event_dict"
},
{
"code": null,
"e": 7410,
"s": 7380,
"text": "Here are all the event types."
},
{
"code": null,
"e": 8154,
"s": 7410,
"text": "{1: 'road closed due to construction', 2: 'traffic jam', 3: 'stopped car on the shoulder', 4: 'road closed', 5: 'other', 6: 'object on roadway', 7: 'major event', 8: 'pothole', 9: 'traffic heavier than normal', 10: 'road construction', 11: 'fog', 12: 'accident', 13: 'slowdown', 14: 'stopped car', 15: 'small traffic jam', 16: 'stopped traffic', 17: 'heavy traffic', 18: 'minor accident', 19: 'medium traffic jam', 20: 'malfunctioning traffic light', 21: 'missing sign on the shoulder', 22: 'animal on the shoulder', 23: 'animal struck', 24: 'large traffic jam', 25: 'hazard on the shoulder', 26: 'hazard on road', 27: 'ice on roadway', 28: 'weather hazard', 29: 'flooding', 30: 'road closed due to hazard', 31: 'hail', 32: 'huge traffic jam'}"
},
{
"code": null,
"e": 8206,
"s": 8154,
"text": "Then, let’s see what is in the Dataframe called df."
},
{
"code": null,
"e": 8270,
"s": 8206,
"text": "It’s quite a long table. Let us just display the first 10 rows."
},
{
"code": null,
"e": 8282,
"s": 8270,
"text": "df.head(10)"
},
{
"code": null,
"e": 8400,
"s": 8282,
"text": "In each row, there is a corresponding label showing if the sequence of data followed with a severe traffic jam event."
},
{
"code": null,
"e": 8453,
"s": 8400,
"text": "Then we will ask Pandas to show us the last 10 rows."
},
{
"code": null,
"e": 8465,
"s": 8453,
"text": "df.tail(10)"
},
{
"code": null,
"e": 8562,
"s": 8465,
"text": "Now that we have loaded the data correctly, we will see which row contains the longest sequence."
},
{
"code": null,
"e": 8673,
"s": 8562,
"text": "We will use a function called idxmax() from Pandas, and it will help us to get the index of the largest value."
},
{
"code": null,
"e": 8738,
"s": 8673,
"text": "max_len_event_id = df.events.apply(len).idxmax()max_len_event_id"
},
{
"code": null,
"e": 8758,
"s": 8738,
"text": "Here is the result:"
},
{
"code": null,
"e": 8762,
"s": 8758,
"text": "105"
},
{
"code": null,
"e": 8805,
"s": 8762,
"text": "Let us dig into the sequence this row has:"
},
{
"code": null,
"e": 8867,
"s": 8805,
"text": "max_len_event = df.iloc[max_len_event_id]max_len_event.events"
},
{
"code": null,
"e": 8900,
"s": 8867,
"text": "The result is quite a long list."
},
{
"code": null,
"e": 10665,
"s": 8900,
"text": "['stopped car on the shoulder', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'slowdown', 'stopped traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'traffic heavier than normal', 'stopped car on the shoulder', 'traffic jam', 'heavy traffic', 'stopped traffic', 'stopped traffic', 'stopped traffic', 'heavy traffic', 'traffic jam', 'stopped car on the shoulder', 'stopped traffic', 'stopped traffic', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'traffic heavier than normal', 'traffic heavier than normal', 'traffic heavier than normal', 'heavy traffic', 'stopped traffic', 'traffic heavier than normal', 'pothole', 'stopped car on the shoulder', 'traffic jam', 'slowdown', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'traffic jam', 'traffic jam', 'stopped car on the shoulder', 'major event', 'traffic jam', 'traffic jam', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'stopped car on the shoulder', 'slowdown', 'heavy traffic', 'heavy traffic', 'stopped car on the shoulder', 'traffic jam', 'slowdown', 'slowdown', 'heavy traffic', 'stopped car on the shoulder', 'heavy traffic', 'minor accident', 'stopped car on the shoulder', 'heavy traffic', 'stopped car on the shoulder', 'heavy traffic', 'stopped traffic', 'heavy traffic', 'traffic heavier than normal', 'heavy traffic', 'stopped car on the shoulder', 'traffic heavier than normal', 'stopped traffic', 'heavy traffic', 'heavy traffic', 'heavy traffic', 'stopped car on the shoulder', 'slowdown', 'stopped traffic', 'heavy traffic', 'stopped car on the shoulder', 'traffic heavier than normal', 'heavy traffic', 'minor accident', 'major event', 'stopped car on the shoulder', 'stopped car on the shoulder']"
},
{
"code": null,
"e": 10875,
"s": 10665,
"text": "If you examine the sequence carefully, you will notice there are hints for a severe traffic jam on this road. However, you need to let the machine to get the “feeling” and classify the sequences automatically."
},
{
"code": null,
"e": 10909,
"s": 10875,
"text": "How long is the longest sequence?"
},
{
"code": null,
"e": 10950,
"s": 10909,
"text": "maxlen = len(max_len_event.events)maxlen"
},
{
"code": null,
"e": 10970,
"s": 10950,
"text": "Here is the answer:"
},
{
"code": null,
"e": 10973,
"s": 10970,
"text": "84"
},
{
"code": null,
"e": 11009,
"s": 10973,
"text": "Wow! That is a long list of events!"
},
{
"code": null,
"e": 11157,
"s": 11009,
"text": "The computer is not good at reading events names. Let us try to convert the names into numbers, so that the computer can handle the problem better."
},
{
"code": null,
"e": 11311,
"s": 11157,
"text": "To do it efficiently, we need to reverse the dictionary loaded before. That is, to try to convert the “index: event type” format into “event type:index”."
},
{
"code": null,
"e": 11383,
"s": 11311,
"text": "reversed_dict = {}for k, v in event_dict.items(): reversed_dict[v] = k"
},
{
"code": null,
"e": 11423,
"s": 11383,
"text": "Let us examine the reversed dictionary."
},
{
"code": null,
"e": 11437,
"s": 11423,
"text": "reversed_dict"
},
{
"code": null,
"e": 11457,
"s": 11437,
"text": "Here is the result."
},
{
"code": null,
"e": 12201,
"s": 11457,
"text": "{'accident': 12, 'animal on the shoulder': 22, 'animal struck': 23, 'flooding': 29, 'fog': 11, 'hail': 31, 'hazard on road': 26, 'hazard on the shoulder': 25, 'heavy traffic': 17, 'huge traffic jam': 32, 'ice on roadway': 27, 'large traffic jam': 24, 'major event': 7, 'malfunctioning traffic light': 20, 'medium traffic jam': 19, 'minor accident': 18, 'missing sign on the shoulder': 21, 'object on roadway': 6, 'other': 5, 'pothole': 8, 'road closed': 4, 'road closed due to construction': 1, 'road closed due to hazard': 30, 'road construction': 10, 'slowdown': 13, 'small traffic jam': 15, 'stopped car': 14, 'stopped car on the shoulder': 3, 'stopped traffic': 16, 'traffic heavier than normal': 9, 'traffic jam': 2, 'weather hazard': 28}"
},
{
"code": null,
"e": 12213,
"s": 12201,
"text": "We made it."
},
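{
"code": null,
"e": null,
"s": null,
"text": "As a side note, the same reversal can be written as a one-line dictionary comprehension. This is only an equivalent alternative sketch, not the code the tutorial uses."
},
{
"code": "# Equivalent one-liner (alternative sketch, same result as the loop above)\nreversed_dict = {v: k for k, v in event_dict.items()}",
"e": null,
"s": null,
"text": null
},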
{
"code": null,
"e": 12315,
"s": 12213,
"text": "Now we will need to compose a function, to convert a list of events, and return us a list of numbers."
},
{
"code": null,
"e": 12472,
"s": 12315,
"text": "def map_event_list_to_idxs(event_list): list_idxs = [] for event in (event_list): idx = reversed_dict[event] list_idxs.append(idx) return list_idxs"
},
{
"code": null,
"e": 12517,
"s": 12472,
"text": "Let us try the function on our longest list."
},
{
"code": null,
"e": 12562,
"s": 12517,
"text": "map_event_list_to_idxs(max_len_event.events)"
},
{
"code": null,
"e": 12577,
"s": 12562,
"text": "The result is:"
},
{
"code": null,
"e": 12877,
"s": 12577,
"text": "[3, 17, 17, 17, 13, 16, 17, 17, 17, 17, 9, 3, 2, 17, 16, 16, 16, 17, 2, 3, 16, 16, 16, 17, 9, 9, 9, 9, 17, 16, 9, 8, 3, 2, 13, 16, 17, 9, 2, 2, 3, 7, 2, 2, 16, 17, 9, 3, 13, 17, 17, 3, 2, 13, 13, 17, 3, 17, 18, 3, 17, 3, 17, 16, 17, 9, 17, 3, 9, 16, 17, 17, 17, 3, 13, 16, 17, 3, 9, 17, 18, 7, 3, 3]"
},
{
"code": null,
"e": 12934,
"s": 12877,
"text": "Now we load numpy and some utility functions from Keras."
},
{
"code": null,
"e": 13045,
"s": 12934,
"text": "import numpy as npfrom keras.utils import to_categoricalfrom keras.preprocessing.sequence import pad_sequences"
},
{
"code": null,
"e": 13111,
"s": 13045,
"text": "We need to figure out how many different event types we have got."
},
{
"code": null,
"e": 13127,
"s": 13111,
"text": "len(event_dict)"
},
{
"code": null,
"e": 13147,
"s": 13127,
"text": "Here is the result:"
},
{
"code": null,
"e": 13150,
"s": 13147,
"text": "32"
},
{
"code": null,
"e": 13219,
"s": 13150,
"text": "Let us convert the all the sequence of events into lists of numbers."
},
{
"code": null,
"e": 13259,
"s": 13219,
"text": "df.events.apply(map_event_list_to_idxs)"
},
{
"code": null,
"e": 13274,
"s": 13259,
"text": "The result is:"
},
{
"code": null,
"e": 16708,
"s": 13274,
"text": "0 [9, 17, 18, 14, 13, 17, 3, 13, 16, 3, 17, 17, ...1 [2, 10, 3]2 [2]3 [2]4 [2, 2, 2, 2, 2, 2, 2, 9]5 [3, 2, 17]6 [3, 2, 17]7 [2, 15, 2, 17, 2, 2, 13, 17, 2]8 [17, 2, 2, 16, 17, 2]9 [17, 2, 2, 16, 17, 2]10 [17, 16, 17, 2, 17, 3, 17, 17, 16, 17, 16, 18,...11 [17]12 [17]13 [24, 24]14 [24, 2, 24, 24, 2]15 [24, 2, 24, 24, 2]16 [2, 10, 2, 2, 2, 18, 16, 16, 7, 2, 16, 2, 2, 9...17 [2, 10, 2, 2, 2, 18, 16, 16, 7, 2, 16, 2, 2, 9...18 [24, 24, 24, 16, 2, 16]19 [24, 24, 24, 16, 2, 16]20 [2, 2]21 [2, 16, 2]22 [2, 16, 2]23 [2, 2]24 [2, 2]25 [24, 24]26 [2, 2]27 [2, 2, 2, 17]28 [2, 19, 2]29 [24] ...831 [9, 9, 9, 2, 9, 9, 17, 2, 9, 17]832 [3, 3, 3]833 [2, 9, 2, 17, 17, 2]834 [3, 3, 17, 3, 13, 3, 3, 23, 9, 3, 3, 25, 3, 3]835 [3, 17, 9, 14, 9, 17, 14, 9, 2, 9, 3, 2, 2, 17]836 [2]837 [17, 2, 16, 3, 9, 17, 17, 17, 13, 17, 9, 17]838 [13, 17, 17, 3, 3, 16, 17, 16, 17, 16, 3, 9, 1...839 [2]840 [3]841 [2]842 [17, 17, 17, 3, 17, 23, 16, 17, 17, 3, 2, 13, ...843 [3, 3]844 [2]845 [2, 17, 2, 2, 2, 2, 2, 17, 2, 2]846 [7, 17, 3, 18, 17]847 [3, 3, 3]848 [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...849 [2, 2]850 [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 13, 3, 2]851 [2, 2, 2]852 [16, 2, 16]853 [3, 16, 5, 3, 17, 3, 16, 9, 3, 2, 17]854 [16]855 [3, 3, 3, 3, 3, 3, 3, 3, 2, 13, 3, 6, 3, 6, 3,...856 [17, 17, 17, 2, 3, 2, 2, 2, 2, 2]857 [2, 2]858 [2, 2, 9, 17, 2, 2]859 [17, 3, 2, 2, 2, 2, 2, 2]860 [17, 3, 3, 17, 3, 17, 2, 3, 18, 14, 3, 3, 16, ...Name: events, Length: 1722, dtype: object"
},
{
"code": null,
"e": 16841,
"s": 16708,
"text": "As human beings, it is hard for us to recognize the meaning of each number represents. However, for the computer, it is much easier."
},
{
"code": null,
"e": 16912,
"s": 16841,
"text": "We name the result list as sequences, and display for first five rows."
},
{
"code": null,
"e": 16986,
"s": 16912,
"text": "sequences = df.events.apply(map_event_list_to_idxs).tolist()sequences[:5]"
},
{
"code": null,
"e": 17006,
"s": 16986,
"text": "Here is the result:"
},
{
"code": null,
"e": 17219,
"s": 17006,
"text": "[[9, 17, 18, 14, 13, 17, 3, 13, 16, 3, 17, 17, 16, 3, 16, 17, 9, 17, 2, 17, 2, 7, 16, 17, 17, 17, 17, 13, 5, 17, 9, 9, 16, 16, 3], [2, 10, 3], [2], [2], [2, 2, 2, 2, 2, 2, 2, 9]]"
},
{
"code": null,
"e": 17278,
"s": 17219,
"text": "Note the first row is much longer than the following ones."
},
{
"code": null,
"e": 17523,
"s": 17278,
"text": "However, to apply a sequence model on the data, we need to make sure all the input sequences share the same length. Hence, we use the length of the longest sequence as the max length, and fill other shorter sequences with 0s from the beginning."
},
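{
"code": null,
"e": null,
"s": null,
"text": "To see exactly what “filling from the beginning” means, here is a tiny toy sketch with made-up values:"
},
{
"code": "# Toy illustration of pre-padding (made-up values)\nfrom keras.preprocessing.sequence import pad_sequences\n\nprint(pad_sequences([[2, 10, 3]], maxlen=6))\n# expected output: [[ 0  0  0  2 10  3]]",
"e": null,
"s": null,
"text": null
},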
{
"code": null,
"e": 17574,
"s": 17523,
"text": "data = pad_sequences(sequences, maxlen=maxlen)data"
},
{
"code": null,
"e": 17605,
"s": 17574,
"text": "Here are the padded sequences:"
},
{
"code": null,
"e": 17853,
"s": 17605,
"text": "array([[ 0, 0, 0, ..., 16, 16, 3], [ 0, 0, 0, ..., 2, 10, 3], [ 0, 0, 0, ..., 0, 0, 2], ..., [ 0, 0, 0, ..., 17, 2, 2], [ 0, 0, 0, ..., 2, 2, 2], [ 0, 0, 0, ..., 3, 3, 2]], dtype=int32)"
},
{
"code": null,
"e": 17897,
"s": 17853,
"text": "Now all the sequences have the same length."
},
{
"code": null,
"e": 17978,
"s": 17897,
"text": "We will need to get the label column, and save it into a variable called labels."
},
{
"code": null,
"e": 18006,
"s": 17978,
"text": "labels = np.array(df.label)"
},
{
"code": null,
"e": 18142,
"s": 18006,
"text": "As we need to use several random functions, to keep your running result identical to mine, we will specify the random seed value as 12."
},
{
"code": null,
"e": 18161,
"s": 18142,
"text": "np.random.seed(12)"
},
{
"code": null,
"e": 18234,
"s": 18161,
"text": "When you finished the first running of the code, feel free to modify it."
},
{
"code": null,
"e": 18298,
"s": 18234,
"text": "We shuffle the sequences along with their corresponding labels."
},
{
"code": null,
"e": 18403,
"s": 18298,
"text": "indices = np.arange(data.shape[0])np.random.shuffle(indices)data = data[indices]labels = labels[indices]"
},
{
"code": null,
"e": 18500,
"s": 18403,
"text": "The training set will contain 80% of the data, while the other 20% goes into the validation set."
},
{
"code": null,
"e": 18594,
"s": 18500,
"text": "training_samples = int(len(indices) * .8)validation_samples = len(indices) - training_samples"
},
{
"code": null,
"e": 18688,
"s": 18594,
"text": "The following codes divide the data into training and validation sets, along with the labels."
},
{
"code": null,
"e": 18901,
"s": 18688,
"text": "X_train = data[:training_samples]y_train = labels[:training_samples]X_valid = data[training_samples: training_samples + validation_samples]y_valid = labels[training_samples: training_samples + validation_samples]"
},
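{
"code": null,
"e": null,
"s": null,
"text": "For reference, scikit-learn's train_test_split shuffles and splits in a single call. This is an optional alternative sketch; the tutorial itself performs the split manually as above."
},
{
"code": "# Optional alternative sketch using scikit-learn (not used in this tutorial)\nfrom sklearn.model_selection import train_test_split\n\nX_train, X_valid, y_train, y_valid = train_test_split(\n    data, labels, test_size=0.2, random_state=12)",
"e": null,
"s": null,
"text": null
},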
{
"code": null,
"e": 18943,
"s": 18901,
"text": "Let us show the content of training data:"
},
{
"code": null,
"e": 18951,
"s": 18943,
"text": "X_train"
},
{
"code": null,
"e": 18971,
"s": 18951,
"text": "Here is the result."
},
{
"code": null,
"e": 19219,
"s": 18971,
"text": "array([[ 0, 0, 0, ..., 15, 15, 3], [ 0, 0, 0, ..., 0, 2, 2], [ 0, 0, 0, ..., 0, 0, 16], ..., [ 0, 0, 0, ..., 2, 15, 16], [ 0, 0, 0, ..., 2, 2, 2], [ 0, 0, 0, ..., 0, 0, 2]], dtype=int32)"
},
{
"code": null,
"e": 19334,
"s": 19219,
"text": "Please do note that as we filled the sequences with 0 as padding value, now we have 33, instead of 32 event types."
},
{
"code": null,
"e": 19382,
"s": 19334,
"text": "So the number of event types will be set to 33."
},
{
"code": null,
"e": 19415,
"s": 19382,
"text": "num_events = len(event_dict) + 1"
},
{
"code": null,
"e": 19718,
"s": 19415,
"text": "If we simply put the numbers into classification model, it will regard each number as a continuous value. However, they are not. So we will let the numbers go through an Embedding layer, and convert each number (representing a certain type of event) into a vector. Each vector, will contain 20 scalars."
},
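{
"code": null,
"e": null,
"s": null,
"text": "To make the idea concrete, here is a small shape sketch: each integer in a padded sequence becomes a 20-dimensional vector after the Embedding layer. The stand-alone demo model below is illustrative only, not part of the tutorial's pipeline."
},
{
"code": "# Shape sketch: integers in, vectors out (illustrative demo model)\nfrom keras.models import Sequential\nfrom keras.layers import Embedding\n\ndemo = Sequential()\ndemo.add(Embedding(33, 20))      # 33 possible event ids, 20-dim vectors\nvecs = demo.predict(data[:1])    # one padded sequence of length 84\nprint(vecs.shape)                # expected: (1, 84, 20)",
"e": null,
"s": null,
"text": null
},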
{
"code": null,
"e": 19737,
"s": 19718,
"text": "embedding_dim = 20"
},
{
"code": null,
"e": 19794,
"s": 19737,
"text": "The initial embedding matrix will be generated randomly."
},
{
"code": null,
"e": 19855,
"s": 19794,
"text": "embedding_matrix = np.random.rand(num_events, embedding_dim)"
},
{
"code": null,
"e": 19890,
"s": 19855,
"text": "Finally, we can build a model now."
},
{
"code": null,
"e": 19988,
"s": 19890,
"text": "We use the Sequential model in Keras, and put different layers one by one, as we play with legos."
},
{
"code": null,
"e": 20151,
"s": 19988,
"text": "The first layer is Embedding Layer, then a LSTM Layer follows, the last layer is a dense one, whose activation function is sigmoid, to make binary classification."
},
{
"code": null,
"e": 20383,
"s": 20151,
"text": "from keras.models import Sequentialfrom keras.layers import Embedding, Flatten, Dense, LSTMunits = 32model = Sequential()model.add(Embedding(num_events, embedding_dim))model.add(LSTM(units))model.add(Dense(1, activation='sigmoid'))"
},
{
"code": null,
"e": 20515,
"s": 20383,
"text": "If you are not familiar with Keras, I recommend you to read “Deep Learning with Python” by François Chollet, the creater of Keras."
},
{
"code": null,
"e": 20729,
"s": 20515,
"text": "The next step is to handle the parameters in the Embedding layer. For now, we just load in the initial embedding matrix generated randomly, and won’t let the training process change the weights in Embedding Layer."
},
{
"code": null,
"e": 20810,
"s": 20729,
"text": "model.layers[0].set_weights([embedding_matrix])model.layers[0].trainable = False"
},
{
"code": null,
"e": 20871,
"s": 20810,
"text": "Then, we train the model, and save the model into a h5 file."
},
{
"code": null,
"e": 21179,
"s": 20871,
"text": "model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_valid, y_valid))model.save(\"mymodel_embedding_untrainable.h5\")"
},
{
"code": null,
"e": 21239,
"s": 21179,
"text": "With the strong support of TPU, the training is quite fast."
},
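{
"code": null,
"e": null,
"s": null,
"text": "If you come back to the notebook later, the saved file can be restored without retraining. A minimal sketch, assuming the h5 file sits in the working directory:"
},
{
"code": "# Restoring the trained model from the saved file (sketch)\nfrom keras.models import load_model\n\nmodel = load_model('mymodel_embedding_untrainable.h5')",
"e": null,
"s": null,
"text": null
},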
{
"code": null,
"e": 21333,
"s": 21239,
"text": "After the model is trained, let us visualize the curves of accuracy and loss with matplotlib."
},
{
"code": null,
"e": 21870,
"s": 21333,
"text": "import matplotlib.pyplot as pltacc = history.history['acc']val_acc = history.history['val_acc']loss = history.history['loss']val_loss = history.history['val_loss']epochs = range(1, len(acc) + 1)plt.plot(epochs, acc, 'bo', label='Training acc')plt.plot(epochs, val_acc, 'b', label='Validation acc')plt.title('Training and validation accuracy')plt.legend()plt.figure()plt.plot(epochs, loss, 'bo', label='Training loss')plt.plot(epochs, val_loss, 'b', label='Validation loss')plt.title('Training and validation loss')plt.legend()plt.show()"
},
{
"code": null,
"e": 21901,
"s": 21870,
"text": "This is the curve of accuracy."
},
{
"code": null,
"e": 22122,
"s": 21901,
"text": "As you can see, it is not bad. If we use a dummy model to predict everything as label 0 (or all as 1), the accuracy will stay at 0.50. So our model, apparently, has captured some pattern, and out-performed the dummy one."
},
{
"code": null,
"e": 22152,
"s": 22122,
"text": "However, it is very unstable."
},
{
"code": null,
"e": 22193,
"s": 22152,
"text": "Then let us look into the curve of loss."
},
{
"code": null,
"e": 22352,
"s": 22193,
"text": "As you may find out, it is not good. When the loss of training went down, the loss on validation set bumped, and there is no significant trend of convergence."
},
{
"code": null,
"e": 22397,
"s": 22352,
"text": "It is more important to find out the reason."
},
{
"code": null,
"e": 22531,
"s": 22397,
"text": "Note that we used a randomly initialized Embedding Matrix which stayed static during the training phase. It may lead us into trouble."
},
{
"code": null,
"e": 22623,
"s": 22531,
"text": "So next step, we can do an experiment to allow the Embedding layer be trained and adjusted."
},
{
"code": null,
"e": 22855,
"s": 22623,
"text": "from keras.models import Sequentialfrom keras.layers import Embedding, Flatten, Dense, LSTMunits = 32model = Sequential()model.add(Embedding(num_events, embedding_dim))model.add(LSTM(units))model.add(Dense(1, activation='sigmoid'))"
},
{
"code": null,
"e": 22932,
"s": 22855,
"text": "The only different in the code, is that parameter trainable was set to True."
},
{
"code": null,
"e": 23012,
"s": 22932,
"text": "model.layers[0].set_weights([embedding_matrix])model.layers[0].trainable = True"
},
{
"code": null,
"e": 23060,
"s": 23012,
"text": "Let us compile the model and run it over again."
},
{
"code": null,
"e": 23366,
"s": 23060,
"text": "model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_valid, y_valid))model.save(\"mymodel_embedding_trainable.h5\")"
},
{
"code": null,
"e": 23416,
"s": 23366,
"text": "And we draw the curves of accuracy and loss, too."
},
{
"code": null,
"e": 23953,
"s": 23416,
"text": "import matplotlib.pyplot as pltacc = history.history['acc']val_acc = history.history['val_acc']loss = history.history['loss']val_loss = history.history['val_loss']epochs = range(1, len(acc) + 1)plt.plot(epochs, acc, 'bo', label='Training acc')plt.plot(epochs, val_acc, 'b', label='Validation acc')plt.title('Training and validation accuracy')plt.legend()plt.figure()plt.plot(epochs, loss, 'bo', label='Training loss')plt.plot(epochs, val_loss, 'b', label='Validation loss')plt.title('Training and validation loss')plt.legend()plt.show()"
},
{
"code": null,
"e": 23994,
"s": 23953,
"text": "The curve of accuracy is shown as below."
},
{
"code": null,
"e": 24133,
"s": 23994,
"text": "As you can see, it got better. The fluctuation of validation accuracy curve went down, while the validation accuracy got higher than 0.75."
},
{
"code": null,
"e": 24179,
"s": 24133,
"text": "This model is, to some extent, more valuable."
},
{
"code": null,
"e": 24294,
"s": 24179,
"text": "However, we should not draw a conclusion so soon. If you look into the curve of loss, you cannot be so optimistic."
},
{
"code": null,
"e": 24376,
"s": 24294,
"text": "Since half-way, the trends of loss on different sets went to different direction."
},
{
"code": null,
"e": 24406,
"s": 24376,
"text": "It is a hint of over-fitting."
},
{
"code": null,
"e": 24489,
"s": 24406,
"text": "Over-fitting always indicates the training data is not enough for a complex model."
},
{
"code": null,
"e": 24566,
"s": 24489,
"text": "You can either add more data for the training, or bring down the complexity."
},
{
"code": null,
"e": 24679,
"s": 24566,
"text": "The first approach is not so applicable for now, because we have only got a dataset ranging 29 days in November."
},
{
"code": null,
"e": 24767,
"s": 24679,
"text": "However, to bring down the complexity of the model, it can be done easily with Dropout."
},
{
"code": null,
"e": 25026,
"s": 24767,
"text": "When you use Dropout, the model will randomly select a certain proportion (you call the shots) of neurons, and set the weights of them to zero in the training phase, so that they can be regarded as “removed” from the network, and the complexity became lower."
},
{
"code": null,
"e": 25104,
"s": 25026,
"text": "Note in validation phase, the model will use all neurons without any Dropout."
},
{
"code": null,
"e": 25238,
"s": 25104,
"text": "We will add two parameters related with Dropouts. To do this, we use dropout=0.2, recurrent_dropout=0.2 when defining the LSTM layer."
},
{
"code": null,
"e": 25506,
"s": 25238,
"text": "from keras.models import Sequentialfrom keras.layers import Embedding, Flatten, Dense, LSTMunits = 32model = Sequential()model.add(Embedding(num_events, embedding_dim))model.add(LSTM(units, dropout=0.2, recurrent_dropout=0.2))model.add(Dense(1, activation='sigmoid'))"
},
{
"code": null,
"e": 25571,
"s": 25506,
"text": "We will keep the parameter trainable of Embedding Layer to True."
},
{
"code": null,
"e": 25651,
"s": 25571,
"text": "model.layers[0].set_weights([embedding_matrix])model.layers[0].trainable = True"
},
{
"code": null,
"e": 25682,
"s": 25651,
"text": "Let us run the training again."
},
{
"code": null,
"e": 26001,
"s": 25682,
"text": "model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['acc'])history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_valid, y_valid))model.save(\"mymodel_embedding_trainable_with_dropout.h5\")"
},
{
"code": null,
"e": 26053,
"s": 26001,
"text": "There is no modification of the visualization part."
},
{
"code": null,
"e": 26590,
"s": 26053,
"text": "import matplotlib.pyplot as pltacc = history.history['acc']val_acc = history.history['val_acc']loss = history.history['loss']val_loss = history.history['val_loss']epochs = range(1, len(acc) + 1)plt.plot(epochs, acc, 'bo', label='Training acc')plt.plot(epochs, val_acc, 'b', label='Validation acc')plt.title('Training and validation accuracy')plt.legend()plt.figure()plt.plot(epochs, loss, 'bo', label='Training loss')plt.plot(epochs, val_loss, 'b', label='Validation loss')plt.title('Training and validation loss')plt.legend()plt.show()"
},
{
"code": null,
"e": 26645,
"s": 26590,
"text": "From the accuracy curve, you may see nothing exciting."
},
{
"code": null,
"e": 26728,
"s": 26645,
"text": "However, when you look into the curve of loss, you’ll see significant improvement."
},
{
"code": null,
"e": 26817,
"s": 26728,
"text": "The curve of validation loss is smoother, and much closer to the trend of training loss."
},
{
"code": null,
"e": 26921,
"s": 26817,
"text": "Over-fitting has been taken care of, and the model is now more stable and generalizable to unseen data."
},
{
"code": null,
"e": 27111,
"s": 26921,
"text": "The Traffic Administration can then use the model to predict the happening of severe traffic jam with the Waze open data of incidents report. The expectation of model accuracy is about 75%."
},
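{
"code": null,
"e": null,
"s": null,
"text": "As a rough sketch of how such a prediction call could look in practice: feed a padded sequence of event indices to the trained model and threshold the returned probability. The 0.5 cutoff is an assumed default, not something tuned in this tutorial."
},
{
"code": "# Illustrative prediction sketch (assumed 0.5 threshold)\nprob = model.predict(X_valid[:1])[0][0]   # probability of a severe jam\nif prob > 0.5:\n    print('Warning: severe traffic jam likely')",
"e": null,
"s": null,
"text": null
},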
{
"code": null,
"e": 27205,
"s": 27111,
"text": "Maybe in this way, some severe traffic jam will not happen at all, thanks to the preventions."
},
{
"code": null,
"e": 27263,
"s": 27205,
"text": "Hope you can get the following points from this tutorial."
},
{
"code": null,
"e": 27367,
"s": 27263,
"text": "Sequence model, such as RNN and LSTM, can not only be used on texts, but other sequential data as well."
},
{
"code": null,
"e": 27581,
"s": 27367,
"text": "You can use Embedding layer in these kinds of tasks, even though they don’t have pre-trained word-embedding models, such as word2vec, glove, or fasttext. Make sure you set the weights of embedding layer trainable."
},
{
"code": null,
"e": 27700,
"s": 27581,
"text": "You can try to beat over-fitting with several different methods. Dropout is one of them. In our case, it is effective."
},
{
"code": null,
"e": 27775,
"s": 27700,
"text": "Hope you can handle your own classification task with sequential data now."
},
{
"code": null,
"e": 27796,
"s": 27775,
"text": "Happy Deep Learning!"
},
{
"code": null,
"e": 27833,
"s": 27796,
"text": "My other Tutorials on Deep Learning:"
},
{
"code": null,
"e": 27924,
"s": 27833,
"text": "Deep Learning with Python and fast.ai, Part 1: Image classification with pre-trained model"
},
{
"code": null,
"e": 28013,
"s": 27924,
"text": "Deep Learning with Python and fast.ai, Part 2: NLP Classification with Transfer Learning"
},
{
"code": null,
"e": 28082,
"s": 28013,
"text": "Deep Learning with Python, Part 0: Setup Fast.ai 1.0 on Google Cloud"
}
] |
Turtle - Draw Lines using arrow keys - GeeksforGeeks | 24 Jan, 2021
In this article, we will learn how to draw lines using the keyboard (arrow keys) in turtle graphics. Let’s first discuss some methods used in the implementation below:
wn.listen(): Using this then we can give keyboard inputswn.onkeypress(func, “key”): This function is used to bind fun to the key-release event of the key. In order to be able to register key-events, TurtleScreen must have focus.setx(position): This method is used to set the turtle’s second coordinate to x, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the x coordinate to the given input keeping the y coordinate unchanged.sety(position): This method is used to set the turtle’s second coordinate to y, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the y coordinate to the given input keeping the x coordinate unchanged.ycor(): This function is used to return the turtle’s y coordinate of the current position of the turtle. It doesn’t require any argument.xcor(): This function is used to return the turtle’s x coordinate of the current position of the turtle. It doesn’t require any argumenthead.penup: Picks the pen up so the turtle does not draw a line as it moveshead.hideturtle: This method is used to make the turtle invisible. It’s a good idea to do this while you’re in the middle of a complicated drawing because hiding the turtle speeds up the drawing observably. This method does not require any argument.head.clear: This function is used to delete the turtle’s drawings from the screenhead.write: This function is used to write text at the current turtle position.
wn.listen(): Using this then we can give keyboard inputs
wn.onkeypress(func, “key”): This function is used to bind func to the key-release event of the given key. In order to be able to register key-events, TurtleScreen must have focus (a minimal sketch of this pattern follows the list below).
setx(position): This method is used to set the turtle’s second coordinate to x, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the x coordinate to the given input keeping the y coordinate unchanged.
sety(position): This method is used to set the turtle’s second coordinate to y, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the y coordinate to the given input keeping the x coordinate unchanged.
ycor(): This function is used to return the turtle’s y coordinate of the current position of the turtle. It doesn’t require any argument.
xcor(): This function is used to return the turtle’s x coordinate of the current position of the turtle. It doesn’t require any argument
head.penup: Picks the pen up so the turtle does not draw a line as it moves
head.hideturtle: This method is used to make the turtle invisible. It’s a good idea to do this while you’re in the middle of a complicated drawing because hiding the turtle speeds up the drawing observably. This method does not require any argument.
head.clear: This function is used to delete the turtle’s drawings from the screen
head.write: This function is used to write text at the current turtle position.
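Before the approach and the full program, here is a minimal, self-contained sketch of the listen()/onkeypress() pattern on its own. The key name "Up" and the handler name go_up are illustrative choices for this sketch, not required by the turtle API.

import turtle

wn = turtle.Screen()
pen = turtle.Turtle()

# Hypothetical handler: move the pen 100 units up on each press
def go_up():
    pen.sety(pen.ycor() + 100)

wn.listen()                 # give the window keyboard focus
wn.onkeypress(go_up, "Up")  # bind the handler to the Up arrow key
wn.mainloop()               # keep the window open, processing events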
Approach
Import the turtle modules.
Get a screen to draw on
Define two instances for the turtle one is a pen and another is the head.
Head is for telling which key is currently pressed
Define the functions for the up, down, left, right movement of the turtle.
In the respective up, left, right and down functions set the arrow to move 100 units in up, left, right, and down directions respectively by changing the x and y coordinates.
Use function listen() for giving keyboard inputs.
Use onkeypress in order to register key-events.
Below is the Python implementation of the above approach:
Python3
# import for turtle module
import turtle

# making a workScreen
wn = turtle.Screen()

# defining 2 turtle instances
head = turtle.Turtle()
pen = turtle.Turtle()

# head is for telling which key is pressed
head.penup()
head.hideturtle()

# head is at 0,260 coordinate
head.goto(0, 260)
head.write("This is to tell which key is currently pressed",
           align="center", font=("courier", 14, "normal"))


def f():
    y = pen.ycor()
    pen.sety(y + 100)
    head.clear()
    head.write("UP", align="center", font=("courier", 24, "normal"))


def b():
    y = pen.ycor()
    pen.sety(y - 100)
    head.clear()
    head.write("Down", align="center", font=("courier", 24, "normal"))


def l():
    x = pen.xcor()
    pen.setx(x - 100)
    head.clear()
    head.write("left", align="center", font=("courier", 24, "normal"))


def r():
    x = pen.xcor()
    pen.setx(x + 100)
    head.clear()
    head.write("Right", align="center", font=("courier", 24, "normal"))


wn.listen()
wn.onkeypress(f, "Up")      # when up is pressed pen will go up
wn.onkeypress(b, "Down")    # when down is pressed pen will go down
wn.onkeypress(l, "Left")    # when left is pressed pen will go left
wn.onkeypress(r, "Right")   # when right is pressed pen will go right
wn.mainloop()               # keep the window open and responding to key presses
Output
Python-turtle
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
How To Convert Python Dictionary To JSON?
Check if element exists in list in Python
How to drop one or multiple columns in Pandas Dataframe
Python Classes and Objects
Python | os.path.join() method
Create a directory in Python
Defaultdict in Python
Python | Get unique values from a list
Python | Pandas dataframe.groupby() | [
{
"code": null,
"e": 25691,
"s": 25663,
"text": "\n24 Jan, 2021"
},
{
"code": null,
"e": 25863,
"s": 25691,
"text": "In this article, we will learn how to draw lines using the keyboard (arrow keys) in the turtle graphics. Let’s first discuss some methods used in the implementation below:"
},
{
"code": null,
"e": 27335,
"s": 25863,
"text": "wn.listen(): Using this then we can give keyboard inputswn.onkeypress(func, “key”): This function is used to bind fun to the key-release event of the key. In order to be able to register key-events, TurtleScreen must have focus.setx(position): This method is used to set the turtle’s second coordinate to x, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the x coordinate to the given input keeping the y coordinate unchanged.sety(position): This method is used to set the turtle’s second coordinate to y, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the y coordinate to the given input keeping the x coordinate unchanged.ycor(): This function is used to return the turtle’s y coordinate of the current position of the turtle. It doesn’t require any argument.xcor(): This function is used to return the turtle’s x coordinate of the current position of the turtle. It doesn’t require any argumenthead.penup: Picks the pen up so the turtle does not draw a line as it moveshead.hideturtle: This method is used to make the turtle invisible. It’s a good idea to do this while you’re in the middle of a complicated drawing because hiding the turtle speeds up the drawing observably. This method does not require any argument.head.clear: This function is used to delete the turtle’s drawings from the screenhead.write: This function is used to write text at the current turtle position."
},
{
"code": null,
"e": 27392,
"s": 27335,
"text": "wn.listen(): Using this then we can give keyboard inputs"
},
{
"code": null,
"e": 27565,
"s": 27392,
"text": "wn.onkeypress(func, “key”): This function is used to bind fun to the key-release event of the key. In order to be able to register key-events, TurtleScreen must have focus."
},
{
"code": null,
"e": 27809,
"s": 27565,
"text": "setx(position): This method is used to set the turtle’s second coordinate to x, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the x coordinate to the given input keeping the y coordinate unchanged."
},
{
"code": null,
"e": 28053,
"s": 27809,
"text": "sety(position): This method is used to set the turtle’s second coordinate to y, leaving the first coordinate unchanged Here, whatever the position of the turtle is, it set the y coordinate to the given input keeping the x coordinate unchanged."
},
{
"code": null,
"e": 28191,
"s": 28053,
"text": "ycor(): This function is used to return the turtle’s y coordinate of the current position of the turtle. It doesn’t require any argument."
},
{
"code": null,
"e": 28328,
"s": 28191,
"text": "xcor(): This function is used to return the turtle’s x coordinate of the current position of the turtle. It doesn’t require any argument"
},
{
"code": null,
"e": 28404,
"s": 28328,
"text": "head.penup: Picks the pen up so the turtle does not draw a line as it moves"
},
{
"code": null,
"e": 28654,
"s": 28404,
"text": "head.hideturtle: This method is used to make the turtle invisible. It’s a good idea to do this while you’re in the middle of a complicated drawing because hiding the turtle speeds up the drawing observably. This method does not require any argument."
},
{
"code": null,
"e": 28736,
"s": 28654,
"text": "head.clear: This function is used to delete the turtle’s drawings from the screen"
},
{
"code": null,
"e": 28816,
"s": 28736,
"text": "head.write: This function is used to write text at the current turtle position."
},
{
"code": null,
"e": 28825,
"s": 28816,
"text": "Approach"
},
{
"code": null,
"e": 28852,
"s": 28825,
"text": "Import the turtle modules."
},
{
"code": null,
"e": 28876,
"s": 28852,
"text": "Get a screen to draw on"
},
{
"code": null,
"e": 28950,
"s": 28876,
"text": "Define two instances for the turtle one is a pen and another is the head."
},
{
"code": null,
"e": 29001,
"s": 28950,
"text": "Head is for telling which key is currently pressed"
},
{
"code": null,
"e": 29076,
"s": 29001,
"text": "Define the functions for the up, down, left, right movement of the turtle."
},
{
"code": null,
"e": 29251,
"s": 29076,
"text": "In the respective up, left, right and down functions set the arrow to move 100 units in up, left, right, and down directions respectively by changing the x and y coordinates."
},
{
"code": null,
"e": 29301,
"s": 29251,
"text": "Use function listen() for giving keyboard inputs."
},
{
"code": null,
"e": 29349,
"s": 29301,
"text": "Use onkeypress in order to register key-events."
},
{
"code": null,
"e": 29407,
"s": 29349,
"text": "Below is the Python implementation of the above approach:"
},
{
"code": null,
"e": 29415,
"s": 29407,
"text": "Python3"
},
{
"code": "# import for turtle moduleimport turtle # making a workScreenwn = turtle.Screen() # defining 2 turtle instancehead = turtle.Turtle()pen = turtle.Turtle() # head is for telling which key is pressedhead.penup()head.hideturtle() # head is at 0,260 coordinatehead.goto(0, 260)head.write(\"This is to tell which key is currently pressed\", align=\"center\", font=(\"courier\", 14, \"normal\")) def f(): y = pen.ycor() pen.sety(y+100) head.clear() head.write(\"UP\", align=\"center\", font=(\"courier\", 24, \"normal\")) def b(): y = pen.ycor() pen.sety(y-100) head.clear() head.write(\"Down\", align=\"center\", font=(\"courier\", 24, \"normal\")) def l(): x = pen.xcor() pen.setx(x-100) head.clear() head.write(\"left\", align=\"center\", font=(\"courier\", 24, \"normal\")) def r(): x = pen.xcor() pen.setx(x+100) head.clear() head.write(\"Right\", align=\"center\", font=(\"courier\", 24, \"normal\")) wn.listen()wn.onkeypress(f, \"Up\") # when up is pressed pen will go upwn.onkeypress(b, \"Down\") # when down is pressed pen will go downwn.onkeypress(l, \"Left\") # when left is pressed pen will go leftwn.onkeypress(r, \"Right\") # when right is pressed pen will go right",
"e": 30621,
"s": 29415,
"text": null
},
{
"code": null,
"e": 30628,
"s": 30621,
"text": "Output"
},
{
"code": null,
"e": 30642,
"s": 30628,
"text": "Python-turtle"
},
{
"code": null,
"e": 30649,
"s": 30642,
"text": "Python"
},
{
"code": null,
"e": 30747,
"s": 30649,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30779,
"s": 30747,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 30821,
"s": 30779,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 30863,
"s": 30821,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 30919,
"s": 30863,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 30946,
"s": 30919,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 30977,
"s": 30946,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 31006,
"s": 30977,
"text": "Create a directory in Python"
},
{
"code": null,
"e": 31028,
"s": 31006,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 31067,
"s": 31028,
"text": "Python | Get unique values from a list"
}
] |
numpy.dot() in Python - GeeksforGeeks | 18 Nov, 2021
numpy.dot(vector_a, vector_b, out = None) returns the dot product of vectors a and b. It can handle 2D arrays but considers them as matrix and will perform matrix multiplication. For N dimensions it is a sum-product over the last axis of a and the second-to-last of b :
dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m])
Parameters
vector_a : [array_like] the first argument; if a is complex, it is used as-is (unlike numpy.vdot, numpy.dot does not take the complex conjugate). vector_b : [array_like] the second argument; complex values are likewise used as-is. out : [array, optional] output argument must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b).
vector_a : [array_like] the first argument; if a is complex, it is used as-is (unlike numpy.vdot, numpy.dot does not take the complex conjugate).
vector_b : [array_like] the second argument; complex values are likewise used as-is (no conjugation is applied).
out : [array, optional] output argument must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b).
Return:
Dot Product of vectors a and b. if vector_a and vector_b are 1D, then scalar is returned
Code 1:
Python
# Python Program illustrating
# numpy.dot() method

import numpy as geek

# Scalars
product = geek.dot(5, 4)
print("Dot Product of scalar values : ", product)

# Complex numbers
vector_a = 2 + 3j
vector_b = 4 + 5j

product = geek.dot(vector_a, vector_b)
print("Dot Product : ", product)
Output:
Dot Product of scalar values : 20
Dot Product : (-7+22j)
How Code1 works ?
vector_a = 2 + 3j
vector_b = 4 + 5j
now dot product
= (2 + 3j)(4 + 5j)
= 8 + 10j + 12j + 15j²
= 8 + 22j - 15
= -7 + 22j
Code 2:
Python
# Python Program illustrating
# numpy.dot() method

import numpy as geek

# 2D arrays
vector_a = geek.array([[1, 4], [5, 6]])
vector_b = geek.array([[2, 4], [5, 2]])

product = geek.dot(vector_a, vector_b)
print("Dot Product : \n", product)

product = geek.dot(vector_b, vector_a)
print("\nDot Product : \n", product)

"""Code 2 : as normal matrix multiplication"""
Output:
Dot Product :
[[22 12]
[40 32]]
Dot Product :
[[22 32]
[15 32]]
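The out parameter described above is not exercised by either example, so here is a small sketch of how it can be used. This is illustrative only; note that out must already be C-contiguous, have the right shape, and match the dtype numpy.dot would return (float64 for these float inputs).

import numpy as np

a = np.array([[1., 4.], [5., 6.]])
b = np.array([[2., 4.], [5., 2.]])

# Pre-allocated buffer that receives the result in place
out = np.empty((2, 2))
np.dot(a, b, out=out)
print(out)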
References: https://numpy.org/doc/stable/reference/generated/numpy.dot.html
This article is contributed by Mohit Gupta_OMG . If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.
Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
codeachalesh
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
Check if element exists in list in Python
How To Convert Python Dictionary To JSON?
How to drop one or multiple columns in Pandas Dataframe
Defaultdict in Python
Python Classes and Objects
Create a directory in Python
Python | os.path.join() method
Python | Pandas dataframe.groupby()
Python | Get unique values from a list | [
{
"code": null,
"e": 24390,
"s": 24362,
"text": "\n18 Nov, 2021"
},
{
"code": null,
"e": 24660,
"s": 24390,
"text": "numpy.dot(vector_a, vector_b, out = None) returns the dot product of vectors a and b. It can handle 2D arrays but considers them as matrix and will perform matrix multiplication. For N dimensions it is a sum-product over the last axis of a and the second-to-last of b :"
},
{
"code": null,
"e": 24709,
"s": 24660,
"text": "dot(a, b)[i,j,k,m] = sum(a[i,j,:] * b[k,:,m]) \n "
},
{
"code": null,
"e": 24720,
"s": 24709,
"text": "Parameters"
},
{
"code": null,
"e": 25072,
"s": 24720,
"text": "vector_a : [array_like] if a is complex its complex conjugate is used for the calculation of the dot product. vector_b : [array_like] if b is complex its complex conjugate is used for the calculation of the dot product. out : [array, optional] output argument must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b). "
},
{
"code": null,
"e": 25183,
"s": 25072,
"text": "vector_a : [array_like] if a is complex its complex conjugate is used for the calculation of the dot product. "
},
{
"code": null,
"e": 25294,
"s": 25183,
"text": "vector_b : [array_like] if b is complex its complex conjugate is used for the calculation of the dot product. "
},
{
"code": null,
"e": 25426,
"s": 25294,
"text": "out : [array, optional] output argument must be C-contiguous, and its dtype must be the dtype that would be returned for dot(a,b). "
},
{
"code": null,
"e": 25434,
"s": 25426,
"text": "Return:"
},
{
"code": null,
"e": 25523,
"s": 25434,
"text": "Dot Product of vectors a and b. if vector_a and vector_b are 1D, then scalar is returned"
},
{
"code": null,
"e": 25531,
"s": 25523,
"text": "Code 1:"
},
{
"code": null,
"e": 25538,
"s": 25531,
"text": "Python"
},
{
"code": "# Python Program illustrating# numpy.dot() method import numpy as geek # Scalarsproduct = geek.dot(5, 4)print(\"Dot Product of scalar values : \", product) # 1D arrayvector_a = 2 + 3jvector_b = 4 + 5j product = geek.dot(vector_a, vector_b)print(\"Dot Product : \", product)",
"e": 25810,
"s": 25538,
"text": null
},
{
"code": null,
"e": 25818,
"s": 25810,
"text": "Output:"
},
{
"code": null,
"e": 25879,
"s": 25818,
"text": "Dot Product of scalar values : 20\nDot Product : (-7+22j)"
},
{
"code": null,
"e": 25898,
"s": 25879,
"text": "How Code1 works ? "
},
{
"code": null,
"e": 25917,
"s": 25898,
"text": "vector_a = 2 + 3j "
},
{
"code": null,
"e": 25935,
"s": 25917,
"text": "vector_b = 4 + 5j"
},
{
"code": null,
"e": 25952,
"s": 25935,
"text": "now dot product "
},
{
"code": null,
"e": 25978,
"s": 25952,
"text": "= 2(4 + 5j) + 3j(4 – 5j) "
},
{
"code": null,
"e": 26000,
"s": 25978,
"text": "= 8 + 10j + 12j – 15 "
},
{
"code": null,
"e": 26011,
"s": 26000,
"text": "= -7 + 22j"
},
{
"code": null,
"e": 26019,
"s": 26011,
"text": "Code 2:"
},
{
"code": null,
"e": 26026,
"s": 26019,
"text": "Python"
},
{
"code": "# Python Program illustrating# numpy.dot() method import numpy as geek # 1D arrayvector_a = geek.array([[1, 4], [5, 6]])vector_b = geek.array([[2, 4], [5, 2]]) product = geek.dot(vector_a, vector_b)print(\"Dot Product : \\n\", product) product = geek.dot(vector_b, vector_a)print(\"\\nDot Product : \\n\", product) \"\"\"Code 2 : as normal matrix multiplication\"\"\"",
"e": 26383,
"s": 26026,
"text": null
},
{
"code": null,
"e": 26391,
"s": 26383,
"text": "Output:"
},
{
"code": null,
"e": 26464,
"s": 26391,
"text": "Dot Product : \n [[22 12]\n [40 32]]\n\nDot Product : \n [[22 32]\n [15 32]]"
},
{
"code": null,
"e": 26540,
"s": 26464,
"text": "References: https://numpy.org/doc/stable/reference/generated/numpy.dot.html"
},
{
"code": null,
"e": 26840,
"s": 26540,
"text": "This article is contributed by Mohit Gupta_OMG . If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks."
},
{
"code": null,
"e": 26966,
"s": 26840,
"text": "Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 26979,
"s": 26966,
"text": "codeachalesh"
},
{
"code": null,
"e": 26986,
"s": 26979,
"text": "Python"
},
{
"code": null,
"e": 27084,
"s": 26986,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27116,
"s": 27084,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27158,
"s": 27116,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 27200,
"s": 27158,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 27256,
"s": 27200,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 27278,
"s": 27256,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 27305,
"s": 27278,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 27334,
"s": 27305,
"text": "Create a directory in Python"
},
{
"code": null,
"e": 27365,
"s": 27334,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 27401,
"s": 27365,
"text": "Python | Pandas dataframe.groupby()"
}
] |
HashSet equals() method in Java with Example - GeeksforGeeks | 19 Feb, 2020
The equals() method of java.util.HashSet class is used to verify the equality of an Object with a HashSet and compare them. The method returns true only if both HashSets contain the same elements, irrespective of order.
Syntax:
public boolean equals(Object o)
Parameters: This method takes the object o as a parameter to be compared for equality with this set.
Return Value: This method returns true if the specified object is equal to this set.
Below are the examples to illustrate the equals() method.
Example 1:
// Java program to demonstrate equals()
// method of HashSet

import java.util.*;

public class GFG {
    public static void main(String[] argv)
    {
        // Creating object of HashSet<String>
        HashSet<String> arrset1 = new HashSet<String>();

        // Populating arrset1
        arrset1.add("A");
        arrset1.add("B");
        arrset1.add("C");
        arrset1.add("D");
        arrset1.add("E");

        // print arrset1
        System.out.println("First HashSet: " + arrset1);

        // Creating another object of HashSet<String>
        HashSet<String> arrset2 = new HashSet<String>();

        // Populating arrset2
        arrset2.add("A");
        arrset2.add("B");
        arrset2.add("C");
        arrset2.add("D");
        arrset2.add("E");

        // print arrset2
        System.out.println("Second HashSet: " + arrset2);

        // comparing first HashSet to another
        // using equals() method
        boolean value = arrset1.equals(arrset2);

        // print the value
        System.out.println("Are both set equal: " + value);
    }
}
First HashSet: [A, B, C, D, E]
Second HashSet: [A, B, C, D, E]
Are both set equal: true
Example 2:
// Java program to demonstrate equals()
// method of HashSet

import java.util.*;

public class GFG1 {
    public static void main(String[] argv)
    {
        // Creating object of HashSet
        HashSet<Integer> arrset1 = new HashSet<Integer>();

        // Populating arrset1
        arrset1.add(10);
        arrset1.add(20);
        arrset1.add(30);
        arrset1.add(40);
        arrset1.add(50);

        // print arrset1
        System.out.println("First HashSet: " + arrset1);

        // Creating another object of HashSet
        HashSet<Integer> arrset2 = new HashSet<Integer>();

        // Populating arrset2
        arrset2.add(10);
        arrset2.add(20);
        arrset2.add(30);

        // print arrset2
        System.out.println("Second HashSet: " + arrset2);

        // comparing first HashSet to another
        // using equals() method
        boolean value = arrset1.equals(arrset2);

        // print the value
        System.out.println("Are both set equal: " + value);
    }
}
First HashSet: [50, 20, 40, 10, 30]
Second HashSet: [20, 10, 30]
Are both set equal: false
vivekpant
Java - util package
Java-Collections
Java-Functions
java-hashset
Java
Java
Java-Collections
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Stream In Java
Interfaces in Java
ArrayList in Java
Stack Class in Java
Singleton Class in Java
Multidimensional Arrays in Java
Multithreading in Java
Collections in Java
Initializing a List in Java
Overriding in Java | [
{
"code": null,
"e": 25565,
"s": 25537,
"text": "\n19 Feb, 2020"
},
{
"code": null,
"e": 25776,
"s": 25565,
"text": "The equals() method of java.util.HashSet class is used verify the equality of an Object with a HashSet and compare them. The list returns true only if both HashSet contains same elements, irrespective of order."
},
{
"code": null,
"e": 25784,
"s": 25776,
"text": "Syntax:"
},
{
"code": null,
"e": 25816,
"s": 25784,
"text": "public boolean equals(Object o)"
},
{
"code": null,
"e": 25917,
"s": 25816,
"text": "Parameters: This method takes the object o as a parameter to be compared for equality with this set."
},
{
"code": null,
"e": 26003,
"s": 25917,
"text": "Returns Value: This method returns true if the specified object is equal to this set."
},
{
"code": null,
"e": 26061,
"s": 26003,
"text": "Below are the examples to illustrate the equals() method."
},
{
"code": null,
"e": 26072,
"s": 26061,
"text": "Example 1:"
},
{
"code": "// Java program to demonstrate equals()// method of HashSet import java.util.*; public class GFG { public static void main(String[] argv) { // Creating object of HashSet<String> HashSet<String> arrset1 = new HashSet<String>(); // Populating arrset1 arrset1.add(\"A\"); arrset1.add(\"B\"); arrset1.add(\"C\"); arrset1.add(\"D\"); arrset1.add(\"E\"); // print arrset1 System.out.println(\"First HashSet: \" + arrset1); // Creating another object of HashSet<String> HashSet<String> arrset2 = new HashSet<String>(); // Populating arrset2 arrset2.add(\"A\"); arrset2.add(\"B\"); arrset2.add(\"C\"); arrset2.add(\"D\"); arrset2.add(\"E\"); // print arrset2 System.out.println(\"Second HashSet: \" + arrset2); // comparing first HashSet to another // using equals() method boolean value = arrset1.equals(arrset2); // print the value System.out.println(\"Are both set equal: \" + value); }}",
"e": 27242,
"s": 26072,
"text": null
},
{
"code": null,
"e": 27331,
"s": 27242,
"text": "First HashSet: [A, B, C, D, E]\nSecond HashSet: [A, B, C, D, E]\nAre both set equal: true\n"
},
{
"code": null,
"e": 27342,
"s": 27331,
"text": "Example 2:"
},
{
"code": "// Java program to demonstrate equals()// method of HashSet import java.util.*; public class GFG1 { public static void main(String[] argv) { // Creating object of HashSet HashSet<Integer> arrset1 = new HashSet<Integer>(); // Populating arrset1 arrset1.add(10); arrset1.add(20); arrset1.add(30); arrset1.add(40); arrset1.add(50); // print arrset1 System.out.println(\"First HashSet: \" + arrset1); // Creating another object of HashSet HashSet<Integer> arrset2 = new HashSet<Integer>(); // Populating arrset2 arrset2.add(10); arrset2.add(20); arrset2.add(30); // print arrset2 System.out.println(\"Second HashSet: \" + arrset2); // comparing first HashSet to another // using equals() method boolean value = arrset1.equals(arrset2); // print the value System.out.println(\"Are both set equal: \" + value); }}",
"e": 28432,
"s": 27342,
"text": null
},
{
"code": null,
"e": 28524,
"s": 28432,
"text": "First HashSet: [50, 20, 40, 10, 30]\nSecond HashSet: [20, 10, 30]\nAre both set equal: false\n"
},
{
"code": null,
"e": 28534,
"s": 28524,
"text": "vivekpant"
},
{
"code": null,
"e": 28554,
"s": 28534,
"text": "Java - util package"
},
{
"code": null,
"e": 28571,
"s": 28554,
"text": "Java-Collections"
},
{
"code": null,
"e": 28586,
"s": 28571,
"text": "Java-Functions"
},
{
"code": null,
"e": 28599,
"s": 28586,
"text": "java-hashset"
},
{
"code": null,
"e": 28604,
"s": 28599,
"text": "Java"
},
{
"code": null,
"e": 28609,
"s": 28604,
"text": "Java"
},
{
"code": null,
"e": 28626,
"s": 28609,
"text": "Java-Collections"
},
{
"code": null,
"e": 28724,
"s": 28626,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28739,
"s": 28724,
"text": "Stream In Java"
},
{
"code": null,
"e": 28758,
"s": 28739,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 28776,
"s": 28758,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 28796,
"s": 28776,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 28820,
"s": 28796,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 28852,
"s": 28820,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 28875,
"s": 28852,
"text": "Multithreading in Java"
},
{
"code": null,
"e": 28895,
"s": 28875,
"text": "Collections in Java"
},
{
"code": null,
"e": 28923,
"s": 28895,
"text": "Initializing a List in Java"
}
] |
How will you print numbers from 1 to 100 without using loop? | Set-2 - GeeksforGeeks | 30 Jun, 2021
If we take a look at this problem carefully, we can see that the idea of a “loop” is to track some counter value, e.g. “i = 0” till “i <= 100”. So if we aren’t allowed to use a loop, how else can we track something in the C language? Printing the numbers can be done with looping constructs such as for(), while(), or do-while(), but the same can be done without loops (using recursive functions or the goto statement).
Printing numbers from 1 to 100 using recursive functions has already been discussed in Set-1 . In this post, other two methods have been discussed:
1. Using goto statement:
C++
C
C#
#include <iostream>
using namespace std;

int main()
{
    int i = 0;

begin:
    i = i + 1;
    cout << i << " ";

    if (i < 100) {
        goto begin;
    }
    return 0;
}

// This code is contributed by ShubhamCoder
#include <stdio.h>

int main()
{
    int i = 0;

begin:
    i = i + 1;
    printf("%d ", i);

    if (i < 100)
        goto begin;

    return 0;
}
using System;

class GFG {

    static public void Main()
    {
        int i = 0;

    begin:
        i = i + 1;
        Console.Write(" " + i + " ");

        if (i < 100) {
            goto begin;
        }
    }
}

// This code is contributed by ShubhamCoder
1 2 3 4 . . . 97 98 99 100
2. Using recursive main function:
C++
C
Java
Python3
C#
// C++ program to print numbers from 1 to 100
// without an explicit loop, using a recursive main()
// (calling main() recursively is non-standard C++,
// but GCC accepts it)
#include <iostream>
using namespace std;

int main()
{
    static int i = 1;
    if (i <= 100) {
        cout << i++ << " ";
        main();
    }
    return 0;
}

// This code is contributed by ShubhamCoder
// C program to print numbers from 1 to 100
// without an explicit loop, using a recursive main()
#include <stdio.h>

int main()
{
    static int i = 1;
    if (i <= 100) {
        printf("%d ", i++);
        main();
    }
    return 0;
}
// Java program to print numbers from 1 to 100
// without an explicit loop, using a recursive main()
class GFG {

    static int i = 1;

    public static void main(String[] args)
    {
        if (i <= 100) {
            System.out.printf("%d ", i++);
            main(null);
        }
    }
}

// This code is contributed by Rajput-Ji
# Python3 program to print numbers from 1 to 100
# without an explicit loop, using recursion
def main(i):

    if (i <= 100):
        print(i, end = " ")
        i = i + 1
        main(i)

i = 1
main(i)

# This code is contributed by SoumikMondal
// C# program to print numbers from 1 to 100
// without an explicit loop, using a recursive Main()
using System;

class GFG {

    static int i = 1;

    public static void Main(String[] args)
    {
        if (i <= 100) {
            Console.Write("{0} ", i++);
            Main(null);
        }
    }
}

// This code is contributed by Rajput-Ji
1 2 3 4 . . . 97 98 99 100
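One caveat about the recursive Python version: CPython limits recursion depth (roughly 1000 frames by default), so this approach raises RecursionError well before large targets such as 10000. A hedged sketch of the usual workaround, with an illustrative limit value:

import sys

# Illustrative value; pick a limit safely above the target depth
sys.setrecursionlimit(10**5)

def main(i):
    if (i <= 10000):
        print(i, end = " ")
        main(i + 1)

main(1)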
Conditional Probability with Python: Concepts, Tables & Code | by Paul Apivat | Towards Data Science

This post is chapter 6, continuing my coverage of Data Science from Scratch by Joel Grus. We will work our way towards understanding conditional probability by first understanding the concepts that precede it: marginal and joint probabilities.
At the end, we’ll tie all concepts together through code. For those inclined, you can jump to the code towards the bottom of this post.
The first challenge in this section is distinguishing between two conditional probability statements.
Here’s the setup. We have a family with two (unknown) children with two assumptions. First, each child is equally likely to be a boy or a girl. Second, the gender of the second child is independent of the gender of the first child.
Outcome 1: What is the probability of the event “both children are girls” (B) conditional on the event “the older child is a girl” (G)?
The probability for statement one is roughly 50% or (1/2).
Outcome 2: What is the probability of the event “both children are girls” (B) conditional on the event “at least one of the children is a girl” (L)?
The probability for statement two is roughly 33% or (1/3).
But at first glance, they look similar.
The book jumps straight to conditional probabilities, but first, we’ll have to look at marginal and joint probabilities. Then we’ll create a joint probabilities table and sum probabilities to help us figure out the differences. We’ll then resume with conditional probabilities.
Before anything, we need to realize the situation we have is one of independence. The gender of one child is independent of a second child.
The intuition for this scenario will be different from a dependent situation. For example, if we draw two cards from a deck (without replacement), the probabilities are different. The probability of drawing one King ♠️ is (4/52) and the probability of drawing a second King ♣️ is now (3/51); the probability of the second event (a second King) is dependent on the result of the first draw.
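To make the contrast concrete, here is a small sketch (my own illustration, not from the book) computing that dependent probability exactly:

from fractions import Fraction

# The second draw depends on the first: only 3 kings remain among 51 cards.
p_first_king = Fraction(4, 52)
p_second_king_given_first = Fraction(3, 51)

p_two_kings = p_first_king * p_second_king_given_first
print(p_two_kings)         # 1/221
print(float(p_two_kings))  # ~0.0045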
Back to the two unknown children.
We can say the probability of the first child being either a boy or a girl is 50/50. Moreover, the probability of the second child, which is independent of the first, is also 50/50. Remember, our first assumption is that each child is equally likely to be a boy or a girl.
Let’s put these numbers in a table. The (1/2) probabilities shown here are called marginal probabilities (note how they’re at the margins of the table).
Since we have two genders (much like the two sides of a flipped coin), we can intuitively list all possible outcomes:
first child (Boy), second child (Boy)
first child (Boy), second child (Girl)
first child (Girl), second child (Boy)
first child (Girl), second child (Girl)
There are 4 possible outcomes so the probability of getting any one of the four outcomes is (1/4). We can actually write these probabilities in the middle of the table, the joint probabilities:
To recap, the probability of the first child being either boy or girl is 50/50, simple enough. The probability of the second child being either boy or girl is also 50/50. When put in a table, this yielded the marginal probability.
Now we want to know the probability of, say, ‘the first child being a boy and the second child being a girl’. This is a joint probability because it is the probability that the first child takes a specific gender (boy) AND the second child takes a specific gender (girl).
If two events are independent, and in this case they are, their joint probability is the product of the probabilities of each one happening.
Take the probability of the first child being a Boy (1/2) and the second child being a Girl (1/2); the product of the two marginal probabilities is the joint probability (1/2 * 1/2 = 1/4).
This can be repeated for the other three joint probabilities.
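As an aside, the whole joint table can be written as a small dictionary (my own illustration) so the product rule is explicit:

from fractions import Fraction
from itertools import product

half = Fraction(1, 2)  # marginal probability of each gender

# For independent events, each joint probability is the product
# of the marginal probabilities: 1/2 * 1/2 = 1/4.
joint = {pair: half * half for pair in product(["Boy", "Girl"], repeat=2)}

for outcome, p in joint.items():
    print(outcome, p)  # each of the four outcomes has probability 1/4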
Now we get into conditional probability, which is the probability of one event happening (i.e., the second child being a Boy or Girl) given that (or on the condition that) another event happened (i.e., the first child being a Boy).
At this point, it might be a good idea to begin writing probability statements similar to how it is expressed in mathematics.
A joint probability is the product of the probabilities of the individual events (assuming they are independent). For example, we might have two individual events:
P(1st Child = Boy): 1/2
P(2nd Child = Boy): 1/2
Here is their joint probability:
P(1st Child = Boy, 2nd Child = Boy)
P(1st Child = Boy) * P(2nd Child = Boy)
(1/2 * 1/2 = 1/4)
There is a relationship between conditional probabilities and joint probabilities.
Here is their conditional probability:
P(2nd Child = Boy | 1st Child = Boy)
P(1st Child = Boy, 2nd Child = Boy) / P(1st Child = Boy)
This works out to:
(1/4) / (1/2) = 1/2 or
(1/4) * (2/1) = 1/2
In other words, the probability that the second child is a boy, given that the first child is a boy, is still 50% (when the events are independent, conditioning on one does not change the probability of the other).
Now we’re ready to tackle the two outcomes posed at the beginning of this post.
Outcome 1: What is the probability of the event “both children are girls” (B) conditional on the event “the older child is a girl” (G)?
Let’s break it down. First we want the probability of the event that “both children are girls”. We’ll take the product of two events: the probability that the first child is a girl (1/2) and the probability that the second child is a girl (1/2). So for both children to be girls, 1/2 * 1/2 = 1/4.
P(1st Child = Girl, 2nd Child = Girl) = 1/4
Second, we want that to be given that the “older child is a girl”.
P(1st Child = Girl) = 1/2
Conditional probability:
P(1st Child = Girl, 2nd Child = Girl) / P(1st Child = Girl)
(1/4) / (1/2) = (1/4) * (2/1) = (2/4) = 1/2 or roughly 50%
Now let’s break down the second outcome:
Outcome 2: What is the probability of the event “both children are girls” (B) conditional on the event “at least one of the children is a girl” (L)?
Again, we start with “both children are girls”:
P(1st Child = Girl, 2nd Child = Girl) = 1/4
Then, we have the condition that “at least one of the children is a girl”. We’ll reference the joint probability table. When trying to figure out the probability that “at least one of the children is a girl”, we rule out the scenario where both children are boys; “both boys” is the complement of “at least one child is a girl”. The remaining 3 out of 4 outcomes fit the condition.
The probability of at least one child being a girl is:
(1/4) + (1/4) + (1/4) = 3/4
So:
P(1st Child = Girl, 2nd Child = Girl) / P(“at least one child is a girl”)
(1/4) / (3/4) = (1/4) * (4/3) = (4/12) = 1/3 or roughly 33%
When two events are independent, their joint probability is the product of the probabilities of each event:
P(E,F) = P(E) * P(F)
Their conditional probability is the joint probability divided by the probability of the conditioning event (i.e., P(F)).
P(E|F) = P(E,F) / P(F)
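In code, this relationship is a one-line helper (a sketch of my own, not from the book):

def conditional_probability(p_joint, p_condition):
    # P(E | F) = P(E, F) / P(F), assuming P(F) > 0
    return p_joint / p_condition

# The worked example above: P(2nd Child = Boy | 1st Child = Boy)
print(conditional_probability(1 / 4, 1 / 2))  # 0.5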
And so for our two challenge scenarios, we have:
Challenge 1:
B = probability that both children are girls
G = probability that the older child is a girl
This can be stated as: P(B|G) = P(B,G) / P(G)
Challenge 2:
B = probability that both children are girls
L = probability that at least one child is a girl
This can be stated as: P(B|L) = P(B,L) / P(L)
Now that we have an intuition and have worked out the problem on paper, we can use code to express conditional probability:
import enum, random

class Kid(enum.Enum):
    BOY = 0
    GIRL = 1

def random_kid() -> Kid:
    return random.choice([Kid.BOY, Kid.GIRL])

both_girls = 0
older_girl = 0
either_girl = 0

random.seed(0)
for _ in range(10000):
    younger = random_kid()
    older = random_kid()
    if older == Kid.GIRL:
        older_girl += 1
    if older == Kid.GIRL and younger == Kid.GIRL:
        both_girls += 1
    if older == Kid.GIRL or younger == Kid.GIRL:
        either_girl += 1

print("P(both | older):", both_girls / older_girl)    # 0.5007089325501317
print("P(both | either):", both_girls / either_girl)  # 0.3311897106109325
We can see that the code confirms our intuition by looking at each of the joint probabilities:
either_girl  # 7,464 / 10,000 ~ roughly 75% or 3/4 probability that there is at least one girl
both_girls   # 2,472 / 10,000 ~ roughly 25% or 1/4 probability that both children are girls
older_girl   # 4,937 / 10,000 ~ roughly 50% or 1/2 probability that the first child is a girl
Challenge 1:
P(B|G) = P(B,G) / P(G) or more explicitly:
P(both_girls | older_girl) = P(both_girls) / P(older_girl)
Challenge 2:
P(B|L) = P(B,L) / P(L) or more explicitly:
P(both_girls | either_girl) = P(both_girls) / P(either_girl)
Conditional probabilities are conditional statements in code.
First, we define a random_kid function that uses random.choice to assign gender, such that each child (i.e., each Kid class instance) is equally likely to be a boy or a girl. This is the first assumption of our scenario.
import enum, random

class Kid(enum.Enum):
    BOY = 0
    GIRL = 1

def random_kid() -> Kid:
    return random.choice([Kid.BOY, Kid.GIRL])
Next we create counters for the joint events: one for both children being girls (both_girls), one for the older child being a girl (older_girl), and one for at least one child being a girl (either_girl).
First, the probability of any one child being a girl is (1/2). Consistent with our assumption, we’d expect:
older_girl #4,937 / 10,000 ~ roughly 50% or 1/2 probability that the first child is a girl
Recall that when we take the product of each child being a girl (1/2), we get the joint probability of both children being girls (1/4). Thus, we’d expect:
both_girls #2,472 / 10,000 ~ roughly 25% or 1/4 probability that both children are girls
Finally, recall that to calculate the probability that at least one (of two) children is a girl, we can rule out the (1/4) probability that both children are boys, leaving (1/4 + 1/4 + 1/4 = 3/4) (see table above). Thus, we’d expect:
either_girl #7,464 / 10,000 ~ roughly 75% or 3/4 probability that there is at least one girl
To arrive at the numbers above, we run 10,000 simulated scenarios in which the 1st and 2nd child (see table above) are each randomly assigned a gender, and conditional statements in the code count how often each outcome occurs.
random.seed(0)
for _ in range(10000):
    younger = random_kid()
    older = random_kid()
    if older == Kid.GIRL:
        older_girl += 1
    if older == Kid.GIRL and younger == Kid.GIRL:
        both_girls += 1
    if older == Kid.GIRL or younger == Kid.GIRL:
        either_girl += 1
This simulation yields the joint probabilities which are then used to find the conditional probabilities of the two outcomes above:
print("P(both | older):", both_girls / older_girl) # 0.5007089325501317print("P(both | either):", both_girls / either_girl) # 0.3311897106109325
For more content on data science, machine learning, R, Python, SQL and more, find me on Twitter.
What is the difference between 'except Exception as e' and 'except Exception, e' in Python?

The difference between using ',' and 'as' in except statements is as follows:
Functionality-wise, ',' and 'as' are the same; which one you can use depends on the Python version. In Python 2.5 and earlier versions, use of the comma is required, since 'as' isn't supported. In Python 2.6+ versions, both the comma and 'as' can be used. But from Python 3.x, 'as' is required to assign an exception to a variable. As of Python 2.6, using 'as' also allows us an elegant way to catch multiple exceptions in a single except block, as shown below:
except (Exception1, Exception2) as err
is any day better than
except (Exception1, Exception2), err
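For instance, here is a minimal runnable Python 3 sketch (my own illustration; the parse function and the exception types are chosen just for the example):

def parse(value):
    try:
        return int(value)
    except (TypeError, ValueError) as err:
        # 'as' binds the caught exception object to err;
        # this form works in Python 2.6+ and is required in Python 3
        print("could not parse:", err)
        return None

parse("42")    # returns 42
parse("oops")  # prints the error message and returns None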