title
stringlengths 3
221
| text
stringlengths 17
477k
| parsed
listlengths 0
3.17k
|
---|---|---|
Python | Real time currency convertor using Tkinter - GeeksforGeeks | 09 Jun, 2021
Prerequisites : Introduction to tkinter | Get the real time currency exchange ratePython offers multiple options for developing GUI (Graphical User Interface). Out of all the GUI methods, tkinter is the most commonly used method. It is a standard Python interface to the Tk GUI toolkit shipped with Python. Python with tkinter outputs the fastest and easiest way to create the GUI applications.
To create a tkinter :
Importing the module – tkinter
Create the main window (container)
Add any number of widgets to the main window.
Apply the event Trigger on the widgets.
Let’s create a GUI based simple Real-time currency convertor (Using Alpha Vantage API) which can convert amounts from one currency to another currency.
Modules required:
tkinter
requests
json
Below is the implementation :
Python3
# import all functions from the tkinterfrom tkinter import * # Create a GUI windowroot = Tk() # create a global variablesvariable1 = StringVar(root)variable2 = StringVar(root) # initialise the variablesvariable1.set("currency")variable2.set("currency") # Function to perform real time conversion# from one currency to another currencydef RealTimeCurrencyConversion(): # importing required libraries import requests, json # currency code from_currency = variable1.get() to_currency = variable2.get() # enter your api key here api_key = "Your_Api_Key" # base_url variable store base url base_url = r"https://www.alphavantage.co/query?function = CURRENCY_EXCHANGE_RATE" # main_url variable store complete url main_url = base_url + "&from_currency =" + from_currency + "&to_currency =" + to_currency + "&apikey =" + api_key # get method of requests module # return response object req_ob = requests.get(main_url) # json method return json format # data into python dictionary data type. # result contains list of nested dictionaries result = req_ob.json() # parsing the required information Exchange_Rate = float(result["Realtime Currency Exchange Rate"] ['5. Exchange Rate']) # get method of Entry widget # returns current text as a # string from text entry box. amount = float(Amount1_field.get()) # calculation for the conversion new_amount = round(amount * Exchange_Rate, 3) # insert method inserting the # value in the text entry box. Amount2_field.insert(0, str(new_amount)) # Function for clearing the Entry fielddef clear_all() : Amount1_field.delete(0, END) Amount2_field.delete(0, END) # Driver codeif __name__ == "__main__" : # Set the background colour of GUI window root.configure(background = 'light green') # Set the configuration of GUI window (WidthxHeight) root.geometry("400x175") # Create welcome to Real Time Currency Convertor label headlabel = Label(root, text = 'welcome to Real Time Currency Convertor', fg = 'black', bg = "red") # Create a "Amount :" label label1 = Label(root, text = "Amount :", fg = 'black', bg = 'dark green') # Create a "From Currency :" label label2 = Label(root, text = "From Currency", fg = 'black', bg = 'dark green') # Create a "To Currency: " label label3 = Label(root, text = "To Currency :", fg = 'black', bg = 'dark green') # Create a "Converted Amount :" label label4 = Label(root, text = "Converted Amount :", fg = 'black', bg = 'dark green') # grid method is used for placing # the widgets at respective positions # in table like structure . headlabel.grid(row = 0, column = 1) label1.grid(row = 1, column = 0) label2.grid(row = 2, column = 0) label3.grid(row = 3, column = 0) label4.grid(row = 5, column = 0) # Create a text entry box # for filling or typing the information. Amount1_field = Entry(root) Amount2_field = Entry(root) # ipadx keyword argument set width of entry space. Amount1_field.grid(row = 1, column = 1, ipadx ="25") Amount2_field.grid(row = 5, column = 1, ipadx ="25") # list of currency codes CurrenyCode_list = ["INR", "USD", "CAD", "CNY", "DKK", "EUR"] # create a drop down menu using OptionMenu function # which takes window name, variable and choices as # an argument. use * before the name of the list, # to unpack the values FromCurrency_option = OptionMenu(root, variable1, *CurrenyCode_list) ToCurrency_option = OptionMenu(root, variable2, *CurrenyCode_list) FromCurrency_option.grid(row = 2, column = 1, ipadx = 10) ToCurrency_option.grid(row = 3, column = 1, ipadx = 10) # Create a Convert Button and attached # with RealTimeCurrencyExchangeRate function button1 = Button(root, text = "Convert", bg = "red", fg = "black", command = RealTimeCurrencyConversion) button1.grid(row = 4, column = 1) # Create a Clear Button and attached # with delete function button2 = Button(root, text = "Clear", bg = "red", fg = "black", command = clear_all) button2.grid(row = 6, column = 1) # Start the GUI root.mainloop()
Output :
simmytarika5
python-utility
Technical Scripter 2018
Project
Python
Technical Scripter
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
SDE SHEET - A Complete Guide for SDE Preparation
Java Swing | Simple User Registration Form
Twitter Sentiment Analysis using Python
OpenCV C++ Program for Face Detection
Banking Transaction System using Java
Read JSON file using Python
Adding new column to existing DataFrame in Pandas
Python map() function
How to get column names in Pandas dataframe | [
{
"code": null,
"e": 25729,
"s": 25701,
"text": "\n09 Jun, 2021"
},
{
"code": null,
"e": 26125,
"s": 25729,
"text": "Prerequisites : Introduction to tkinter | Get the real time currency exchange ratePython offers multiple options for developing GUI (Graphical User Interface). Out of all the GUI methods, tkinter is the most commonly used method. It is a standard Python interface to the Tk GUI toolkit shipped with Python. Python with tkinter outputs the fastest and easiest way to create the GUI applications. "
},
{
"code": null,
"e": 26148,
"s": 26125,
"text": "To create a tkinter : "
},
{
"code": null,
"e": 26179,
"s": 26148,
"text": "Importing the module – tkinter"
},
{
"code": null,
"e": 26214,
"s": 26179,
"text": "Create the main window (container)"
},
{
"code": null,
"e": 26260,
"s": 26214,
"text": "Add any number of widgets to the main window."
},
{
"code": null,
"e": 26300,
"s": 26260,
"text": "Apply the event Trigger on the widgets."
},
{
"code": null,
"e": 26452,
"s": 26300,
"text": "Let’s create a GUI based simple Real-time currency convertor (Using Alpha Vantage API) which can convert amounts from one currency to another currency."
},
{
"code": null,
"e": 26472,
"s": 26452,
"text": "Modules required: "
},
{
"code": null,
"e": 26494,
"s": 26472,
"text": "tkinter\nrequests\njson"
},
{
"code": null,
"e": 26525,
"s": 26494,
"text": "Below is the implementation : "
},
{
"code": null,
"e": 26533,
"s": 26525,
"text": "Python3"
},
{
"code": "# import all functions from the tkinterfrom tkinter import * # Create a GUI windowroot = Tk() # create a global variablesvariable1 = StringVar(root)variable2 = StringVar(root) # initialise the variablesvariable1.set(\"currency\")variable2.set(\"currency\") # Function to perform real time conversion# from one currency to another currencydef RealTimeCurrencyConversion(): # importing required libraries import requests, json # currency code from_currency = variable1.get() to_currency = variable2.get() # enter your api key here api_key = \"Your_Api_Key\" # base_url variable store base url base_url = r\"https://www.alphavantage.co/query?function = CURRENCY_EXCHANGE_RATE\" # main_url variable store complete url main_url = base_url + \"&from_currency =\" + from_currency + \"&to_currency =\" + to_currency + \"&apikey =\" + api_key # get method of requests module # return response object req_ob = requests.get(main_url) # json method return json format # data into python dictionary data type. # result contains list of nested dictionaries result = req_ob.json() # parsing the required information Exchange_Rate = float(result[\"Realtime Currency Exchange Rate\"] ['5. Exchange Rate']) # get method of Entry widget # returns current text as a # string from text entry box. amount = float(Amount1_field.get()) # calculation for the conversion new_amount = round(amount * Exchange_Rate, 3) # insert method inserting the # value in the text entry box. Amount2_field.insert(0, str(new_amount)) # Function for clearing the Entry fielddef clear_all() : Amount1_field.delete(0, END) Amount2_field.delete(0, END) # Driver codeif __name__ == \"__main__\" : # Set the background colour of GUI window root.configure(background = 'light green') # Set the configuration of GUI window (WidthxHeight) root.geometry(\"400x175\") # Create welcome to Real Time Currency Convertor label headlabel = Label(root, text = 'welcome to Real Time Currency Convertor', fg = 'black', bg = \"red\") # Create a \"Amount :\" label label1 = Label(root, text = \"Amount :\", fg = 'black', bg = 'dark green') # Create a \"From Currency :\" label label2 = Label(root, text = \"From Currency\", fg = 'black', bg = 'dark green') # Create a \"To Currency: \" label label3 = Label(root, text = \"To Currency :\", fg = 'black', bg = 'dark green') # Create a \"Converted Amount :\" label label4 = Label(root, text = \"Converted Amount :\", fg = 'black', bg = 'dark green') # grid method is used for placing # the widgets at respective positions # in table like structure . headlabel.grid(row = 0, column = 1) label1.grid(row = 1, column = 0) label2.grid(row = 2, column = 0) label3.grid(row = 3, column = 0) label4.grid(row = 5, column = 0) # Create a text entry box # for filling or typing the information. Amount1_field = Entry(root) Amount2_field = Entry(root) # ipadx keyword argument set width of entry space. Amount1_field.grid(row = 1, column = 1, ipadx =\"25\") Amount2_field.grid(row = 5, column = 1, ipadx =\"25\") # list of currency codes CurrenyCode_list = [\"INR\", \"USD\", \"CAD\", \"CNY\", \"DKK\", \"EUR\"] # create a drop down menu using OptionMenu function # which takes window name, variable and choices as # an argument. use * before the name of the list, # to unpack the values FromCurrency_option = OptionMenu(root, variable1, *CurrenyCode_list) ToCurrency_option = OptionMenu(root, variable2, *CurrenyCode_list) FromCurrency_option.grid(row = 2, column = 1, ipadx = 10) ToCurrency_option.grid(row = 3, column = 1, ipadx = 10) # Create a Convert Button and attached # with RealTimeCurrencyExchangeRate function button1 = Button(root, text = \"Convert\", bg = \"red\", fg = \"black\", command = RealTimeCurrencyConversion) button1.grid(row = 4, column = 1) # Create a Clear Button and attached # with delete function button2 = Button(root, text = \"Clear\", bg = \"red\", fg = \"black\", command = clear_all) button2.grid(row = 6, column = 1) # Start the GUI root.mainloop()",
"e": 30942,
"s": 26533,
"text": null
},
{
"code": null,
"e": 30953,
"s": 30942,
"text": "Output : "
},
{
"code": null,
"e": 30970,
"s": 30957,
"text": "simmytarika5"
},
{
"code": null,
"e": 30985,
"s": 30970,
"text": "python-utility"
},
{
"code": null,
"e": 31009,
"s": 30985,
"text": "Technical Scripter 2018"
},
{
"code": null,
"e": 31017,
"s": 31009,
"text": "Project"
},
{
"code": null,
"e": 31024,
"s": 31017,
"text": "Python"
},
{
"code": null,
"e": 31043,
"s": 31024,
"text": "Technical Scripter"
},
{
"code": null,
"e": 31141,
"s": 31043,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31190,
"s": 31141,
"text": "SDE SHEET - A Complete Guide for SDE Preparation"
},
{
"code": null,
"e": 31233,
"s": 31190,
"text": "Java Swing | Simple User Registration Form"
},
{
"code": null,
"e": 31273,
"s": 31233,
"text": "Twitter Sentiment Analysis using Python"
},
{
"code": null,
"e": 31311,
"s": 31273,
"text": "OpenCV C++ Program for Face Detection"
},
{
"code": null,
"e": 31349,
"s": 31311,
"text": "Banking Transaction System using Java"
},
{
"code": null,
"e": 31377,
"s": 31349,
"text": "Read JSON file using Python"
},
{
"code": null,
"e": 31427,
"s": 31377,
"text": "Adding new column to existing DataFrame in Pandas"
},
{
"code": null,
"e": 31449,
"s": 31427,
"text": "Python map() function"
}
] |
Aggregation in Java | Aggregation refers to HAS-A relationship. Let’s look at the example first −
public class Vehicle{}
public class Speed{}
public class Van extends Vehicle {
private Speed sp;
}
This shows that class Van HAS-A Speed. By having a separate class for Speed, we do not have to put the entire code that belongs to speed inside the Van class, which makes it possible to reuse the Speed class in multiple applications.
In the Object-Oriented feature, the users do not need to bother about which object is doing the real work. To achieve this, the Van class hides the implementation details from the users of the Van class. So, basically what happens is the users would ask the Van class to do a certain action and the Van class will either do the work by itself or ask another class to perform the action.This concept of containing an object to do action is termed as Aggregation. | [
{
"code": null,
"e": 1138,
"s": 1062,
"text": "Aggregation refers to HAS-A relationship. Let’s look at the example first −"
},
{
"code": null,
"e": 1240,
"s": 1138,
"text": "public class Vehicle{}\npublic class Speed{}\npublic class Van extends Vehicle {\n private Speed sp;\n}"
},
{
"code": null,
"e": 1474,
"s": 1240,
"text": "This shows that class Van HAS-A Speed. By having a separate class for Speed, we do not have to put the entire code that belongs to speed inside the Van class, which makes it possible to reuse the Speed class in multiple applications."
},
{
"code": null,
"e": 1936,
"s": 1474,
"text": "In the Object-Oriented feature, the users do not need to bother about which object is doing the real work. To achieve this, the Van class hides the implementation details from the users of the Van class. So, basically what happens is the users would ask the Van class to do a certain action and the Van class will either do the work by itself or ask another class to perform the action.This concept of containing an object to do action is termed as Aggregation."
}
] |
Tryit Editor v3.7 | Tryit: HTML formatting | [] |
Formatting Strings in Python. How to use the format() method and... | by Luay Matalka | Towards Data Science | There are multiple ways to format a string in python. We will go over the format() method and f-strings to create the following:
We will use the first_name, last_name, and age variables to create a string that contains the first name, last name, and age of someone in the following format:
‘first_name last_name is age years old’
A sample sentence would be: ‘John Doe is 43 years old’.
One way to accomplish this would be to use the format method. The format() method is a string method. Once you call it off a string, it would replace all the curly brackets in the string with the values of the variables passed in the parenthesis, in the order they are passed in. Thus, the curly brackets act as placeholders for the variables.
For example, to use the format method for the above example, we would use the following code:
first_name = 'John'last_name = 'Doe'age = 43sentence = '{} {} is {} years old'.format(first_name, last_name, age)print(sentence)# 'John Doe is 43 years old'
Note: The format method will replace the curly brackets with the values of the variables in the order they appear within the parenthesis.
If we don’t want to worry about passing the variables in the correct order, we can use keys in each curly bracket and later assign the variable for that specific key:
first_name = 'John'last_name = 'Doe'age = 43sentence = '{first} {last} is {age_yrs} years old'.format(last=last_name, age_yrs=age, first=first_name)print(sentence)# 'John Doe is 43 years old'
Note how the variables don’t have to be passed in the correct order if we use specific keys for each placeholder.
Though the format() method does the job, there is a much more elegant way of formatting strings, and that’s by using f-strings.
towardsdatascience.com
F-strings are a new way to format strings in Python 3.6 and above. For most people, they are the preferred way to format strings since they are easy to read and thus much more intuitive.
f’{var_1} {var_2} is {var_3} years old’
To specify that we want to use an f-string, or formatted string, we just put an f infront of the string. Then we can directly add our variables into our curly brackets, in the location we want them to appear. Thus, we didn’t need to use any method or pass in any variables at the end of the string like we did with the format method. It is a much more intuitive way of formatting strings, since we don’t have to make sure that all the variables are in the same order as our placeholders to make sure they are being added in the correct location.
Thus, to accomplish the above task with f-strings, we would use the following code:
first_name = 'John'last_name = 'Doe'age = 43sentence = f'{first_name} {last_name} is {age} years old'print(sentence)# 'John Doe is 43 years old'
Similar to the format method, we can also run methods or functions within the f-string. For example, if we want to make the first name uppercase and last name lowercase, we can do so with the corresponding string methods within the f-string:
first_name = 'John'last_name = 'Doe'age = 43sentence = f'{first_name.upper()} {last_name.lower()} is {age} years old'print(sentence)# 'JOHN doe is 43 years old'
towardsdatascience.com
In this tutorial, we quickly looked at ways to format a string in python. We first looked at the format() method, and then at the more elegant and intuitive f-strings. We then saw how we can use methods or functions within f-strings. | [
{
"code": null,
"e": 301,
"s": 172,
"text": "There are multiple ways to format a string in python. We will go over the format() method and f-strings to create the following:"
},
{
"code": null,
"e": 462,
"s": 301,
"text": "We will use the first_name, last_name, and age variables to create a string that contains the first name, last name, and age of someone in the following format:"
},
{
"code": null,
"e": 502,
"s": 462,
"text": "‘first_name last_name is age years old’"
},
{
"code": null,
"e": 558,
"s": 502,
"text": "A sample sentence would be: ‘John Doe is 43 years old’."
},
{
"code": null,
"e": 902,
"s": 558,
"text": "One way to accomplish this would be to use the format method. The format() method is a string method. Once you call it off a string, it would replace all the curly brackets in the string with the values of the variables passed in the parenthesis, in the order they are passed in. Thus, the curly brackets act as placeholders for the variables."
},
{
"code": null,
"e": 996,
"s": 902,
"text": "For example, to use the format method for the above example, we would use the following code:"
},
{
"code": null,
"e": 1153,
"s": 996,
"text": "first_name = 'John'last_name = 'Doe'age = 43sentence = '{} {} is {} years old'.format(first_name, last_name, age)print(sentence)# 'John Doe is 43 years old'"
},
{
"code": null,
"e": 1291,
"s": 1153,
"text": "Note: The format method will replace the curly brackets with the values of the variables in the order they appear within the parenthesis."
},
{
"code": null,
"e": 1458,
"s": 1291,
"text": "If we don’t want to worry about passing the variables in the correct order, we can use keys in each curly bracket and later assign the variable for that specific key:"
},
{
"code": null,
"e": 1650,
"s": 1458,
"text": "first_name = 'John'last_name = 'Doe'age = 43sentence = '{first} {last} is {age_yrs} years old'.format(last=last_name, age_yrs=age, first=first_name)print(sentence)# 'John Doe is 43 years old'"
},
{
"code": null,
"e": 1764,
"s": 1650,
"text": "Note how the variables don’t have to be passed in the correct order if we use specific keys for each placeholder."
},
{
"code": null,
"e": 1892,
"s": 1764,
"text": "Though the format() method does the job, there is a much more elegant way of formatting strings, and that’s by using f-strings."
},
{
"code": null,
"e": 1915,
"s": 1892,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 2102,
"s": 1915,
"text": "F-strings are a new way to format strings in Python 3.6 and above. For most people, they are the preferred way to format strings since they are easy to read and thus much more intuitive."
},
{
"code": null,
"e": 2142,
"s": 2102,
"text": "f’{var_1} {var_2} is {var_3} years old’"
},
{
"code": null,
"e": 2688,
"s": 2142,
"text": "To specify that we want to use an f-string, or formatted string, we just put an f infront of the string. Then we can directly add our variables into our curly brackets, in the location we want them to appear. Thus, we didn’t need to use any method or pass in any variables at the end of the string like we did with the format method. It is a much more intuitive way of formatting strings, since we don’t have to make sure that all the variables are in the same order as our placeholders to make sure they are being added in the correct location."
},
{
"code": null,
"e": 2772,
"s": 2688,
"text": "Thus, to accomplish the above task with f-strings, we would use the following code:"
},
{
"code": null,
"e": 2917,
"s": 2772,
"text": "first_name = 'John'last_name = 'Doe'age = 43sentence = f'{first_name} {last_name} is {age} years old'print(sentence)# 'John Doe is 43 years old'"
},
{
"code": null,
"e": 3159,
"s": 2917,
"text": "Similar to the format method, we can also run methods or functions within the f-string. For example, if we want to make the first name uppercase and last name lowercase, we can do so with the corresponding string methods within the f-string:"
},
{
"code": null,
"e": 3320,
"s": 3159,
"text": "first_name = 'John'last_name = 'Doe'age = 43sentence = f'{first_name.upper()} {last_name.lower()} is {age} years old'print(sentence)# 'JOHN doe is 43 years old'"
},
{
"code": null,
"e": 3343,
"s": 3320,
"text": "towardsdatascience.com"
}
] |
Unix Socket - Server Examples | To make a process a TCP server, you need to follow the steps given below −
Create a socket with the socket() system call.
Create a socket with the socket() system call.
Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine.
Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine.
Listen for connections with the listen() system call.
Listen for connections with the listen() system call.
Accept a connection with the accept() system call. This call typically blocks until a client connects with the server.
Accept a connection with the accept() system call. This call typically blocks until a client connects with the server.
Send and receive data using the read() and write() system calls.
Send and receive data using the read() and write() system calls.
Now let us put these steps in the form of source code. Put this code into the file server.c and compile it with gcc compiler.
#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>
int main( int argc, char *argv[] ) {
int sockfd, newsockfd, portno, clilen;
char buffer[256];
struct sockaddr_in serv_addr, cli_addr;
int n;
/* First call to socket() function */
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0) {
perror("ERROR opening socket");
exit(1);
}
/* Initialize socket structure */
bzero((char *) &serv_addr, sizeof(serv_addr));
portno = 5001;
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(portno);
/* Now bind the host address using bind() call.*/
if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
perror("ERROR on binding");
exit(1);
}
/* Now start listening for the clients, here process will
* go in sleep mode and will wait for the incoming connection
*/
listen(sockfd,5);
clilen = sizeof(cli_addr);
/* Accept actual connection from the client */
newsockfd = accept(sockfd, (struct sockaddr *)&cli_addr, &clilen);
if (newsockfd < 0) {
perror("ERROR on accept");
exit(1);
}
/* If connection is established then start communicating */
bzero(buffer,256);
n = read( newsockfd,buffer,255 );
if (n < 0) {
perror("ERROR reading from socket");
exit(1);
}
printf("Here is the message: %s\n",buffer);
/* Write a response to the client */
n = write(newsockfd,"I got your message",18);
if (n < 0) {
perror("ERROR writing to socket");
exit(1);
}
return 0;
}
To allow the server to handle multiple simultaneous connections, we make the following changes in the above code −
Put the accept statement and the following code in an infinite loop.
Put the accept statement and the following code in an infinite loop.
After a connection is established, call fork() to create a new process.
After a connection is established, call fork() to create a new process.
The child process will close sockfd and call doprocessing function, passing the new socket file descriptor as an argument. When the two processes have completed their conversation, as indicated by doprocessing() returning, this process simply exits.
The child process will close sockfd and call doprocessing function, passing the new socket file descriptor as an argument. When the two processes have completed their conversation, as indicated by doprocessing() returning, this process simply exits.
The parent process closes newsockfd. As all of this code is in an infinite loop, it will return to the accept statement to wait for the next connection.
The parent process closes newsockfd. As all of this code is in an infinite loop, it will return to the accept statement to wait for the next connection.
#include <stdio.h>
#include <stdlib.h>
#include <netdb.h>
#include <netinet/in.h>
#include <string.h>
void doprocessing (int sock);
int main( int argc, char *argv[] ) {
int sockfd, newsockfd, portno, clilen;
char buffer[256];
struct sockaddr_in serv_addr, cli_addr;
int n, pid;
/* First call to socket() function */
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0) {
perror("ERROR opening socket");
exit(1);
}
/* Initialize socket structure */
bzero((char *) &serv_addr, sizeof(serv_addr));
portno = 5001;
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(portno);
/* Now bind the host address using bind() call.*/
if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {
perror("ERROR on binding");
exit(1);
}
/* Now start listening for the clients, here
* process will go in sleep mode and will wait
* for the incoming connection
*/
listen(sockfd,5);
clilen = sizeof(cli_addr);
while (1) {
newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);
if (newsockfd < 0) {
perror("ERROR on accept");
exit(1);
}
/* Create child process */
pid = fork();
if (pid < 0) {
perror("ERROR on fork");
exit(1);
}
if (pid == 0) {
/* This is the client process */
close(sockfd);
doprocessing(newsockfd);
exit(0);
}
else {
close(newsockfd);
}
} /* end of while */
}
The following code seqment shows a simple implementation of doprocessing function.
void doprocessing (int sock) {
int n;
char buffer[256];
bzero(buffer,256);
n = read(sock,buffer,255);
if (n < 0) {
perror("ERROR reading from socket");
exit(1);
}
printf("Here is the message: %s\n",buffer);
n = write(sock,"I got your message",18);
if (n < 0) {
perror("ERROR writing to socket");
exit(1);
}
}
129 Lectures
23 hours
Eduonix Learning Solutions
5 Lectures
4.5 hours
Frahaan Hussain
35 Lectures
2 hours
Pradeep D
41 Lectures
2.5 hours
Musab Zayadneh
46 Lectures
4 hours
GUHARAJANM
6 Lectures
4 hours
Uplatz
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2078,
"s": 2003,
"text": "To make a process a TCP server, you need to follow the steps given below −"
},
{
"code": null,
"e": 2125,
"s": 2078,
"text": "Create a socket with the socket() system call."
},
{
"code": null,
"e": 2172,
"s": 2125,
"text": "Create a socket with the socket() system call."
},
{
"code": null,
"e": 2327,
"s": 2172,
"text": "Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine."
},
{
"code": null,
"e": 2482,
"s": 2327,
"text": "Bind the socket to an address using the bind() system call. For a server socket on the Internet, an address consists of a port number on the host machine."
},
{
"code": null,
"e": 2536,
"s": 2482,
"text": "Listen for connections with the listen() system call."
},
{
"code": null,
"e": 2590,
"s": 2536,
"text": "Listen for connections with the listen() system call."
},
{
"code": null,
"e": 2709,
"s": 2590,
"text": "Accept a connection with the accept() system call. This call typically blocks until a client connects with the server."
},
{
"code": null,
"e": 2828,
"s": 2709,
"text": "Accept a connection with the accept() system call. This call typically blocks until a client connects with the server."
},
{
"code": null,
"e": 2893,
"s": 2828,
"text": "Send and receive data using the read() and write() system calls."
},
{
"code": null,
"e": 2958,
"s": 2893,
"text": "Send and receive data using the read() and write() system calls."
},
{
"code": null,
"e": 3084,
"s": 2958,
"text": "Now let us put these steps in the form of source code. Put this code into the file server.c and compile it with gcc compiler."
},
{
"code": null,
"e": 4790,
"s": 3084,
"text": "#include <stdio.h>\n#include <stdlib.h>\n\n#include <netdb.h>\n#include <netinet/in.h>\n\n#include <string.h>\n\nint main( int argc, char *argv[] ) {\n int sockfd, newsockfd, portno, clilen;\n char buffer[256];\n struct sockaddr_in serv_addr, cli_addr;\n int n;\n \n /* First call to socket() function */\n sockfd = socket(AF_INET, SOCK_STREAM, 0);\n \n if (sockfd < 0) {\n perror(\"ERROR opening socket\");\n exit(1);\n }\n \n /* Initialize socket structure */\n bzero((char *) &serv_addr, sizeof(serv_addr));\n portno = 5001;\n \n serv_addr.sin_family = AF_INET;\n serv_addr.sin_addr.s_addr = INADDR_ANY;\n serv_addr.sin_port = htons(portno);\n \n /* Now bind the host address using bind() call.*/\n if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {\n perror(\"ERROR on binding\");\n exit(1);\n }\n \n /* Now start listening for the clients, here process will\n * go in sleep mode and will wait for the incoming connection\n */\n \n listen(sockfd,5);\n clilen = sizeof(cli_addr);\n \n /* Accept actual connection from the client */\n newsockfd = accept(sockfd, (struct sockaddr *)&cli_addr, &clilen);\n\t\n if (newsockfd < 0) {\n perror(\"ERROR on accept\");\n exit(1);\n }\n \n /* If connection is established then start communicating */\n bzero(buffer,256);\n n = read( newsockfd,buffer,255 );\n \n if (n < 0) {\n perror(\"ERROR reading from socket\");\n exit(1);\n }\n \n printf(\"Here is the message: %s\\n\",buffer);\n \n /* Write a response to the client */\n n = write(newsockfd,\"I got your message\",18);\n \n if (n < 0) {\n perror(\"ERROR writing to socket\");\n exit(1);\n }\n \n return 0;\n}"
},
{
"code": null,
"e": 4905,
"s": 4790,
"text": "To allow the server to handle multiple simultaneous connections, we make the following changes in the above code −"
},
{
"code": null,
"e": 4974,
"s": 4905,
"text": "Put the accept statement and the following code in an infinite loop."
},
{
"code": null,
"e": 5043,
"s": 4974,
"text": "Put the accept statement and the following code in an infinite loop."
},
{
"code": null,
"e": 5115,
"s": 5043,
"text": "After a connection is established, call fork() to create a new process."
},
{
"code": null,
"e": 5187,
"s": 5115,
"text": "After a connection is established, call fork() to create a new process."
},
{
"code": null,
"e": 5437,
"s": 5187,
"text": "The child process will close sockfd and call doprocessing function, passing the new socket file descriptor as an argument. When the two processes have completed their conversation, as indicated by doprocessing() returning, this process simply exits."
},
{
"code": null,
"e": 5687,
"s": 5437,
"text": "The child process will close sockfd and call doprocessing function, passing the new socket file descriptor as an argument. When the two processes have completed their conversation, as indicated by doprocessing() returning, this process simply exits."
},
{
"code": null,
"e": 5840,
"s": 5687,
"text": "The parent process closes newsockfd. As all of this code is in an infinite loop, it will return to the accept statement to wait for the next connection."
},
{
"code": null,
"e": 5993,
"s": 5840,
"text": "The parent process closes newsockfd. As all of this code is in an infinite loop, it will return to the accept statement to wait for the next connection."
},
{
"code": null,
"e": 7641,
"s": 5993,
"text": "#include <stdio.h>\n#include <stdlib.h>\n\n#include <netdb.h>\n#include <netinet/in.h>\n\n#include <string.h>\n\nvoid doprocessing (int sock);\n\nint main( int argc, char *argv[] ) {\n int sockfd, newsockfd, portno, clilen;\n char buffer[256];\n struct sockaddr_in serv_addr, cli_addr;\n int n, pid;\n \n /* First call to socket() function */\n sockfd = socket(AF_INET, SOCK_STREAM, 0);\n \n if (sockfd < 0) {\n perror(\"ERROR opening socket\");\n exit(1);\n }\n \n /* Initialize socket structure */\n bzero((char *) &serv_addr, sizeof(serv_addr));\n portno = 5001;\n \n serv_addr.sin_family = AF_INET;\n serv_addr.sin_addr.s_addr = INADDR_ANY;\n serv_addr.sin_port = htons(portno);\n \n /* Now bind the host address using bind() call.*/\n if (bind(sockfd, (struct sockaddr *) &serv_addr, sizeof(serv_addr)) < 0) {\n perror(\"ERROR on binding\");\n exit(1);\n }\n \n /* Now start listening for the clients, here\n * process will go in sleep mode and will wait\n * for the incoming connection\n */\n \n listen(sockfd,5);\n clilen = sizeof(cli_addr);\n \n while (1) {\n newsockfd = accept(sockfd, (struct sockaddr *) &cli_addr, &clilen);\n\t\t\n if (newsockfd < 0) {\n perror(\"ERROR on accept\");\n exit(1);\n }\n \n /* Create child process */\n pid = fork();\n\t\t\n if (pid < 0) {\n perror(\"ERROR on fork\");\n exit(1);\n }\n \n if (pid == 0) {\n /* This is the client process */\n close(sockfd);\n doprocessing(newsockfd);\n exit(0);\n }\n else {\n close(newsockfd);\n }\n\t\t\n } /* end of while */\n}"
},
{
"code": null,
"e": 7724,
"s": 7641,
"text": "The following code seqment shows a simple implementation of doprocessing function."
},
{
"code": null,
"e": 8101,
"s": 7724,
"text": "void doprocessing (int sock) {\n int n;\n char buffer[256];\n bzero(buffer,256);\n n = read(sock,buffer,255);\n \n if (n < 0) {\n perror(\"ERROR reading from socket\");\n exit(1);\n }\n \n printf(\"Here is the message: %s\\n\",buffer);\n n = write(sock,\"I got your message\",18);\n \n if (n < 0) {\n perror(\"ERROR writing to socket\");\n exit(1);\n }\n\t\n}"
},
{
"code": null,
"e": 8136,
"s": 8101,
"text": "\n 129 Lectures \n 23 hours \n"
},
{
"code": null,
"e": 8164,
"s": 8136,
"text": " Eduonix Learning Solutions"
},
{
"code": null,
"e": 8198,
"s": 8164,
"text": "\n 5 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 8215,
"s": 8198,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 8248,
"s": 8215,
"text": "\n 35 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 8259,
"s": 8248,
"text": " Pradeep D"
},
{
"code": null,
"e": 8294,
"s": 8259,
"text": "\n 41 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 8310,
"s": 8294,
"text": " Musab Zayadneh"
},
{
"code": null,
"e": 8343,
"s": 8310,
"text": "\n 46 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 8355,
"s": 8343,
"text": " GUHARAJANM"
},
{
"code": null,
"e": 8387,
"s": 8355,
"text": "\n 6 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 8395,
"s": 8387,
"text": " Uplatz"
},
{
"code": null,
"e": 8402,
"s": 8395,
"text": " Print"
},
{
"code": null,
"e": 8413,
"s": 8402,
"text": " Add Notes"
}
] |
Rearrange the string to maximize the number of palindromic substrings in C++ | We are given with a string ‘str’ of any given length. The task is to rearrange the characters in such a manner that there will be maximum substrings that will be a palindrome string without adding or removing a character from the given input string. Palindrome string is the one in which the characters are arranged in such a manner that they pronounce the same from start and last.
Input − string str = "itnin"
Output − Rearrangement of the string to maximize the number of palindromic substrings is: iinnt.
Explanation − We are given a string type variable let’s say, str. Now we will rearrange the characters of an input string in such a manner that it will be a maximum palindrome string and if it’s not possible then it will return ‘NOT POSSIBLE’. So the output with the given input string is ‘iinnt.’
Input − string str = "abaaaabb"
Output − Rearrangement of the string to maximize the number of palindromic substrings is: aaaaabbb.
Explanation − We are given a string type variable let’s say, str. Now we will rearrange the characters of an input string in such a manner that it will be a maximum palindrome string and if it’s not possible then it will return ‘NOT POSSIBLE’. So the output with the given input string is aaaaabbb’
Input a variable of string type, let’s say, str and calculate the size of a string and store it in a length named variable.
Input a variable of string type, let’s say, str and calculate the size of a string and store it in a length named variable.
Pass the data to the function Rearr_string(str, length).
Pass the data to the function Rearr_string(str, length).
Inside the function Rearr_string(str, length)Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0.Declare a temporary variable ‘temp’ of type string.Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++.Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i).Return temp.
Inside the function Rearr_string(str, length)
Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0.
Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0.
Declare a temporary variable ‘temp’ of type string.
Declare a temporary variable ‘temp’ of type string.
Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++.
Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++.
Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i).
Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i).
Return temp.
Return temp.
Print the result.
Print the result.
#include <bits/stdc++.h>
using namespace std;
string Rearr_string(string str, int length){
int arr[26] = { 0 };
string temp = "";
for(int i = 0; i < length; i++){
arr[str[i] - 'a']++;
}
for(int i = 0; i < 26; i++){
for(int j = 0; j < arr[i]; j++){
temp = temp + (char)(97 + i);
}
}
return temp;
}
int main(){
string str = "itinn";
int length = str.length();
cout<<"Rearrangement of the string to maximize the number of palindromic substrings is: "<<Rearr_string(str, length);
return 0;
}
If we run the above code it will generate the following Output
Rearrangement of the string to maximize the number of palindromic substrings is: iinnt | [
{
"code": null,
"e": 1445,
"s": 1062,
"text": "We are given with a string ‘str’ of any given length. The task is to rearrange the characters in such a manner that there will be maximum substrings that will be a palindrome string without adding or removing a character from the given input string. Palindrome string is the one in which the characters are arranged in such a manner that they pronounce the same from start and last."
},
{
"code": null,
"e": 1474,
"s": 1445,
"text": "Input − string str = \"itnin\""
},
{
"code": null,
"e": 1571,
"s": 1474,
"text": "Output − Rearrangement of the string to maximize the number of palindromic substrings is: iinnt."
},
{
"code": null,
"e": 1869,
"s": 1571,
"text": "Explanation − We are given a string type variable let’s say, str. Now we will rearrange the characters of an input string in such a manner that it will be a maximum palindrome string and if it’s not possible then it will return ‘NOT POSSIBLE’. So the output with the given input string is ‘iinnt.’"
},
{
"code": null,
"e": 1901,
"s": 1869,
"text": "Input − string str = \"abaaaabb\""
},
{
"code": null,
"e": 2001,
"s": 1901,
"text": "Output − Rearrangement of the string to maximize the number of palindromic substrings is: aaaaabbb."
},
{
"code": null,
"e": 2300,
"s": 2001,
"text": "Explanation − We are given a string type variable let’s say, str. Now we will rearrange the characters of an input string in such a manner that it will be a maximum palindrome string and if it’s not possible then it will return ‘NOT POSSIBLE’. So the output with the given input string is aaaaabbb’"
},
{
"code": null,
"e": 2424,
"s": 2300,
"text": "Input a variable of string type, let’s say, str and calculate the size of a string and store it in a length named variable."
},
{
"code": null,
"e": 2548,
"s": 2424,
"text": "Input a variable of string type, let’s say, str and calculate the size of a string and store it in a length named variable."
},
{
"code": null,
"e": 2605,
"s": 2548,
"text": "Pass the data to the function Rearr_string(str, length)."
},
{
"code": null,
"e": 2662,
"s": 2605,
"text": "Pass the data to the function Rearr_string(str, length)."
},
{
"code": null,
"e": 3131,
"s": 2662,
"text": "Inside the function Rearr_string(str, length)Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0.Declare a temporary variable ‘temp’ of type string.Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++.Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i).Return temp."
},
{
"code": null,
"e": 3177,
"s": 3131,
"text": "Inside the function Rearr_string(str, length)"
},
{
"code": null,
"e": 3266,
"s": 3177,
"text": "Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0."
},
{
"code": null,
"e": 3355,
"s": 3266,
"text": "Declare an array of integer type of size 26 let’s say, arr[26] and initialise it with 0."
},
{
"code": null,
"e": 3407,
"s": 3355,
"text": "Declare a temporary variable ‘temp’ of type string."
},
{
"code": null,
"e": 3459,
"s": 3407,
"text": "Declare a temporary variable ‘temp’ of type string."
},
{
"code": null,
"e": 3556,
"s": 3459,
"text": "Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++."
},
{
"code": null,
"e": 3653,
"s": 3556,
"text": "Start loop FOR from i to 0 till i is less than length. Inside the loop, set arr[str[i] - 'a']++."
},
{
"code": null,
"e": 3830,
"s": 3653,
"text": "Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i)."
},
{
"code": null,
"e": 4007,
"s": 3830,
"text": "Start loop FOR from i to 0 till i less than 26. Inside the loop, start another loop FOR from j to 0 till j less than arr[i]. Inside the loop, set temp to temp + (char)(97 + i)."
},
{
"code": null,
"e": 4020,
"s": 4007,
"text": "Return temp."
},
{
"code": null,
"e": 4033,
"s": 4020,
"text": "Return temp."
},
{
"code": null,
"e": 4051,
"s": 4033,
"text": "Print the result."
},
{
"code": null,
"e": 4069,
"s": 4051,
"text": "Print the result."
},
{
"code": null,
"e": 4617,
"s": 4069,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nstring Rearr_string(string str, int length){\n int arr[26] = { 0 };\n string temp = \"\";\n for(int i = 0; i < length; i++){\n arr[str[i] - 'a']++;\n }\n for(int i = 0; i < 26; i++){\n for(int j = 0; j < arr[i]; j++){\n temp = temp + (char)(97 + i);\n }\n }\n return temp;\n}\nint main(){\n string str = \"itinn\";\n int length = str.length();\n cout<<\"Rearrangement of the string to maximize the number of palindromic substrings is: \"<<Rearr_string(str, length);\n return 0;\n}"
},
{
"code": null,
"e": 4680,
"s": 4617,
"text": "If we run the above code it will generate the following Output"
},
{
"code": null,
"e": 4767,
"s": 4680,
"text": "Rearrangement of the string to maximize the number of palindromic substrings is: iinnt"
}
] |
How to remove underline for anchors tag using CSS? - GeeksforGeeks | 08 Feb, 2019
The anchor tag is used to define the hyperlinks and it display underlined anchor part by default. The underline can be easily remove by using text-decoration property. The text-decoration property of CSS allows to decorate the text according to requirement. By setting the text-decoration to none to remove the underline from anchor tag.
Syntax:
text-decoration: none;
Example 1: This example sets the text-decoration property to none.
<!-- HTML code to remove underline from anchor tag --><!DOCTYPE html> <html> <head> <title> text-decoration property </title> <!-- text-decoration property to remove underline from anchor tag --> <style> #GFG { text-decoration: none; } </style></head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksForGeeks </h1> <h3>Original Link</h3> <a href = "#">Link 1</a> <br> <h3>removed Underline</h3> <a id = "GFG" href = "#">Link 2</a> </body> </html>
Output:
Example 2: Use hover property to remove underline when mouse move over the anchor part.
<!-- HTML code to remove underline from anchor tag --><!DOCTYPE html> <html> <head> <title> text-decoration property </title> <!-- text-decoration property to remove underline from anchor tag --> <style> a:link { text-decoration: underline; } a:hover { text-decoration: none; } </style></head> <body style = "text-align:center;"> <h1 style = "color:green;" > GeeksforGeeks </h1> <a id = "GFG" href = "#">GeeksforGeeks Anchor Part</a> </body> </html>
Output:Before Mouse move over:After Mouse move over:
Picked
Web Technologies
Web technologies Questions
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Top 10 Front End Developer Skills That You Need in 2022
How to fetch data from an API in ReactJS ?
Difference between var, let and const keywords in JavaScript
Convert a string to an integer in JavaScript
Differences between Functional Components and Class Components in React
File uploading in React.js
How to Open URL in New Tab using JavaScript ?
Types of CSS (Cascading Style Sheet)
How to Insert Form Data into Database using PHP ?
How to execute PHP code using command line ? | [
{
"code": null,
"e": 24471,
"s": 24443,
"text": "\n08 Feb, 2019"
},
{
"code": null,
"e": 24809,
"s": 24471,
"text": "The anchor tag is used to define the hyperlinks and it display underlined anchor part by default. The underline can be easily remove by using text-decoration property. The text-decoration property of CSS allows to decorate the text according to requirement. By setting the text-decoration to none to remove the underline from anchor tag."
},
{
"code": null,
"e": 24817,
"s": 24809,
"text": "Syntax:"
},
{
"code": null,
"e": 24840,
"s": 24817,
"text": "text-decoration: none;"
},
{
"code": null,
"e": 24907,
"s": 24840,
"text": "Example 1: This example sets the text-decoration property to none."
},
{
"code": "<!-- HTML code to remove underline from anchor tag --><!DOCTYPE html> <html> <head> <title> text-decoration property </title> <!-- text-decoration property to remove underline from anchor tag --> <style> #GFG { text-decoration: none; } </style></head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" > GeeksForGeeks </h1> <h3>Original Link</h3> <a href = \"#\">Link 1</a> <br> <h3>removed Underline</h3> <a id = \"GFG\" href = \"#\">Link 2</a> </body> </html> ",
"e": 25532,
"s": 24907,
"text": null
},
{
"code": null,
"e": 25540,
"s": 25532,
"text": "Output:"
},
{
"code": null,
"e": 25628,
"s": 25540,
"text": "Example 2: Use hover property to remove underline when mouse move over the anchor part."
},
{
"code": "<!-- HTML code to remove underline from anchor tag --><!DOCTYPE html> <html> <head> <title> text-decoration property </title> <!-- text-decoration property to remove underline from anchor tag --> <style> a:link { text-decoration: underline; } a:hover { text-decoration: none; } </style></head> <body style = \"text-align:center;\"> <h1 style = \"color:green;\" > GeeksforGeeks </h1> <a id = \"GFG\" href = \"#\">GeeksforGeeks Anchor Part</a> </body> </html> ",
"e": 26222,
"s": 25628,
"text": null
},
{
"code": null,
"e": 26275,
"s": 26222,
"text": "Output:Before Mouse move over:After Mouse move over:"
},
{
"code": null,
"e": 26282,
"s": 26275,
"text": "Picked"
},
{
"code": null,
"e": 26299,
"s": 26282,
"text": "Web Technologies"
},
{
"code": null,
"e": 26326,
"s": 26299,
"text": "Web technologies Questions"
},
{
"code": null,
"e": 26424,
"s": 26326,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26433,
"s": 26424,
"text": "Comments"
},
{
"code": null,
"e": 26446,
"s": 26433,
"text": "Old Comments"
},
{
"code": null,
"e": 26502,
"s": 26446,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 26545,
"s": 26502,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 26606,
"s": 26545,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 26651,
"s": 26606,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 26723,
"s": 26651,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 26750,
"s": 26723,
"text": "File uploading in React.js"
},
{
"code": null,
"e": 26796,
"s": 26750,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 26833,
"s": 26796,
"text": "Types of CSS (Cascading Style Sheet)"
},
{
"code": null,
"e": 26883,
"s": 26833,
"text": "How to Insert Form Data into Database using PHP ?"
}
] |
count_if() in C++ STL | In this article we will be discussing the working, syntax and examples of std::count_if() function in C++ STL.
std::count_if() function is an inbuilt function in C++ STL, which is defined in <algorithm> header file. count_if() is used to get the number of elements in a specified range which satisfy a condition. This function returns an integer value which is the number of elements which satisfies the condition.
The function not only iterates through the given range but also checks if the statement or condition is true and counts how many times the statement or condition was true and returns the result.
count_if(start, end, condition);
The function accepts the following parameter(s) −
start, end − These are the iterators which can be used to give a range in which we have to use the function. start gives the beginning position of the range and end gives the ending position of the range.
condition − This is the condition which we want to check. Condition is the unary function which has to be applied on the given range.
This function returns the number of elements which are meeting the condition.
bool iseve(int i){ return ((i%2)==0); }
int a = count_if( vect.begin(), vect.end(), iseve ); /* vect has 10 integers 1-10*/
even numbers = 2 4 6 8 10
Live Demo
#include <bits/stdc++.h>
using namespace std;
bool check_odd(int i){
if (i % 2!= 0)
return true;
else
return false;
}
int main() {
vector<int> vec;
for (int i = 0; i < 10; i++){
vec.push_back(i);
}
int total_odd = count_if(vec.begin(), vec.end(), check_odd);
cout<<"Number of odd is: "<<total_odd;
return 0;
}
Number of odd is: 5 | [
{
"code": null,
"e": 1173,
"s": 1062,
"text": "In this article we will be discussing the working, syntax and examples of std::count_if() function in C++ STL."
},
{
"code": null,
"e": 1477,
"s": 1173,
"text": "std::count_if() function is an inbuilt function in C++ STL, which is defined in <algorithm> header file. count_if() is used to get the number of elements in a specified range which satisfy a condition. This function returns an integer value which is the number of elements which satisfies the condition."
},
{
"code": null,
"e": 1672,
"s": 1477,
"text": "The function not only iterates through the given range but also checks if the statement or condition is true and counts how many times the statement or condition was true and returns the result."
},
{
"code": null,
"e": 1705,
"s": 1672,
"text": "count_if(start, end, condition);"
},
{
"code": null,
"e": 1755,
"s": 1705,
"text": "The function accepts the following parameter(s) −"
},
{
"code": null,
"e": 1960,
"s": 1755,
"text": "start, end − These are the iterators which can be used to give a range in which we have to use the function. start gives the beginning position of the range and end gives the ending position of the range."
},
{
"code": null,
"e": 2094,
"s": 1960,
"text": "condition − This is the condition which we want to check. Condition is the unary function which has to be applied on the given range."
},
{
"code": null,
"e": 2172,
"s": 2094,
"text": "This function returns the number of elements which are meeting the condition."
},
{
"code": null,
"e": 2296,
"s": 2172,
"text": "bool iseve(int i){ return ((i%2)==0); }\nint a = count_if( vect.begin(), vect.end(), iseve ); /* vect has 10 integers 1-10*/"
},
{
"code": null,
"e": 2322,
"s": 2296,
"text": "even numbers = 2 4 6 8 10"
},
{
"code": null,
"e": 2333,
"s": 2322,
"text": " Live Demo"
},
{
"code": null,
"e": 2686,
"s": 2333,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nbool check_odd(int i){\n if (i % 2!= 0)\n return true;\n else\n return false;\n}\n int main() {\n vector<int> vec;\n for (int i = 0; i < 10; i++){\n vec.push_back(i);\n }\n int total_odd = count_if(vec.begin(), vec.end(), check_odd);\n cout<<\"Number of odd is: \"<<total_odd;\n return 0;\n}"
},
{
"code": null,
"e": 2706,
"s": 2686,
"text": "Number of odd is: 5"
}
] |
How can we return a list from a Python function? | There are so many ways we can return a list from a python function. One such function is given below.
def retList():
list = []
for i in range(0,10):
list.append(i)
return list
a = retList()
print a
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9] | [
{
"code": null,
"e": 1164,
"s": 1062,
"text": "There are so many ways we can return a list from a python function. One such function is given below."
},
{
"code": null,
"e": 1280,
"s": 1164,
"text": "def retList():\n list = []\n for i in range(0,10):\n list.append(i)\n return list\na = retList()\nprint a"
},
{
"code": null,
"e": 1311,
"s": 1280,
"text": "[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]"
}
] |
The Softmax Function, Neural Net Outputs as Probabilities, and Ensemble Classifiers | by Haihan Lan | Towards Data Science | In this article, we’ll look at:
Deriving the softmax function for multinomial (multi-class) classification problems starting from simple logistic regression
Using the softmax activation function in the output layer of a deep neural net to represent a categorical distribution over class labels, and obtaining the probabilities of each input element belonging to a label
Building a robust ensemble neural net classifier with softmax output aggregation using the Keras functional API
Links to my other articles:
Deep Kernel Transfer and Gaussian ProcessesCustom Tensorflow Loss FunctionsRandom forestsClimate analysis
Deep Kernel Transfer and Gaussian Processes
Custom Tensorflow Loss Functions
Random forests
Climate analysis
In many cases when using neural network models such as regular deep feedforward nets and convolutional nets for classification tasks over some set of class labels, one wonders whether it is possible to interpret the output, for example y = [0.02, 0, 0.005, 0.975], as the probability of some input being in a class equal to the respective component values yi in the output vector. Skipping straight to the long answer: no, unless you have a softmax layer as your output layer and train the net with the cross-entropy loss function. This point is important because it is sometimes omitted in online sources and even in some textbooks regarding classification with neural networks. We’ll take a look at how the softmax function is derived in the context of multinomial logistic regression and how to apply it to ensemble deep neural network models for robust classification.
Briefly, the Categorical distribution is the multi-class generalization of the Bernoulli distribution. The Bernoulli distribution is a discrete probability distribution that models the outcome of a single experiment, or single observation of a random variable with two outcomes (e.g. the outcome of a single coin flip). The categorical distribution naturally extends the Bernoulli distribution to experiments with more than two outcomes.
Now, simple logistic regression classification (i.e. logistic regression on only two classes or outcomes) assumes that the output Yi (i being the data sample index) conditioned on inputs xi is Bernoulli distributed:
The link function relating the log odds of the Bernoulli outcomes to the linear predictor is the logit function:
If we exponentiate both sides of the equation above and do a little rearranging, on the right-hand-side (RHS) we get the familiar logistic function:
One way to approach deriving the generalized logistic or softmax function for multinomial logistic regression is to start by having one logit linked linear predictor for each class K, plus some normalization factor to ensure that the total sum of the probabilities over all classes equals to one. This resulting system of K equations is a system of log-linear probabilistic models:
The ln(Z) term in the above system of equations is the (log of the) normalization factor, and Z is known as the partition function. As we are dealing with multinomial regression, this system of equations gives probabilities which are categorically distributed: Yi | xi ~ Categorical(pi).
Exponentiating both sides and imposing the constraint:
gives
The RHS of the equation above is called the Gibbs measure and connects the softmax function to statistical mechanics. Next, solving for Z gives:
And finally the system of equations becomes:
The ratio on the RHS of each equation is the softmax function. In general, the softmax function is defined as:
for j = 1 ... K. We can see that the softmax function normalizes a K dimensional vector z of arbitrary real values into a K dimensional vector σ(z) whose components sum to 1 (in other words, a probability vector), and it also provides a weighted average of each zj relative to the aggregate of zj’s in a way that exaggerates differences (returns a value close to 0 or 1) if the zj’s are very different from each other in terms of scale, but returns a moderate value if zj’s are relatively the same scale. It is desirable for a classifier model to learn parameters which give the former condition rather than the latter (i.e decisive vs indecisive).
Finally, just as the logit function is the link function for simple logistic regression and the logistic function is the inverse of the logit function, the multinomial logit function is the link function for multinomial logistic regression and the softmax can be thought of as the inverse of the multinomial logit function. Typically in multinomial logistic regression, maximum a-posterior (MAP) estimation is used to find the parameters β for each class k.
Now that we have seen where the softmax function comes from, it’s time for us to use them in our neural net classifier models. The loss function to be minimized on softmax output layer equipped neural nets is the cross-entropy loss:
where y is the true label for some iteration i and ŷ is the neural network output at iteration i. This loss function is in fact the same one used for simple and multinomial logistic regression. The general definition of the cross-entropy function is:
The cross-entropy between p and q is defined as the sum of the information entropy of distribution p, where p is some underlying true distribution (in this case would be the categorical distribution of true class labels) and the Kullback–Leibler divergence of the distribution q which is our attempt at approximating p and p itself. Optimizing over this function minimizes the information entropy of p (giving more certain outcomes in p) while at the same time minimizes the ‘distance’ between p and q.
A theoretical treatment of using the softmax in neural nets as the output layer activation is given in Bridle’s article. The gist of the article is that using the softmax output layer with the neural network hidden layer output as each zj, trained with the cross-entropy loss gives the posterior distribution (the categorical distribution) over the class labels. In general deep neural nets can vastly outperform simple and multinomial logistic regression at the expense of not being able to provide statistical significance of the features/parameters, which is a very important aspect of inference or finding out which features affect the outcome of the classification. The complete neural network is optimized using a robust optimizer of choice; RMSprop is usually a good start.
So now we’ll whip up a deep feedforward neural net classifier using the Keras functional API and do some wine classification. We’ll use an ensemble model of several neural nets to give us a robust classification (in practice this is what you should do, the variances in predictions of individual neural nets due to random initialization and stochastic gradient training must be averaged out for good results). The output of the ensemble model should give a vector of probabilities that some test example will belong to each class, i.e. a categorical distribution over the class labels.
One way to aggregate the results of each individual neural net model is to use a softmax at the ensemble output to give a final probability. In order to automatically determine the optimal weighting of the final softmax averaging, we’ll tack on another layer ‘gluing together’ the outputs of each individual neural net in the ensemble. A diagram of the architecture is below.
Each learning model will be differentiable from the final softmax aggregate output backwards. We can merge each of the sub-networks together using the Keras concatenate-merge layer. The concatenate layer concatenates the output tensors from each sub-network and allows the optimizer to optimize over the merged model. To simplify our training, each learning model will be trained on the same dataset. Bootstrapped sub-sets can be used but this makes it more complicated to train, as we would have to train each sub-network individually on its own input and target pair while freezing training updates on the rest of the learning models.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from keras.layers import Dense, Input, concatenate, Dropout
from keras.models import Model
from keras.optimizers import rmsprop
dataset = load_wine()
ensemble_num = 10 # number of sub-networks
bootstrap_size = 0.8 # 80% size of original (training) dataset
training_size = 0.8 # 80% for training, 20% for test
num_hidden_neurons = 10 # number of neurons in hidden layer
dropout = 0.25 # percentage of weights dropped out before softmax output (this prevents overfitting)
epochs = 200 # number of epochs (complete training episodes over the training set) to run
batch = 10 # mini batch size for better convergence
# get the holdout training and test set
temp = []
scaler = MinMaxScaler()
one_hot = OneHotEncoder() # one hot encode the target classes
dataset['data'] = scaler.fit_transform(dataset['data'])
dataset['target'] = one_hot.fit_transform(np.reshape(dataset['target'], (-1,1)) ).toarray()
for i in range(len(dataset.data)):
temp.append([dataset['data'][i], np.array(dataset['target'][i])])
# shuffle the row of data and targets
temp = np.array(temp)
np.random.shuffle(temp)
# holdout training and test stop index
stop = int(training_size*len(dataset.data))
train_X = np.array([x for x in temp[:stop,0]])
train_Y = np.array([x for x in temp[:stop,1]])
test_X = np.array([x for x in temp[stop:,0]])
test_Y = np.array([x for x in temp[stop:,1]])
# now build the ensemble neural network
# first, let's build the individual sub-networks, each
# as a Keras functional model.
sub_net_outputs = []
sub_net_inputs = []
for i in range(ensemble_num):
# two hidden layers to keep it simple
# specify input shape to the shape of the training set
net_input = Input(shape = (train_X.shape[1],))
sub_net_inputs.append(net_input)
y = Dense(num_hidden_neurons)(net_input)
y = Dense(num_hidden_neurons)(y)
y = Dropout(dropout)(y)
sub_net_outputs.append(y) # sub_nets contains the output tensors
# now concatenate the output tensors
y = concatenate(sub_net_outputs)
# final softmax output layer
y = Dense(train_Y[0].shape[0], activation='softmax')(y)
# now build the whole funtional model
model = Model(inputs=sub_net_inputs, outputs=y)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
print("Begin training...")
# train the model
model.fit( [train_X] * ensemble_num, train_Y,validation_data=[ [test_X] * ensemble_num, test_Y],
epochs=epochs, batch_size=batch)
Using TensorFlow backend.
Begin training...
Train on 142 samples, validate on 36 samples
Epoch 1/200
142/142 [==============================] - 0s - loss: 1.2639 - val_loss: 0.9777
Epoch 2/200
142/142 [==============================] - 0s - loss: 1.0023 - val_loss: 0.8191
Epoch 3/200
142/142 [==============================] - 0s - loss: 0.7580 - val_loss: 0.6661
Epoch 4/200
142/142 [==============================] - 0s - loss: 0.6359 - val_loss: 0.5535
Epoch 5/200
142/142 [==============================] - 0s - loss: 0.5533 - val_loss: 0.4836
Epoch 6/200
142/142 [==============================] - 0s - loss: 0.4388 - val_loss: 0.4176
Epoch 7/200
142/142 [==============================] - 0s - loss: 0.3697 - val_loss: 0.3526
Epoch 8/200
142/142 [==============================] - 0s - loss: 0.3315 - val_loss: 0.3002
Epoch 9/200
142/142 [==============================] - 0s - loss: 0.2763 - val_loss: 0.2710
Epoch 10/200
142/142 [==============================] - 0s - loss: 0.2453 - val_loss: 0.2417
Epoch 11/200
142/142 [==============================] - 0s - loss: 0.2260 - val_loss: 0.2598
Epoch 12/200
142/142 [==============================] - 0s - loss: 0.1936 - val_loss: 0.1944
Epoch 13/200
142/142 [==============================] - 0s - loss: 0.1860 - val_loss: 0.1998
Epoch 14/200
142/142 [==============================] - 0s - loss: 0.1461 - val_loss: 0.1617
Epoch 15/200
142/142 [==============================] - 0s - loss: 0.1544 - val_loss: 0.1795
Epoch 16/200
142/142 [==============================] - 0s - loss: 0.1296 - val_loss: 0.1413
Epoch 17/200
142/142 [==============================] - 0s - loss: 0.1295 - val_loss: 0.1426
Epoch 18/200
142/142 [==============================] - 0s - loss: 0.0989 - val_loss: 0.1281
Epoch 19/200
142/142 [==============================] - 0s - loss: 0.1128 - val_loss: 0.1735
Epoch 20/200
142/142 [==============================] - 0s - loss: 0.1035 - val_loss: 0.1105
Epoch 21/200
142/142 [==============================] - 0s - loss: 0.1006 - val_loss: 0.1145
Epoch 22/200
142/142 [==============================] - 0s - loss: 0.0940 - val_loss: 0.1156
Epoch 23/200
142/142 [==============================] - 0s - loss: 0.0735 - val_loss: 0.1078
Epoch 24/200
142/142 [==============================] - 0s - loss: 0.0685 - val_loss: 0.1099
Epoch 25/200
142/142 [==============================] - 0s - loss: 0.0850 - val_loss: 0.1055
Epoch 26/200
142/142 [==============================] - 0s - loss: 0.0683 - val_loss: 0.1144
Epoch 27/200
142/142 [==============================] - 0s - loss: 0.0642 - val_loss: 0.1003
Epoch 28/200
142/142 [==============================] - 0s - loss: 0.0680 - val_loss: 0.0795
Epoch 29/200
142/142 [==============================] - 0s - loss: 0.0627 - val_loss: 0.0921
Epoch 30/200
142/142 [==============================] - 0s - loss: 0.0566 - val_loss: 0.0669
Epoch 31/200
142/142 [==============================] - 0s - loss: 0.0520 - val_loss: 0.1085
Epoch 32/200
142/142 [==============================] - 0s - loss: 0.0484 - val_loss: 0.1429
Epoch 33/200
142/142 [==============================] - 0s - loss: 0.0532 - val_loss: 0.0741
Epoch 34/200
142/142 [==============================] - 0s - loss: 0.0440 - val_loss: 0.0610
Epoch 35/200
142/142 [==============================] - 0s - loss: 0.0467 - val_loss: 0.0787
Epoch 36/200
142/142 [==============================] - 0s - loss: 0.0413 - val_loss: 0.0805
Epoch 37/200
142/142 [==============================] - 0s - loss: 0.0439 - val_loss: 0.1362
Epoch 38/200
142/142 [==============================] - 0s - loss: 0.0567 - val_loss: 0.0580
Epoch 39/200
142/142 [==============================] - 0s - loss: 0.0354 - val_loss: 0.0602
Epoch 40/200
142/142 [==============================] - 0s - loss: 0.0344 - val_loss: 0.1200
Epoch 41/200
142/142 [==============================] - 0s - loss: 0.0342 - val_loss: 0.0465
Epoch 42/200
142/142 [==============================] - 0s - loss: 0.0482 - val_loss: 0.0505
Epoch 43/200
142/142 [==============================] - 0s - loss: 0.0428 - val_loss: 0.1038
Epoch 44/200
142/142 [==============================] - 0s - loss: 0.0360 - val_loss: 0.0618
Epoch 45/200
142/142 [==============================] - 0s - loss: 0.0332 - val_loss: 0.1045
Epoch 46/200
142/142 [==============================] - 0s - loss: 0.0302 - val_loss: 0.1006
Epoch 47/200
142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0840
Epoch 48/200
142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0687
Epoch 49/200
142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0892
Epoch 50/200
142/142 [==============================] - 0s - loss: 0.0318 - val_loss: 0.0952
Epoch 51/200
142/142 [==============================] - 0s - loss: 0.0214 - val_loss: 0.0860
Epoch 52/200
142/142 [==============================] - 0s - loss: 0.0207 - val_loss: 0.0601
Epoch 53/200
142/142 [==============================] - 0s - loss: 0.0188 - val_loss: 0.0465
Epoch 54/200
142/142 [==============================] - 0s - loss: 0.0296 - val_loss: 0.0494
Epoch 55/200
142/142 [==============================] - 0s - loss: 0.0309 - val_loss: 0.0511
Epoch 56/200
142/142 [==============================] - 0s - loss: 0.0176 - val_loss: 0.1213
Epoch 57/200
142/142 [==============================] - 0s - loss: 0.0195 - val_loss: 0.0320
Epoch 58/200
142/142 [==============================] - 0s - loss: 0.0316 - val_loss: 0.0614
Epoch 59/200
142/142 [==============================] - 0s - loss: 0.0212 - val_loss: 0.0579
Epoch 60/200
142/142 [==============================] - 0s - loss: 0.0172 - val_loss: 0.0768
Epoch 61/200
142/142 [==============================] - 0s - loss: 0.0207 - val_loss: 0.0546
Epoch 62/200
142/142 [==============================] - 0s - loss: 0.0231 - val_loss: 0.0723
Epoch 63/200
142/142 [==============================] - 0s - loss: 0.0184 - val_loss: 0.0624
Epoch 64/200
142/142 [==============================] - 0s - loss: 0.0168 - val_loss: 0.0860
Epoch 65/200
142/142 [==============================] - 0s - loss: 0.0162 - val_loss: 0.0573
Epoch 66/200
142/142 [==============================] - 0s - loss: 0.0130 - val_loss: 0.0236
Epoch 67/200
142/142 [==============================] - 0s - loss: 0.0201 - val_loss: 0.0453
Epoch 68/200
142/142 [==============================] - 0s - loss: 0.0156 - val_loss: 0.0375
Epoch 69/200
142/142 [==============================] - 0s - loss: 0.0111 - val_loss: 0.0543
Epoch 70/200
142/142 [==============================] - 0s - loss: 0.0122 - val_loss: 0.0284
Epoch 71/200
142/142 [==============================] - 0s - loss: 0.0124 - val_loss: 0.0637
Epoch 72/200
142/142 [==============================] - 0s - loss: 0.0173 - val_loss: 0.0505
Epoch 73/200
142/142 [==============================] - 0s - loss: 0.0181 - val_loss: 0.0522
Epoch 74/200
142/142 [==============================] - 0s - loss: 0.0145 - val_loss: 0.0442
Epoch 75/200
142/142 [==============================] - 0s - loss: 0.0098 - val_loss: 0.0357
Epoch 76/200
142/142 [==============================] - 0s - loss: 0.0084 - val_loss: 0.0290
Epoch 77/200
142/142 [==============================] - 0s - loss: 0.0166 - val_loss: 0.0513
Epoch 78/200
142/142 [==============================] - 0s - loss: 0.0069 - val_loss: 0.0576
Epoch 79/200
142/142 [==============================] - 0s - loss: 0.0058 - val_loss: 0.0383
Epoch 80/200
142/142 [==============================] - 0s - loss: 0.0099 - val_loss: 0.0443
Epoch 81/200
142/142 [==============================] - 0s - loss: 0.0073 - val_loss: 0.0774
Epoch 82/200
142/142 [==============================] - 0s - loss: 0.0092 - val_loss: 0.0427
Epoch 83/200
142/142 [==============================] - 0s - loss: 0.0063 - val_loss: 0.0514
Epoch 84/200
142/142 [==============================] - 0s - loss: 0.0061 - val_loss: 0.0288
Epoch 85/200
142/142 [==============================] - 0s - loss: 0.0086 - val_loss: 0.0276
Epoch 86/200
142/142 [==============================] - 0s - loss: 0.0040 - val_loss: 0.0549
Epoch 87/200
142/142 [==============================] - 0s - loss: 0.0041 - val_loss: 0.0504
Epoch 88/200
142/142 [==============================] - 0s - loss: 0.0058 - val_loss: 0.0253
Epoch 89/200
142/142 [==============================] - 0s - loss: 0.0049 - val_loss: 0.0313
Epoch 90/200
142/142 [==============================] - 0s - loss: 0.0064 - val_loss: 0.0318
Epoch 91/200
142/142 [==============================] - 0s - loss: 0.0037 - val_loss: 0.0977
Epoch 92/200
142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0166
Epoch 93/200
142/142 [==============================] - 0s - loss: 0.0061 - val_loss: 0.0215
Epoch 94/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0422
Epoch 95/200
142/142 [==============================] - 0s - loss: 0.0083 - val_loss: 0.0389
Epoch 96/200
142/142 [==============================] - 0s - loss: 0.0088 - val_loss: 0.0499
Epoch 97/200
142/142 [==============================] - 0s - loss: 0.0047 - val_loss: 0.0300
Epoch 98/200
142/142 [==============================] - 0s - loss: 0.0026 - val_loss: 0.0335
Epoch 99/200
142/142 [==============================] - 0s - loss: 0.0053 - val_loss: 0.0523
Epoch 100/200
142/142 [==============================] - 0s - loss: 0.0029 - val_loss: 0.0359
Epoch 101/200
142/142 [==============================] - 0s - loss: 0.0048 - val_loss: 0.0796
Epoch 102/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0224
Epoch 103/200
142/142 [==============================] - 0s - loss: 0.0039 - val_loss: 0.0922
Epoch 104/200
142/142 [==============================] - 0s - loss: 0.0077 - val_loss: 0.0364
Epoch 105/200
142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0436
Epoch 106/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0407
Epoch 107/200
142/142 [==============================] - 0s - loss: 0.0031 - val_loss: 0.0633
Epoch 108/200
142/142 [==============================] - 0s - loss: 0.0032 - val_loss: 0.0255
Epoch 109/200
142/142 [==============================] - 0s - loss: 0.0030 - val_loss: 0.0595
Epoch 110/200
142/142 [==============================] - 0s - loss: 0.0028 - val_loss: 0.0169
Epoch 111/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0486
Epoch 112/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0299
Epoch 113/200
142/142 [==============================] - 0s - loss: 0.0037 - val_loss: 0.0408
Epoch 114/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0811
Epoch 115/200
142/142 [==============================] - 0s - loss: 0.0026 - val_loss: 0.0164
Epoch 116/200
142/142 [==============================] - 0s - loss: 0.0018 - val_loss: 0.0361
Epoch 117/200
142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0225
Epoch 118/200
142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0351
Epoch 119/200
142/142 [==============================] - 0s - loss: 0.0016 - val_loss: 0.0710
Epoch 120/200
142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0173
Epoch 121/200
142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0276
Epoch 122/200
142/142 [==============================] - 0s - loss: 0.0017 - val_loss: 0.0368
Epoch 123/200
142/142 [==============================] - 0s - loss: 6.6628e-04 - val_loss: 0.0468
Epoch 124/200
142/142 [==============================] - 0s - loss: 0.0016 - val_loss: 0.0216
Epoch 125/200
142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0191
Epoch 126/200
142/142 [==============================] - 0s - loss: 0.0022 - val_loss: 0.0111
Epoch 127/200
142/142 [==============================] - 0s - loss: 0.0018 - val_loss: 0.0152
Epoch 128/200
142/142 [==============================] - 0s - loss: 0.0017 - val_loss: 0.0340
Epoch 129/200
142/142 [==============================] - 0s - loss: 0.0010 - val_loss: 0.0253
Epoch 130/200
142/142 [==============================] - 0s - loss: 0.0015 - val_loss: 0.0298
Epoch 131/200
142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0552
Epoch 132/200
142/142 [==============================] - 0s - loss: 0.0025 - val_loss: 0.0230
Epoch 133/200
142/142 [==============================] - 0s - loss: 0.0012 - val_loss: 0.0148
Epoch 134/200
142/142 [==============================] - 0s - loss: 0.0019 - val_loss: 0.0290
Epoch 135/200
142/142 [==============================] - 0s - loss: 0.0011 - val_loss: 0.0321
Epoch 136/200
142/142 [==============================] - 0s - loss: 0.0014 - val_loss: 0.0224
Epoch 137/200
142/142 [==============================] - 0s - loss: 5.2285e-04 - val_loss: 0.0362
Epoch 138/200
142/142 [==============================] - 0s - loss: 3.7215e-04 - val_loss: 0.0190
Epoch 139/200
142/142 [==============================] - 0s - loss: 0.0030 - val_loss: 0.0438
Epoch 140/200
142/142 [==============================] - 0s - loss: 9.2121e-04 - val_loss: 0.0222
Epoch 141/200
142/142 [==============================] - 0s - loss: 0.0029 - val_loss: 0.0276
Epoch 142/200
142/142 [==============================] - 0s - loss: 5.9754e-04 - val_loss: 0.0195
Epoch 143/200
142/142 [==============================] - 0s - loss: 7.1640e-04 - val_loss: 0.0138
Epoch 144/200
142/142 [==============================] - 0s - loss: 0.0015 - val_loss: 0.0845
Epoch 145/200
142/142 [==============================] - 0s - loss: 8.5187e-04 - val_loss: 0.0247
Epoch 146/200
142/142 [==============================] - 0s - loss: 3.6506e-04 - val_loss: 0.0223
Epoch 147/200
142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0391
Epoch 148/200
142/142 [==============================] - 0s - loss: 4.0802e-04 - val_loss: 0.0439
Epoch 149/200
142/142 [==============================] - 0s - loss: 0.0011 - val_loss: 0.0263
Epoch 150/200
142/142 [==============================] - 0s - loss: 2.8790e-04 - val_loss: 0.0331
Epoch 151/200
142/142 [==============================] - 0s - loss: 6.4799e-04 - val_loss: 0.0251
Epoch 152/200
142/142 [==============================] - 0s - loss: 6.8637e-04 - val_loss: 0.0267
Epoch 153/200
142/142 [==============================] - 0s - loss: 3.1719e-04 - val_loss: 0.0750
Epoch 154/200
142/142 [==============================] - 0s - loss: 3.3267e-04 - val_loss: 0.0246
Epoch 155/200
142/142 [==============================] - 0s - loss: 2.5407e-04 - val_loss: 0.0841
Epoch 156/200
142/142 [==============================] - 0s - loss: 9.4721e-04 - val_loss: 0.0323
Epoch 157/200
142/142 [==============================] - 0s - loss: 2.5022e-04 - val_loss: 0.0311
Epoch 158/200
142/142 [==============================] - 0s - loss: 2.3520e-04 - val_loss: 0.0111
Epoch 159/200
142/142 [==============================] - 0s - loss: 8.9580e-04 - val_loss: 0.0795
Epoch 160/200
142/142 [==============================] - 0s - loss: 2.6490e-04 - val_loss: 0.0306
Epoch 161/200
142/142 [==============================] - 0s - loss: 4.8849e-04 - val_loss: 0.0344
Epoch 162/200
142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0570
Epoch 163/200
142/142 [==============================] - 0s - loss: 2.7203e-04 - val_loss: 0.0501
Epoch 164/200
142/142 [==============================] - 0s - loss: 1.9163e-04 - val_loss: 0.0364
Epoch 165/200
142/142 [==============================] - 0s - loss: 2.8227e-04 - val_loss: 0.0360
Epoch 166/200
142/142 [==============================] - 0s - loss: 2.2753e-04 - val_loss: 0.0315
Epoch 167/200
142/142 [==============================] - 0s - loss: 2.3544e-04 - val_loss: 0.1090
Epoch 168/200
142/142 [==============================] - 0s - loss: 2.5797e-04 - val_loss: 0.0301
Epoch 169/200
142/142 [==============================] - 0s - loss: 1.0038e-04 - val_loss: 0.0390
Epoch 170/200
142/142 [==============================] - 0s - loss: 4.4550e-04 - val_loss: 0.0189
Epoch 171/200
142/142 [==============================] - 0s - loss: 1.5940e-04 - val_loss: 0.0347
Epoch 172/200
142/142 [==============================] - 0s - loss: 1.1927e-04 - val_loss: 0.0388
Epoch 173/200
142/142 [==============================] - 0s - loss: 3.4964e-04 - val_loss: 0.0160
Epoch 174/200
142/142 [==============================] - 0s - loss: 1.3943e-04 - val_loss: 0.0173
Epoch 175/200
142/142 [==============================] - 0s - loss: 2.2197e-04 - val_loss: 0.0536
Epoch 176/200
142/142 [==============================] - 0s - loss: 3.6663e-04 - val_loss: 0.0437
Epoch 177/200
142/142 [==============================] - 0s - loss: 1.1296e-04 - val_loss: 0.0302
Epoch 178/200
142/142 [==============================] - 0s - loss: 1.7213e-04 - val_loss: 0.0206
Epoch 179/200
142/142 [==============================] - 0s - loss: 9.8614e-05 - val_loss: 0.0211
Epoch 180/200
142/142 [==============================] - 0s - loss: 8.6613e-05 - val_loss: 0.0147
Epoch 181/200
142/142 [==============================] - 0s - loss: 2.8344e-04 - val_loss: 0.0244
Epoch 182/200
142/142 [==============================] - 0s - loss: 6.9407e-05 - val_loss: 0.0421
Epoch 183/200
142/142 [==============================] - 0s - loss: 4.7512e-04 - val_loss: 0.0404
Epoch 184/200
142/142 [==============================] - 0s - loss: 3.5294e-04 - val_loss: 0.0420
Epoch 185/200
142/142 [==============================] - 0s - loss: 6.7377e-05 - val_loss: 0.0320
Epoch 186/200
142/142 [==============================] - 0s - loss: 6.7105e-05 - val_loss: 0.0679
Epoch 187/200
142/142 [==============================] - 0s - loss: 8.9703e-05 - val_loss: 0.0457
Epoch 188/200
142/142 [==============================] - 0s - loss: 1.1150e-04 - val_loss: 0.0389
Epoch 189/200
142/142 [==============================] - 0s - loss: 1.9544e-04 - val_loss: 0.0639
Epoch 190/200
142/142 [==============================] - 0s - loss: 7.3062e-05 - val_loss: 0.0403
Epoch 191/200
142/142 [==============================] - 0s - loss: 4.3287e-05 - val_loss: 0.0162
Epoch 192/200
142/142 [==============================] - 0s - loss: 4.9369e-05 - val_loss: 0.0147
Epoch 193/200
142/142 [==============================] - 0s - loss: 1.0524e-04 - val_loss: 0.0321
Epoch 194/200
142/142 [==============================] - 0s - loss: 5.1740e-05 - val_loss: 0.1401
Epoch 195/200
142/142 [==============================] - 0s - loss: 9.1553e-05 - val_loss: 0.0299
Epoch 196/200
142/142 [==============================] - 0s - loss: 1.6222e-05 - val_loss: 0.0209
Epoch 197/200
142/142 [==============================] - 0s - loss: 7.6026e-05 - val_loss: 0.0254
Epoch 198/200
142/142 [==============================] - 0s - loss: 5.7551e-05 - val_loss: 0.0281
Epoch 199/200
142/142 [==============================] - 0s - loss: 1.9276e-05 - val_loss: 0.0433
Epoch 200/200
142/142 [==============================] - 0s - loss: 7.5447e-05 - val_loss: 0.0095
<keras.callbacks.History at 0x2d9142dfbe0>
print("Training complete...")
np.set_printoptions(precision=2,suppress=True)
for i in range(len(test_X)):
print("Prediction: " + str(model.predict([test_X[i].reshape(1,test_X[i].shape[0])] * ensemble_num)) + " | True: " + str(test_Y[i]))
Training complete...
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 0.88 0.12]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0.01 0. 0.99]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0.01 0.99 0. ]] | True: [ 0. 1. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0.82 0.18 0. ]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
Prediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]
It can be seen from the results of training that the fancy wines are no match for our ensemble classifier. As well, feature selection is not terribly important for our model, as it learns the dataset quite well using all features. The training and validation losses become small to the order of 10^-5 and 10^-3 respectively after 200 epochs, and this indicates our ensemble neural net model is doing a good job of fitting the data and predicting on the test set. The output probabilities are nearly 100% for the correct class and 0% for the others.
In this article, we derived the softmax activation for multinomial logistic regression and saw how to apply it to neural network classifiers. It is important to remember to be careful when interpreting neural network outputs are probabilities. We then built an ensemble neural net classifier using the Keras functional API. Anyways, have fun running the code and as always please don’t hesitate to ask me about anything. | [
{
"code": null,
"e": 79,
"s": 47,
"text": "In this article, we’ll look at:"
},
{
"code": null,
"e": 204,
"s": 79,
"text": "Deriving the softmax function for multinomial (multi-class) classification problems starting from simple logistic regression"
},
{
"code": null,
"e": 417,
"s": 204,
"text": "Using the softmax activation function in the output layer of a deep neural net to represent a categorical distribution over class labels, and obtaining the probabilities of each input element belonging to a label"
},
{
"code": null,
"e": 529,
"s": 417,
"text": "Building a robust ensemble neural net classifier with softmax output aggregation using the Keras functional API"
},
{
"code": null,
"e": 557,
"s": 529,
"text": "Links to my other articles:"
},
{
"code": null,
"e": 663,
"s": 557,
"text": "Deep Kernel Transfer and Gaussian ProcessesCustom Tensorflow Loss FunctionsRandom forestsClimate analysis"
},
{
"code": null,
"e": 707,
"s": 663,
"text": "Deep Kernel Transfer and Gaussian Processes"
},
{
"code": null,
"e": 740,
"s": 707,
"text": "Custom Tensorflow Loss Functions"
},
{
"code": null,
"e": 755,
"s": 740,
"text": "Random forests"
},
{
"code": null,
"e": 772,
"s": 755,
"text": "Climate analysis"
},
{
"code": null,
"e": 1645,
"s": 772,
"text": "In many cases when using neural network models such as regular deep feedforward nets and convolutional nets for classification tasks over some set of class labels, one wonders whether it is possible to interpret the output, for example y = [0.02, 0, 0.005, 0.975], as the probability of some input being in a class equal to the respective component values yi in the output vector. Skipping straight to the long answer: no, unless you have a softmax layer as your output layer and train the net with the cross-entropy loss function. This point is important because it is sometimes omitted in online sources and even in some textbooks regarding classification with neural networks. We’ll take a look at how the softmax function is derived in the context of multinomial logistic regression and how to apply it to ensemble deep neural network models for robust classification."
},
{
"code": null,
"e": 2083,
"s": 1645,
"text": "Briefly, the Categorical distribution is the multi-class generalization of the Bernoulli distribution. The Bernoulli distribution is a discrete probability distribution that models the outcome of a single experiment, or single observation of a random variable with two outcomes (e.g. the outcome of a single coin flip). The categorical distribution naturally extends the Bernoulli distribution to experiments with more than two outcomes."
},
{
"code": null,
"e": 2299,
"s": 2083,
"text": "Now, simple logistic regression classification (i.e. logistic regression on only two classes or outcomes) assumes that the output Yi (i being the data sample index) conditioned on inputs xi is Bernoulli distributed:"
},
{
"code": null,
"e": 2412,
"s": 2299,
"text": "The link function relating the log odds of the Bernoulli outcomes to the linear predictor is the logit function:"
},
{
"code": null,
"e": 2561,
"s": 2412,
"text": "If we exponentiate both sides of the equation above and do a little rearranging, on the right-hand-side (RHS) we get the familiar logistic function:"
},
{
"code": null,
"e": 2943,
"s": 2561,
"text": "One way to approach deriving the generalized logistic or softmax function for multinomial logistic regression is to start by having one logit linked linear predictor for each class K, plus some normalization factor to ensure that the total sum of the probabilities over all classes equals to one. This resulting system of K equations is a system of log-linear probabilistic models:"
},
{
"code": null,
"e": 3231,
"s": 2943,
"text": "The ln(Z) term in the above system of equations is the (log of the) normalization factor, and Z is known as the partition function. As we are dealing with multinomial regression, this system of equations gives probabilities which are categorically distributed: Yi | xi ~ Categorical(pi)."
},
{
"code": null,
"e": 3286,
"s": 3231,
"text": "Exponentiating both sides and imposing the constraint:"
},
{
"code": null,
"e": 3292,
"s": 3286,
"text": "gives"
},
{
"code": null,
"e": 3437,
"s": 3292,
"text": "The RHS of the equation above is called the Gibbs measure and connects the softmax function to statistical mechanics. Next, solving for Z gives:"
},
{
"code": null,
"e": 3482,
"s": 3437,
"text": "And finally the system of equations becomes:"
},
{
"code": null,
"e": 3593,
"s": 3482,
"text": "The ratio on the RHS of each equation is the softmax function. In general, the softmax function is defined as:"
},
{
"code": null,
"e": 4242,
"s": 3593,
"text": "for j = 1 ... K. We can see that the softmax function normalizes a K dimensional vector z of arbitrary real values into a K dimensional vector σ(z) whose components sum to 1 (in other words, a probability vector), and it also provides a weighted average of each zj relative to the aggregate of zj’s in a way that exaggerates differences (returns a value close to 0 or 1) if the zj’s are very different from each other in terms of scale, but returns a moderate value if zj’s are relatively the same scale. It is desirable for a classifier model to learn parameters which give the former condition rather than the latter (i.e decisive vs indecisive)."
},
{
"code": null,
"e": 4700,
"s": 4242,
"text": "Finally, just as the logit function is the link function for simple logistic regression and the logistic function is the inverse of the logit function, the multinomial logit function is the link function for multinomial logistic regression and the softmax can be thought of as the inverse of the multinomial logit function. Typically in multinomial logistic regression, maximum a-posterior (MAP) estimation is used to find the parameters β for each class k."
},
{
"code": null,
"e": 4933,
"s": 4700,
"text": "Now that we have seen where the softmax function comes from, it’s time for us to use them in our neural net classifier models. The loss function to be minimized on softmax output layer equipped neural nets is the cross-entropy loss:"
},
{
"code": null,
"e": 5185,
"s": 4933,
"text": "where y is the true label for some iteration i and ŷ is the neural network output at iteration i. This loss function is in fact the same one used for simple and multinomial logistic regression. The general definition of the cross-entropy function is:"
},
{
"code": null,
"e": 5688,
"s": 5185,
"text": "The cross-entropy between p and q is defined as the sum of the information entropy of distribution p, where p is some underlying true distribution (in this case would be the categorical distribution of true class labels) and the Kullback–Leibler divergence of the distribution q which is our attempt at approximating p and p itself. Optimizing over this function minimizes the information entropy of p (giving more certain outcomes in p) while at the same time minimizes the ‘distance’ between p and q."
},
{
"code": null,
"e": 6469,
"s": 5688,
"text": "A theoretical treatment of using the softmax in neural nets as the output layer activation is given in Bridle’s article. The gist of the article is that using the softmax output layer with the neural network hidden layer output as each zj, trained with the cross-entropy loss gives the posterior distribution (the categorical distribution) over the class labels. In general deep neural nets can vastly outperform simple and multinomial logistic regression at the expense of not being able to provide statistical significance of the features/parameters, which is a very important aspect of inference or finding out which features affect the outcome of the classification. The complete neural network is optimized using a robust optimizer of choice; RMSprop is usually a good start."
},
{
"code": null,
"e": 7055,
"s": 6469,
"text": "So now we’ll whip up a deep feedforward neural net classifier using the Keras functional API and do some wine classification. We’ll use an ensemble model of several neural nets to give us a robust classification (in practice this is what you should do, the variances in predictions of individual neural nets due to random initialization and stochastic gradient training must be averaged out for good results). The output of the ensemble model should give a vector of probabilities that some test example will belong to each class, i.e. a categorical distribution over the class labels."
},
{
"code": null,
"e": 7431,
"s": 7055,
"text": "One way to aggregate the results of each individual neural net model is to use a softmax at the ensemble output to give a final probability. In order to automatically determine the optimal weighting of the final softmax averaging, we’ll tack on another layer ‘gluing together’ the outputs of each individual neural net in the ensemble. A diagram of the architecture is below."
},
{
"code": null,
"e": 8068,
"s": 7431,
"text": "Each learning model will be differentiable from the final softmax aggregate output backwards. We can merge each of the sub-networks together using the Keras concatenate-merge layer. The concatenate layer concatenates the output tensors from each sub-network and allows the optimizer to optimize over the merged model. To simplify our training, each learning model will be trained on the same dataset. Bootstrapped sub-sets can be used but this makes it more complicated to train, as we would have to train each sub-network individually on its own input and target pair while freezing training updates on the rest of the learning models."
},
{
"code": null,
"e": 10615,
"s": 8068,
"text": "import numpy as np\nfrom sklearn.datasets import load_wine\nfrom sklearn.preprocessing import MinMaxScaler, OneHotEncoder\nfrom keras.layers import Dense, Input, concatenate, Dropout\nfrom keras.models import Model\nfrom keras.optimizers import rmsprop\n\ndataset = load_wine()\n\nensemble_num = 10 # number of sub-networks\nbootstrap_size = 0.8 # 80% size of original (training) dataset\ntraining_size = 0.8 # 80% for training, 20% for test\n\nnum_hidden_neurons = 10 # number of neurons in hidden layer\ndropout = 0.25 # percentage of weights dropped out before softmax output (this prevents overfitting)\n\nepochs = 200 # number of epochs (complete training episodes over the training set) to run\nbatch = 10 # mini batch size for better convergence\n\n# get the holdout training and test set\ntemp = []\nscaler = MinMaxScaler()\none_hot = OneHotEncoder() # one hot encode the target classes\ndataset['data'] = scaler.fit_transform(dataset['data'])\ndataset['target'] = one_hot.fit_transform(np.reshape(dataset['target'], (-1,1)) ).toarray()\nfor i in range(len(dataset.data)):\n temp.append([dataset['data'][i], np.array(dataset['target'][i])])\n\n# shuffle the row of data and targets\ntemp = np.array(temp)\nnp.random.shuffle(temp)\n# holdout training and test stop index\nstop = int(training_size*len(dataset.data))\n\ntrain_X = np.array([x for x in temp[:stop,0]])\ntrain_Y = np.array([x for x in temp[:stop,1]])\ntest_X = np.array([x for x in temp[stop:,0]])\ntest_Y = np.array([x for x in temp[stop:,1]])\n\n# now build the ensemble neural network\n# first, let's build the individual sub-networks, each\n# as a Keras functional model.\nsub_net_outputs = []\nsub_net_inputs = []\nfor i in range(ensemble_num):\n # two hidden layers to keep it simple\n # specify input shape to the shape of the training set\n net_input = Input(shape = (train_X.shape[1],))\n sub_net_inputs.append(net_input)\n y = Dense(num_hidden_neurons)(net_input)\n y = Dense(num_hidden_neurons)(y)\n y = Dropout(dropout)(y)\n sub_net_outputs.append(y) # sub_nets contains the output tensors\n\n# now concatenate the output tensors\ny = concatenate(sub_net_outputs)\n\n# final softmax output layer\ny = Dense(train_Y[0].shape[0], activation='softmax')(y)\n\n# now build the whole funtional model\nmodel = Model(inputs=sub_net_inputs, outputs=y)\nmodel.compile(optimizer='rmsprop', loss='categorical_crossentropy')\n\nprint(\"Begin training...\")\n\n# train the model\nmodel.fit( [train_X] * ensemble_num, train_Y,validation_data=[ [test_X] * ensemble_num, test_Y],\n epochs=epochs, batch_size=batch)\n"
},
{
"code": null,
"e": 10642,
"s": 10615,
"text": "Using TensorFlow backend.\n"
},
{
"code": null,
"e": 29634,
"s": 10642,
"text": "Begin training...\nTrain on 142 samples, validate on 36 samples\nEpoch 1/200\n142/142 [==============================] - 0s - loss: 1.2639 - val_loss: 0.9777\nEpoch 2/200\n142/142 [==============================] - 0s - loss: 1.0023 - val_loss: 0.8191\nEpoch 3/200\n142/142 [==============================] - 0s - loss: 0.7580 - val_loss: 0.6661\nEpoch 4/200\n142/142 [==============================] - 0s - loss: 0.6359 - val_loss: 0.5535\nEpoch 5/200\n142/142 [==============================] - 0s - loss: 0.5533 - val_loss: 0.4836\nEpoch 6/200\n142/142 [==============================] - 0s - loss: 0.4388 - val_loss: 0.4176\nEpoch 7/200\n142/142 [==============================] - 0s - loss: 0.3697 - val_loss: 0.3526\nEpoch 8/200\n142/142 [==============================] - 0s - loss: 0.3315 - val_loss: 0.3002\nEpoch 9/200\n142/142 [==============================] - 0s - loss: 0.2763 - val_loss: 0.2710\nEpoch 10/200\n142/142 [==============================] - 0s - loss: 0.2453 - val_loss: 0.2417\nEpoch 11/200\n142/142 [==============================] - 0s - loss: 0.2260 - val_loss: 0.2598\nEpoch 12/200\n142/142 [==============================] - 0s - loss: 0.1936 - val_loss: 0.1944\nEpoch 13/200\n142/142 [==============================] - 0s - loss: 0.1860 - val_loss: 0.1998\nEpoch 14/200\n142/142 [==============================] - 0s - loss: 0.1461 - val_loss: 0.1617\nEpoch 15/200\n142/142 [==============================] - 0s - loss: 0.1544 - val_loss: 0.1795\nEpoch 16/200\n142/142 [==============================] - 0s - loss: 0.1296 - val_loss: 0.1413\nEpoch 17/200\n142/142 [==============================] - 0s - loss: 0.1295 - val_loss: 0.1426\nEpoch 18/200\n142/142 [==============================] - 0s - loss: 0.0989 - val_loss: 0.1281\nEpoch 19/200\n142/142 [==============================] - 0s - loss: 0.1128 - val_loss: 0.1735\nEpoch 20/200\n142/142 [==============================] - 0s - loss: 0.1035 - val_loss: 0.1105\nEpoch 21/200\n142/142 [==============================] - 0s - loss: 0.1006 - val_loss: 0.1145\nEpoch 22/200\n142/142 [==============================] - 0s - loss: 0.0940 - val_loss: 0.1156\nEpoch 23/200\n142/142 [==============================] - 0s - loss: 0.0735 - val_loss: 0.1078\nEpoch 24/200\n142/142 [==============================] - 0s - loss: 0.0685 - val_loss: 0.1099\nEpoch 25/200\n142/142 [==============================] - 0s - loss: 0.0850 - val_loss: 0.1055\nEpoch 26/200\n142/142 [==============================] - 0s - loss: 0.0683 - val_loss: 0.1144\nEpoch 27/200\n142/142 [==============================] - 0s - loss: 0.0642 - val_loss: 0.1003\nEpoch 28/200\n142/142 [==============================] - 0s - loss: 0.0680 - val_loss: 0.0795\nEpoch 29/200\n142/142 [==============================] - 0s - loss: 0.0627 - val_loss: 0.0921\nEpoch 30/200\n142/142 [==============================] - 0s - loss: 0.0566 - val_loss: 0.0669\nEpoch 31/200\n142/142 [==============================] - 0s - loss: 0.0520 - val_loss: 0.1085\nEpoch 32/200\n142/142 [==============================] - 0s - loss: 0.0484 - val_loss: 0.1429\nEpoch 33/200\n142/142 [==============================] - 0s - loss: 0.0532 - val_loss: 0.0741\nEpoch 34/200\n142/142 [==============================] - 0s - loss: 0.0440 - val_loss: 0.0610\nEpoch 35/200\n142/142 [==============================] - 0s - loss: 0.0467 - val_loss: 0.0787\nEpoch 36/200\n142/142 [==============================] - 0s - loss: 0.0413 - val_loss: 0.0805\nEpoch 37/200\n142/142 [==============================] - 0s - loss: 0.0439 - val_loss: 0.1362\nEpoch 38/200\n142/142 [==============================] - 0s - loss: 0.0567 - val_loss: 0.0580\nEpoch 39/200\n142/142 [==============================] - 0s - loss: 0.0354 - val_loss: 0.0602\nEpoch 40/200\n142/142 [==============================] - 0s - loss: 0.0344 - val_loss: 0.1200\nEpoch 41/200\n142/142 [==============================] - 0s - loss: 0.0342 - val_loss: 0.0465\nEpoch 42/200\n142/142 [==============================] - 0s - loss: 0.0482 - val_loss: 0.0505\nEpoch 43/200\n142/142 [==============================] - 0s - loss: 0.0428 - val_loss: 0.1038\nEpoch 44/200\n142/142 [==============================] - 0s - loss: 0.0360 - val_loss: 0.0618\nEpoch 45/200\n142/142 [==============================] - 0s - loss: 0.0332 - val_loss: 0.1045\nEpoch 46/200\n142/142 [==============================] - 0s - loss: 0.0302 - val_loss: 0.1006\nEpoch 47/200\n142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0840\nEpoch 48/200\n142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0687\nEpoch 49/200\n142/142 [==============================] - 0s - loss: 0.0331 - val_loss: 0.0892\nEpoch 50/200\n142/142 [==============================] - 0s - loss: 0.0318 - val_loss: 0.0952\nEpoch 51/200\n142/142 [==============================] - 0s - loss: 0.0214 - val_loss: 0.0860\nEpoch 52/200\n142/142 [==============================] - 0s - loss: 0.0207 - val_loss: 0.0601\nEpoch 53/200\n142/142 [==============================] - 0s - loss: 0.0188 - val_loss: 0.0465\nEpoch 54/200\n142/142 [==============================] - 0s - loss: 0.0296 - val_loss: 0.0494\nEpoch 55/200\n142/142 [==============================] - 0s - loss: 0.0309 - val_loss: 0.0511\nEpoch 56/200\n142/142 [==============================] - 0s - loss: 0.0176 - val_loss: 0.1213\nEpoch 57/200\n142/142 [==============================] - 0s - loss: 0.0195 - val_loss: 0.0320\nEpoch 58/200\n142/142 [==============================] - 0s - loss: 0.0316 - val_loss: 0.0614\nEpoch 59/200\n142/142 [==============================] - 0s - loss: 0.0212 - val_loss: 0.0579\nEpoch 60/200\n142/142 [==============================] - 0s - loss: 0.0172 - val_loss: 0.0768\nEpoch 61/200\n142/142 [==============================] - 0s - loss: 0.0207 - val_loss: 0.0546\nEpoch 62/200\n142/142 [==============================] - 0s - loss: 0.0231 - val_loss: 0.0723\nEpoch 63/200\n142/142 [==============================] - 0s - loss: 0.0184 - val_loss: 0.0624\nEpoch 64/200\n142/142 [==============================] - 0s - loss: 0.0168 - val_loss: 0.0860\nEpoch 65/200\n142/142 [==============================] - 0s - loss: 0.0162 - val_loss: 0.0573\nEpoch 66/200\n142/142 [==============================] - 0s - loss: 0.0130 - val_loss: 0.0236\nEpoch 67/200\n142/142 [==============================] - 0s - loss: 0.0201 - val_loss: 0.0453\nEpoch 68/200\n142/142 [==============================] - 0s - loss: 0.0156 - val_loss: 0.0375\nEpoch 69/200\n142/142 [==============================] - 0s - loss: 0.0111 - val_loss: 0.0543\nEpoch 70/200\n142/142 [==============================] - 0s - loss: 0.0122 - val_loss: 0.0284\nEpoch 71/200\n142/142 [==============================] - 0s - loss: 0.0124 - val_loss: 0.0637\nEpoch 72/200\n142/142 [==============================] - 0s - loss: 0.0173 - val_loss: 0.0505\nEpoch 73/200\n142/142 [==============================] - 0s - loss: 0.0181 - val_loss: 0.0522\nEpoch 74/200\n142/142 [==============================] - 0s - loss: 0.0145 - val_loss: 0.0442\nEpoch 75/200\n142/142 [==============================] - 0s - loss: 0.0098 - val_loss: 0.0357\nEpoch 76/200\n142/142 [==============================] - 0s - loss: 0.0084 - val_loss: 0.0290\nEpoch 77/200\n142/142 [==============================] - 0s - loss: 0.0166 - val_loss: 0.0513\nEpoch 78/200\n142/142 [==============================] - 0s - loss: 0.0069 - val_loss: 0.0576\nEpoch 79/200\n142/142 [==============================] - 0s - loss: 0.0058 - val_loss: 0.0383\nEpoch 80/200\n142/142 [==============================] - 0s - loss: 0.0099 - val_loss: 0.0443\nEpoch 81/200\n142/142 [==============================] - 0s - loss: 0.0073 - val_loss: 0.0774\nEpoch 82/200\n142/142 [==============================] - 0s - loss: 0.0092 - val_loss: 0.0427\nEpoch 83/200\n142/142 [==============================] - 0s - loss: 0.0063 - val_loss: 0.0514\nEpoch 84/200\n142/142 [==============================] - 0s - loss: 0.0061 - val_loss: 0.0288\nEpoch 85/200\n142/142 [==============================] - 0s - loss: 0.0086 - val_loss: 0.0276\nEpoch 86/200\n142/142 [==============================] - 0s - loss: 0.0040 - val_loss: 0.0549\nEpoch 87/200\n142/142 [==============================] - 0s - loss: 0.0041 - val_loss: 0.0504\nEpoch 88/200\n142/142 [==============================] - 0s - loss: 0.0058 - val_loss: 0.0253\nEpoch 89/200\n142/142 [==============================] - 0s - loss: 0.0049 - val_loss: 0.0313\nEpoch 90/200\n142/142 [==============================] - 0s - loss: 0.0064 - val_loss: 0.0318\nEpoch 91/200\n142/142 [==============================] - 0s - loss: 0.0037 - val_loss: 0.0977\nEpoch 92/200\n142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0166\nEpoch 93/200\n142/142 [==============================] - 0s - loss: 0.0061 - val_loss: 0.0215\nEpoch 94/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0422\nEpoch 95/200\n142/142 [==============================] - 0s - loss: 0.0083 - val_loss: 0.0389\nEpoch 96/200\n142/142 [==============================] - 0s - loss: 0.0088 - val_loss: 0.0499\nEpoch 97/200\n142/142 [==============================] - 0s - loss: 0.0047 - val_loss: 0.0300\nEpoch 98/200\n142/142 [==============================] - 0s - loss: 0.0026 - val_loss: 0.0335\nEpoch 99/200\n142/142 [==============================] - 0s - loss: 0.0053 - val_loss: 0.0523\nEpoch 100/200\n142/142 [==============================] - 0s - loss: 0.0029 - val_loss: 0.0359\nEpoch 101/200\n142/142 [==============================] - 0s - loss: 0.0048 - val_loss: 0.0796\nEpoch 102/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0224\nEpoch 103/200\n142/142 [==============================] - 0s - loss: 0.0039 - val_loss: 0.0922\nEpoch 104/200\n142/142 [==============================] - 0s - loss: 0.0077 - val_loss: 0.0364\nEpoch 105/200\n142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0436\nEpoch 106/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0407\nEpoch 107/200\n142/142 [==============================] - 0s - loss: 0.0031 - val_loss: 0.0633\nEpoch 108/200\n142/142 [==============================] - 0s - loss: 0.0032 - val_loss: 0.0255\nEpoch 109/200\n142/142 [==============================] - 0s - loss: 0.0030 - val_loss: 0.0595\nEpoch 110/200\n142/142 [==============================] - 0s - loss: 0.0028 - val_loss: 0.0169\nEpoch 111/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0486\nEpoch 112/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0299\nEpoch 113/200\n142/142 [==============================] - 0s - loss: 0.0037 - val_loss: 0.0408\nEpoch 114/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0811\nEpoch 115/200\n142/142 [==============================] - 0s - loss: 0.0026 - val_loss: 0.0164\nEpoch 116/200\n142/142 [==============================] - 0s - loss: 0.0018 - val_loss: 0.0361\nEpoch 117/200\n142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0225\nEpoch 118/200\n142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0351\nEpoch 119/200\n142/142 [==============================] - 0s - loss: 0.0016 - val_loss: 0.0710\nEpoch 120/200\n142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0173\nEpoch 121/200\n142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0276\nEpoch 122/200\n142/142 [==============================] - 0s - loss: 0.0017 - val_loss: 0.0368\nEpoch 123/200\n142/142 [==============================] - 0s - loss: 6.6628e-04 - val_loss: 0.0468\nEpoch 124/200\n142/142 [==============================] - 0s - loss: 0.0016 - val_loss: 0.0216\nEpoch 125/200\n142/142 [==============================] - 0s - loss: 0.0024 - val_loss: 0.0191\nEpoch 126/200\n142/142 [==============================] - 0s - loss: 0.0022 - val_loss: 0.0111\nEpoch 127/200\n142/142 [==============================] - 0s - loss: 0.0018 - val_loss: 0.0152\nEpoch 128/200\n142/142 [==============================] - 0s - loss: 0.0017 - val_loss: 0.0340\nEpoch 129/200\n142/142 [==============================] - 0s - loss: 0.0010 - val_loss: 0.0253\nEpoch 130/200\n142/142 [==============================] - 0s - loss: 0.0015 - val_loss: 0.0298\nEpoch 131/200\n142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0552\nEpoch 132/200\n142/142 [==============================] - 0s - loss: 0.0025 - val_loss: 0.0230\nEpoch 133/200\n142/142 [==============================] - 0s - loss: 0.0012 - val_loss: 0.0148\nEpoch 134/200\n142/142 [==============================] - 0s - loss: 0.0019 - val_loss: 0.0290\nEpoch 135/200\n142/142 [==============================] - 0s - loss: 0.0011 - val_loss: 0.0321\nEpoch 136/200\n142/142 [==============================] - 0s - loss: 0.0014 - val_loss: 0.0224\nEpoch 137/200\n142/142 [==============================] - 0s - loss: 5.2285e-04 - val_loss: 0.0362\nEpoch 138/200\n142/142 [==============================] - 0s - loss: 3.7215e-04 - val_loss: 0.0190\nEpoch 139/200\n142/142 [==============================] - 0s - loss: 0.0030 - val_loss: 0.0438\nEpoch 140/200\n142/142 [==============================] - 0s - loss: 9.2121e-04 - val_loss: 0.0222\nEpoch 141/200\n142/142 [==============================] - 0s - loss: 0.0029 - val_loss: 0.0276\nEpoch 142/200\n142/142 [==============================] - 0s - loss: 5.9754e-04 - val_loss: 0.0195\nEpoch 143/200\n142/142 [==============================] - 0s - loss: 7.1640e-04 - val_loss: 0.0138\nEpoch 144/200\n142/142 [==============================] - 0s - loss: 0.0015 - val_loss: 0.0845\nEpoch 145/200\n142/142 [==============================] - 0s - loss: 8.5187e-04 - val_loss: 0.0247\nEpoch 146/200\n142/142 [==============================] - 0s - loss: 3.6506e-04 - val_loss: 0.0223\nEpoch 147/200\n142/142 [==============================] - 0s - loss: 0.0038 - val_loss: 0.0391\nEpoch 148/200\n142/142 [==============================] - 0s - loss: 4.0802e-04 - val_loss: 0.0439\nEpoch 149/200\n142/142 [==============================] - 0s - loss: 0.0011 - val_loss: 0.0263\nEpoch 150/200\n142/142 [==============================] - 0s - loss: 2.8790e-04 - val_loss: 0.0331\nEpoch 151/200\n142/142 [==============================] - 0s - loss: 6.4799e-04 - val_loss: 0.0251\nEpoch 152/200\n142/142 [==============================] - 0s - loss: 6.8637e-04 - val_loss: 0.0267\nEpoch 153/200\n142/142 [==============================] - 0s - loss: 3.1719e-04 - val_loss: 0.0750\nEpoch 154/200\n142/142 [==============================] - 0s - loss: 3.3267e-04 - val_loss: 0.0246\nEpoch 155/200\n142/142 [==============================] - 0s - loss: 2.5407e-04 - val_loss: 0.0841\nEpoch 156/200\n142/142 [==============================] - 0s - loss: 9.4721e-04 - val_loss: 0.0323\nEpoch 157/200\n142/142 [==============================] - 0s - loss: 2.5022e-04 - val_loss: 0.0311\nEpoch 158/200\n142/142 [==============================] - 0s - loss: 2.3520e-04 - val_loss: 0.0111\nEpoch 159/200\n142/142 [==============================] - 0s - loss: 8.9580e-04 - val_loss: 0.0795\nEpoch 160/200\n142/142 [==============================] - 0s - loss: 2.6490e-04 - val_loss: 0.0306\nEpoch 161/200\n142/142 [==============================] - 0s - loss: 4.8849e-04 - val_loss: 0.0344\nEpoch 162/200\n142/142 [==============================] - 0s - loss: 0.0013 - val_loss: 0.0570\nEpoch 163/200\n142/142 [==============================] - 0s - loss: 2.7203e-04 - val_loss: 0.0501\nEpoch 164/200\n142/142 [==============================] - 0s - loss: 1.9163e-04 - val_loss: 0.0364\nEpoch 165/200\n142/142 [==============================] - 0s - loss: 2.8227e-04 - val_loss: 0.0360\nEpoch 166/200\n142/142 [==============================] - 0s - loss: 2.2753e-04 - val_loss: 0.0315\nEpoch 167/200\n142/142 [==============================] - 0s - loss: 2.3544e-04 - val_loss: 0.1090\nEpoch 168/200\n142/142 [==============================] - 0s - loss: 2.5797e-04 - val_loss: 0.0301\nEpoch 169/200\n142/142 [==============================] - 0s - loss: 1.0038e-04 - val_loss: 0.0390\nEpoch 170/200\n142/142 [==============================] - 0s - loss: 4.4550e-04 - val_loss: 0.0189\nEpoch 171/200\n142/142 [==============================] - 0s - loss: 1.5940e-04 - val_loss: 0.0347\nEpoch 172/200\n142/142 [==============================] - 0s - loss: 1.1927e-04 - val_loss: 0.0388\nEpoch 173/200\n142/142 [==============================] - 0s - loss: 3.4964e-04 - val_loss: 0.0160\nEpoch 174/200\n142/142 [==============================] - 0s - loss: 1.3943e-04 - val_loss: 0.0173\nEpoch 175/200\n142/142 [==============================] - 0s - loss: 2.2197e-04 - val_loss: 0.0536\nEpoch 176/200\n142/142 [==============================] - 0s - loss: 3.6663e-04 - val_loss: 0.0437\nEpoch 177/200\n142/142 [==============================] - 0s - loss: 1.1296e-04 - val_loss: 0.0302\nEpoch 178/200\n142/142 [==============================] - 0s - loss: 1.7213e-04 - val_loss: 0.0206\nEpoch 179/200\n142/142 [==============================] - 0s - loss: 9.8614e-05 - val_loss: 0.0211\nEpoch 180/200\n142/142 [==============================] - 0s - loss: 8.6613e-05 - val_loss: 0.0147\nEpoch 181/200\n142/142 [==============================] - 0s - loss: 2.8344e-04 - val_loss: 0.0244\nEpoch 182/200\n142/142 [==============================] - 0s - loss: 6.9407e-05 - val_loss: 0.0421\nEpoch 183/200\n142/142 [==============================] - 0s - loss: 4.7512e-04 - val_loss: 0.0404\nEpoch 184/200\n142/142 [==============================] - 0s - loss: 3.5294e-04 - val_loss: 0.0420\nEpoch 185/200\n142/142 [==============================] - 0s - loss: 6.7377e-05 - val_loss: 0.0320\nEpoch 186/200\n142/142 [==============================] - 0s - loss: 6.7105e-05 - val_loss: 0.0679\nEpoch 187/200\n142/142 [==============================] - 0s - loss: 8.9703e-05 - val_loss: 0.0457\nEpoch 188/200\n142/142 [==============================] - 0s - loss: 1.1150e-04 - val_loss: 0.0389\nEpoch 189/200\n142/142 [==============================] - 0s - loss: 1.9544e-04 - val_loss: 0.0639\nEpoch 190/200\n142/142 [==============================] - 0s - loss: 7.3062e-05 - val_loss: 0.0403\nEpoch 191/200\n142/142 [==============================] - 0s - loss: 4.3287e-05 - val_loss: 0.0162\nEpoch 192/200\n142/142 [==============================] - 0s - loss: 4.9369e-05 - val_loss: 0.0147\nEpoch 193/200\n142/142 [==============================] - 0s - loss: 1.0524e-04 - val_loss: 0.0321\nEpoch 194/200\n142/142 [==============================] - 0s - loss: 5.1740e-05 - val_loss: 0.1401\nEpoch 195/200\n142/142 [==============================] - 0s - loss: 9.1553e-05 - val_loss: 0.0299\nEpoch 196/200\n142/142 [==============================] - 0s - loss: 1.6222e-05 - val_loss: 0.0209\nEpoch 197/200\n142/142 [==============================] - 0s - loss: 7.6026e-05 - val_loss: 0.0254\nEpoch 198/200\n142/142 [==============================] - 0s - loss: 5.7551e-05 - val_loss: 0.0281\nEpoch 199/200\n142/142 [==============================] - 0s - loss: 1.9276e-05 - val_loss: 0.0433\nEpoch 200/200\n142/142 [==============================] - 0s - loss: 7.5447e-05 - val_loss: 0.0095\n"
},
{
"code": null,
"e": 29677,
"s": 29634,
"text": "<keras.callbacks.History at 0x2d9142dfbe0>"
},
{
"code": null,
"e": 29920,
"s": 29677,
"text": "print(\"Training complete...\")\nnp.set_printoptions(precision=2,suppress=True)\nfor i in range(len(test_X)):\n print(\"Prediction: \" + str(model.predict([test_X[i].reshape(1,test_X[i].shape[0])] * ensemble_num)) + \" | True: \" + str(test_Y[i]))\n"
},
{
"code": null,
"e": 31766,
"s": 29920,
"text": "Training complete...\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 0.88 0.12]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0.01 0. 0.99]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0.01 0.99 0. ]] | True: [ 0. 1. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0.82 0.18 0. ]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 0. 1.]] | True: [ 0. 0. 1.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 1. 0. 0.]] | True: [ 1. 0. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\nPrediction: [[ 0. 1. 0.]] | True: [ 0. 1. 0.]\n"
},
{
"code": null,
"e": 32315,
"s": 31766,
"text": "It can be seen from the results of training that the fancy wines are no match for our ensemble classifier. As well, feature selection is not terribly important for our model, as it learns the dataset quite well using all features. The training and validation losses become small to the order of 10^-5 and 10^-3 respectively after 200 epochs, and this indicates our ensemble neural net model is doing a good job of fitting the data and predicting on the test set. The output probabilities are nearly 100% for the correct class and 0% for the others."
}
] |
Feature Selection and Dimensionality Reduction | by Tara Boyle | Towards Data Science | According to wikipedia, “feature selection is the process of selecting a subset of relevant features for use in model construction” or in other words, the selection of the most important features.
In normal circumstances, domain knowledge plays an important role and we could select features we feel would be the most important. For example, in predicting home prices the number of bedrooms and square footage are often considered important.
Unfortunately, here in the Don’t Overfit II competition, the use of domain knowledge is impossible as we have a binary target and 300 continuous variables “of mysterious origin” forcing us to try automatic feature selection techniques.
The full notebook can be found here.
Often, feature selection and dimensionality reduction are grouped together (like here in this article). While both methods are used for reducing the number of features in a dataset, there is an important difference.
Feature selection is simply selecting and excluding given features without changing them.
Dimensionality reduction transforms features into a lower dimension.
In this article we will explore the following feature selection and dimensionality reduction techniques:
Remove features with missing values
Remove features with low variance
Remove highly correlated features
Univariate feature selection
Recursive feature elimination
Feature selection using SelectFromModel
PCA
We’ll use logistic regression as our baseline model. We first split into test and train sets and scale the data:
# prepare for modelingX_train_df = train.drop(['id', 'target'], axis=1)y_train = train['target']# scaling datascaler = StandardScaler()X_train = scaler.fit_transform(X_train_df)lr = LogisticRegression(solver='liblinear')lr_scores = cross_val_score(lr, X_train, y_train, cv=5, scoring='roc_auc')print('LR Scores: ', lr_scores)LR Scores: [0.80729167 0.71875 0.734375 0.80034722 0.66319444]
We can see the model is overfitting from the variation in cross validation scores. We can attempt to improve these scores through feature selection.
Checking for missing values is a good first step in any machine learning problem. We can then remove columns exceeding a threshold we define.
# check missing valuestrain.isnull().any().any()False
Unfortunately for our dimensionality reduction efforts, this dataset has zero missing values.
In sklearn’s feature selection module we find VarianceThreshold. It removes all features whose variance doesn’t meet some threshold. By default it removes features with zero variance or features that have the same value for all samples.
from sklearn import feature_selectionsel = feature_selection.VarianceThreshold()train_variance = sel.fit_transform(train)train_variance.shape(250, 302)
The competition description stated that our features are all continuous. We can see from above there are no features with the same value in all columns, so we have no features to remove here.
We can always revisit this technique and consider removing features with low variance.
Features that are highly correlated or co-linear can cause overfitting.
When a pair of variables are highly correlated we can remove one to reduce dimensionality without much loss of information. Which one should we keep? The one with a higher correlation to the target.
Let’s explore correlations among our features:
# find correlations to targetcorr_matrix = train.corr().abs()print(corr_matrix['target'].sort_values(ascending=False).head(10))target 1.00000033 0.37360865 0.293846217 0.207215117 0.19749691 0.19253624 0.173096295 0.17050173 0.167557183 0.164146
Here we see the features that are most highly correlated with our target variable. Feature 33 has the highest correlation to the target, but with a correlation value of only 0.37, it is only weakly correlated.
We can also check the correlation of features to other features. Below we can visualize a correlation matrix. It looks like none of our features are very highly correlated.
Let’s try to drop features with a correlation value greater than 0.5:
# Find index of feature columns with high correlationto_drop = [column for column in matrix.columns if any(matrix[column] > 0.50)]print('Columns to drop: ' , (len(to_drop)))Columns to drop: 0
We have no columns to drop using highly correlated features. Let’s continue to explore other strategies.
Univariate feature selection works by selecting the best features based on univariate statistical tests.
We can use sklearn’s SelectKBest to select a number of features to keep. This method uses statistical tests to select features having the highest correlation to the target. Here we will keep the top 100 features.
from sklearn.feature_selection import SelectKBest, f_classif# feature extractionk_best = SelectKBest(score_func=f_classif, k=100)# fit on train setfit = k_best.fit(X_train, y_train)# transform train setunivariate_features = fit.transform(X_train)
Recursive feature selection works by eliminating the least important features. It continues recursively until the specified number of features is reached. Recursive elimination can be used with any model that assigns weights to features, either through coef_ or feature_importances_
Here we will use Random Forest to select the 100 best features:
from sklearn.feature_selection import RFE# feature extractionrfe = RFE(rfc, n_features_to_select=100)# fit on train setfit = rfe.fit(X_train, y_train)# transform train setrecursive_features = fit.transform(X_train)
Like recursive feature selection, sklearn’s SelectFromModel is used with any estimator that has a coef_ or feature_importances_ attribute. It removes features with values below a set threshold.
from sklearn.feature_selection import SelectFromModelfrom sklearn.ensemble import RandomForestClassifier# define modelrfc = RandomForestClassifier(n_estimators=100)# feature extractionselect_model = feature_selection.SelectFromModel(rfc)# fit on train setfit = select_model.fit(X_train, y_train)# transform train setmodel_features = fit.transform(X_train)
PCA (Principle Component Analysis) is a dimensionality reduction technique that projects the data into a lower dimensional space.
While there are many effective dimensionality reduction techniques, PCA is the only example we will explore here.
PCA can be useful in many situations, but especially in cases with excessive multicollinearity or explanation of predictors is not a priority.
Here we will apply PCA and keep 90% of the variance:
from sklearn.decomposition import PCA# pca - keep 90% of variancepca = PCA(0.90)principal_components = pca.fit_transform(X_train)principal_df = pd.DataFrame(data = principal_components)print(principal_df.shape)(250, 139)
We can see that we are left with 139 features that explain 90% of the variance in our data.
Feature selection is an important part of any machine learning process. Here we explored several methods for feature selection and dimensionality reduction that can aid in improving model performance. | [
{
"code": null,
"e": 369,
"s": 172,
"text": "According to wikipedia, “feature selection is the process of selecting a subset of relevant features for use in model construction” or in other words, the selection of the most important features."
},
{
"code": null,
"e": 614,
"s": 369,
"text": "In normal circumstances, domain knowledge plays an important role and we could select features we feel would be the most important. For example, in predicting home prices the number of bedrooms and square footage are often considered important."
},
{
"code": null,
"e": 850,
"s": 614,
"text": "Unfortunately, here in the Don’t Overfit II competition, the use of domain knowledge is impossible as we have a binary target and 300 continuous variables “of mysterious origin” forcing us to try automatic feature selection techniques."
},
{
"code": null,
"e": 887,
"s": 850,
"text": "The full notebook can be found here."
},
{
"code": null,
"e": 1103,
"s": 887,
"text": "Often, feature selection and dimensionality reduction are grouped together (like here in this article). While both methods are used for reducing the number of features in a dataset, there is an important difference."
},
{
"code": null,
"e": 1193,
"s": 1103,
"text": "Feature selection is simply selecting and excluding given features without changing them."
},
{
"code": null,
"e": 1262,
"s": 1193,
"text": "Dimensionality reduction transforms features into a lower dimension."
},
{
"code": null,
"e": 1367,
"s": 1262,
"text": "In this article we will explore the following feature selection and dimensionality reduction techniques:"
},
{
"code": null,
"e": 1403,
"s": 1367,
"text": "Remove features with missing values"
},
{
"code": null,
"e": 1437,
"s": 1403,
"text": "Remove features with low variance"
},
{
"code": null,
"e": 1471,
"s": 1437,
"text": "Remove highly correlated features"
},
{
"code": null,
"e": 1500,
"s": 1471,
"text": "Univariate feature selection"
},
{
"code": null,
"e": 1530,
"s": 1500,
"text": "Recursive feature elimination"
},
{
"code": null,
"e": 1570,
"s": 1530,
"text": "Feature selection using SelectFromModel"
},
{
"code": null,
"e": 1574,
"s": 1570,
"text": "PCA"
},
{
"code": null,
"e": 1687,
"s": 1574,
"text": "We’ll use logistic regression as our baseline model. We first split into test and train sets and scale the data:"
},
{
"code": null,
"e": 2189,
"s": 1687,
"text": "# prepare for modelingX_train_df = train.drop(['id', 'target'], axis=1)y_train = train['target']# scaling datascaler = StandardScaler()X_train = scaler.fit_transform(X_train_df)lr = LogisticRegression(solver='liblinear')lr_scores = cross_val_score(lr, X_train, y_train, cv=5, scoring='roc_auc')print('LR Scores: ', lr_scores)LR Scores: [0.80729167 0.71875 0.734375 0.80034722 0.66319444]"
},
{
"code": null,
"e": 2338,
"s": 2189,
"text": "We can see the model is overfitting from the variation in cross validation scores. We can attempt to improve these scores through feature selection."
},
{
"code": null,
"e": 2480,
"s": 2338,
"text": "Checking for missing values is a good first step in any machine learning problem. We can then remove columns exceeding a threshold we define."
},
{
"code": null,
"e": 2534,
"s": 2480,
"text": "# check missing valuestrain.isnull().any().any()False"
},
{
"code": null,
"e": 2628,
"s": 2534,
"text": "Unfortunately for our dimensionality reduction efforts, this dataset has zero missing values."
},
{
"code": null,
"e": 2865,
"s": 2628,
"text": "In sklearn’s feature selection module we find VarianceThreshold. It removes all features whose variance doesn’t meet some threshold. By default it removes features with zero variance or features that have the same value for all samples."
},
{
"code": null,
"e": 3017,
"s": 2865,
"text": "from sklearn import feature_selectionsel = feature_selection.VarianceThreshold()train_variance = sel.fit_transform(train)train_variance.shape(250, 302)"
},
{
"code": null,
"e": 3209,
"s": 3017,
"text": "The competition description stated that our features are all continuous. We can see from above there are no features with the same value in all columns, so we have no features to remove here."
},
{
"code": null,
"e": 3296,
"s": 3209,
"text": "We can always revisit this technique and consider removing features with low variance."
},
{
"code": null,
"e": 3368,
"s": 3296,
"text": "Features that are highly correlated or co-linear can cause overfitting."
},
{
"code": null,
"e": 3567,
"s": 3368,
"text": "When a pair of variables are highly correlated we can remove one to reduce dimensionality without much loss of information. Which one should we keep? The one with a higher correlation to the target."
},
{
"code": null,
"e": 3614,
"s": 3567,
"text": "Let’s explore correlations among our features:"
},
{
"code": null,
"e": 3922,
"s": 3614,
"text": "# find correlations to targetcorr_matrix = train.corr().abs()print(corr_matrix['target'].sort_values(ascending=False).head(10))target 1.00000033 0.37360865 0.293846217 0.207215117 0.19749691 0.19253624 0.173096295 0.17050173 0.167557183 0.164146"
},
{
"code": null,
"e": 4132,
"s": 3922,
"text": "Here we see the features that are most highly correlated with our target variable. Feature 33 has the highest correlation to the target, but with a correlation value of only 0.37, it is only weakly correlated."
},
{
"code": null,
"e": 4305,
"s": 4132,
"text": "We can also check the correlation of features to other features. Below we can visualize a correlation matrix. It looks like none of our features are very highly correlated."
},
{
"code": null,
"e": 4375,
"s": 4305,
"text": "Let’s try to drop features with a correlation value greater than 0.5:"
},
{
"code": null,
"e": 4567,
"s": 4375,
"text": "# Find index of feature columns with high correlationto_drop = [column for column in matrix.columns if any(matrix[column] > 0.50)]print('Columns to drop: ' , (len(to_drop)))Columns to drop: 0"
},
{
"code": null,
"e": 4672,
"s": 4567,
"text": "We have no columns to drop using highly correlated features. Let’s continue to explore other strategies."
},
{
"code": null,
"e": 4777,
"s": 4672,
"text": "Univariate feature selection works by selecting the best features based on univariate statistical tests."
},
{
"code": null,
"e": 4990,
"s": 4777,
"text": "We can use sklearn’s SelectKBest to select a number of features to keep. This method uses statistical tests to select features having the highest correlation to the target. Here we will keep the top 100 features."
},
{
"code": null,
"e": 5237,
"s": 4990,
"text": "from sklearn.feature_selection import SelectKBest, f_classif# feature extractionk_best = SelectKBest(score_func=f_classif, k=100)# fit on train setfit = k_best.fit(X_train, y_train)# transform train setunivariate_features = fit.transform(X_train)"
},
{
"code": null,
"e": 5520,
"s": 5237,
"text": "Recursive feature selection works by eliminating the least important features. It continues recursively until the specified number of features is reached. Recursive elimination can be used with any model that assigns weights to features, either through coef_ or feature_importances_"
},
{
"code": null,
"e": 5584,
"s": 5520,
"text": "Here we will use Random Forest to select the 100 best features:"
},
{
"code": null,
"e": 5799,
"s": 5584,
"text": "from sklearn.feature_selection import RFE# feature extractionrfe = RFE(rfc, n_features_to_select=100)# fit on train setfit = rfe.fit(X_train, y_train)# transform train setrecursive_features = fit.transform(X_train)"
},
{
"code": null,
"e": 5993,
"s": 5799,
"text": "Like recursive feature selection, sklearn’s SelectFromModel is used with any estimator that has a coef_ or feature_importances_ attribute. It removes features with values below a set threshold."
},
{
"code": null,
"e": 6349,
"s": 5993,
"text": "from sklearn.feature_selection import SelectFromModelfrom sklearn.ensemble import RandomForestClassifier# define modelrfc = RandomForestClassifier(n_estimators=100)# feature extractionselect_model = feature_selection.SelectFromModel(rfc)# fit on train setfit = select_model.fit(X_train, y_train)# transform train setmodel_features = fit.transform(X_train)"
},
{
"code": null,
"e": 6479,
"s": 6349,
"text": "PCA (Principle Component Analysis) is a dimensionality reduction technique that projects the data into a lower dimensional space."
},
{
"code": null,
"e": 6593,
"s": 6479,
"text": "While there are many effective dimensionality reduction techniques, PCA is the only example we will explore here."
},
{
"code": null,
"e": 6736,
"s": 6593,
"text": "PCA can be useful in many situations, but especially in cases with excessive multicollinearity or explanation of predictors is not a priority."
},
{
"code": null,
"e": 6789,
"s": 6736,
"text": "Here we will apply PCA and keep 90% of the variance:"
},
{
"code": null,
"e": 7010,
"s": 6789,
"text": "from sklearn.decomposition import PCA# pca - keep 90% of variancepca = PCA(0.90)principal_components = pca.fit_transform(X_train)principal_df = pd.DataFrame(data = principal_components)print(principal_df.shape)(250, 139)"
},
{
"code": null,
"e": 7102,
"s": 7010,
"text": "We can see that we are left with 139 features that explain 90% of the variance in our data."
}
] |
QlikView - Rotating Tables | The Rotating table in QlikView is similar to the column and row transpose feature in
Microsoft Excel but with some additional options. We can transpose columns in multiple directions and they give different results. In this chapter, we will be seeing the normal transpose option of converting rows to columns.
Let us consider the following input data, which represents the actual and forecasted sales figures.
Month,Forecast,Actual
March,2145,2247
April,2458,
May,1245,
June,5124,3652
July,7421,7514
August,2584,
September,5314,4251
October,7846,6354
November,6532,7451
December,4625,1424
January,8547,7852
February,3265,
The above data is loaded to QlikView memory by using the script editor. Open the script editor from the File menu or press Control+E. Choose the "Table Files" option from the "Data from Files" tab and browse for the file containing the above data.
After clicking Next, we choose the Enable Transformation Step button to carry out the required data transformation.
As we are going to use the Rotate function, let us choose the Rotate tab which displays the values of all the fields.
We click the Transpose button to transpose the above data. The transposed data appears as shown below.
The load script for the Transformed data can be seen using the script editor. The script shows the expression, which replaces the empty cell values.
The transformed data can be seen by creating a Table Box using the option in the menu Layout → New Sheet Object.
70 Lectures
5 hours
Arthur Fong
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 3230,
"s": 2920,
"text": "The Rotating table in QlikView is similar to the column and row transpose feature in\nMicrosoft Excel but with some additional options. We can transpose columns in multiple directions and they give different results. In this chapter, we will be seeing the normal transpose option of converting rows to columns."
},
{
"code": null,
"e": 3330,
"s": 3230,
"text": "Let us consider the following input data, which represents the actual and forecasted sales figures."
},
{
"code": null,
"e": 3543,
"s": 3330,
"text": "Month,Forecast,Actual\nMarch,2145,2247\nApril,2458,\nMay,1245,\nJune,5124,3652\nJuly,7421,7514\nAugust,2584,\nSeptember,5314,4251\nOctober,7846,6354\nNovember,6532,7451\nDecember,4625,1424\nJanuary,8547,7852\nFebruary,3265,\n"
},
{
"code": null,
"e": 3791,
"s": 3543,
"text": "The above data is loaded to QlikView memory by using the script editor. Open the script editor from the File menu or press Control+E. Choose the \"Table Files\" option from the \"Data from Files\" tab and browse for the file containing the above data."
},
{
"code": null,
"e": 3907,
"s": 3791,
"text": "After clicking Next, we choose the Enable Transformation Step button to carry out the required data transformation."
},
{
"code": null,
"e": 4025,
"s": 3907,
"text": "As we are going to use the Rotate function, let us choose the Rotate tab which displays the values of all the fields."
},
{
"code": null,
"e": 4128,
"s": 4025,
"text": "We click the Transpose button to transpose the above data. The transposed data appears as shown below."
},
{
"code": null,
"e": 4278,
"s": 4128,
"text": "The load script for the Transformed data can be seen using the script editor. The script shows the expression, which replaces the empty cell values. "
},
{
"code": null,
"e": 4391,
"s": 4278,
"text": "The transformed data can be seen by creating a Table Box using the option in the menu Layout → New Sheet Object."
},
{
"code": null,
"e": 4424,
"s": 4391,
"text": "\n 70 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 4437,
"s": 4424,
"text": " Arthur Fong"
},
{
"code": null,
"e": 4444,
"s": 4437,
"text": " Print"
},
{
"code": null,
"e": 4455,
"s": 4444,
"text": " Add Notes"
}
] |
DAX Text - FIND function | Returns the starting position of one text string within another text string.
DAX FIND function is case sensitive.
FIND (<find_text>, <within_text>, [<start_num>], [<NotFoundValue>])
find_text
The text you want to find.
Use double quotes (empty text) to match the first character in within_text.
You can use wildcard characters — the question mark (?) and asterisk (*) — in find_text.
A question mark matches any single character.
An asterisk matches any sequence of characters.
If you want to find an actual question mark or asterisk, type a tilde (~) before the character.
within_text
The text in which you want to search.
start_num
Optional.
The character at which to start the search.
If omitted, start_num = 1. The first character in within_text is character number 1.
NotFoundValue
Optional.
The value that should be returned when the DAX FIND function does not find find_text in within_text.
It should be an Integer or BLANK ().
Number (Integer) that shows the starting position of the find_text in within_text, if it is found.
Number (Integer) that shows the starting position of the find_text in within_text, if it is found.
If find_text is not found in within_text and NotFoundValue is specified, then that value (an Integer or BLANK ()).
If find_text is not found in within_text and NotFoundValue is specified, then that value (an Integer or BLANK ()).
If you provide the argument find_text as a text string, it should be enclosed in double quotation marks.
If you provide the argument find_text as a text string, it should be enclosed in double quotation marks.
If find_text is not found in within_text and NotFoundValue is omitted, DAX FIND function returns #ERROR.
If find_text is not found in within_text and NotFoundValue is omitted, DAX FIND function returns #ERROR.
NotFoundValue should be an Integer or BLANK (). It should not be any other value.
NotFoundValue should be an Integer or BLANK (). It should not be any other value.
If you specify start_num that is greater than the start position of the first instance of find_text in within_text, then FIND function returns a number only if a second instance of find_text exists in within_text. Otherwise, it returns NotFoundValue. You can use this to find the duplicated text within a text string.
If you specify start_num that is greater than the start position of the first instance of find_text in within_text, then FIND function returns a number only if a second instance of find_text exists in within_text. Otherwise, it returns NotFoundValue. You can use this to find the duplicated text within a text string.
= FIND ([ProductName], [Product Description],, BLANK ())
This returns a blank, if the product name is not mentioned in the product description.
You can use such verification to ensure that the product description contains the product name at least once.
= FIND (“Powder”, [ProductName],, BLANK ())
This returns an integer only if the product name contains the text – Powder. Otherwise, it returns blank.
You can use such verification to find different types of products.
53 Lectures
5.5 hours
Abhay Gadiya
24 Lectures
2 hours
Randy Minder
26 Lectures
4.5 hours
Randy Minder
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2078,
"s": 2001,
"text": "Returns the starting position of one text string within another text string."
},
{
"code": null,
"e": 2115,
"s": 2078,
"text": "DAX FIND function is case sensitive."
},
{
"code": null,
"e": 2185,
"s": 2115,
"text": "FIND (<find_text>, <within_text>, [<start_num>], [<NotFoundValue>]) \n"
},
{
"code": null,
"e": 2195,
"s": 2185,
"text": "find_text"
},
{
"code": null,
"e": 2222,
"s": 2195,
"text": "The text you want to find."
},
{
"code": null,
"e": 2298,
"s": 2222,
"text": "Use double quotes (empty text) to match the first character in within_text."
},
{
"code": null,
"e": 2387,
"s": 2298,
"text": "You can use wildcard characters — the question mark (?) and asterisk (*) — in find_text."
},
{
"code": null,
"e": 2433,
"s": 2387,
"text": "A question mark matches any single character."
},
{
"code": null,
"e": 2481,
"s": 2433,
"text": "An asterisk matches any sequence of characters."
},
{
"code": null,
"e": 2577,
"s": 2481,
"text": "If you want to find an actual question mark or asterisk, type a tilde (~) before the character."
},
{
"code": null,
"e": 2589,
"s": 2577,
"text": "within_text"
},
{
"code": null,
"e": 2627,
"s": 2589,
"text": "The text in which you want to search."
},
{
"code": null,
"e": 2637,
"s": 2627,
"text": "start_num"
},
{
"code": null,
"e": 2647,
"s": 2637,
"text": "Optional."
},
{
"code": null,
"e": 2691,
"s": 2647,
"text": "The character at which to start the search."
},
{
"code": null,
"e": 2776,
"s": 2691,
"text": "If omitted, start_num = 1. The first character in within_text is character number 1."
},
{
"code": null,
"e": 2790,
"s": 2776,
"text": "NotFoundValue"
},
{
"code": null,
"e": 2800,
"s": 2790,
"text": "Optional."
},
{
"code": null,
"e": 2901,
"s": 2800,
"text": "The value that should be returned when the DAX FIND function does not find find_text in within_text."
},
{
"code": null,
"e": 2938,
"s": 2901,
"text": "It should be an Integer or BLANK ()."
},
{
"code": null,
"e": 3037,
"s": 2938,
"text": "Number (Integer) that shows the starting position of the find_text in within_text, if it is found."
},
{
"code": null,
"e": 3136,
"s": 3037,
"text": "Number (Integer) that shows the starting position of the find_text in within_text, if it is found."
},
{
"code": null,
"e": 3251,
"s": 3136,
"text": "If find_text is not found in within_text and NotFoundValue is specified, then that value (an Integer or BLANK ())."
},
{
"code": null,
"e": 3366,
"s": 3251,
"text": "If find_text is not found in within_text and NotFoundValue is specified, then that value (an Integer or BLANK ())."
},
{
"code": null,
"e": 3471,
"s": 3366,
"text": "If you provide the argument find_text as a text string, it should be enclosed in double quotation marks."
},
{
"code": null,
"e": 3576,
"s": 3471,
"text": "If you provide the argument find_text as a text string, it should be enclosed in double quotation marks."
},
{
"code": null,
"e": 3681,
"s": 3576,
"text": "If find_text is not found in within_text and NotFoundValue is omitted, DAX FIND function returns #ERROR."
},
{
"code": null,
"e": 3786,
"s": 3681,
"text": "If find_text is not found in within_text and NotFoundValue is omitted, DAX FIND function returns #ERROR."
},
{
"code": null,
"e": 3868,
"s": 3786,
"text": "NotFoundValue should be an Integer or BLANK (). It should not be any other value."
},
{
"code": null,
"e": 3950,
"s": 3868,
"text": "NotFoundValue should be an Integer or BLANK (). It should not be any other value."
},
{
"code": null,
"e": 4268,
"s": 3950,
"text": "If you specify start_num that is greater than the start position of the first instance of find_text in within_text, then FIND function returns a number only if a second instance of find_text exists in within_text. Otherwise, it returns NotFoundValue. You can use this to find the duplicated text within a text string."
},
{
"code": null,
"e": 4586,
"s": 4268,
"text": "If you specify start_num that is greater than the start position of the first instance of find_text in within_text, then FIND function returns a number only if a second instance of find_text exists in within_text. Otherwise, it returns NotFoundValue. You can use this to find the duplicated text within a text string."
},
{
"code": null,
"e": 4643,
"s": 4586,
"text": "= FIND ([ProductName], [Product Description],, BLANK ())"
},
{
"code": null,
"e": 4730,
"s": 4643,
"text": "This returns a blank, if the product name is not mentioned in the product description."
},
{
"code": null,
"e": 4840,
"s": 4730,
"text": "You can use such verification to ensure that the product description contains the product name at least once."
},
{
"code": null,
"e": 4885,
"s": 4840,
"text": "= FIND (“Powder”, [ProductName],, BLANK ()) "
},
{
"code": null,
"e": 4991,
"s": 4885,
"text": "This returns an integer only if the product name contains the text – Powder. Otherwise, it returns blank."
},
{
"code": null,
"e": 5058,
"s": 4991,
"text": "You can use such verification to find different types of products."
},
{
"code": null,
"e": 5093,
"s": 5058,
"text": "\n 53 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 5107,
"s": 5093,
"text": " Abhay Gadiya"
},
{
"code": null,
"e": 5140,
"s": 5107,
"text": "\n 24 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 5154,
"s": 5140,
"text": " Randy Minder"
},
{
"code": null,
"e": 5189,
"s": 5154,
"text": "\n 26 Lectures \n 4.5 hours \n"
},
{
"code": null,
"e": 5203,
"s": 5189,
"text": " Randy Minder"
},
{
"code": null,
"e": 5210,
"s": 5203,
"text": " Print"
},
{
"code": null,
"e": 5221,
"s": 5210,
"text": " Add Notes"
}
] |
Fuzzy C-Means Clustering with Python | by Yufeng | Towards Data Science | Fuzzy C-means clustering algorithm is an unsupervised learning method. Before learning the details, let me first decipher its fancy name.
So, “fuzzy” here means “not sure”, which indicates that it’s a soft clustering method. “C-means” means c cluster centers, which only replaces the “K” in “K-means” with a “C” to make it look different.
In a clustering algorithm, if the probability of one data point belonging to a cluster can only take the value of 1 or 0, it’s hard clustering. The boundary of a cluster in a hard clustering method can be visualized as a crisp boundary. On the contrary, in a soft clustering method, the probability of one data point belonging to a cluster can take any value between 0 and 1, such as 75%, for which the boundary of a cluster can be visualized as a fuzzy boundary.
Alright, after knowing the concept of a soft clustering method, we can intuitively realize the several important parameters that could possibly drive the clustering. First, of course, the cluster centers. Second, the probability of one point belonging to a specific cluster.
In the Fuzzy c-means (FCM) clustering method, we have two parameters, μ_ij and c_i, and one hyperparameter, m.
μ_ij, membership value, is the probability that the jth data point belongs to the ith cluster, and it is constrained to that the sum of μ_ij over C cluster centers is 1 for every data point j. c_i is the center of the ith cluster (the same dimension with X). And m is the fuzzifier, which controls how fuzzy the cluster boundary should be.
In the example plot above, we are looking at the 5th data point X_5, and suppose we know there are only two clusters and the current cluster centers are c_1 and c_2. μ_25 is the probability of the 5th data point belonging to the 2nd cluster, and μ_15 is the probability of the 5th data point belonging to the 1st cluster. Then, we see that the 5th data point is much closer to c_2 than c_1, so μ_25 (0.6) is larger than μ_15 (0.4). And they meet the constraint that the sum of μ for each data point is 1, where μ_15 + μ_25 = 1.
The example above is just for illustration of the parameters in FCM, but the real values don’t have to be exactly like that.
I choose to show the objective function after introducing the parameters because it will look much clearer here.
You can understand the objective function as a weighted sum of the distance between the data points (X_j) and the cluster centers (C_i). The “distance” term is the L2 norm in the equation above, and in the example (the 5th data point) above, it is exactly the length of the arrows.
If m = 1, then the objective function is simply the probability-weighted sum of distances between data points and cluster centers. What does it mean? It means that the data points that are close to the cluster centers are given higher weights.
In the example above, the distance between the 5th data point and the 2nd cluster center has a larger contribution to the objective function than that between the 5th data point and the 1st cluster center, because μ_25 = 0.6 and μ_15 = 0.4.
Remember we want to minimize the objective function, so it’s good to have small μs for those long distances (between data points and cluster centers).
What if m =2, 3, ...? Then the distances’ contribution difference is becoming smaller and smaller as shown below (μ_15 and μ_25), which are all close to 0.
Therefore, for a very large m, the cluster centers tend to locate at a place with nearly equal distance to all the data points because every point’s distance to centers contributes equally small to the objective function. So, where are the cluster centers? Yes, the center of all the data points.
If m is super large, with all cluster centers located at the centroid of all data points, the clustering will be super “fuzzy” because it doesn’t cluster at all!
So, to make a meaningful clustering, we use m = 2 for most cases.
The constrained parameter optimization could be solved by hand using the Lagrangian function, but we will not cover it in this post. Instead, I am listing the final equations to calculate c_i and μ_ij.
The equation to solve c_i is relatively straightforward, where it represents the weighted average of X to the center of cluster i. Why do we have to divide it by the sum of (μ_ij)^m? That’s because the sum of weights equals 1 only when m=1 (in which case we can remove the entire denominator). When m>1, the sum of weights is the scaler for the weighted sum calculation for cluster centers.
However, the equation of μ_ij doesn’t look that easy to understand, right? If we transform it to the following one, it should be much easier to interpret.
Suppose m=2, then the numerator part is just the reciprocal of the distance between the jth data point to the ith cluster center (the length of the arrow in the illustration plot above). As aforementioned, a larger distance should correspond to a smaller μ_ij, which is also reflected in this equation. The denominator of the equation is the sum of these reciprocals of the distances from x_j to the centers of every cluster.
The full FCM algorithm could be described in the figure below.
The procedure is actually the same as the EM algorithm. If you are interested to read about EM, you can go to the following post.
towardsdatascience.com
The implementation of fuzzy c-means clustering in Python is very simple. The fitting procedure is shown below,
import numpy as npfrom fcmeans import FCMmy_model = FCM(n_clusters=2) # we use two cluster as an examplemy_model.fit(X) ## X, numpy array. rows:samples columns:features
Extract information from the clustering model,
centers = my_model.centerslabels = my_model.predict(X)
That’s it!
Hope the article is useful. | [
{
"code": null,
"e": 309,
"s": 171,
"text": "Fuzzy C-means clustering algorithm is an unsupervised learning method. Before learning the details, let me first decipher its fancy name."
},
{
"code": null,
"e": 510,
"s": 309,
"text": "So, “fuzzy” here means “not sure”, which indicates that it’s a soft clustering method. “C-means” means c cluster centers, which only replaces the “K” in “K-means” with a “C” to make it look different."
},
{
"code": null,
"e": 974,
"s": 510,
"text": "In a clustering algorithm, if the probability of one data point belonging to a cluster can only take the value of 1 or 0, it’s hard clustering. The boundary of a cluster in a hard clustering method can be visualized as a crisp boundary. On the contrary, in a soft clustering method, the probability of one data point belonging to a cluster can take any value between 0 and 1, such as 75%, for which the boundary of a cluster can be visualized as a fuzzy boundary."
},
{
"code": null,
"e": 1249,
"s": 974,
"text": "Alright, after knowing the concept of a soft clustering method, we can intuitively realize the several important parameters that could possibly drive the clustering. First, of course, the cluster centers. Second, the probability of one point belonging to a specific cluster."
},
{
"code": null,
"e": 1360,
"s": 1249,
"text": "In the Fuzzy c-means (FCM) clustering method, we have two parameters, μ_ij and c_i, and one hyperparameter, m."
},
{
"code": null,
"e": 1700,
"s": 1360,
"text": "μ_ij, membership value, is the probability that the jth data point belongs to the ith cluster, and it is constrained to that the sum of μ_ij over C cluster centers is 1 for every data point j. c_i is the center of the ith cluster (the same dimension with X). And m is the fuzzifier, which controls how fuzzy the cluster boundary should be."
},
{
"code": null,
"e": 2228,
"s": 1700,
"text": "In the example plot above, we are looking at the 5th data point X_5, and suppose we know there are only two clusters and the current cluster centers are c_1 and c_2. μ_25 is the probability of the 5th data point belonging to the 2nd cluster, and μ_15 is the probability of the 5th data point belonging to the 1st cluster. Then, we see that the 5th data point is much closer to c_2 than c_1, so μ_25 (0.6) is larger than μ_15 (0.4). And they meet the constraint that the sum of μ for each data point is 1, where μ_15 + μ_25 = 1."
},
{
"code": null,
"e": 2353,
"s": 2228,
"text": "The example above is just for illustration of the parameters in FCM, but the real values don’t have to be exactly like that."
},
{
"code": null,
"e": 2466,
"s": 2353,
"text": "I choose to show the objective function after introducing the parameters because it will look much clearer here."
},
{
"code": null,
"e": 2748,
"s": 2466,
"text": "You can understand the objective function as a weighted sum of the distance between the data points (X_j) and the cluster centers (C_i). The “distance” term is the L2 norm in the equation above, and in the example (the 5th data point) above, it is exactly the length of the arrows."
},
{
"code": null,
"e": 2992,
"s": 2748,
"text": "If m = 1, then the objective function is simply the probability-weighted sum of distances between data points and cluster centers. What does it mean? It means that the data points that are close to the cluster centers are given higher weights."
},
{
"code": null,
"e": 3233,
"s": 2992,
"text": "In the example above, the distance between the 5th data point and the 2nd cluster center has a larger contribution to the objective function than that between the 5th data point and the 1st cluster center, because μ_25 = 0.6 and μ_15 = 0.4."
},
{
"code": null,
"e": 3384,
"s": 3233,
"text": "Remember we want to minimize the objective function, so it’s good to have small μs for those long distances (between data points and cluster centers)."
},
{
"code": null,
"e": 3540,
"s": 3384,
"text": "What if m =2, 3, ...? Then the distances’ contribution difference is becoming smaller and smaller as shown below (μ_15 and μ_25), which are all close to 0."
},
{
"code": null,
"e": 3837,
"s": 3540,
"text": "Therefore, for a very large m, the cluster centers tend to locate at a place with nearly equal distance to all the data points because every point’s distance to centers contributes equally small to the objective function. So, where are the cluster centers? Yes, the center of all the data points."
},
{
"code": null,
"e": 3999,
"s": 3837,
"text": "If m is super large, with all cluster centers located at the centroid of all data points, the clustering will be super “fuzzy” because it doesn’t cluster at all!"
},
{
"code": null,
"e": 4065,
"s": 3999,
"text": "So, to make a meaningful clustering, we use m = 2 for most cases."
},
{
"code": null,
"e": 4267,
"s": 4065,
"text": "The constrained parameter optimization could be solved by hand using the Lagrangian function, but we will not cover it in this post. Instead, I am listing the final equations to calculate c_i and μ_ij."
},
{
"code": null,
"e": 4658,
"s": 4267,
"text": "The equation to solve c_i is relatively straightforward, where it represents the weighted average of X to the center of cluster i. Why do we have to divide it by the sum of (μ_ij)^m? That’s because the sum of weights equals 1 only when m=1 (in which case we can remove the entire denominator). When m>1, the sum of weights is the scaler for the weighted sum calculation for cluster centers."
},
{
"code": null,
"e": 4813,
"s": 4658,
"text": "However, the equation of μ_ij doesn’t look that easy to understand, right? If we transform it to the following one, it should be much easier to interpret."
},
{
"code": null,
"e": 5239,
"s": 4813,
"text": "Suppose m=2, then the numerator part is just the reciprocal of the distance between the jth data point to the ith cluster center (the length of the arrow in the illustration plot above). As aforementioned, a larger distance should correspond to a smaller μ_ij, which is also reflected in this equation. The denominator of the equation is the sum of these reciprocals of the distances from x_j to the centers of every cluster."
},
{
"code": null,
"e": 5302,
"s": 5239,
"text": "The full FCM algorithm could be described in the figure below."
},
{
"code": null,
"e": 5432,
"s": 5302,
"text": "The procedure is actually the same as the EM algorithm. If you are interested to read about EM, you can go to the following post."
},
{
"code": null,
"e": 5455,
"s": 5432,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 5566,
"s": 5455,
"text": "The implementation of fuzzy c-means clustering in Python is very simple. The fitting procedure is shown below,"
},
{
"code": null,
"e": 5735,
"s": 5566,
"text": "import numpy as npfrom fcmeans import FCMmy_model = FCM(n_clusters=2) # we use two cluster as an examplemy_model.fit(X) ## X, numpy array. rows:samples columns:features"
},
{
"code": null,
"e": 5782,
"s": 5735,
"text": "Extract information from the clustering model,"
},
{
"code": null,
"e": 5837,
"s": 5782,
"text": "centers = my_model.centerslabels = my_model.predict(X)"
},
{
"code": null,
"e": 5848,
"s": 5837,
"text": "That’s it!"
}
] |
Log Compacted Topics in Apache Kafka | by Seyed Morteza Mousavi | Towards Data Science | When I had begun reading Kafka documentation, although log compacted topic seemed a simple concept, it wasn’t clear to me how internally Kafka keeps the states of them in the file system. This month I had some time to read more about this feature and I want to share my understandings with you.
In this article, I will describe the log compacted topics in Kafka. Then I will show you how Kafka internally keeps the states of these topics in the file system.
I assume that you are already familiar with Apache Kafka basic concepts such as broker, topic, partition, consumer and producer. Also if you want to run the sample commands, you have to run a Kafka broker and a Zookeeper server.
Kafka documentation says:
Log compaction is a mechanism to give finer-grained per-record retention, rather than the coarser-grained time-based retention. The idea is to selectively remove records where we have a more recent update with the same primary key. This way the log is guaranteed to have at least the last state for each key.
To simplify this description, Kafka removes any old records when there is a newer version of it with the same key in the partition log. As an example consider following partition of a log compacted topic called latest-product-price:
As you see at first there are two records with key p3. But because it is a log compacted topic, Kafka removes the older record in a background thread (more on this in next sections). Now assume we have a producer that sends new records to this partition. The producer produces 3 records with keys p6,p5,p5 respectively:
Again a background thread inside Kafka broker removes older records with keys p5 and p6. Note that compacted log is composed of two part: a tail and a head. Kafka makes sure that all records inside the tail part have a unique key because the tail section is scanned in the previous cycle of the cleaning process. But the head section can have duplicate values.
Now that we learned what is log compacted topic its time to create them using kafka-topics tool.
Create a compacted topic (I will describe all configs in details):
kafka-topics --create --zookeeper zookeeper:2181 --topic latest-product-price --replication-factor 1 --partitions 1 --config "cleanup.policy=compact" --config "delete.retention.ms=100" --config "segment.ms=100" --config "min.cleanable.dirty.ratio=0.01"
Produce some records:
kafka-console-producer --broker-list localhost:9092 --topic latest-product-price --property parse.key=true --property key.separator=:>p3:10$>p5:7$>p3:11$>p6:25$>p6:12$>p5:14$>p5:17$
Notice that in the above command I separated key and value by :. Now Consume the topic:
kafka-console-consumer --bootstrap-server localhost:9092 --topic latest-product-price --property print.key=true --property key.separator=: --from-beginningp3:11$p6:12$p5:14$p5:17$
As you see records with duplicated keys are removed. The p5:14$ record is not removed which we will see the reason when I describe the cleaning process. But we must first look at how Kafka internally stores messages.
Partition log is an abstraction that allows us to easily consume ordered messages inside the partition, without being worried about the internal storage of Kafka. In reality, however, the partition log is divided by Kafka broker into segments. Segments are files stored in the file system (inside data directory and in the directory of the partition), which their name ends with .log. In the below image a partition log is divided into 3 segments:
As you see, we have a partition log that holds 7 records residing in 3 separate segment files. The first offset of a segment is called the base offset of the segment. The segment file name is always equal to its base offset value.
The last segment in the partition is called the active segment. Only the active segment of a log can receive the newly produced messages. We will see how Kafka behaves with the active segment in the cleaning process of a compacted log.
Returning to our example, we can view segment files of our topic partition by the following command (assuming your Kafka data directory is /var/lib/kafka/data):
ls /var/lib/kafka/data/latest-product-price-0/00000000000000000000.index 00000000000000000006.log00000000000000000000.log 00000000000000000006.snapshot00000000000000000000.timeindex 00000000000000000006.timeindex00000000000000000005.snapshot leader-epoch-checkpoint00000000000000000006.index
00000000000000000000.log and 00000000000000000006.log are segments of this partition and 00000000000000000006.log is the active segment.
When does Kafka create a new segment? One option is by setting segment.bytes (default is 1GB) config during topic creation. When your segment size become bigger than this value, Kafka will create a new segment. Another option is by setting segment.ms as you saw earlier. With this option, when Kafka receives a produce request, it will check that the active segment is older than segment.ms value. If it is older, then it will create a new segment. In our command, we set segment.ms=100 to make sure that every 100 milliseconds a new segment is created.
Interesting point is that when you set segment.ms=100 you probably will have smaller segments. After the cleaning process (see next section) Kafka broker, will merge non-active segments and creates a large segment from them.
For more information about segments and internal storage of Kafka, you can read How Kafka’s Storage Internals Work and A Practical Introduction to Kafka Storage Internals articles.
During startup, the Kafka broker creates a number of cleaner threads, responsible for cleaning compacted logs (The number of these threads are configurable through log.cleaner.threads config). The cleaner thread, constantly will try to find the filthiest log in the broker and then try to clean it. For each log, it calculates the dirty ratio as below:
dirty ratio = the number of bytes in the head / total number of bytes in the log(tail + head)
The cleaner thread then chooses the log with the highest dirty ratio. This log is called the filthiest log and if its value is greater than min.cleanable.dirty.ratio config, it will be cleaned. Otherwise, the cleaner thread will be blocked for a number of milliseconds (configurable with log.cleaner.backoff.ms).
After finding the filthiest log we want to find the portion of the log that is cleanable. Note that some portion of the log is not cleanable and will not be scanned:
All records inside the active segment. That’s why we still see duplicated p5:14$ record in our consumer.
If you set min.compaction.lag.ms config bigger than 0, then any segment that has a record with a timestamp younger than this config, will not be cleaned. These segments will not be scanned for compaction.
Now we know which records we are going to compact. From the first record in the log to the first record that is not cleanable. For simplicity in this article, we assume that all records in the head are cleanable.
Note that we know that every record in the tail section of a log has a unique key because duplicates were removed in the last clean. It is only possible that we have some records in the head section which their keys is not unique in the log. To find duplicate record faster, Kafka creates a map for records in the head section. Returning to our example the offset map structure is something like this:
As you see Kafka creates a structure called offset map that for each key in the head section holds its corresponding offset. If we have duplicates in the head, Kafka uses the newest offset. In the above image, record with key p6 is at offset 5 and p5 newest offset is 7. Now cleaner thread checks every record in the log and removes it if there is any record with the same key inside offset map and its offset is different from the entry in the map (we don’t want to delete newest records).
During the cleaning process of a compacted log, not only duplicated message will be removed but Kafka also removes records that have a value of null. These records are called tombstone. You can delay the removal of them by setting delete.retention.ms config. By setting this config, Kafka checks the modification timestamp of the segment that contains this record and if the modification time is younger than the config value, the record will be retained.
Now the log became clean. After this cleaning process, we have a new tail and a new head! The last offset that is scanned for cleaning (in our example the last record in the old head) is the last offset of the new tail.
Kafka keeps the start offset of the new head in a file named cleaner-offset-checkpoint in the root of the data directory. This file is used for the next cleaning cycle for the log. We can view our topic checkpoint file:
cat /var/lib/kafka/data/cleaner-offset-checkpoint01latest-product-price 0 6
As you see there are three lines. The first line is the version of the file (I think for backward compatibility), the second line has value 1 which shows how many lines will follow this line (just one line), and the last line contains the name of compacted log topic, the partition number, and the head offset of this partition.
In this article, I showed you what is log compacted topic, how they are stored and how Kafka cleans them periodically. In the end, I want to point out that log compaction is great for caching scenarios where you want to just keep the latest value for each record in near real time. Assume you want to build your cache in the startup of your application. You can just read your compacted topic and build your cache and because Kafka read messages sequentially, it is much faster than warming your cache using a SQL database.
You can read more about this technique in Martin Kleppmann Turning the database inside-out article. You may also find my previous article Beat Cache Invalidation in ASP.NET Core Using Kafka and Debezium useful which is an implementation of this technique. | [
{
"code": null,
"e": 467,
"s": 172,
"text": "When I had begun reading Kafka documentation, although log compacted topic seemed a simple concept, it wasn’t clear to me how internally Kafka keeps the states of them in the file system. This month I had some time to read more about this feature and I want to share my understandings with you."
},
{
"code": null,
"e": 630,
"s": 467,
"text": "In this article, I will describe the log compacted topics in Kafka. Then I will show you how Kafka internally keeps the states of these topics in the file system."
},
{
"code": null,
"e": 859,
"s": 630,
"text": "I assume that you are already familiar with Apache Kafka basic concepts such as broker, topic, partition, consumer and producer. Also if you want to run the sample commands, you have to run a Kafka broker and a Zookeeper server."
},
{
"code": null,
"e": 885,
"s": 859,
"text": "Kafka documentation says:"
},
{
"code": null,
"e": 1194,
"s": 885,
"text": "Log compaction is a mechanism to give finer-grained per-record retention, rather than the coarser-grained time-based retention. The idea is to selectively remove records where we have a more recent update with the same primary key. This way the log is guaranteed to have at least the last state for each key."
},
{
"code": null,
"e": 1427,
"s": 1194,
"text": "To simplify this description, Kafka removes any old records when there is a newer version of it with the same key in the partition log. As an example consider following partition of a log compacted topic called latest-product-price:"
},
{
"code": null,
"e": 1747,
"s": 1427,
"text": "As you see at first there are two records with key p3. But because it is a log compacted topic, Kafka removes the older record in a background thread (more on this in next sections). Now assume we have a producer that sends new records to this partition. The producer produces 3 records with keys p6,p5,p5 respectively:"
},
{
"code": null,
"e": 2108,
"s": 1747,
"text": "Again a background thread inside Kafka broker removes older records with keys p5 and p6. Note that compacted log is composed of two part: a tail and a head. Kafka makes sure that all records inside the tail part have a unique key because the tail section is scanned in the previous cycle of the cleaning process. But the head section can have duplicate values."
},
{
"code": null,
"e": 2205,
"s": 2108,
"text": "Now that we learned what is log compacted topic its time to create them using kafka-topics tool."
},
{
"code": null,
"e": 2272,
"s": 2205,
"text": "Create a compacted topic (I will describe all configs in details):"
},
{
"code": null,
"e": 2526,
"s": 2272,
"text": "kafka-topics --create --zookeeper zookeeper:2181 --topic latest-product-price --replication-factor 1 --partitions 1 --config \"cleanup.policy=compact\" --config \"delete.retention.ms=100\" --config \"segment.ms=100\" --config \"min.cleanable.dirty.ratio=0.01\""
},
{
"code": null,
"e": 2548,
"s": 2526,
"text": "Produce some records:"
},
{
"code": null,
"e": 2730,
"s": 2548,
"text": "kafka-console-producer --broker-list localhost:9092 --topic latest-product-price --property parse.key=true --property key.separator=:>p3:10$>p5:7$>p3:11$>p6:25$>p6:12$>p5:14$>p5:17$"
},
{
"code": null,
"e": 2818,
"s": 2730,
"text": "Notice that in the above command I separated key and value by :. Now Consume the topic:"
},
{
"code": null,
"e": 2999,
"s": 2818,
"text": "kafka-console-consumer --bootstrap-server localhost:9092 --topic latest-product-price --property print.key=true --property key.separator=: --from-beginningp3:11$p6:12$p5:14$p5:17$"
},
{
"code": null,
"e": 3216,
"s": 2999,
"text": "As you see records with duplicated keys are removed. The p5:14$ record is not removed which we will see the reason when I describe the cleaning process. But we must first look at how Kafka internally stores messages."
},
{
"code": null,
"e": 3664,
"s": 3216,
"text": "Partition log is an abstraction that allows us to easily consume ordered messages inside the partition, without being worried about the internal storage of Kafka. In reality, however, the partition log is divided by Kafka broker into segments. Segments are files stored in the file system (inside data directory and in the directory of the partition), which their name ends with .log. In the below image a partition log is divided into 3 segments:"
},
{
"code": null,
"e": 3895,
"s": 3664,
"text": "As you see, we have a partition log that holds 7 records residing in 3 separate segment files. The first offset of a segment is called the base offset of the segment. The segment file name is always equal to its base offset value."
},
{
"code": null,
"e": 4131,
"s": 3895,
"text": "The last segment in the partition is called the active segment. Only the active segment of a log can receive the newly produced messages. We will see how Kafka behaves with the active segment in the cleaning process of a compacted log."
},
{
"code": null,
"e": 4292,
"s": 4131,
"text": "Returning to our example, we can view segment files of our topic partition by the following command (assuming your Kafka data directory is /var/lib/kafka/data):"
},
{
"code": null,
"e": 4584,
"s": 4292,
"text": "ls /var/lib/kafka/data/latest-product-price-0/00000000000000000000.index 00000000000000000006.log00000000000000000000.log 00000000000000000006.snapshot00000000000000000000.timeindex 00000000000000000006.timeindex00000000000000000005.snapshot leader-epoch-checkpoint00000000000000000006.index"
},
{
"code": null,
"e": 4721,
"s": 4584,
"text": "00000000000000000000.log and 00000000000000000006.log are segments of this partition and 00000000000000000006.log is the active segment."
},
{
"code": null,
"e": 5275,
"s": 4721,
"text": "When does Kafka create a new segment? One option is by setting segment.bytes (default is 1GB) config during topic creation. When your segment size become bigger than this value, Kafka will create a new segment. Another option is by setting segment.ms as you saw earlier. With this option, when Kafka receives a produce request, it will check that the active segment is older than segment.ms value. If it is older, then it will create a new segment. In our command, we set segment.ms=100 to make sure that every 100 milliseconds a new segment is created."
},
{
"code": null,
"e": 5500,
"s": 5275,
"text": "Interesting point is that when you set segment.ms=100 you probably will have smaller segments. After the cleaning process (see next section) Kafka broker, will merge non-active segments and creates a large segment from them."
},
{
"code": null,
"e": 5681,
"s": 5500,
"text": "For more information about segments and internal storage of Kafka, you can read How Kafka’s Storage Internals Work and A Practical Introduction to Kafka Storage Internals articles."
},
{
"code": null,
"e": 6034,
"s": 5681,
"text": "During startup, the Kafka broker creates a number of cleaner threads, responsible for cleaning compacted logs (The number of these threads are configurable through log.cleaner.threads config). The cleaner thread, constantly will try to find the filthiest log in the broker and then try to clean it. For each log, it calculates the dirty ratio as below:"
},
{
"code": null,
"e": 6128,
"s": 6034,
"text": "dirty ratio = the number of bytes in the head / total number of bytes in the log(tail + head)"
},
{
"code": null,
"e": 6441,
"s": 6128,
"text": "The cleaner thread then chooses the log with the highest dirty ratio. This log is called the filthiest log and if its value is greater than min.cleanable.dirty.ratio config, it will be cleaned. Otherwise, the cleaner thread will be blocked for a number of milliseconds (configurable with log.cleaner.backoff.ms)."
},
{
"code": null,
"e": 6607,
"s": 6441,
"text": "After finding the filthiest log we want to find the portion of the log that is cleanable. Note that some portion of the log is not cleanable and will not be scanned:"
},
{
"code": null,
"e": 6712,
"s": 6607,
"text": "All records inside the active segment. That’s why we still see duplicated p5:14$ record in our consumer."
},
{
"code": null,
"e": 6917,
"s": 6712,
"text": "If you set min.compaction.lag.ms config bigger than 0, then any segment that has a record with a timestamp younger than this config, will not be cleaned. These segments will not be scanned for compaction."
},
{
"code": null,
"e": 7130,
"s": 6917,
"text": "Now we know which records we are going to compact. From the first record in the log to the first record that is not cleanable. For simplicity in this article, we assume that all records in the head are cleanable."
},
{
"code": null,
"e": 7532,
"s": 7130,
"text": "Note that we know that every record in the tail section of a log has a unique key because duplicates were removed in the last clean. It is only possible that we have some records in the head section which their keys is not unique in the log. To find duplicate record faster, Kafka creates a map for records in the head section. Returning to our example the offset map structure is something like this:"
},
{
"code": null,
"e": 8023,
"s": 7532,
"text": "As you see Kafka creates a structure called offset map that for each key in the head section holds its corresponding offset. If we have duplicates in the head, Kafka uses the newest offset. In the above image, record with key p6 is at offset 5 and p5 newest offset is 7. Now cleaner thread checks every record in the log and removes it if there is any record with the same key inside offset map and its offset is different from the entry in the map (we don’t want to delete newest records)."
},
{
"code": null,
"e": 8479,
"s": 8023,
"text": "During the cleaning process of a compacted log, not only duplicated message will be removed but Kafka also removes records that have a value of null. These records are called tombstone. You can delay the removal of them by setting delete.retention.ms config. By setting this config, Kafka checks the modification timestamp of the segment that contains this record and if the modification time is younger than the config value, the record will be retained."
},
{
"code": null,
"e": 8699,
"s": 8479,
"text": "Now the log became clean. After this cleaning process, we have a new tail and a new head! The last offset that is scanned for cleaning (in our example the last record in the old head) is the last offset of the new tail."
},
{
"code": null,
"e": 8919,
"s": 8699,
"text": "Kafka keeps the start offset of the new head in a file named cleaner-offset-checkpoint in the root of the data directory. This file is used for the next cleaning cycle for the log. We can view our topic checkpoint file:"
},
{
"code": null,
"e": 8995,
"s": 8919,
"text": "cat /var/lib/kafka/data/cleaner-offset-checkpoint01latest-product-price 0 6"
},
{
"code": null,
"e": 9324,
"s": 8995,
"text": "As you see there are three lines. The first line is the version of the file (I think for backward compatibility), the second line has value 1 which shows how many lines will follow this line (just one line), and the last line contains the name of compacted log topic, the partition number, and the head offset of this partition."
},
{
"code": null,
"e": 9848,
"s": 9324,
"text": "In this article, I showed you what is log compacted topic, how they are stored and how Kafka cleans them periodically. In the end, I want to point out that log compaction is great for caching scenarios where you want to just keep the latest value for each record in near real time. Assume you want to build your cache in the startup of your application. You can just read your compacted topic and build your cache and because Kafka read messages sequentially, it is much faster than warming your cache using a SQL database."
}
] |
Maximum count of pairs which generate the same sum in C++ | We are given with an array of integers. The goal is to find the maximum number of pairs in the
array that when added produce the same sum. We have to find the maximum count of such pairs.
Arr[]= { 1,2,3,4,2 }
Maximum count of pairs with same sum : 3
Explanation − Sum of pairs of numbers −
{1,2}, {1,2} Sum:3
{1,3},{2,2} Sum:4
{1,4},{2,3},{3,2} Sum:5
{2,4} Sum:6
{3,4} Sum:7
Maximum count of pairs with same sum is 3 ( for sum = 5 )
Arr[]= { 5,3,6,1 }
Maximum count of pairs with same sum : 1
Explanation − Sum of pairs of numbers −
{5,3} Sum:8
{5,6} Sum:11
{5,1} Sum:6
{3,6} Sum:9
{3,1} Sum:4
{6,1} Sum:7
Maximum count of pairs with the same sum is 1.
The integer array Arr[] is used to store the integers.
The integer array Arr[] is used to store the integers.
Integer ‘size’ stores the length of the array.
Integer ‘size’ stores the length of the array.
Function countEqualSum( int arr[], int n) takes an array , its size as input and returns the
maximum count of pairs which generate the same sum.
Function countEqualSum( int arr[], int n) takes an array , its size as input and returns the
maximum count of pairs which generate the same sum.
First of all we will take ‘sum’ array to store the frequency of unique sums.
First of all we will take ‘sum’ array to store the frequency of unique sums.
At each index of sum, increment count of that element.
At each index of sum, increment count of that element.
Each index of array sum is the sum of a pair of elements.
Each index of array sum is the sum of a pair of elements.
Find maximum such count by search for max element inside array sum and store in maxC.
Find maximum such count by search for max element inside array sum and store in maxC.
Return maxC as result
Return maxC as result
Live Demo
#include <bits/stdc++.h>
using namespace std;
// Function to return the maximum
// count of pairs with equal sum
int countEqualSum(int arr[], int n){
int sum[20]={0};
int maxC = 0;
// Store counts of sum of all pairs
for (int i = 0; i < n - 1; i++)
for (int j = i + 1; j < n; j++){
sum[ arr[i]+arr[j] ]++;
}
for(int i=0;i<20;i++)
if(sum[i]>maxC)
maxC=sum[i];
return maxC;
}
int main(){
int Arr[] = { 1,2,3,4,2 };
int size = 5;
cout <<”Maximum count of pairs which generate the same sum”
<< countEqualSum(Arr, size);
return 0;
}
Maximum count of pairs which generate the same sum : 3 | [
{
"code": null,
"e": 1250,
"s": 1062,
"text": "We are given with an array of integers. The goal is to find the maximum number of pairs in the\narray that when added produce the same sum. We have to find the maximum count of such pairs."
},
{
"code": null,
"e": 1271,
"s": 1250,
"text": "Arr[]= { 1,2,3,4,2 }"
},
{
"code": null,
"e": 1312,
"s": 1271,
"text": "Maximum count of pairs with same sum : 3"
},
{
"code": null,
"e": 1352,
"s": 1312,
"text": "Explanation − Sum of pairs of numbers −"
},
{
"code": null,
"e": 1495,
"s": 1352,
"text": "{1,2}, {1,2} Sum:3\n{1,3},{2,2} Sum:4\n{1,4},{2,3},{3,2} Sum:5\n{2,4} Sum:6\n{3,4} Sum:7\nMaximum count of pairs with same sum is 3 ( for sum = 5 )"
},
{
"code": null,
"e": 1514,
"s": 1495,
"text": "Arr[]= { 5,3,6,1 }"
},
{
"code": null,
"e": 1555,
"s": 1514,
"text": "Maximum count of pairs with same sum : 1"
},
{
"code": null,
"e": 1595,
"s": 1555,
"text": "Explanation − Sum of pairs of numbers −"
},
{
"code": null,
"e": 1715,
"s": 1595,
"text": "{5,3} Sum:8\n{5,6} Sum:11\n{5,1} Sum:6\n{3,6} Sum:9\n{3,1} Sum:4\n{6,1} Sum:7\nMaximum count of pairs with the same sum is 1."
},
{
"code": null,
"e": 1770,
"s": 1715,
"text": "The integer array Arr[] is used to store the integers."
},
{
"code": null,
"e": 1825,
"s": 1770,
"text": "The integer array Arr[] is used to store the integers."
},
{
"code": null,
"e": 1872,
"s": 1825,
"text": "Integer ‘size’ stores the length of the array."
},
{
"code": null,
"e": 1919,
"s": 1872,
"text": "Integer ‘size’ stores the length of the array."
},
{
"code": null,
"e": 2064,
"s": 1919,
"text": "Function countEqualSum( int arr[], int n) takes an array , its size as input and returns the\nmaximum count of pairs which generate the same sum."
},
{
"code": null,
"e": 2209,
"s": 2064,
"text": "Function countEqualSum( int arr[], int n) takes an array , its size as input and returns the\nmaximum count of pairs which generate the same sum."
},
{
"code": null,
"e": 2286,
"s": 2209,
"text": "First of all we will take ‘sum’ array to store the frequency of unique sums."
},
{
"code": null,
"e": 2363,
"s": 2286,
"text": "First of all we will take ‘sum’ array to store the frequency of unique sums."
},
{
"code": null,
"e": 2418,
"s": 2363,
"text": "At each index of sum, increment count of that element."
},
{
"code": null,
"e": 2473,
"s": 2418,
"text": "At each index of sum, increment count of that element."
},
{
"code": null,
"e": 2531,
"s": 2473,
"text": "Each index of array sum is the sum of a pair of elements."
},
{
"code": null,
"e": 2589,
"s": 2531,
"text": "Each index of array sum is the sum of a pair of elements."
},
{
"code": null,
"e": 2675,
"s": 2589,
"text": "Find maximum such count by search for max element inside array sum and store in maxC."
},
{
"code": null,
"e": 2761,
"s": 2675,
"text": "Find maximum such count by search for max element inside array sum and store in maxC."
},
{
"code": null,
"e": 2783,
"s": 2761,
"text": "Return maxC as result"
},
{
"code": null,
"e": 2805,
"s": 2783,
"text": "Return maxC as result"
},
{
"code": null,
"e": 2816,
"s": 2805,
"text": " Live Demo"
},
{
"code": null,
"e": 3422,
"s": 2816,
"text": "#include <bits/stdc++.h>\nusing namespace std;\n// Function to return the maximum\n// count of pairs with equal sum\nint countEqualSum(int arr[], int n){\n int sum[20]={0};\n int maxC = 0;\n // Store counts of sum of all pairs\n for (int i = 0; i < n - 1; i++)\n for (int j = i + 1; j < n; j++){\n sum[ arr[i]+arr[j] ]++;\n }\n for(int i=0;i<20;i++)\n if(sum[i]>maxC)\n maxC=sum[i];\n return maxC;\n}\nint main(){\n int Arr[] = { 1,2,3,4,2 };\n int size = 5;\n cout <<”Maximum count of pairs which generate the same sum”\n << countEqualSum(Arr, size);\n return 0;\n}"
},
{
"code": null,
"e": 3477,
"s": 3422,
"text": "Maximum count of pairs which generate the same sum : 3"
}
] |
Can we store CSS color values in MySQL? | Yes, we can. In order to store CSS color value, you can use CHAR(6) without # symbol for hexadecimal. Let us see an example and create a table
mysql> create table storeCSSColorDemo
-> (
-> CSSValue char(6)
-> );
Query OK, 0 rows affected (0.53 sec)
Insert some records in the table using insert command. The records here are individual color values in hexadecimal, for which we have used char(6)
mysql> insert into storeCSSColorDemo values('FF0000');
Query OK, 1 row affected (0.13 sec)
mysql> insert into storeCSSColorDemo values('FFA500');
Query OK, 1 row affected (0.86 sec)
mysql> insert into storeCSSColorDemo values('FFFF00');
Query OK, 1 row affected (0.19 sec)
Display all records from the table using select statement. The query is as follows −
mysql> select *from storeCSSColorDemo;
The following output displays the color value successfully stored in MySQL table
+----------+
| CSSValue |
+----------+
| FF0000 |
| FFA500 |
| FFFF00 |
+----------+
3 rows in set (0.00 sec)
In the above sample output, the three colors are RED, ORANGE and YELLOW.
NOTE If your column is nullable then you can use VARCHAR(6). | [
{
"code": null,
"e": 1205,
"s": 1062,
"text": "Yes, we can. In order to store CSS color value, you can use CHAR(6) without # symbol for hexadecimal. Let us see an example and create a table"
},
{
"code": null,
"e": 1320,
"s": 1205,
"text": "mysql> create table storeCSSColorDemo\n -> (\n -> CSSValue char(6)\n -> );\nQuery OK, 0 rows affected (0.53 sec)"
},
{
"code": null,
"e": 1467,
"s": 1320,
"text": "Insert some records in the table using insert command. The records here are individual color values in hexadecimal, for which we have used char(6)"
},
{
"code": null,
"e": 1740,
"s": 1467,
"text": "mysql> insert into storeCSSColorDemo values('FF0000');\nQuery OK, 1 row affected (0.13 sec)\nmysql> insert into storeCSSColorDemo values('FFA500');\nQuery OK, 1 row affected (0.86 sec)\nmysql> insert into storeCSSColorDemo values('FFFF00');\nQuery OK, 1 row affected (0.19 sec)"
},
{
"code": null,
"e": 1825,
"s": 1740,
"text": "Display all records from the table using select statement. The query is as follows −"
},
{
"code": null,
"e": 1864,
"s": 1825,
"text": "mysql> select *from storeCSSColorDemo;"
},
{
"code": null,
"e": 1945,
"s": 1864,
"text": "The following output displays the color value successfully stored in MySQL table"
},
{
"code": null,
"e": 2061,
"s": 1945,
"text": "+----------+\n| CSSValue |\n+----------+\n| FF0000 |\n| FFA500 |\n| FFFF00 |\n+----------+\n3 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2134,
"s": 2061,
"text": "In the above sample output, the three colors are RED, ORANGE and YELLOW."
},
{
"code": null,
"e": 2195,
"s": 2134,
"text": "NOTE If your column is nullable then you can use VARCHAR(6)."
}
] |
How to create and use a Syntax Highlighter with JavaScript? | To create and use syntax highligher, the code is as follows −
Live Demo
<!DOCTYPE html>
<html>
<head>
<style>
body {
font-family: "Segoe UI", Tahoma, Geneva, Verdana, sans-serif;
}
.colorLinks {
color: rgb(131, 44, 212);
text-decoration: none;
font-size: 20px;
font-weight: bold;
}
.setColor {
margin: 20px;
padding: 10px;
border: none;
font-size: 18px;
background-color: rgb(226, 43, 89);
color: white;
}
</style>
</head>
<body>
<h1>Syntax Highlighting example</h1>
<div class="CodeDiv">
<a href="https://www.google.com">Go to google.com</a>
<a href="https://www.facebook.com">Go to facebook.com</a>
</div>
<button class="setColor" onclick="highlightLinks()">Click Here</button>
<h2>Click the above button to highlight the links</h2>
<script>
var anchrorRg = /[^<]*(<a href="([^"]+)">([^<]+)<\/a>)/g;
function highlightLinks() {
var text = document.querySelector(".CodeDiv").innerHTML;
text = text.replace(anchrorRg, function() {
function check(arguments) {
let tempArr = [];
tempArr.push(Array.prototype.slice.call(arguments, 2, 4));
return `<a class="colorLinks" href=${tempArr[0][0]}>${tempArr[0][1]}</a><br>`;
}
return check(arguments);
});
document.querySelector(".CodeDiv").innerHTML = text;
}
</script>
</body>
</html>
The above code will produce the following output on window size greater than 700px −
On clicking the “Click Here” button − | [
{
"code": null,
"e": 1124,
"s": 1062,
"text": "To create and use syntax highligher, the code is as follows −"
},
{
"code": null,
"e": 1135,
"s": 1124,
"text": " Live Demo"
},
{
"code": null,
"e": 2470,
"s": 1135,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<style>\n body {\n font-family: \"Segoe UI\", Tahoma, Geneva, Verdana, sans-serif;\n }\n .colorLinks {\n color: rgb(131, 44, 212);\n text-decoration: none;\n font-size: 20px;\n font-weight: bold;\n }\n .setColor {\n margin: 20px;\n padding: 10px;\n border: none;\n font-size: 18px;\n background-color: rgb(226, 43, 89);\n color: white;\n }\n</style>\n</head>\n<body>\n<h1>Syntax Highlighting example</h1>\n<div class=\"CodeDiv\">\n<a href=\"https://www.google.com\">Go to google.com</a>\n<a href=\"https://www.facebook.com\">Go to facebook.com</a>\n</div>\n<button class=\"setColor\" onclick=\"highlightLinks()\">Click Here</button>\n<h2>Click the above button to highlight the links</h2>\n<script>\n var anchrorRg = /[^<]*(<a href=\"([^\"]+)\">([^<]+)<\\/a>)/g;\n function highlightLinks() {\n var text = document.querySelector(\".CodeDiv\").innerHTML;\n text = text.replace(anchrorRg, function() {\n function check(arguments) {\n let tempArr = [];\n tempArr.push(Array.prototype.slice.call(arguments, 2, 4));\n return `<a class=\"colorLinks\" href=${tempArr[0][0]}>${tempArr[0][1]}</a><br>`;\n }\n return check(arguments);\n });\n document.querySelector(\".CodeDiv\").innerHTML = text;\n }\n</script>\n</body>\n</html>"
},
{
"code": null,
"e": 2555,
"s": 2470,
"text": "The above code will produce the following output on window size greater than 700px −"
},
{
"code": null,
"e": 2593,
"s": 2555,
"text": "On clicking the “Click Here” button −"
}
] |
CSS - Shake Effect | It provides to move (an object) up and down or from side to side for an element.
@keyframes shake {
0%, 100% {transform: translateX(0);}
10%, 30%, 50%, 70%, 90% {transform: translateX(-10px);}
20%, 40%, 60%, 80% {transform: translateX(10px);}
}
Transform − Transform applies to 2d and 3d transformation to an element.
Transform − Transform applies to 2d and 3d transformation to an element.
Opacity − Opacity applies to an element to make translucence.
Opacity − Opacity applies to an element to make translucence.
<html>
<head>
<style>
.animated {
background-image: url(/css/images/logo.png);
background-repeat: no-repeat;
background-position: left top;
padding-top:95px;
margin-bottom:60px;
-webkit-animation-duration: 10s;
animation-duration: 10s;
-webkit-animation-fill-mode: both;
animation-fill-mode: both;
}
@-webkit-keyframes shake {
0%, 100% {-webkit-transform: translateX(0);}
10%, 30%, 50%, 70%, 90% {-webkit-transform: translateX(-10px);}
20%, 40%, 60%, 80% {-webkit-transform: translateX(10px);}
}
@keyframes shake {
0%, 100% {transform: translateX(0);}
10%, 30%, 50%, 70%, 90% {transform: translateX(-10px);}
20%, 40%, 60%, 80% {transform: translateX(10px);}
}
.shake {
-webkit-animation-name: shake;
animation-name: shake;
}
</style>
</head>
<body>
<div id = "animated-example" class = "animated shake"></div>
<button onclick = "myFunction()">Reload page</button>
<script>
function myFunction() {
location.reload();
}
</script>
</body>
</html>
It will produce the following result −
Academic Tutorials
Big Data & Analytics
Computer Programming
Computer Science
Databases
DevOps
Digital Marketing
Engineering Tutorials
Exams Syllabus
Famous Monuments
GATE Exams Tutorials
Latest Technologies
Machine Learning
Mainframe Development
Management Tutorials
Mathematics Tutorials
Microsoft Technologies
Misc tutorials
Mobile Development
Java Technologies
Python Technologies
SAP Tutorials
Programming Scripts
Selected Reading
Software Quality
Soft Skills
Telecom Tutorials
UPSC IAS Exams
Web Development
Sports Tutorials
XML Technologies
Multi-Language
Interview Questions
Academic Tutorials
Big Data & Analytics
Computer Programming
Computer Science
Databases
DevOps
Digital Marketing
Engineering Tutorials
Exams Syllabus
Famous Monuments
GATE Exams Tutorials
Latest Technologies
Machine Learning
Mainframe Development
Management Tutorials
Mathematics Tutorials
Microsoft Technologies
Misc tutorials
Mobile Development
Java Technologies
Python Technologies
SAP Tutorials
Programming Scripts
Selected Reading
Software Quality
Soft Skills
Telecom Tutorials
UPSC IAS Exams
Web Development
Sports Tutorials
XML Technologies
Multi-Language
Interview Questions
Selected Reading
UPSC IAS Exams Notes
Developer's Best Practices
Questions and Answers
Effective Resume Writing
HR Interview Questions
Computer Glossary
Who is Who
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2707,
"s": 2626,
"text": "It provides to move (an object) up and down or from side to side for an element."
},
{
"code": null,
"e": 2884,
"s": 2707,
"text": "@keyframes shake {\n 0%, 100% {transform: translateX(0);} \n 10%, 30%, 50%, 70%, 90% {transform: translateX(-10px);} \n 20%, 40%, 60%, 80% {transform: translateX(10px);} \n} "
},
{
"code": null,
"e": 2957,
"s": 2884,
"text": "Transform − Transform applies to 2d and 3d transformation to an element."
},
{
"code": null,
"e": 3030,
"s": 2957,
"text": "Transform − Transform applies to 2d and 3d transformation to an element."
},
{
"code": null,
"e": 3092,
"s": 3030,
"text": "Opacity − Opacity applies to an element to make translucence."
},
{
"code": null,
"e": 3154,
"s": 3092,
"text": "Opacity − Opacity applies to an element to make translucence."
},
{
"code": null,
"e": 4516,
"s": 3154,
"text": "<html>\n <head>\n <style>\n .animated {\n background-image: url(/css/images/logo.png); \n background-repeat: no-repeat;\n background-position: left top; \n padding-top:95px;\n margin-bottom:60px;\n -webkit-animation-duration: 10s; \n animation-duration: 10s; \n -webkit-animation-fill-mode: both; \n animation-fill-mode: both;\n }\n \n @-webkit-keyframes shake {\n 0%, 100% {-webkit-transform: translateX(0);} \n 10%, 30%, 50%, 70%, 90% {-webkit-transform: translateX(-10px);} \n 20%, 40%, 60%, 80% {-webkit-transform: translateX(10px);} \n }\n \n @keyframes shake { \n 0%, 100% {transform: translateX(0);} \n 10%, 30%, 50%, 70%, 90% {transform: translateX(-10px);} \n 20%, 40%, 60%, 80% {transform: translateX(10px);} \n }\n \n .shake { \n -webkit-animation-name: shake; \n animation-name: shake; \n }\n </style>\n </head>\n\n <body>\n \n <div id = \"animated-example\" class = \"animated shake\"></div>\n <button onclick = \"myFunction()\">Reload page</button>\n \n <script>\n function myFunction() {\n location.reload();\n }\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 4555,
"s": 4516,
"text": "It will produce the following result −"
},
{
"code": null,
"e": 5202,
"s": 4555,
"text": "\n\n Academic Tutorials\n Big Data & Analytics \n Computer Programming \n Computer Science \n Databases \n DevOps \n Digital Marketing \n Engineering Tutorials \n Exams Syllabus \n Famous Monuments \n GATE Exams Tutorials\n Latest Technologies \n Machine Learning \n Mainframe Development \n Management Tutorials \n Mathematics Tutorials\n Microsoft Technologies \n Misc tutorials \n Mobile Development \n Java Technologies \n Python Technologies \n SAP Tutorials \nProgramming Scripts \n Selected Reading \n Software Quality \n Soft Skills \n Telecom Tutorials \n UPSC IAS Exams \n Web Development \n Sports Tutorials \n XML Technologies \n Multi-Language\n Interview Questions\n\n"
},
{
"code": null,
"e": 5222,
"s": 5202,
"text": " Academic Tutorials"
},
{
"code": null,
"e": 5245,
"s": 5222,
"text": " Big Data & Analytics "
},
{
"code": null,
"e": 5268,
"s": 5245,
"text": " Computer Programming "
},
{
"code": null,
"e": 5287,
"s": 5268,
"text": " Computer Science "
},
{
"code": null,
"e": 5299,
"s": 5287,
"text": " Databases "
},
{
"code": null,
"e": 5308,
"s": 5299,
"text": " DevOps "
},
{
"code": null,
"e": 5328,
"s": 5308,
"text": " Digital Marketing "
},
{
"code": null,
"e": 5352,
"s": 5328,
"text": " Engineering Tutorials "
},
{
"code": null,
"e": 5369,
"s": 5352,
"text": " Exams Syllabus "
},
{
"code": null,
"e": 5388,
"s": 5369,
"text": " Famous Monuments "
},
{
"code": null,
"e": 5410,
"s": 5388,
"text": " GATE Exams Tutorials"
},
{
"code": null,
"e": 5432,
"s": 5410,
"text": " Latest Technologies "
},
{
"code": null,
"e": 5451,
"s": 5432,
"text": " Machine Learning "
},
{
"code": null,
"e": 5475,
"s": 5451,
"text": " Mainframe Development "
},
{
"code": null,
"e": 5498,
"s": 5475,
"text": " Management Tutorials "
},
{
"code": null,
"e": 5521,
"s": 5498,
"text": " Mathematics Tutorials"
},
{
"code": null,
"e": 5546,
"s": 5521,
"text": " Microsoft Technologies "
},
{
"code": null,
"e": 5563,
"s": 5546,
"text": " Misc tutorials "
},
{
"code": null,
"e": 5584,
"s": 5563,
"text": " Mobile Development "
},
{
"code": null,
"e": 5604,
"s": 5584,
"text": " Java Technologies "
},
{
"code": null,
"e": 5626,
"s": 5604,
"text": " Python Technologies "
},
{
"code": null,
"e": 5642,
"s": 5626,
"text": " SAP Tutorials "
},
{
"code": null,
"e": 5663,
"s": 5642,
"text": "Programming Scripts "
},
{
"code": null,
"e": 5682,
"s": 5663,
"text": " Selected Reading "
},
{
"code": null,
"e": 5701,
"s": 5682,
"text": " Software Quality "
},
{
"code": null,
"e": 5715,
"s": 5701,
"text": " Soft Skills "
},
{
"code": null,
"e": 5735,
"s": 5715,
"text": " Telecom Tutorials "
},
{
"code": null,
"e": 5752,
"s": 5735,
"text": " UPSC IAS Exams "
},
{
"code": null,
"e": 5770,
"s": 5752,
"text": " Web Development "
},
{
"code": null,
"e": 5789,
"s": 5770,
"text": " Sports Tutorials "
},
{
"code": null,
"e": 5808,
"s": 5789,
"text": " XML Technologies "
},
{
"code": null,
"e": 5824,
"s": 5808,
"text": " Multi-Language"
},
{
"code": null,
"e": 5845,
"s": 5824,
"text": " Interview Questions"
},
{
"code": null,
"e": 5862,
"s": 5845,
"text": "Selected Reading"
},
{
"code": null,
"e": 5883,
"s": 5862,
"text": "UPSC IAS Exams Notes"
},
{
"code": null,
"e": 5910,
"s": 5883,
"text": "Developer's Best Practices"
},
{
"code": null,
"e": 5932,
"s": 5910,
"text": "Questions and Answers"
},
{
"code": null,
"e": 5957,
"s": 5932,
"text": "Effective Resume Writing"
},
{
"code": null,
"e": 5980,
"s": 5957,
"text": "HR Interview Questions"
},
{
"code": null,
"e": 5998,
"s": 5980,
"text": "Computer Glossary"
},
{
"code": null,
"e": 6009,
"s": 5998,
"text": "Who is Who"
},
{
"code": null,
"e": 6016,
"s": 6009,
"text": " Print"
},
{
"code": null,
"e": 6027,
"s": 6016,
"text": " Add Notes"
}
] |
Check if the number is Fibonacci | Practice | GeeksforGeeks | Check if a given number N is the Fibonacci number. A Fibonacci number is a number that occurs in the Fibonacci series.
Example 1:
Input:
N = 34
Output:
Yes
Explanation:
34 is one of the numbers
of the Fibonacci series.
Example 2:
Input:
N = 41
Output:
No
Explanation:
41 is one of the numbers
of the Fibonacci series.
Your Task:
You don't need to read input or print anything. Your task is to complete the function checkFibonacci() which takes an integer N and returns a string "Yes" if it is a Fibonacci number else "No".
Expected Time Complexity: O(1)
Expected Auxiliary Space: O(1)
Constraints:
1 <= N <= 100
0
mohdraza2 months ago
0.1simport java.io.*;import java.util.*;
class GFG{ public static void main(String args[])throws IOException { BufferedReader read = new BufferedReader(new InputStreamReader(System.in)); int t = Integer.parseInt(read.readLine()); while(t-- > 0) { int N = Integer.parseInt(read.readLine()); Solution ob = new Solution(); System.out.println(ob.checkFibonacci(N)); } }}// } Driver Code Ends
//User function Template for Javaclass Solution{ static String checkFibonacci(int N){ int x=0,y=1; int z=0; String ans=""; if(N==2||N==3||N==1||N==0||N==5){ return "Yes"; } for(int i=0;i<=N/2;i++){ z=x+y; x=y; y=z; if(z!=N) { ans="No"; continue; }
+1
madhucoder2 months ago
Java Code:
static String checkFibonacci(int N){
// code here
int x = 5*N*N+4;
int y = 5*N*N-4;
if((Math.sqrt(x)==Math.floor(Math.sqrt(x))) || (Math.sqrt(y)==Math.floor(Math.sqrt(y))))
{
return "Yes";
}
else
{
return "No";
}
}
0
madhucoder
This comment was deleted.
0
chechipresh2 months ago
class Solution{
public:
string checkFibonacci(int N){
// code here
if (N==0 || N==1)
return "Yes";
int a=0,b=1;
int s=a+b;
while (s<N){
a=b;
b=s;
s=a+b;
if (s==N)
return "Yes";
}
return "No";
}
};
0
ramdurgasais2 months ago
IN python
def checkFibonacci (ob,N):
# code here
a, b = 0, 1
if N in (a,b) : return "Yes"
while N > b:
a, b = b, a+b
if b == N : return "Yes"
return "No"
0
nikhillko023 months ago
def checkFibonacci(ob,N): # code here l=[0 for i in range(N+3)] if N==0 or N==1 or N==2: return "Yes" l[1]=1 for i in range(2,N+3): l[i]=l[i-1]+l[i-2] if N in l: return "Yes" else: return "No"
0
pranavtripathi20013 months ago
string checkFibonacci(int N){ int x=0,y=1; int z=0; string ans=""; for(int i=0;i<=N;i++) { z=x+y; x=y; y=z; if(z==N) { ans= "Yes"; break; } else{ ans= "No"; } } return ans; }
+1
mdmozammilraza064 months ago
class Solution{ static String checkFibonacci(int N){ // code here int a = 0; int b = 1; int c = 0; String ans = ""; for (int i=0; i<=N; i++) { c = a+b; a = b; b = c; if (c == N) { ans = "Yes"; break; } else { ans = "No"; } } return ans; }}
0
mohammadadnaan20034 months ago
int ans = (N * (N+1)) / 2; for(int i=1;i<=ans;i++) { for(int j=1;j<=N;j++) { cout<<i<<" "; } cout<<endl; }
+1
mayank20214 months ago
#python3 def checkFibonacci (ob,N): # code here i=1 j=1 while N > j : temp=i i=j j=temp+j if N <j: return "No" elif N==j: return "Yes"
We strongly recommend solving this problem on your own before viewing its editorial. Do you still
want to view the editorial?
Login to access your submissions.
Problem
Contest
Reset the IDE using the second button on the top right corner.
Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values.
Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints.
You can access the hints to get an idea about what is expected of you as well as the final solution code.
You can view the solutions submitted by other users from the submission tab. | [
{
"code": null,
"e": 357,
"s": 238,
"text": "Check if a given number N is the Fibonacci number. A Fibonacci number is a number that occurs in the Fibonacci series."
},
{
"code": null,
"e": 370,
"s": 359,
"text": "Example 1:"
},
{
"code": null,
"e": 460,
"s": 370,
"text": "Input:\nN = 34\nOutput:\nYes\nExplanation:\n34 is one of the numbers \nof the Fibonacci series."
},
{
"code": null,
"e": 473,
"s": 462,
"text": "Example 2:"
},
{
"code": null,
"e": 562,
"s": 473,
"text": "Input:\nN = 41\nOutput:\nNo\nExplanation:\n41 is one of the numbers \nof the Fibonacci series."
},
{
"code": null,
"e": 577,
"s": 566,
"text": "Your Task:"
},
{
"code": null,
"e": 771,
"s": 577,
"text": "You don't need to read input or print anything. Your task is to complete the function checkFibonacci() which takes an integer N and returns a string \"Yes\" if it is a Fibonacci number else \"No\"."
},
{
"code": null,
"e": 835,
"s": 773,
"text": "Expected Time Complexity: O(1)\nExpected Auxiliary Space: O(1)"
},
{
"code": null,
"e": 864,
"s": 837,
"text": "Constraints:\n1 <= N <= 100"
},
{
"code": null,
"e": 868,
"s": 866,
"text": "0"
},
{
"code": null,
"e": 889,
"s": 868,
"text": "mohdraza2 months ago"
},
{
"code": null,
"e": 930,
"s": 889,
"text": "0.1simport java.io.*;import java.util.*;"
},
{
"code": null,
"e": 1343,
"s": 930,
"text": "class GFG{ public static void main(String args[])throws IOException { BufferedReader read = new BufferedReader(new InputStreamReader(System.in)); int t = Integer.parseInt(read.readLine()); while(t-- > 0) { int N = Integer.parseInt(read.readLine()); Solution ob = new Solution(); System.out.println(ob.checkFibonacci(N)); } }}// } Driver Code Ends"
},
{
"code": null,
"e": 1697,
"s": 1343,
"text": "//User function Template for Javaclass Solution{ static String checkFibonacci(int N){ int x=0,y=1; int z=0; String ans=\"\"; if(N==2||N==3||N==1||N==0||N==5){ return \"Yes\"; } for(int i=0;i<=N/2;i++){ z=x+y; x=y; y=z; if(z!=N) { ans=\"No\"; continue; }"
},
{
"code": null,
"e": 1700,
"s": 1697,
"text": "+1"
},
{
"code": null,
"e": 1723,
"s": 1700,
"text": "madhucoder2 months ago"
},
{
"code": null,
"e": 2037,
"s": 1723,
"text": "Java Code:\nstatic String checkFibonacci(int N){\n // code here\n int x = 5*N*N+4;\n int y = 5*N*N-4;\n if((Math.sqrt(x)==Math.floor(Math.sqrt(x))) || (Math.sqrt(y)==Math.floor(Math.sqrt(y))))\n {\n return \"Yes\";\n }\n else\n {\n return \"No\";\n }\n }"
},
{
"code": null,
"e": 2039,
"s": 2037,
"text": "0"
},
{
"code": null,
"e": 2050,
"s": 2039,
"text": "madhucoder"
},
{
"code": null,
"e": 2076,
"s": 2050,
"text": "This comment was deleted."
},
{
"code": null,
"e": 2078,
"s": 2076,
"text": "0"
},
{
"code": null,
"e": 2102,
"s": 2078,
"text": "chechipresh2 months ago"
},
{
"code": null,
"e": 2445,
"s": 2102,
"text": "\nclass Solution{ \npublic:\n string checkFibonacci(int N){\n // code here \n if (N==0 || N==1)\n return \"Yes\";\n int a=0,b=1;\n int s=a+b;\n while (s<N){\n a=b;\n b=s;\n s=a+b;\n if (s==N)\n return \"Yes\";\n }\n return \"No\";\n }\n};\n"
},
{
"code": null,
"e": 2447,
"s": 2445,
"text": "0"
},
{
"code": null,
"e": 2472,
"s": 2447,
"text": "ramdurgasais2 months ago"
},
{
"code": null,
"e": 2483,
"s": 2472,
"text": " IN python"
},
{
"code": null,
"e": 2693,
"s": 2483,
"text": "def checkFibonacci (ob,N):\n # code here \n a, b = 0, 1 \n if N in (a,b) : return \"Yes\"\n while N > b:\n a, b = b, a+b\n if b == N : return \"Yes\"\n return \"No\""
},
{
"code": null,
"e": 2695,
"s": 2693,
"text": "0"
},
{
"code": null,
"e": 2719,
"s": 2695,
"text": "nikhillko023 months ago"
},
{
"code": null,
"e": 2994,
"s": 2719,
"text": "def checkFibonacci(ob,N): # code here l=[0 for i in range(N+3)] if N==0 or N==1 or N==2: return \"Yes\" l[1]=1 for i in range(2,N+3): l[i]=l[i-1]+l[i-2] if N in l: return \"Yes\" else: return \"No\""
},
{
"code": null,
"e": 2996,
"s": 2994,
"text": "0"
},
{
"code": null,
"e": 3027,
"s": 2996,
"text": "pranavtripathi20013 months ago"
},
{
"code": null,
"e": 3372,
"s": 3027,
"text": "string checkFibonacci(int N){ int x=0,y=1; int z=0; string ans=\"\"; for(int i=0;i<=N;i++) { z=x+y; x=y; y=z; if(z==N) { ans= \"Yes\"; break; } else{ ans= \"No\"; } } return ans; }"
},
{
"code": null,
"e": 3375,
"s": 3372,
"text": "+1"
},
{
"code": null,
"e": 3404,
"s": 3375,
"text": "mdmozammilraza064 months ago"
},
{
"code": null,
"e": 3837,
"s": 3404,
"text": "class Solution{ static String checkFibonacci(int N){ // code here int a = 0; int b = 1; int c = 0; String ans = \"\"; for (int i=0; i<=N; i++) { c = a+b; a = b; b = c; if (c == N) { ans = \"Yes\"; break; } else { ans = \"No\"; } } return ans; }} "
},
{
"code": null,
"e": 3839,
"s": 3837,
"text": "0"
},
{
"code": null,
"e": 3870,
"s": 3839,
"text": "mohammadadnaan20034 months ago"
},
{
"code": null,
"e": 4010,
"s": 3870,
"text": " int ans = (N * (N+1)) / 2; for(int i=1;i<=ans;i++) { for(int j=1;j<=N;j++) { cout<<i<<\" \"; } cout<<endl; }"
},
{
"code": null,
"e": 4013,
"s": 4010,
"text": "+1"
},
{
"code": null,
"e": 4036,
"s": 4013,
"text": "mayank20214 months ago"
},
{
"code": null,
"e": 4261,
"s": 4036,
"text": "#python3 def checkFibonacci (ob,N): # code here i=1 j=1 while N > j : temp=i i=j j=temp+j if N <j: return \"No\" elif N==j: return \"Yes\""
},
{
"code": null,
"e": 4407,
"s": 4261,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4443,
"s": 4407,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4453,
"s": 4443,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4463,
"s": 4453,
"text": "\nContest\n"
},
{
"code": null,
"e": 4526,
"s": 4463,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 4674,
"s": 4526,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 4882,
"s": 4674,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 4988,
"s": 4882,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
Explain the array of structures in C language | An array of structure in C programming is a collection of different datatype variables, grouped together under a single name.
General form of structure declaration
The structural declaration is as follows −
struct tagname{
datatype member1;
datatype member2;
datatype member n;
};
Here, struct is the keyword
tagname specifies name of structure
member1, member2 specifies the data items that make up structure.
The following example shows the usage of array of structures in C programming −
struct book{
int pages;
char author [30];
float price;
};
The most common use of structure in C programming is an array of structures.
The most common use of structure in C programming is an array of structures.
To declare an array of structure, first the structure must be defined and then an array variable of that type should be defined.
To declare an array of structure, first the structure must be defined and then an array variable of that type should be defined.
For Example − struct book b[10]; //10 elements in an array of structures of type ‘book’
For Example − struct book b[10]; //10 elements in an array of structures of type ‘book’
The following program shows the usage of array of structures.
Live Demo
#include <stdio.h>
#include <string.h>
struct student{
int id;
char name[30];
float percentage;
};
int main(){
int i;
struct student record[2];
// 1st student's record
record[0].id=1;
strcpy(record[0].name, "Bhanu");
record[0].percentage = 86.5;
// 2nd student's record
record[1].id=2;
strcpy(record[1].name, "Priya");
record[1].percentage = 90.5;
// 3rd student's record
record[2].id=3;
strcpy(record[2].name, "Hari");
record[2].percentage = 81.5;
for(i=0; i<3; i++){
printf(" Records of STUDENT : %d \n", i+1);
printf(" Id is: %d \n", record[i].id);
printf(" Name is: %s \n", record[i].name);
printf(" Percentage is: %f\n\n",record[i].percentage);
}
return 0;
}
When the above program is executed, it produces the following result −
Records of STUDENT : 1
Id is: 1
Name is: Bhanu
Percentage is: 86.500000
Records of STUDENT : 2
Id is: 2
Name is: Priya
Percentage is: 90.500000
Records of STUDENT : 3
Id is: 3
Name is: Hari
Percentage is: 81.500000 | [
{
"code": null,
"e": 1188,
"s": 1062,
"text": "An array of structure in C programming is a collection of different datatype variables, grouped together under a single name."
},
{
"code": null,
"e": 1226,
"s": 1188,
"text": "General form of structure declaration"
},
{
"code": null,
"e": 1269,
"s": 1226,
"text": "The structural declaration is as follows −"
},
{
"code": null,
"e": 1352,
"s": 1269,
"text": "struct tagname{\n datatype member1;\n datatype member2;\n datatype member n;\n};"
},
{
"code": null,
"e": 1380,
"s": 1352,
"text": "Here, struct is the keyword"
},
{
"code": null,
"e": 1420,
"s": 1380,
"text": " tagname specifies name of structure"
},
{
"code": null,
"e": 1490,
"s": 1420,
"text": " member1, member2 specifies the data items that make up structure."
},
{
"code": null,
"e": 1570,
"s": 1490,
"text": "The following example shows the usage of array of structures in C programming −"
},
{
"code": null,
"e": 1637,
"s": 1570,
"text": "struct book{\n int pages;\n char author [30];\n float price;\n};"
},
{
"code": null,
"e": 1714,
"s": 1637,
"text": "The most common use of structure in C programming is an array of structures."
},
{
"code": null,
"e": 1791,
"s": 1714,
"text": "The most common use of structure in C programming is an array of structures."
},
{
"code": null,
"e": 1920,
"s": 1791,
"text": "To declare an array of structure, first the structure must be defined and then an array variable of that type should be defined."
},
{
"code": null,
"e": 2049,
"s": 1920,
"text": "To declare an array of structure, first the structure must be defined and then an array variable of that type should be defined."
},
{
"code": null,
"e": 2137,
"s": 2049,
"text": "For Example − struct book b[10]; //10 elements in an array of structures of type ‘book’"
},
{
"code": null,
"e": 2225,
"s": 2137,
"text": "For Example − struct book b[10]; //10 elements in an array of structures of type ‘book’"
},
{
"code": null,
"e": 2287,
"s": 2225,
"text": "The following program shows the usage of array of structures."
},
{
"code": null,
"e": 2298,
"s": 2287,
"text": " Live Demo"
},
{
"code": null,
"e": 3046,
"s": 2298,
"text": "#include <stdio.h>\n#include <string.h>\nstruct student{\n int id;\n char name[30];\n float percentage;\n};\nint main(){\n int i;\n struct student record[2];\n // 1st student's record\n record[0].id=1;\n strcpy(record[0].name, \"Bhanu\");\n record[0].percentage = 86.5;\n // 2nd student's record\n record[1].id=2;\n strcpy(record[1].name, \"Priya\");\n record[1].percentage = 90.5;\n // 3rd student's record\n record[2].id=3;\n strcpy(record[2].name, \"Hari\");\n record[2].percentage = 81.5;\n for(i=0; i<3; i++){\n printf(\" Records of STUDENT : %d \\n\", i+1);\n printf(\" Id is: %d \\n\", record[i].id);\n printf(\" Name is: %s \\n\", record[i].name);\n printf(\" Percentage is: %f\\n\\n\",record[i].percentage);\n }\n return 0;\n}"
},
{
"code": null,
"e": 3117,
"s": 3046,
"text": "When the above program is executed, it produces the following result −"
},
{
"code": null,
"e": 3332,
"s": 3117,
"text": "Records of STUDENT : 1\nId is: 1\nName is: Bhanu\nPercentage is: 86.500000\nRecords of STUDENT : 2\nId is: 2\nName is: Priya\nPercentage is: 90.500000\nRecords of STUDENT : 3\nId is: 3\nName is: Hari\nPercentage is: 81.500000"
}
] |
How to use openssl for generating ssl certificates private keys and csrs | OpenSSL is a CLI (Command Line Tool) which can be used to secure the server to generate public key infrastructure (PKI) and HTTPS. This article helps you as a quick reference to understand OpenSSL commands which are very useful in common, and for everyday scenarios especially for system administrators.
If we want to obtain SSL certificate from a certificate authority (CA), we must generate a certificate signing request (CSR). A CSR consists of mainly the public key of a key pair, and some additional information. Both these components are merged into the certificate whenever we are signing for the CSR.
While generating a CSR, the system will prompt for information regarding the certificate and this information is called as Distinguished Name (DN). The important field in the DN is the Common Name (CN) which should be the FQND (Fully Qualified Domain Name) of the server or the host where we intend to use the certificate with.
The next item in a DN is to provide the additional information about our business or organization. If we purchase an SSL certificate from a certificate authority (CA), it is very important and required that these additional fields like “Organization” should reflect your organization for details.
Here is a general example for the CSR information prompt, when we run the OpenSSL command to generate the CSR.
Country Name (2 letter code) [US]:IN
State or Province Name (full name) [Some-State]:Telengana
Locality Name (eg, city) []:Hyderabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Ansole Pvt Ltd.
Organizational Unit Name (eg, section) []:Application
Common Name (e.g. server FQDN or YOUR name) []:domainname.com
Email Address []:[email protected]
We can also provide the information by non-interactive answers for the CSR information generation, we can do this by adding the –subj option to any OpenSSL commands that we try to generate or run.
Below is an example for the –subj option where we can provide the information of the organization where we want to use this CSR.
-subj "/C=IN/ST=Telengana/L=Hyderabad/O=Ansole Pvt Ltd/CN=domainname.com"
In this section, we will cover about OpenSSL commands which are related to generating the CSR. This CSR can be used to request an SSL certificate from a certificate authority.
If we want to use HTTPS (HTTP over TLS) to secure the Apache or Nginx web servers (using a Certificate Authority (CA) to issue the SSL certificate). Also, the ‘.CSR’ which we will be generating has to be sent to a CA for requesting the certificate for obtaining CA-signed SSL.
Below is the command to create a 2048-bit private key for ‘domain.key’ and a CSR ‘domain.csr’ from the scratch.
$ openssl req -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr
Generating a 2048 bit RSA private key
..............................+++
.......................................+++
writing new private key to 'domain.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:Telengana
Locality Name (eg, city) []:Hyderabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd
Organizational Unit Name (eg, section) []:Application
Common Name (e.g. server FQDN or YOUR name) []:domainname.com
Email Address []:[email protected]
The ‘–newkey rsa:2048’ is the option which we are specifying that the key should be 2048-bit using the RSA algorithm. The ’ –nodes’ option is to specifying that the private key should not be encrypted with a pass phrase. The ‘-new’ option, indicates that a CSR is being generated.
Here we will learn about, how to generate a CSR for which you have the private key.
Below is the command to create a new .csr file based on the private key which we already have.
$ openssl req -key domain.key -new -out domain.csr
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:Telengana
Locality Name (eg, city) []:Hyderabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd
Organizational Unit Name (eg, section) []:Applicatoin
Common Name (e.g. server FQDN or YOUR name) []:domainname.com
Email Address []:[email protected]
Here we can generate or renew an existing certificate where we miss the CSR file due to some reason. Here, the CSR will extract the information using the .CRT file which we have.
Below is the example for generating –
$ openssl x509 in domain.crt-signkey domain.key -x509toreq -out domain.csr
Where -x509toreq is specified that we are using the x509 certificate files to make a CSR.
Here we will generate the Certificate to secure the web server where we use the self-signed certificate to use for development and testing purpose.
$ openssl req -newkey rsa:2048 -nodes -keyout domain.key-x509 -days 365 -out domain.crt
Generating a 2048 bit RSA private key
................+++
.......................................................+++
writing new private key to 'domain.key'
-----
You are about to be asked to enter information that will be incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:IN
State or Province Name (full name) [Some-State]:Telengana
Locality Name (eg, city) []:Hyderabad
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd
Organizational Unit Name (eg, section) []:Application
Common Name (e.g. server FQDN or YOUR name) []:domainname.com
Email Address []:[email protected]
Here, we generate self-signed certificate using –x509 option, we can generate certificates with a validity of 365 days using –days 365 and a temporary .CSR files are generated using the above information.
Please note that, CSR files are encoded with .PEM format (which is not readable by the humans). This is required to view a certificate. In this section, we can cover the OpenSSL commands which are encoded with .PEM files.
The below command will be used to view the contents of the .CRT files Ex (domain.crt) in the plain text format.
$ sudo openssl x509 -text -noout -in domain.crt
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9717655772991591277 (0x86dc0c706eb0136d)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=IN, ST=Telengana, L=Hyderabad, O=Ansol Pvt Ltd, OU=Application , CN=domainname.com/[email protected]
Validity
Not Before: Jun 13 14:23:52 2016 GMT
Not After : Jun 13 14:23:52 2017 GMT
Subject: C=IN, ST=Telengana, L=Hyderabad, O=Ansol Pvt Ltd, OU=Applicatio n, CN=domainname.com/[email protected]
....
In this section, will see how to use OpenSSL commands that are specific to creating and verifying the private keys.
Below is the command to create a password-protected and, 2048-bit encrypted private key file (ex. domain.key) –
$ openssl genrsa -des3 -out domain.key 2048
Enter a password when prompted to complete the process.
Below is the command to check that a private key which we have generated (ex: domain.key) is a valid key or not
$ openssl rsa -check -in domain.key
If the private key is encrypted, you will be prompted to enter the pass phrase. Upon the successful entry, the unencrypted key will be the output on the terminal.
In this article, we have learnt some commands and usage of OpenSSL commands which deals with SSL certificates where the OpenSSL has lots of features. We will learn more features and usage in the future. I hope this article will help us to understand some basic features of the OpenSSL. | [
{
"code": null,
"e": 1366,
"s": 1062,
"text": "OpenSSL is a CLI (Command Line Tool) which can be used to secure the server to generate public key infrastructure (PKI) and HTTPS. This article helps you as a quick reference to understand OpenSSL commands which are very useful in common, and for everyday scenarios especially for system administrators."
},
{
"code": null,
"e": 1671,
"s": 1366,
"text": "If we want to obtain SSL certificate from a certificate authority (CA), we must generate a certificate signing request (CSR). A CSR consists of mainly the public key of a key pair, and some additional information. Both these components are merged into the certificate whenever we are signing for the CSR."
},
{
"code": null,
"e": 1999,
"s": 1671,
"text": "While generating a CSR, the system will prompt for information regarding the certificate and this information is called as Distinguished Name (DN). The important field in the DN is the Common Name (CN) which should be the FQND (Fully Qualified Domain Name) of the server or the host where we intend to use the certificate with."
},
{
"code": null,
"e": 2296,
"s": 1999,
"text": "The next item in a DN is to provide the additional information about our business or organization. If we purchase an SSL certificate from a certificate authority (CA), it is very important and required that these additional fields like “Organization” should reflect your organization for details."
},
{
"code": null,
"e": 2407,
"s": 2296,
"text": "Here is a general example for the CSR information prompt, when we run the OpenSSL command to generate the CSR."
},
{
"code": null,
"e": 2771,
"s": 2407,
"text": "Country Name (2 letter code) [US]:IN\nState or Province Name (full name) [Some-State]:Telengana\nLocality Name (eg, city) []:Hyderabad\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Ansole Pvt Ltd.\nOrganizational Unit Name (eg, section) []:Application\nCommon Name (e.g. server FQDN or YOUR name) []:domainname.com\nEmail Address []:[email protected]"
},
{
"code": null,
"e": 2968,
"s": 2771,
"text": "We can also provide the information by non-interactive answers for the CSR information generation, we can do this by adding the –subj option to any OpenSSL commands that we try to generate or run."
},
{
"code": null,
"e": 3097,
"s": 2968,
"text": "Below is an example for the –subj option where we can provide the information of the organization where we want to use this CSR."
},
{
"code": null,
"e": 3171,
"s": 3097,
"text": "-subj \"/C=IN/ST=Telengana/L=Hyderabad/O=Ansole Pvt Ltd/CN=domainname.com\""
},
{
"code": null,
"e": 3347,
"s": 3171,
"text": "In this section, we will cover about OpenSSL commands which are related to generating the CSR. This CSR can be used to request an SSL certificate from a certificate authority."
},
{
"code": null,
"e": 3624,
"s": 3347,
"text": "If we want to use HTTPS (HTTP over TLS) to secure the Apache or Nginx web servers (using a Certificate Authority (CA) to issue the SSL certificate). Also, the ‘.CSR’ which we will be generating has to be sent to a CA for requesting the certificate for obtaining CA-signed SSL."
},
{
"code": null,
"e": 3736,
"s": 3624,
"text": "Below is the command to create a 2048-bit private key for ‘domain.key’ and a CSR ‘domain.csr’ from the scratch."
},
{
"code": null,
"e": 4671,
"s": 3736,
"text": "$ openssl req -newkey rsa:2048 -nodes -keyout domain.key -out domain.csr\nGenerating a 2048 bit RSA private key\n..............................+++\n.......................................+++\nwriting new private key to 'domain.key'\n-----\nYou are about to be asked to enter information that will be incorporated\ninto your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:IN\nState or Province Name (full name) [Some-State]:Telengana\nLocality Name (eg, city) []:Hyderabad\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd\nOrganizational Unit Name (eg, section) []:Application\nCommon Name (e.g. server FQDN or YOUR name) []:domainname.com\nEmail Address []:[email protected]"
},
{
"code": null,
"e": 4952,
"s": 4671,
"text": "The ‘–newkey rsa:2048’ is the option which we are specifying that the key should be 2048-bit using the RSA algorithm. The ’ –nodes’ option is to specifying that the private key should not be encrypted with a pass phrase. The ‘-new’ option, indicates that a CSR is being generated."
},
{
"code": null,
"e": 5036,
"s": 4952,
"text": "Here we will learn about, how to generate a CSR for which you have the private key."
},
{
"code": null,
"e": 5131,
"s": 5036,
"text": "Below is the command to create a new .csr file based on the private key which we already have."
},
{
"code": null,
"e": 5883,
"s": 5131,
"text": "$ openssl req -key domain.key -new -out domain.csr\nYou are about to be asked to enter information that will be incorporated into your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:IN\nState or Province Name (full name) [Some-State]:Telengana\nLocality Name (eg, city) []:Hyderabad\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd\nOrganizational Unit Name (eg, section) []:Applicatoin\nCommon Name (e.g. server FQDN or YOUR name) []:domainname.com\nEmail Address []:[email protected]"
},
{
"code": null,
"e": 6062,
"s": 5883,
"text": "Here we can generate or renew an existing certificate where we miss the CSR file due to some reason. Here, the CSR will extract the information using the .CRT file which we have."
},
{
"code": null,
"e": 6100,
"s": 6062,
"text": "Below is the example for generating –"
},
{
"code": null,
"e": 6175,
"s": 6100,
"text": "$ openssl x509 in domain.crt-signkey domain.key -x509toreq -out domain.csr"
},
{
"code": null,
"e": 6265,
"s": 6175,
"text": "Where -x509toreq is specified that we are using the x509 certificate files to make a CSR."
},
{
"code": null,
"e": 6413,
"s": 6265,
"text": "Here we will generate the Certificate to secure the web server where we use the self-signed certificate to use for development and testing purpose."
},
{
"code": null,
"e": 7365,
"s": 6413,
"text": "$ openssl req -newkey rsa:2048 -nodes -keyout domain.key-x509 -days 365 -out domain.crt\nGenerating a 2048 bit RSA private key\n................+++\n.......................................................+++\nwriting new private key to 'domain.key'\n-----\nYou are about to be asked to enter information that will be incorporated into your certificate request.\nWhat you are about to enter is what is called a Distinguished Name or a DN.\nThere are quite a few fields but you can leave some blank\nFor some fields there will be a default value,\nIf you enter '.', the field will be left blank.\n-----\nCountry Name (2 letter code) [AU]:IN\nState or Province Name (full name) [Some-State]:Telengana\nLocality Name (eg, city) []:Hyderabad\nOrganization Name (eg, company) [Internet Widgits Pty Ltd]:Ansol Pvt Ltd\nOrganizational Unit Name (eg, section) []:Application\nCommon Name (e.g. server FQDN or YOUR name) []:domainname.com\nEmail Address []:[email protected]"
},
{
"code": null,
"e": 7570,
"s": 7365,
"text": "Here, we generate self-signed certificate using –x509 option, we can generate certificates with a validity of 365 days using –days 365 and a temporary .CSR files are generated using the above information."
},
{
"code": null,
"e": 7792,
"s": 7570,
"text": "Please note that, CSR files are encoded with .PEM format (which is not readable by the humans). This is required to view a certificate. In this section, we can cover the OpenSSL commands which are encoded with .PEM files."
},
{
"code": null,
"e": 7904,
"s": 7792,
"text": "The below command will be used to view the contents of the .CRT files Ex (domain.crt) in the plain text format."
},
{
"code": null,
"e": 8436,
"s": 7904,
"text": "$ sudo openssl x509 -text -noout -in domain.crt\nCertificate:\nData:\nVersion: 3 (0x2)\nSerial Number: 9717655772991591277 (0x86dc0c706eb0136d)\nSignature Algorithm: sha256WithRSAEncryption\nIssuer: C=IN, ST=Telengana, L=Hyderabad, O=Ansol Pvt Ltd, OU=Application , CN=domainname.com/[email protected]\nValidity\nNot Before: Jun 13 14:23:52 2016 GMT\nNot After : Jun 13 14:23:52 2017 GMT\nSubject: C=IN, ST=Telengana, L=Hyderabad, O=Ansol Pvt Ltd, OU=Applicatio n, CN=domainname.com/[email protected]\n...."
},
{
"code": null,
"e": 8552,
"s": 8436,
"text": "In this section, will see how to use OpenSSL commands that are specific to creating and verifying the private keys."
},
{
"code": null,
"e": 8664,
"s": 8552,
"text": "Below is the command to create a password-protected and, 2048-bit encrypted private key file (ex. domain.key) –"
},
{
"code": null,
"e": 8708,
"s": 8664,
"text": "$ openssl genrsa -des3 -out domain.key 2048"
},
{
"code": null,
"e": 8764,
"s": 8708,
"text": "Enter a password when prompted to complete the process."
},
{
"code": null,
"e": 8877,
"s": 8764,
"text": "Below is the command to check that a private key which we have generated (ex: domain.key) is a valid key or not "
},
{
"code": null,
"e": 8913,
"s": 8877,
"text": "$ openssl rsa -check -in domain.key"
},
{
"code": null,
"e": 9076,
"s": 8913,
"text": "If the private key is encrypted, you will be prompted to enter the pass phrase. Upon the successful entry, the unencrypted key will be the output on the terminal."
},
{
"code": null,
"e": 9362,
"s": 9076,
"text": "In this article, we have learnt some commands and usage of OpenSSL commands which deals with SSL certificates where the OpenSSL has lots of features. We will learn more features and usage in the future. I hope this article will help us to understand some basic features of the OpenSSL."
}
] |
JavaScript Array Sort | The sort() method sorts an array alphabetically:
The reverse() method reverses the elements in an array.
You can use it to
sort an array in descending order:
By default, the sort() function sorts values as strings.
This works well for strings ("Apple" comes before "Banana").
However, if numbers are sorted as strings, "25" is bigger than "100",
because "2" is bigger than "1".
Because of this, the sort() method will produce incorrect result when sorting
numbers.
You can fix this by providing a compare function:
Use the same trick to sort an array descending:
The purpose of the compare function is to define an alternative sort
order.
The compare function should return a negative, zero, or positive value, depending on
the arguments:
When the sort() function compares two values, it sends the values to the
compare function, and sorts the values according to the returned (negative,
zero, positive) value.
If the result is negative a is sorted before
b.
If the result is positive b is sorted
before a.
If the result is 0 no changes are done with the sort order of the two
values.
Example:
The compare function compares all the values in the array, two values at a
time (a, b).
When comparing 40 and 100, the sort() method calls the compare function(40, 100).
The function calculates 40 - 100 (a - b), and
since the result is negative (-60), the sort function will sort 40 as a value lower than 100.
You can use this code snippet to experiment with numerically and
alphabetically sorting:
Try it Yourself »
The above example, array.sort(), is not accurate, it will favor some
numbers over the others.
The most popular correct method, is called the Fisher Yates shuffle, and was
introduced in data science as early as 1938!
In JavaScript the method can be translated to this:
Try it Yourself »
There are no built-in functions for finding the max or min
value in an array.
However, after you have sorted an array, you can use the
index to obtain the highest and lowest values.
Sorting ascending:
Sorting descending:
Sorting a whole array is a very inefficient method if you only want to find the highest (or lowest) value.
You can use Math.max.apply to find the highest number in an array:
Try it Yourself »
Math.max.apply(null, [1, 2, 3]) is equivalent to Math.max(1, 2, 3).
You can use Math.min.apply to find the lowest number in an array:
Try it Yourself »
Math.min.apply(null, [1, 2, 3]) is equivalent to Math.min(1, 2, 3).
The fastest solution is to use a "home made" method.
This function loops through an array comparing each value with the highest
value found:
Try it Yourself »
This function loops through an array comparing each value with the lowest
value found:
Try it Yourself »
JavaScript arrays often contain objects:
Even if objects have properties of different data types, the sort() method
can be used to sort the array.
The solution is to write a compare function to compare the property values:
Comparing string properties is a little more complex:
For a complete Array reference, go to our:
Complete JavaScript Array Reference.
The reference contains descriptions and examples of all Array
properties and methods.
Use the correct Array method to sort the fruits array alphabetically.
const fruits = ["Banana", "Orange", "Apple", "Kiwi"];
;
Start the Exercise
We just launchedW3Schools videos
Get certifiedby completinga course today!
If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:
[email protected]
Your message has been sent to W3Schools. | [
{
"code": null,
"e": 50,
"s": 0,
"text": "The sort() method sorts an array alphabetically: "
},
{
"code": null,
"e": 106,
"s": 50,
"text": "The reverse() method reverses the elements in an array."
},
{
"code": null,
"e": 161,
"s": 106,
"text": "You can use it to \nsort an array in descending order: "
},
{
"code": null,
"e": 218,
"s": 161,
"text": "By default, the sort() function sorts values as strings."
},
{
"code": null,
"e": 279,
"s": 218,
"text": "This works well for strings (\"Apple\" comes before \"Banana\")."
},
{
"code": null,
"e": 382,
"s": 279,
"text": "However, if numbers are sorted as strings, \"25\" is bigger than \"100\", \nbecause \"2\" is bigger than \"1\"."
},
{
"code": null,
"e": 470,
"s": 382,
"text": "Because of this, the sort() method will produce incorrect result when sorting \nnumbers."
},
{
"code": null,
"e": 520,
"s": 470,
"text": "You can fix this by providing a compare function:"
},
{
"code": null,
"e": 568,
"s": 520,
"text": "Use the same trick to sort an array descending:"
},
{
"code": null,
"e": 645,
"s": 568,
"text": "The purpose of the compare function is to define an alternative sort \norder."
},
{
"code": null,
"e": 746,
"s": 645,
"text": "The compare function should return a negative, zero, or positive value, depending on \nthe arguments:"
},
{
"code": null,
"e": 920,
"s": 746,
"text": "When the sort() function compares two values, it sends the values to the \ncompare function, and sorts the values according to the returned (negative, \nzero, positive) value."
},
{
"code": null,
"e": 968,
"s": 920,
"text": "If the result is negative a is sorted before\nb."
},
{
"code": null,
"e": 1017,
"s": 968,
"text": "If the result is positive b is sorted \nbefore a."
},
{
"code": null,
"e": 1096,
"s": 1017,
"text": "If the result is 0 no changes are done with the sort order of the two \nvalues."
},
{
"code": null,
"e": 1105,
"s": 1096,
"text": "Example:"
},
{
"code": null,
"e": 1194,
"s": 1105,
"text": "The compare function compares all the values in the array, two values at a \ntime (a, b)."
},
{
"code": null,
"e": 1276,
"s": 1194,
"text": "When comparing 40 and 100, the sort() method calls the compare function(40, 100)."
},
{
"code": null,
"e": 1418,
"s": 1276,
"text": "The function calculates 40 - 100 (a - b), and \nsince the result is negative (-60), the sort function will sort 40 as a value lower than 100."
},
{
"code": null,
"e": 1508,
"s": 1418,
"text": "You can use this code snippet to experiment with numerically and \nalphabetically sorting:"
},
{
"code": null,
"e": 1528,
"s": 1508,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1623,
"s": 1528,
"text": "The above example, array.sort(), is not accurate, it will favor some \nnumbers over the others."
},
{
"code": null,
"e": 1746,
"s": 1623,
"text": "The most popular correct method, is called the Fisher Yates shuffle, and was \nintroduced in data science as early as 1938!"
},
{
"code": null,
"e": 1798,
"s": 1746,
"text": "In JavaScript the method can be translated to this:"
},
{
"code": null,
"e": 1818,
"s": 1798,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1897,
"s": 1818,
"text": "There are no built-in functions for finding the max or min \nvalue in an array."
},
{
"code": null,
"e": 2002,
"s": 1897,
"text": "However, after you have sorted an array, you can use the \nindex to obtain the highest and lowest values."
},
{
"code": null,
"e": 2021,
"s": 2002,
"text": "Sorting ascending:"
},
{
"code": null,
"e": 2041,
"s": 2021,
"text": "Sorting descending:"
},
{
"code": null,
"e": 2148,
"s": 2041,
"text": "Sorting a whole array is a very inefficient method if you only want to find the highest (or lowest) value."
},
{
"code": null,
"e": 2215,
"s": 2148,
"text": "You can use Math.max.apply to find the highest number in an array:"
},
{
"code": null,
"e": 2235,
"s": 2215,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 2303,
"s": 2235,
"text": "Math.max.apply(null, [1, 2, 3]) is equivalent to Math.max(1, 2, 3)."
},
{
"code": null,
"e": 2369,
"s": 2303,
"text": "You can use Math.min.apply to find the lowest number in an array:"
},
{
"code": null,
"e": 2389,
"s": 2369,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 2457,
"s": 2389,
"text": "Math.min.apply(null, [1, 2, 3]) is equivalent to Math.min(1, 2, 3)."
},
{
"code": null,
"e": 2510,
"s": 2457,
"text": "The fastest solution is to use a \"home made\" method."
},
{
"code": null,
"e": 2600,
"s": 2510,
"text": "This function loops through an array comparing each value with the highest \nvalue found: "
},
{
"code": null,
"e": 2620,
"s": 2600,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 2709,
"s": 2620,
"text": "This function loops through an array comparing each value with the lowest \nvalue found: "
},
{
"code": null,
"e": 2729,
"s": 2709,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 2770,
"s": 2729,
"text": "JavaScript arrays often contain objects:"
},
{
"code": null,
"e": 2878,
"s": 2770,
"text": "Even if objects have properties of different data types, the sort() method \ncan be used to sort the array. "
},
{
"code": null,
"e": 2954,
"s": 2878,
"text": "The solution is to write a compare function to compare the property values:"
},
{
"code": null,
"e": 3008,
"s": 2954,
"text": "Comparing string properties is a little more complex:"
},
{
"code": null,
"e": 3051,
"s": 3008,
"text": "For a complete Array reference, go to our:"
},
{
"code": null,
"e": 3088,
"s": 3051,
"text": "Complete JavaScript Array Reference."
},
{
"code": null,
"e": 3175,
"s": 3088,
"text": "The reference contains descriptions and examples of all Array \nproperties and methods."
},
{
"code": null,
"e": 3245,
"s": 3175,
"text": "Use the correct Array method to sort the fruits array alphabetically."
},
{
"code": null,
"e": 3302,
"s": 3245,
"text": "const fruits = [\"Banana\", \"Orange\", \"Apple\", \"Kiwi\"];\n;\n"
},
{
"code": null,
"e": 3321,
"s": 3302,
"text": "Start the Exercise"
},
{
"code": null,
"e": 3354,
"s": 3321,
"text": "We just launchedW3Schools videos"
},
{
"code": null,
"e": 3396,
"s": 3354,
"text": "Get certifiedby completinga course today!"
},
{
"code": null,
"e": 3503,
"s": 3396,
"text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"
},
{
"code": null,
"e": 3522,
"s": 3503,
"text": "[email protected]"
}
] |
How to find operating system in the client machine using JavaScript? | To detect the operating system on the client machine, your script can analyze the value of navigator.appVersion or navigator.userAgent.
Let us look at detecting the client OS using a simple script −
var operatingSystem = "Unknown OS";
// Check For Windows
if (navigator.appVersion.indexOf("Win") !=- 1) operatingSystem = "Windows";
// Check for Mac
if (navigator.appVersion.indexOf("Mac") !=- 1) operatingSystem = "MacOS";
// Check for UNIX
if (navigator.appVersion.indexOf("X11") !=- 1) operatingSystem = "UNIX";
// Check for Linux
if (navigator.appVersion.indexOf("Linux") !=- 1) operatingSystem = "Linux";
console.log('Your OS: ' + operatingSystem);
If you have a windows system, it'll give the output:
Your OS: Windows
This can also be used to find the browser of the user. | [
{
"code": null,
"e": 1198,
"s": 1062,
"text": "To detect the operating system on the client machine, your script can analyze the value of navigator.appVersion or navigator.userAgent."
},
{
"code": null,
"e": 1261,
"s": 1198,
"text": "Let us look at detecting the client OS using a simple script −"
},
{
"code": null,
"e": 1715,
"s": 1261,
"text": "var operatingSystem = \"Unknown OS\";\n// Check For Windows\nif (navigator.appVersion.indexOf(\"Win\") !=- 1) operatingSystem = \"Windows\";\n// Check for Mac\nif (navigator.appVersion.indexOf(\"Mac\") !=- 1) operatingSystem = \"MacOS\";\n// Check for UNIX\nif (navigator.appVersion.indexOf(\"X11\") !=- 1) operatingSystem = \"UNIX\";\n// Check for Linux\nif (navigator.appVersion.indexOf(\"Linux\") !=- 1) operatingSystem = \"Linux\";\nconsole.log('Your OS: ' + operatingSystem);"
},
{
"code": null,
"e": 1768,
"s": 1715,
"text": "If you have a windows system, it'll give the output:"
},
{
"code": null,
"e": 1785,
"s": 1768,
"text": "Your OS: Windows"
},
{
"code": null,
"e": 1840,
"s": 1785,
"text": "This can also be used to find the browser of the user."
}
] |
Get first element with maximum value in list of tuples in Python | We have a list of tuple. We are required to find out that tuple which was maximum value in it. But in case more than one tuple has same value we need the first tuple which has maximum value.
With itemgetter(1) we get all the value from index position 1 and then apply a max function to get the item with max value. But in case more than one result is returned, we apply index zero to get the first tuple with maximum element in it.
Live Demo
from operator import itemgetter
# initializing list
listA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]
# Given list
print("Given list : \n" ,listA)
# using max() and itemgetter()
res = max(listA, key=itemgetter(1))[0]
# printing result
print("Day with maximum score is : \n",res)
Running the above code gives us the following result −
Given list :
[('Mon', 3), ('Tue', 20), ('Wed', 9)]
Day with maximum score is :
Tue
We use the lambda function to get the elements at index position 1 then apply the max function. Then we apply the index position 0 to get the first among multiple matches to get the final result.
Live Demo
# initializing list
listA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]
# Given list
print("Given list : \n" ,listA)
# using max() and lambda
res = max(listA, key = lambda i : i[1])[0]
# printing result
print("Day with maximum score is : \n",res)
Running the above code gives us the following result −
Given list :
[('Mon', 3), ('Tue', 20), ('Wed', 9)]
Day with maximum score is :
Tue
In this approach we use the sorted function with reversed equal to true condition when applying the lambda function.
Live Demo
# initializing list
listA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]
# Given list
print("Given list : \n" ,listA)
# using sorted() and lambda
res = sorted(listA, key = lambda i: i[1], reverse = True)[0][0]
# printing result
print("Day with maximum score is : \n",res)
Running the above code gives us the following result −
Given list :
[('Mon', 3), ('Tue', 20), ('Wed', 9)]
Day with maximum score is :
Tue | [
{
"code": null,
"e": 1253,
"s": 1062,
"text": "We have a list of tuple. We are required to find out that tuple which was maximum value in it. But in case more than one tuple has same value we need the first tuple which has maximum value."
},
{
"code": null,
"e": 1494,
"s": 1253,
"text": "With itemgetter(1) we get all the value from index position 1 and then apply a max function to get the item with max value. But in case more than one result is returned, we apply index zero to get the first tuple with maximum element in it."
},
{
"code": null,
"e": 1505,
"s": 1494,
"text": " Live Demo"
},
{
"code": null,
"e": 1780,
"s": 1505,
"text": "from operator import itemgetter\n# initializing list\nlistA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]\n# Given list\nprint(\"Given list : \\n\" ,listA)\n# using max() and itemgetter()\nres = max(listA, key=itemgetter(1))[0]\n# printing result\nprint(\"Day with maximum score is : \\n\",res)"
},
{
"code": null,
"e": 1835,
"s": 1780,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 1918,
"s": 1835,
"text": "Given list :\n[('Mon', 3), ('Tue', 20), ('Wed', 9)]\nDay with maximum score is :\nTue"
},
{
"code": null,
"e": 2114,
"s": 1918,
"text": "We use the lambda function to get the elements at index position 1 then apply the max function. Then we apply the index position 0 to get the first among multiple matches to get the final result."
},
{
"code": null,
"e": 2125,
"s": 2114,
"text": " Live Demo"
},
{
"code": null,
"e": 2366,
"s": 2125,
"text": "# initializing list\nlistA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]\n# Given list\nprint(\"Given list : \\n\" ,listA)\n# using max() and lambda\nres = max(listA, key = lambda i : i[1])[0]\n# printing result\nprint(\"Day with maximum score is : \\n\",res)"
},
{
"code": null,
"e": 2421,
"s": 2366,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 2504,
"s": 2421,
"text": "Given list :\n[('Mon', 3), ('Tue', 20), ('Wed', 9)]\nDay with maximum score is :\nTue"
},
{
"code": null,
"e": 2621,
"s": 2504,
"text": "In this approach we use the sorted function with reversed equal to true condition when applying the lambda function."
},
{
"code": null,
"e": 2632,
"s": 2621,
"text": " Live Demo"
},
{
"code": null,
"e": 2897,
"s": 2632,
"text": "# initializing list\nlistA = [('Mon', 3), ('Tue', 20), ('Wed', 9)]\n# Given list\nprint(\"Given list : \\n\" ,listA)\n# using sorted() and lambda\nres = sorted(listA, key = lambda i: i[1], reverse = True)[0][0]\n# printing result\nprint(\"Day with maximum score is : \\n\",res)"
},
{
"code": null,
"e": 2952,
"s": 2897,
"text": "Running the above code gives us the following result −"
},
{
"code": null,
"e": 3035,
"s": 2952,
"text": "Given list :\n[('Mon', 3), ('Tue', 20), ('Wed', 9)]\nDay with maximum score is :\nTue"
}
] |
Java Program to wrap a Primitive DataType in a Wrapper Object | Every Java primitive data type has a class dedicated to it. These classes wrap the primitive data type into an object of that class. Therefore, it is known as wrapper classes.
The following is the program that displays a Primitive DataType in a Wrapper Object.
Live Demo
public class Demo {
public static void main(String[] args) {
Boolean myBoolean = new Boolean(true);
boolean val1 = myBoolean.booleanValue();
System.out.println(val1);
Character myChar = new Character('a');
char val2 = myChar.charValue();
System.out.println(val2);
Short myShort = new Short((short) 654);
short val3 = myShort.shortValue();
System.out.println(val3);
Integer myInt = new Integer(878);
int val4 = myInt.intValue();
System.out.println(val4);
Long myLong = new Long(956L);
long val5 = myLong.longValue();
System.out.println(val5);
Float myFloat = new Float(10.4F);
float val6 = myFloat.floatValue();
System.out.println(val6);
Double myDouble = new Double(12.3D);
double val7 = myDouble.doubleValue();
System.out.println(val7);
}
}
True
a
654
878
956
10.4
12.3
In the above program, we have taken every data type one by one. An example can be seen is for boolean. Our wrapper is −
Boolean myBoolean = new Boolean(true);
Now, the Primitive DataType is wrapped in a Wrapper Object
boolean val1 = myBoolean.booleanValue(); | [
{
"code": null,
"e": 1238,
"s": 1062,
"text": "Every Java primitive data type has a class dedicated to it. These classes wrap the primitive data type into an object of that class. Therefore, it is known as wrapper classes."
},
{
"code": null,
"e": 1323,
"s": 1238,
"text": "The following is the program that displays a Primitive DataType in a Wrapper Object."
},
{
"code": null,
"e": 1334,
"s": 1323,
"text": " Live Demo"
},
{
"code": null,
"e": 2214,
"s": 1334,
"text": "public class Demo {\n public static void main(String[] args) {\n Boolean myBoolean = new Boolean(true);\n boolean val1 = myBoolean.booleanValue();\n System.out.println(val1);\n\n Character myChar = new Character('a');\n char val2 = myChar.charValue();\n System.out.println(val2);\n\n Short myShort = new Short((short) 654);\n short val3 = myShort.shortValue();\n System.out.println(val3);\n\n Integer myInt = new Integer(878);\n int val4 = myInt.intValue();\n System.out.println(val4);\n\n Long myLong = new Long(956L);\n long val5 = myLong.longValue();\n System.out.println(val5);\n\n Float myFloat = new Float(10.4F);\n float val6 = myFloat.floatValue();\n System.out.println(val6);\n\n Double myDouble = new Double(12.3D);\n double val7 = myDouble.doubleValue();\n System.out.println(val7);\n }\n}"
},
{
"code": null,
"e": 2243,
"s": 2214,
"text": "True\na\n654\n878\n956\n10.4\n12.3"
},
{
"code": null,
"e": 2363,
"s": 2243,
"text": "In the above program, we have taken every data type one by one. An example can be seen is for boolean. Our wrapper is −"
},
{
"code": null,
"e": 2402,
"s": 2363,
"text": "Boolean myBoolean = new Boolean(true);"
},
{
"code": null,
"e": 2461,
"s": 2402,
"text": "Now, the Primitive DataType is wrapped in a Wrapper Object"
},
{
"code": null,
"e": 2502,
"s": 2461,
"text": "boolean val1 = myBoolean.booleanValue();"
}
] |
Find all pairs with a given sum | Practice | GeeksforGeeks | Given two unsorted arrays A of size N and B of size M of distinct elements, the task is to find all pairs from both arrays whose sum is equal to X.
Example 1:
Input:
A[] = {1, 2, 4, 5, 7}
B[] = {5, 6, 3, 4, 8}
X = 9
Output:
1 8
4 5
5 4
Explanation:
(1, 8), (4, 5), (5, 4) are the
pairs which sum to 9.
Input:
A[] = {-1, -2, 4, -6, 5, 7}
B[] = {6, 3, 4, 0}
X = 8
Output:
4 4
5 3
Your Task:
You don't need to read input or print anything. Your task is to complete the function allPairs() which takes the array A[], B[], its size N and M respectively and an integer X as inputs and returns the sorted vector pair values of all the pairs u,v where u belongs to array A and v belongs to array B. If no such pair exist return empty vector pair.
Note : All pairs should be printed in increasing order of u. For eg. for two pairs (u1,v1) and (u2,v2), if u1<u2 then
(u1,v1) should be printed first else second.
Expected Time Complexity: O(NLog(N))
Expected Auxiliary Space: O(N)
Constraints:
1 ≤ N, M ≤ 106
-106 ≤ X, A[i], B[i] ≤ 106
+1
somya95singh1 week ago
class Solution { public pair[] allPairs( long A[], long B[], long N, long M, long X) { ArrayList<pair> ans = new ArrayList<pair>(); Arrays.sort(A); Arrays.sort(B); Set<Long> s = new HashSet<Long>(); for(int i=0;i<M;i++){ s.add(B[i]); } for(int i=0;i<N;i++){ long diff = X-A[i]; if(s.contains(diff)){ ans.add(new pair(A[i] , diff)); } } int n = ans.size(); pair p[] = new pair[n]; for(int i=0;i<n;i++){ p[i] = new pair(ans.get(i).first , ans.get(i).second); } return p;
}
}
+1
nikipyati2 weeks ago
class Solution:
def allPairs(self, A, B, N, M, X):
l=[]
for i in range(N):
s=0
for j in range(M):
if A[i]+B[j]==X:
l.append((A[i],B[j]))
l.sort()
return l
+1
lindan1232 weeks ago
vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)
{
vector<pair<int,int>> vec;
set<int> s;
for(int i=0;i<M;i++)
{
s.insert(B[i]);
}
for(int i=0;i<N;i++)
{
int temp = X-A[i];
if(s.find(temp)!=s.end())
{
vec.push_back({A[i],temp});
}
}
sort(vec.begin(),vec.end());
return vec;
}
Time Taken : 0.06sec
Cpp
0
subhamsen13 weeks ago
Python solution
def allPairs(A, B, N,M, X): # Your code goes here """ Traverse list 1 and then make a dictionary of required number X-I if the required number found in dictionary get the key and this i a_dic={key:None for key in A } b_dic= {key:None for key in B } res=[] for element in a_dic: required_number=X-element if required_number in b_dic: res.append((element,required_number)) # output has to be sorted res.sort() return res
0
tupesanket19993 weeks ago
vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X){
vector<pair<int,int>>vec;
unordered_set<int>st(B,B+M);
for(int i=0;i< N;i++)if(st.find(X-A[i])!=st.end())vec.push_back({A[i],X-A[i]});
sort(vec.begin(),vec.end());
return vec;
}
0
madhukartemba4 weeks ago
JAVA SOLUTION:
class Solution
{
public pair[] allPairs( long A[], long B[], long N, long M, long X)
{
ArrayList<pair> ans = new ArrayList<>();
Arrays.sort(A);
Arrays.sort(B);
int low = 0, high = (int)M-1;
while(low<N && high>=0)
{
long sum = A[low] + B[high];
if(sum>X)
{
high--;
}
else if(sum<X)
{
low++;
}
else
{
ans.add(new pair(A[low], B[high]));
high--;
low++;
}
}
int n = ans.size();
pair p[] = new pair[n];
for(int i=0; i<n; i++)
{
p[i] = new pair(ans.get(i).first, ans.get(i).second);
}
return p;
}
}
0
pritamaber4 weeks ago
vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)
{
// Your code goes here
vector<pair<int, int>> ans;
unordered_set<int> s2(B, B+M);
for(int i = 0; i<N; i++){
if(s2.find(X-A[i]) != s2.end()){
ans.push_back(make_pair(A[i], X-A[i]));
}
}
sort(ans.begin(), ans.end());
return ans;
}
0
arpitjaiswal481 month ago
vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)
{
// Your code goes here
sort(A,A+N);
vector<pair<int,int>> v;
unordered_map<int,int> m;
for(int i=0;i<M;i++){
m[B[i]]++;
}
for(int i=0;i<N;i++){
if(m.find(X-A[i]) != m.end()){
v.push_back(make_pair(A[i],X-A[i]));
}else{
continue;
}
}
return v;
}
0
praveenkumar1391521 month ago
#User function Template for python3
class Solution: def allPairs(self, A, B, N, M, X): A.sort() d=[] s=set(B) for i in A: if X-i in s: d.append((i,X-i)) return d
0
ayushkumar54512 months ago
vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X) { // Your code goes here map<int,int>mp; for(int i=0;i<N;i++){ mp[A[i]]++; } vector<pair<int,int>>v; for(int i=0;i<M;i++){ if(mp.find(X-B[i])!=mp.end()){ v.push_back({X-B[i],B[i]}); } } sort(v.begin(),v.end()); return v; }
We strongly recommend solving this problem on your own before viewing its editorial. Do you still
want to view the editorial?
Login to access your submissions.
Problem
Contest
Reset the IDE using the second button on the top right corner.
Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values.
Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints.
You can access the hints to get an idea about what is expected of you as well as the final solution code.
You can view the solutions submitted by other users from the submission tab. | [
{
"code": null,
"e": 386,
"s": 238,
"text": "Given two unsorted arrays A of size N and B of size M of distinct elements, the task is to find all pairs from both arrays whose sum is equal to X."
},
{
"code": null,
"e": 399,
"s": 388,
"text": "Example 1:"
},
{
"code": null,
"e": 547,
"s": 399,
"text": "Input:\nA[] = {1, 2, 4, 5, 7}\nB[] = {5, 6, 3, 4, 8} \nX = 9 \nOutput: \n1 8\n4 5 \n5 4\nExplanation:\n(1, 8), (4, 5), (5, 4) are the\npairs which sum to 9.\n"
},
{
"code": null,
"e": 627,
"s": 547,
"text": "Input:\nA[] = {-1, -2, 4, -6, 5, 7}\nB[] = {6, 3, 4, 0} \nX = 8 \nOutput:\n4 4 \n5 3\n"
},
{
"code": null,
"e": 1154,
"s": 627,
"text": "\nYour Task: \nYou don't need to read input or print anything. Your task is to complete the function allPairs() which takes the array A[], B[], its size N and M respectively and an integer X as inputs and returns the sorted vector pair values of all the pairs u,v where u belongs to array A and v belongs to array B. If no such pair exist return empty vector pair.\nNote : All pairs should be printed in increasing order of u. For eg. for two pairs (u1,v1) and (u2,v2), if u1<u2 then\n(u1,v1) should be printed first else second."
},
{
"code": null,
"e": 1223,
"s": 1154,
"text": "\nExpected Time Complexity: O(NLog(N))\nExpected Auxiliary Space: O(N)"
},
{
"code": null,
"e": 1279,
"s": 1223,
"text": "\nConstraints:\n1 ≤ N, M ≤ 106\n-106 ≤ X, A[i], B[i] ≤ 106"
},
{
"code": null,
"e": 1282,
"s": 1279,
"text": "+1"
},
{
"code": null,
"e": 1305,
"s": 1282,
"text": "somya95singh1 week ago"
},
{
"code": null,
"e": 1925,
"s": 1305,
"text": "class Solution { public pair[] allPairs( long A[], long B[], long N, long M, long X) { ArrayList<pair> ans = new ArrayList<pair>(); Arrays.sort(A); Arrays.sort(B); Set<Long> s = new HashSet<Long>(); for(int i=0;i<M;i++){ s.add(B[i]); } for(int i=0;i<N;i++){ long diff = X-A[i]; if(s.contains(diff)){ ans.add(new pair(A[i] , diff)); } } int n = ans.size(); pair p[] = new pair[n]; for(int i=0;i<n;i++){ p[i] = new pair(ans.get(i).first , ans.get(i).second); } return p;"
},
{
"code": null,
"e": 1931,
"s": 1925,
"text": " }"
},
{
"code": null,
"e": 1933,
"s": 1931,
"text": "}"
},
{
"code": null,
"e": 1936,
"s": 1933,
"text": "+1"
},
{
"code": null,
"e": 1957,
"s": 1936,
"text": "nikipyati2 weeks ago"
},
{
"code": null,
"e": 2208,
"s": 1957,
"text": "class Solution:\n def allPairs(self, A, B, N, M, X):\n l=[]\n for i in range(N):\n s=0\n for j in range(M):\n if A[i]+B[j]==X:\n l.append((A[i],B[j]))\n l.sort()\n return l"
},
{
"code": null,
"e": 2211,
"s": 2208,
"text": "+1"
},
{
"code": null,
"e": 2232,
"s": 2211,
"text": "lindan1232 weeks ago"
},
{
"code": null,
"e": 2702,
"s": 2232,
"text": "vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)\n {\n vector<pair<int,int>> vec;\n set<int> s;\n for(int i=0;i<M;i++)\n {\n s.insert(B[i]);\n }\n \n for(int i=0;i<N;i++)\n {\n int temp = X-A[i];\n if(s.find(temp)!=s.end())\n {\n vec.push_back({A[i],temp});\n }\n }\n sort(vec.begin(),vec.end());\n return vec;\n }"
},
{
"code": null,
"e": 2723,
"s": 2702,
"text": "Time Taken : 0.06sec"
},
{
"code": null,
"e": 2727,
"s": 2723,
"text": "Cpp"
},
{
"code": null,
"e": 2729,
"s": 2727,
"text": "0"
},
{
"code": null,
"e": 2751,
"s": 2729,
"text": "subhamsen13 weeks ago"
},
{
"code": null,
"e": 2767,
"s": 2751,
"text": "Python solution"
},
{
"code": null,
"e": 3247,
"s": 2769,
"text": "def allPairs(A, B, N,M, X): # Your code goes here \"\"\" Traverse list 1 and then make a dictionary of required number X-I if the required number found in dictionary get the key and this i a_dic={key:None for key in A } b_dic= {key:None for key in B } res=[] for element in a_dic: required_number=X-element if required_number in b_dic: res.append((element,required_number)) # output has to be sorted res.sort() return res "
},
{
"code": null,
"e": 3249,
"s": 3247,
"text": "0"
},
{
"code": null,
"e": 3275,
"s": 3249,
"text": "tupesanket19993 weeks ago"
},
{
"code": null,
"e": 3562,
"s": 3275,
"text": "vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X){\n vector<pair<int,int>>vec;\n unordered_set<int>st(B,B+M);\n for(int i=0;i< N;i++)if(st.find(X-A[i])!=st.end())vec.push_back({A[i],X-A[i]});\n sort(vec.begin(),vec.end());\n return vec;\n }"
},
{
"code": null,
"e": 3564,
"s": 3562,
"text": "0"
},
{
"code": null,
"e": 3589,
"s": 3564,
"text": "madhukartemba4 weeks ago"
},
{
"code": null,
"e": 3604,
"s": 3589,
"text": "JAVA SOLUTION:"
},
{
"code": null,
"e": 4489,
"s": 3604,
"text": "class Solution\n{\n public pair[] allPairs( long A[], long B[], long N, long M, long X)\n {\n ArrayList<pair> ans = new ArrayList<>();\n Arrays.sort(A);\n Arrays.sort(B);\n \n int low = 0, high = (int)M-1;\n \n while(low<N && high>=0)\n {\n long sum = A[low] + B[high];\n if(sum>X)\n {\n high--;\n }\n else if(sum<X)\n {\n low++;\n }\n else\n {\n ans.add(new pair(A[low], B[high]));\n high--;\n low++;\n }\n }\n \n int n = ans.size();\n \n pair p[] = new pair[n];\n \n for(int i=0; i<n; i++)\n {\n p[i] = new pair(ans.get(i).first, ans.get(i).second);\n }\n \n return p;\n \n }\n}"
},
{
"code": null,
"e": 4491,
"s": 4489,
"text": "0"
},
{
"code": null,
"e": 4513,
"s": 4491,
"text": "pritamaber4 weeks ago"
},
{
"code": null,
"e": 4948,
"s": 4513,
"text": " \n vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)\n {\n // Your code goes here \n vector<pair<int, int>> ans;\n \n unordered_set<int> s2(B, B+M);\n \n for(int i = 0; i<N; i++){\n if(s2.find(X-A[i]) != s2.end()){\n ans.push_back(make_pair(A[i], X-A[i]));\n }\n }\n sort(ans.begin(), ans.end());\n return ans;\n }"
},
{
"code": null,
"e": 4950,
"s": 4948,
"text": "0"
},
{
"code": null,
"e": 4976,
"s": 4950,
"text": "arpitjaiswal481 month ago"
},
{
"code": null,
"e": 5454,
"s": 4976,
"text": "vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X)\n {\n // Your code goes here \n sort(A,A+N);\n vector<pair<int,int>> v;\n unordered_map<int,int> m;\n for(int i=0;i<M;i++){\n m[B[i]]++;\n }\n for(int i=0;i<N;i++){\n if(m.find(X-A[i]) != m.end()){\n v.push_back(make_pair(A[i],X-A[i]));\n }else{\n continue;\n }\n }\n return v;\n }"
},
{
"code": null,
"e": 5456,
"s": 5454,
"text": "0"
},
{
"code": null,
"e": 5486,
"s": 5456,
"text": "praveenkumar1391521 month ago"
},
{
"code": null,
"e": 5522,
"s": 5486,
"text": "#User function Template for python3"
},
{
"code": null,
"e": 5712,
"s": 5522,
"text": "class Solution: def allPairs(self, A, B, N, M, X): A.sort() d=[] s=set(B) for i in A: if X-i in s: d.append((i,X-i)) return d "
},
{
"code": null,
"e": 5714,
"s": 5712,
"text": "0"
},
{
"code": null,
"e": 5741,
"s": 5714,
"text": "ayushkumar54512 months ago"
},
{
"code": null,
"e": 6112,
"s": 5741,
"text": "vector<pair<int,int>> allPairs(int A[], int B[], int N, int M, int X) { // Your code goes here map<int,int>mp; for(int i=0;i<N;i++){ mp[A[i]]++; } vector<pair<int,int>>v; for(int i=0;i<M;i++){ if(mp.find(X-B[i])!=mp.end()){ v.push_back({X-B[i],B[i]}); } } sort(v.begin(),v.end()); return v; }"
},
{
"code": null,
"e": 6258,
"s": 6112,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 6294,
"s": 6258,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 6304,
"s": 6294,
"text": "\nProblem\n"
},
{
"code": null,
"e": 6314,
"s": 6304,
"text": "\nContest\n"
},
{
"code": null,
"e": 6377,
"s": 6314,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 6525,
"s": 6377,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 6733,
"s": 6525,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 6839,
"s": 6733,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
Explain Lifecycle Methods of React Components - GeeksforGeeks | 20 Oct, 2021
We have seen so far that React web apps are actually a collection of independent components which run according to the interactions made with them. Every React Component has a lifecycle of its own, lifecycle of a component can be defined as the series of methods that are invoked in different stages of the component’s existence. The definition is pretty straightforward but what do we mean by different stages? A React Component can go through four stages of its life as follows.
Initialization: This is the stage where the component is constructed with the given Props and default state. This is done in the constructor of a Component Class.
Mounting: Mounting is the stage of rendering the JSX returned by the render method itself.
Updating: Updating is the stage when the state of a component is updated and the application is repainted.
Unmounting: As the name suggests Unmounting is the final step of the component lifecycle where the component is removed from the page.
React provides the developers a set of predefined functions that if present is invoked around specific events in the lifetime of the component. Developers are supposed to override the functions with desired logic to execute accordingly. We have illustrated the gist in the following diagram.
Now let us describe each phase and its corresponding functions.Functions of each Phase of Lifecycle
Initialization: In this phase the developer has to define the props and initial state ofthe component this is generally done in the constructor of the component. The following code snippet describes the initialization process.
javascript
class Clock extends React.Component { constructor(props) { // Calling the constructor of // Parent Class React.Component super(props); // Setting the initial state this.state = { date: new Date() }; }}
Mounting: Mounting is the phase of the component lifecyle when the initialization of the component is completed and the component is mounted on the DOM and rendered for the first time in the webpage. Now React follows a default procedure in the Naming Conventions of this predefined functions where the functions containing “Will” represents before some specific phase and “Did” represents after the completion of that phase. Mounting phase consists of two such predefined functions as described below.componentWillMount() Function: As the name clearly suggests, this function is invoked right before the component is mounted on the DOM i.e. this function gets invoked once before the render() function is executed for the first time.componentDidMount() Function: Similarly as the previous one this function is invoked right after the component is mounted on the DOM i.e. this function gets invoked once after the render() function is executed for the first time.
componentWillMount() Function: As the name clearly suggests, this function is invoked right before the component is mounted on the DOM i.e. this function gets invoked once before the render() function is executed for the first time.
componentDidMount() Function: Similarly as the previous one this function is invoked right after the component is mounted on the DOM i.e. this function gets invoked once after the render() function is executed for the first time.
Updation: React is a JS library that helps create Active web pages easily. Now active web pages are specific pages that behave according to its user. For example, let’s take the GeeksforGeeks {IDE} webpage, the webpage acts differently with each user. User A might write some code in C in the Light Theme while another User may write a Python code in the Dark Theme all at the same time. This dynamic behavior that partially depends upon the user itself makes the webpage an Active webpage. Now how can this be related to Updation? Updation is the phase where the states and props of a component are updated followed by some user events such as clicking, pressing a key on keyboard etc. The following are the descriptions of functions that are invoked at different points of Updation phase.componentWillRecieveProps() Function: This is a Props exclusive Function and is independent of States. This function is invoked before a mounted component gets its props reassigned. The function is passed the new set of Props which may or may not be identical to the original Props. Thus checking is a mandatory step in this regards. The following code snippet shows a sample use-case.
componentWillRecieveProps() Function: This is a Props exclusive Function and is independent of States. This function is invoked before a mounted component gets its props reassigned. The function is passed the new set of Props which may or may not be identical to the original Props. Thus checking is a mandatory step in this regards. The following code snippet shows a sample use-case.
javascript
componentWillRecieveProps(newProps){ if(this.props !== newProps) { console.log(" New Props have been assigned "); // Use this.setState() to rerender the page. } }
setState() Function: This is not particularly a Lifecycle function and can be invoked explicitly at any instant. This function is used to update the State of a component. You may refer to this article for detailed information.
shouldComponentUpdate() Function: By default, every state or props update re-render the page but this may not always be the desired outcome, sometimes it is desired that on updating the page will not be repainted. The shouldComponentUpdate() Function fulfills the requirement by letting React know whether the component’s output will be affected by the update or not. shouldComponentUpdate() is invoked before rendering an already mounted component when new props or state are being received. If returned false then the subsequent steps of rendering will not be carried out. This function can’t be used in case of forceUpdate(). The Function takes the new Props and new State as the arguments and returns whether to re-render or not.
componentWillUpdate() Function: As the name clearly suggests, this function is invoked before the component is rerendered i.e. this function gets invoked once before the render() function is executed after the updation of State or Props.
componentDidUpdate() Function: Similarly this function is invoked after the component is rerendered i.e. this function gets invoked once after the render() function is executed after the updation of State or Props.
Unmounting: This is the final phase of the lifeycle of the component that is the phase of unmounting the component from the DOM. The following function is the sole member of this phase.componentWillUnmount() Function: This function is invoked before the component is finally unmounted from the DOM i.e. this function gets invoked once before the component is removed from the page and this denotes the end of the lifecycle.
componentWillUnmount() Function: This function is invoked before the component is finally unmounted from the DOM i.e. this function gets invoked once before the component is removed from the page and this denotes the end of the lifecycle.
We have so far discussed every predefined function there was in the lifecycle of the component and we have also specified the order of execution of the function. Let us now see one final example to finish of the article while revising what’s discussed above.
Javascript
import React from 'react';import ReactDOM from 'react-dom'; class Test extends React.Component { constructor(props) { super(props); this.state = { hello : "World!" }; } componentWillMount() { console.log("componentWillMount()"); } componentDidMount() { console.log("componentDidMount()"); } changeState() { this.setState({ hello : "Geek!" }); } render() { return ( <div> <h1>GeeksForGeeks.org, Hello{ this.state.hello }</h1> <h2> <a onClick={this.changeState.bind(this)}>Press Here!</a> </h2> </div>); } shouldComponentUpdate(nextProps, nextState) { console.log("shouldComponentUpdate()"); return true; } componentWillUpdate() { console.log("componentWillUpdate()"); } componentDidUpdate() { console.log("componentDidUpdate()"); }} ReactDOM.render( <Test />, document.getElementById('root'));
Output:
In the next article, we will use the state and the lifecycle functions to recreate the clock we created before using multiple render() calls.
Picked
React-Questions
TrueGeek-2021
ReactJS
TrueGeek
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to set background images in ReactJS ?
How to create a table in ReactJS ?
How to navigate on path by button click in react router ?
ReactJS useNavigate() Hook
React-Router Hooks
How to remove duplicate elements from JavaScript Array ?
SQL Statement to Remove Part of a String
Basics of API Testing Using Postman
Types of Internet Protocols
How to Convert Char to String in Java? | [
{
"code": null,
"e": 24852,
"s": 24824,
"text": "\n20 Oct, 2021"
},
{
"code": null,
"e": 25333,
"s": 24852,
"text": "We have seen so far that React web apps are actually a collection of independent components which run according to the interactions made with them. Every React Component has a lifecycle of its own, lifecycle of a component can be defined as the series of methods that are invoked in different stages of the component’s existence. The definition is pretty straightforward but what do we mean by different stages? A React Component can go through four stages of its life as follows."
},
{
"code": null,
"e": 25496,
"s": 25333,
"text": "Initialization: This is the stage where the component is constructed with the given Props and default state. This is done in the constructor of a Component Class."
},
{
"code": null,
"e": 25587,
"s": 25496,
"text": "Mounting: Mounting is the stage of rendering the JSX returned by the render method itself."
},
{
"code": null,
"e": 25694,
"s": 25587,
"text": "Updating: Updating is the stage when the state of a component is updated and the application is repainted."
},
{
"code": null,
"e": 25829,
"s": 25694,
"text": "Unmounting: As the name suggests Unmounting is the final step of the component lifecycle where the component is removed from the page."
},
{
"code": null,
"e": 26121,
"s": 25829,
"text": "React provides the developers a set of predefined functions that if present is invoked around specific events in the lifetime of the component. Developers are supposed to override the functions with desired logic to execute accordingly. We have illustrated the gist in the following diagram."
},
{
"code": null,
"e": 26221,
"s": 26121,
"text": "Now let us describe each phase and its corresponding functions.Functions of each Phase of Lifecycle"
},
{
"code": null,
"e": 26448,
"s": 26221,
"text": "Initialization: In this phase the developer has to define the props and initial state ofthe component this is generally done in the constructor of the component. The following code snippet describes the initialization process."
},
{
"code": null,
"e": 26461,
"s": 26450,
"text": "javascript"
},
{
"code": "class Clock extends React.Component { constructor(props) { // Calling the constructor of // Parent Class React.Component super(props); // Setting the initial state this.state = { date: new Date() }; }}",
"e": 26685,
"s": 26461,
"text": null
},
{
"code": null,
"e": 27649,
"s": 26685,
"text": "Mounting: Mounting is the phase of the component lifecyle when the initialization of the component is completed and the component is mounted on the DOM and rendered for the first time in the webpage. Now React follows a default procedure in the Naming Conventions of this predefined functions where the functions containing “Will” represents before some specific phase and “Did” represents after the completion of that phase. Mounting phase consists of two such predefined functions as described below.componentWillMount() Function: As the name clearly suggests, this function is invoked right before the component is mounted on the DOM i.e. this function gets invoked once before the render() function is executed for the first time.componentDidMount() Function: Similarly as the previous one this function is invoked right after the component is mounted on the DOM i.e. this function gets invoked once after the render() function is executed for the first time."
},
{
"code": null,
"e": 27882,
"s": 27649,
"text": "componentWillMount() Function: As the name clearly suggests, this function is invoked right before the component is mounted on the DOM i.e. this function gets invoked once before the render() function is executed for the first time."
},
{
"code": null,
"e": 28112,
"s": 27882,
"text": "componentDidMount() Function: Similarly as the previous one this function is invoked right after the component is mounted on the DOM i.e. this function gets invoked once after the render() function is executed for the first time."
},
{
"code": null,
"e": 29288,
"s": 28112,
"text": "Updation: React is a JS library that helps create Active web pages easily. Now active web pages are specific pages that behave according to its user. For example, let’s take the GeeksforGeeks {IDE} webpage, the webpage acts differently with each user. User A might write some code in C in the Light Theme while another User may write a Python code in the Dark Theme all at the same time. This dynamic behavior that partially depends upon the user itself makes the webpage an Active webpage. Now how can this be related to Updation? Updation is the phase where the states and props of a component are updated followed by some user events such as clicking, pressing a key on keyboard etc. The following are the descriptions of functions that are invoked at different points of Updation phase.componentWillRecieveProps() Function: This is a Props exclusive Function and is independent of States. This function is invoked before a mounted component gets its props reassigned. The function is passed the new set of Props which may or may not be identical to the original Props. Thus checking is a mandatory step in this regards. The following code snippet shows a sample use-case."
},
{
"code": null,
"e": 29674,
"s": 29288,
"text": "componentWillRecieveProps() Function: This is a Props exclusive Function and is independent of States. This function is invoked before a mounted component gets its props reassigned. The function is passed the new set of Props which may or may not be identical to the original Props. Thus checking is a mandatory step in this regards. The following code snippet shows a sample use-case."
},
{
"code": null,
"e": 29685,
"s": 29674,
"text": "javascript"
},
{
"code": "componentWillRecieveProps(newProps){ if(this.props !== newProps) { console.log(\" New Props have been assigned \"); // Use this.setState() to rerender the page. } }",
"e": 29871,
"s": 29685,
"text": null
},
{
"code": null,
"e": 30098,
"s": 29871,
"text": "setState() Function: This is not particularly a Lifecycle function and can be invoked explicitly at any instant. This function is used to update the State of a component. You may refer to this article for detailed information."
},
{
"code": null,
"e": 30832,
"s": 30098,
"text": "shouldComponentUpdate() Function: By default, every state or props update re-render the page but this may not always be the desired outcome, sometimes it is desired that on updating the page will not be repainted. The shouldComponentUpdate() Function fulfills the requirement by letting React know whether the component’s output will be affected by the update or not. shouldComponentUpdate() is invoked before rendering an already mounted component when new props or state are being received. If returned false then the subsequent steps of rendering will not be carried out. This function can’t be used in case of forceUpdate(). The Function takes the new Props and new State as the arguments and returns whether to re-render or not."
},
{
"code": null,
"e": 31070,
"s": 30832,
"text": "componentWillUpdate() Function: As the name clearly suggests, this function is invoked before the component is rerendered i.e. this function gets invoked once before the render() function is executed after the updation of State or Props."
},
{
"code": null,
"e": 31285,
"s": 31070,
"text": "componentDidUpdate() Function: Similarly this function is invoked after the component is rerendered i.e. this function gets invoked once after the render() function is executed after the updation of State or Props."
},
{
"code": null,
"e": 31709,
"s": 31285,
"text": "Unmounting: This is the final phase of the lifeycle of the component that is the phase of unmounting the component from the DOM. The following function is the sole member of this phase.componentWillUnmount() Function: This function is invoked before the component is finally unmounted from the DOM i.e. this function gets invoked once before the component is removed from the page and this denotes the end of the lifecycle."
},
{
"code": null,
"e": 31948,
"s": 31709,
"text": "componentWillUnmount() Function: This function is invoked before the component is finally unmounted from the DOM i.e. this function gets invoked once before the component is removed from the page and this denotes the end of the lifecycle."
},
{
"code": null,
"e": 32207,
"s": 31948,
"text": "We have so far discussed every predefined function there was in the lifecycle of the component and we have also specified the order of execution of the function. Let us now see one final example to finish of the article while revising what’s discussed above."
},
{
"code": null,
"e": 32218,
"s": 32207,
"text": "Javascript"
},
{
"code": "import React from 'react';import ReactDOM from 'react-dom'; class Test extends React.Component { constructor(props) { super(props); this.state = { hello : \"World!\" }; } componentWillMount() { console.log(\"componentWillMount()\"); } componentDidMount() { console.log(\"componentDidMount()\"); } changeState() { this.setState({ hello : \"Geek!\" }); } render() { return ( <div> <h1>GeeksForGeeks.org, Hello{ this.state.hello }</h1> <h2> <a onClick={this.changeState.bind(this)}>Press Here!</a> </h2> </div>); } shouldComponentUpdate(nextProps, nextState) { console.log(\"shouldComponentUpdate()\"); return true; } componentWillUpdate() { console.log(\"componentWillUpdate()\"); } componentDidUpdate() { console.log(\"componentDidUpdate()\"); }} ReactDOM.render( <Test />, document.getElementById('root'));",
"e": 33241,
"s": 32218,
"text": null
},
{
"code": null,
"e": 33249,
"s": 33241,
"text": "Output:"
},
{
"code": null,
"e": 33391,
"s": 33249,
"text": "In the next article, we will use the state and the lifecycle functions to recreate the clock we created before using multiple render() calls."
},
{
"code": null,
"e": 33398,
"s": 33391,
"text": "Picked"
},
{
"code": null,
"e": 33414,
"s": 33398,
"text": "React-Questions"
},
{
"code": null,
"e": 33428,
"s": 33414,
"text": "TrueGeek-2021"
},
{
"code": null,
"e": 33436,
"s": 33428,
"text": "ReactJS"
},
{
"code": null,
"e": 33445,
"s": 33436,
"text": "TrueGeek"
},
{
"code": null,
"e": 33462,
"s": 33445,
"text": "Web Technologies"
},
{
"code": null,
"e": 33560,
"s": 33462,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 33602,
"s": 33560,
"text": "How to set background images in ReactJS ?"
},
{
"code": null,
"e": 33637,
"s": 33602,
"text": "How to create a table in ReactJS ?"
},
{
"code": null,
"e": 33695,
"s": 33637,
"text": "How to navigate on path by button click in react router ?"
},
{
"code": null,
"e": 33722,
"s": 33695,
"text": "ReactJS useNavigate() Hook"
},
{
"code": null,
"e": 33741,
"s": 33722,
"text": "React-Router Hooks"
},
{
"code": null,
"e": 33798,
"s": 33741,
"text": "How to remove duplicate elements from JavaScript Array ?"
},
{
"code": null,
"e": 33839,
"s": 33798,
"text": "SQL Statement to Remove Part of a String"
},
{
"code": null,
"e": 33875,
"s": 33839,
"text": "Basics of API Testing Using Postman"
},
{
"code": null,
"e": 33903,
"s": 33875,
"text": "Types of Internet Protocols"
}
] |
RxJava - MayBe Observable | The MayBe class represents deferred response. MayBe observable can emit either a single successful value or no value.
Following is the declaration for io.reactivex.Single<T> class −
public abstract class Maybe<T>
extends Object
implements MaybeSource<T>
Following is the sequential protocol that MayBe Observable operates −
onSubscribe (onSuccess | onError | OnComplete)?
Create the following Java program using any editor of your choice in, say, C:\> RxJava.
import java.util.concurrent.TimeUnit;
import io.reactivex.Maybe;
import io.reactivex.disposables.Disposable;
import io.reactivex.observers.DisposableMaybeObserver;
import io.reactivex.schedulers.Schedulers;
public class ObservableTester {
public static void main(String[] args) throws InterruptedException {
//Create an observer
Disposable disposable = Maybe.just("Hello World")
.delay(2, TimeUnit.SECONDS, Schedulers.io())
.subscribeWith(new DisposableMaybeObserver<String>() {
@Override
public void onError(Throwable e) {
e.printStackTrace();
}
@Override
public void onSuccess(String value) {
System.out.println(value);
}
@Override
public void onComplete() {
System.out.println("Done!");
}
});
Thread.sleep(3000);
//start observing
disposable.dispose();
}
}
Compile the class using javac compiler as follows −
C:\RxJava>javac ObservableTester.java
Now run the ObservableTester as follows −
C:\RxJava>java ObservableTester
It should produce the following output −
Hello World
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2519,
"s": 2401,
"text": "The MayBe class represents deferred response. MayBe observable can emit either a single successful value or no value."
},
{
"code": null,
"e": 2583,
"s": 2519,
"text": "Following is the declaration for io.reactivex.Single<T> class −"
},
{
"code": null,
"e": 2665,
"s": 2583,
"text": "public abstract class Maybe<T>\n extends Object\n implements MaybeSource<T>\n"
},
{
"code": null,
"e": 2735,
"s": 2665,
"text": "Following is the sequential protocol that MayBe Observable operates −"
},
{
"code": null,
"e": 2784,
"s": 2735,
"text": "onSubscribe (onSuccess | onError | OnComplete)?\n"
},
{
"code": null,
"e": 2872,
"s": 2784,
"text": "Create the following Java program using any editor of your choice in, say, C:\\> RxJava."
},
{
"code": null,
"e": 3815,
"s": 2872,
"text": "import java.util.concurrent.TimeUnit;\n\nimport io.reactivex.Maybe;\nimport io.reactivex.disposables.Disposable;\nimport io.reactivex.observers.DisposableMaybeObserver;\nimport io.reactivex.schedulers.Schedulers;\n\npublic class ObservableTester {\n public static void main(String[] args) throws InterruptedException {\n //Create an observer\n Disposable disposable = Maybe.just(\"Hello World\")\n .delay(2, TimeUnit.SECONDS, Schedulers.io())\n .subscribeWith(new DisposableMaybeObserver<String>() {\n @Override\n public void onError(Throwable e) { \n e.printStackTrace();\n }\n\n @Override\n public void onSuccess(String value) {\n System.out.println(value);\n }\n\n @Override\n public void onComplete() {\n System.out.println(\"Done!\");\n }\n }); \n Thread.sleep(3000);\n //start observing\n disposable.dispose();\n }\n}"
},
{
"code": null,
"e": 3867,
"s": 3815,
"text": "Compile the class using javac compiler as follows −"
},
{
"code": null,
"e": 3906,
"s": 3867,
"text": "C:\\RxJava>javac ObservableTester.java\n"
},
{
"code": null,
"e": 3948,
"s": 3906,
"text": "Now run the ObservableTester as follows −"
},
{
"code": null,
"e": 3981,
"s": 3948,
"text": "C:\\RxJava>java ObservableTester\n"
},
{
"code": null,
"e": 4022,
"s": 3981,
"text": "It should produce the following output −"
},
{
"code": null,
"e": 4035,
"s": 4022,
"text": "Hello World\n"
},
{
"code": null,
"e": 4042,
"s": 4035,
"text": " Print"
},
{
"code": null,
"e": 4053,
"s": 4042,
"text": " Add Notes"
}
] |
Strip last two characters of a column in MySQL? | You can strip last two characters with the help of SUBSTRING() and CHAR_LENGTH() methods. The syntax is as follows −
select yourColumnName,SUBSTRING(yourColumnName,1,CHAR_LENGTH(yourColumnName) - 2) AS anyVariableName from yourTableName;
To understand the above syntax, let us create a table −
mysql> create table LastTwoCharacters
−> (
−> Words varchar(200)
−> );
Query OK, 0 rows affected (0.71 sec)
Now you can insert some records in the table with the help of select statement. The query to insert records is as follows −
mysql> insert into LastTwoCharacters values('Hellooo');
Query OK, 1 row affected (0.23 sec)
mysql> insert into LastTwoCharacters values('Worldsss');
Query OK, 1 row affected (0.10 sec)
mysql> insert into LastTwoCharacters values('Johnson');
Query OK, 1 row affected (0.22 sec)
Display all records with the help of select statement −
mysql> select *from LastTwoCharacters;
The following is the output −
+----------+
| Words |
+----------+
| Hellooo |
| Worldsss |
| Johnson |
+----------+
3 rows in set (0.00 sec)
The following is the query to strip the last two characters of a column −
mysql> select Words,SUBSTRING(Words,1,CHAR_LENGTH(Words) - 2) AS AfterStripLastTwoChar from LastTwoCharacters;
The following is the output −
+----------+-----------------------+
| Words | AfterStripLastTwoChar |
+----------+-----------------------+
| Hellooo | Hello |
| Worldsss | Worlds |
| Johnson | Johns |
+----------+-----------------------+
3 rows in set (0.00 sec) | [
{
"code": null,
"e": 1179,
"s": 1062,
"text": "You can strip last two characters with the help of SUBSTRING() and CHAR_LENGTH() methods. The syntax is as follows −"
},
{
"code": null,
"e": 1300,
"s": 1179,
"text": "select yourColumnName,SUBSTRING(yourColumnName,1,CHAR_LENGTH(yourColumnName) - 2) AS anyVariableName from yourTableName;"
},
{
"code": null,
"e": 1356,
"s": 1300,
"text": "To understand the above syntax, let us create a table −"
},
{
"code": null,
"e": 1473,
"s": 1356,
"text": "mysql> create table LastTwoCharacters\n −> (\n −> Words varchar(200)\n −> );\nQuery OK, 0 rows affected (0.71 sec)"
},
{
"code": null,
"e": 1597,
"s": 1473,
"text": "Now you can insert some records in the table with the help of select statement. The query to insert records is as follows −"
},
{
"code": null,
"e": 1876,
"s": 1597,
"text": "mysql> insert into LastTwoCharacters values('Hellooo');\nQuery OK, 1 row affected (0.23 sec)\n\nmysql> insert into LastTwoCharacters values('Worldsss');\nQuery OK, 1 row affected (0.10 sec)\n\nmysql> insert into LastTwoCharacters values('Johnson');\nQuery OK, 1 row affected (0.22 sec)"
},
{
"code": null,
"e": 1932,
"s": 1876,
"text": "Display all records with the help of select statement −"
},
{
"code": null,
"e": 1971,
"s": 1932,
"text": "mysql> select *from LastTwoCharacters;"
},
{
"code": null,
"e": 2001,
"s": 1971,
"text": "The following is the output −"
},
{
"code": null,
"e": 2117,
"s": 2001,
"text": "+----------+\n| Words |\n+----------+\n| Hellooo |\n| Worldsss |\n| Johnson |\n+----------+\n3 rows in set (0.00 sec)"
},
{
"code": null,
"e": 2191,
"s": 2117,
"text": "The following is the query to strip the last two characters of a column −"
},
{
"code": null,
"e": 2302,
"s": 2191,
"text": "mysql> select Words,SUBSTRING(Words,1,CHAR_LENGTH(Words) - 2) AS AfterStripLastTwoChar from LastTwoCharacters;"
},
{
"code": null,
"e": 2332,
"s": 2302,
"text": "The following is the output −"
},
{
"code": null,
"e": 2616,
"s": 2332,
"text": "+----------+-----------------------+\n| Words | AfterStripLastTwoChar |\n+----------+-----------------------+\n| Hellooo | Hello |\n| Worldsss | Worlds |\n| Johnson | Johns |\n+----------+-----------------------+\n3 rows in set (0.00 sec)"
}
] |
Component State in React Native - GeeksforGeeks | 16 Mar, 2022
Introduction to React Native How React Native works?
There are two types of data that control a component :
props : are immutable and are set by the parent and they are fixed throughout the lifetime of a component.
state : is mutable. This means that state can be updated in the future while props can’t. we can initialize state in the constructor, and then call setState when we want to change it.
props v/s state
Use props to pass data and settings through the component tree.
Never modify this.props inside of a component; consider props immutable.
Use props to for event handlers to communicate with child components.
Use state for storing simple view state like whether or not drop-down options are visible.
Never modify this.state directly, use this.setstate instead.
store: A store holds the whole state tree of the application. The only way to change the state inside it is to dispatch an action on it. A store is not a class. It’s just an object with a few methods on it and I’ll explain about it upcoming articles on React Native. A more explained way to understand the difference between props, state and store on how and where to use these components.
Lets take an example to know more about the state:
For example, let’s say we want to make text that shows/hide the password from the TextInput layout. The “whether the password is hidden or not” changes over time, so that should be kept in state.
JavaScript
import React, { Component } from 'react';import { AppRegistry, StyleSheet, Text, View, TextInput, TouchableOpacity} from 'react-native'; export default class GeeksForGeeks extends Component { state: { password: string, isPasswordHidden: boolean, toggleText: string, } constructor(props: Props) { super(props); this.state = { password: '', isPasswordHidden: true, toggleText: 'Show', }; } handleToggle = () => { const { isPasswordHidden } = this.state; if (isPasswordHidden) { this.setState({ isPasswordHidden: false }); this.setState({ toggleText: 'Hide' }); } else { this.setState({ isPasswordHidden: true }); this.setState({ toggleText: 'Show' }); } }; render() { return ( <View style={styles.container}> <TextInput secureTextEntry={this.state.isPasswordHidden} style={{ width: 400, height: 50, backgroundColor: '#212D3B', color: 'white' }} /> <TouchableOpacity onPress={this.handleToggle} > <Text>{this.state.toggleText}</Text> </TouchableOpacity> </View> ); }} const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', }}); AppRegistry.registerComponent('GeeksForGeeks', () => GeeksForGeeks);
Here, We have initialized state in the constructor, and then called setState when we want to update it. We’re actually using two states. One for updating the boolean for password is hidden or not and one for the string text of Show/Hide password. After running the application, you will see something like this :
For full working application, Checkout the link : GithubNow, lets see an example to know more about the props :
JavaScript
import React, { Component } from 'react';import { AppRegistry, Image} from 'react-native'; export default class GeeksForGeeks extends Component { render() { const image = { url: 'https://www.appcoda.com/wp-content/uploads/2015/04/react-native.png' }; return ( <Image source={image} style={{padding: 186, flex: 1, alignSelf: 'center', justifyContent: 'center', resizeMode: 'contain', }}/> ); }} AppRegistry.registerComponent('GeeksForGeeks', () => GeeksForGeeks);
Now Here, we’re actually fetching the image from the url and displaying it on the app. If you’ll notice, now we are just using one link displaying the image. There is no update done by someone else using the app. This is basically called props.
Demo after running the application:
This article is contributed by Saket Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
sagar0719kumar
GBlog
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Roadmap to Become a Web Developer in 2022
DSA Sheet by Love Babbar
Working with PDF files in Python
10 Best IDE For Web Developers in 2022
Top 10 System Design Interview Questions and Answers
Genetic Algorithms
A Freshers Guide To Programming
Supervised and Unsupervised learning
Top 5 Python Libraries For Big Data
XML parsing in Python | [
{
"code": null,
"e": 25122,
"s": 25094,
"text": "\n16 Mar, 2022"
},
{
"code": null,
"e": 25175,
"s": 25122,
"text": "Introduction to React Native How React Native works?"
},
{
"code": null,
"e": 25231,
"s": 25175,
"text": "There are two types of data that control a component : "
},
{
"code": null,
"e": 25338,
"s": 25231,
"text": "props : are immutable and are set by the parent and they are fixed throughout the lifetime of a component."
},
{
"code": null,
"e": 25523,
"s": 25338,
"text": "state : is mutable. This means that state can be updated in the future while props can’t. we can initialize state in the constructor, and then call setState when we want to change it. "
},
{
"code": null,
"e": 25540,
"s": 25523,
"text": "props v/s state "
},
{
"code": null,
"e": 25604,
"s": 25540,
"text": "Use props to pass data and settings through the component tree."
},
{
"code": null,
"e": 25677,
"s": 25604,
"text": "Never modify this.props inside of a component; consider props immutable."
},
{
"code": null,
"e": 25747,
"s": 25677,
"text": "Use props to for event handlers to communicate with child components."
},
{
"code": null,
"e": 25838,
"s": 25747,
"text": "Use state for storing simple view state like whether or not drop-down options are visible."
},
{
"code": null,
"e": 25899,
"s": 25838,
"text": "Never modify this.state directly, use this.setstate instead."
},
{
"code": null,
"e": 26290,
"s": 25899,
"text": "store: A store holds the whole state tree of the application. The only way to change the state inside it is to dispatch an action on it. A store is not a class. It’s just an object with a few methods on it and I’ll explain about it upcoming articles on React Native. A more explained way to understand the difference between props, state and store on how and where to use these components. "
},
{
"code": null,
"e": 26342,
"s": 26290,
"text": "Lets take an example to know more about the state: "
},
{
"code": null,
"e": 26540,
"s": 26342,
"text": "For example, let’s say we want to make text that shows/hide the password from the TextInput layout. The “whether the password is hidden or not” changes over time, so that should be kept in state. "
},
{
"code": null,
"e": 26551,
"s": 26540,
"text": "JavaScript"
},
{
"code": "import React, { Component } from 'react';import { AppRegistry, StyleSheet, Text, View, TextInput, TouchableOpacity} from 'react-native'; export default class GeeksForGeeks extends Component { state: { password: string, isPasswordHidden: boolean, toggleText: string, } constructor(props: Props) { super(props); this.state = { password: '', isPasswordHidden: true, toggleText: 'Show', }; } handleToggle = () => { const { isPasswordHidden } = this.state; if (isPasswordHidden) { this.setState({ isPasswordHidden: false }); this.setState({ toggleText: 'Hide' }); } else { this.setState({ isPasswordHidden: true }); this.setState({ toggleText: 'Show' }); } }; render() { return ( <View style={styles.container}> <TextInput secureTextEntry={this.state.isPasswordHidden} style={{ width: 400, height: 50, backgroundColor: '#212D3B', color: 'white' }} /> <TouchableOpacity onPress={this.handleToggle} > <Text>{this.state.toggleText}</Text> </TouchableOpacity> </View> ); }} const styles = StyleSheet.create({ container: { flex: 1, justifyContent: 'center', alignItems: 'center', }}); AppRegistry.registerComponent('GeeksForGeeks', () => GeeksForGeeks);",
"e": 27913,
"s": 26551,
"text": null
},
{
"code": null,
"e": 28227,
"s": 27913,
"text": "Here, We have initialized state in the constructor, and then called setState when we want to update it. We’re actually using two states. One for updating the boolean for password is hidden or not and one for the string text of Show/Hide password. After running the application, you will see something like this : "
},
{
"code": null,
"e": 28341,
"s": 28229,
"text": "For full working application, Checkout the link : GithubNow, lets see an example to know more about the props :"
},
{
"code": null,
"e": 28352,
"s": 28341,
"text": "JavaScript"
},
{
"code": "import React, { Component } from 'react';import { AppRegistry, Image} from 'react-native'; export default class GeeksForGeeks extends Component { render() { const image = { url: 'https://www.appcoda.com/wp-content/uploads/2015/04/react-native.png' }; return ( <Image source={image} style={{padding: 186, flex: 1, alignSelf: 'center', justifyContent: 'center', resizeMode: 'contain', }}/> ); }} AppRegistry.registerComponent('GeeksForGeeks', () => GeeksForGeeks);",
"e": 28846,
"s": 28352,
"text": null
},
{
"code": null,
"e": 29091,
"s": 28846,
"text": "Now Here, we’re actually fetching the image from the url and displaying it on the app. If you’ll notice, now we are just using one link displaying the image. There is no update done by someone else using the app. This is basically called props."
},
{
"code": null,
"e": 29128,
"s": 29091,
"text": "Demo after running the application: "
},
{
"code": null,
"e": 29548,
"s": 29128,
"text": "This article is contributed by Saket Kumar. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks.Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 29563,
"s": 29548,
"text": "sagar0719kumar"
},
{
"code": null,
"e": 29569,
"s": 29563,
"text": "GBlog"
},
{
"code": null,
"e": 29667,
"s": 29569,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29709,
"s": 29667,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 29734,
"s": 29709,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 29767,
"s": 29734,
"text": "Working with PDF files in Python"
},
{
"code": null,
"e": 29806,
"s": 29767,
"text": "10 Best IDE For Web Developers in 2022"
},
{
"code": null,
"e": 29859,
"s": 29806,
"text": "Top 10 System Design Interview Questions and Answers"
},
{
"code": null,
"e": 29878,
"s": 29859,
"text": "Genetic Algorithms"
},
{
"code": null,
"e": 29910,
"s": 29878,
"text": "A Freshers Guide To Programming"
},
{
"code": null,
"e": 29947,
"s": 29910,
"text": "Supervised and Unsupervised learning"
},
{
"code": null,
"e": 29983,
"s": 29947,
"text": "Top 5 Python Libraries For Big Data"
}
] |
Iterate Over Keys and Values in Python Dictionaries | Towards Data Science | Dictionaries are among the most useful data structures in Python. Iterating over the keys and values of dictionaries is one of the most commonly used operations that we need to perform while working with such objects.
In today’s short guide, we are going to explore how to iterate over dictionaries in Python 3. Specifically, we’ll discuss how to
iterate over just keys
iterate over just values
iterate over both keys and values in one go
First, let’s create an example dictionary that we’ll reference across the article to demonstrate a few concepts.
my_dict = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
In order to iterate only over keys you need to call keys() method that returns a new view of the dictionary’s keys.
for key in my_dict.keys(): print(key)# Outputabcd
In order to iterate over the values of the dictionary, you simply need to call values() method that returns a new view containing dictionary’s values.
for value in my_dict.values(): print(value)# Output1234
Now in case you need to iterate over both keys and values in one go, you can call items(). The method will return a tuple containing the key value pairs in the form (key, value).
for key, value in my_dict.items(): print(f'Key: {key}, Value: {value}')# OutputKey: a, Value: 1Key: b, Value: 2Key: c, Value: 3Key: d, Value: 4
Note that the objects returned by the three methods explored above, return view objects. In other words, these objects provide a dynamic view of the entries in the dictionary that means that once an entry gets modified, the view object will reflect this change as well.
The notation in Python 2 is slightly different than Python 3. In order to iterate over keys and values you need to call iteritems() method as shown below:
for key, value in my_dict.iteritems(): print(f'Key: {key}, Value: {value}')# OutputKey: a, Value: 1Key: b, Value: 2Key: c, Value: 3Key: d, Value: 4
In today’s article we discussed how to iterate over keys and values independently or at the same time, in one go using the methods keys(), values() and items(). Additionally, we discussed how to iterate over keys and values using the Python 2 notation and iteritems() method.
Become a member and read every story on Medium. Your membership fee directly supports me and other writers you read.
You may also like | [
{
"code": null,
"e": 390,
"s": 172,
"text": "Dictionaries are among the most useful data structures in Python. Iterating over the keys and values of dictionaries is one of the most commonly used operations that we need to perform while working with such objects."
},
{
"code": null,
"e": 519,
"s": 390,
"text": "In today’s short guide, we are going to explore how to iterate over dictionaries in Python 3. Specifically, we’ll discuss how to"
},
{
"code": null,
"e": 542,
"s": 519,
"text": "iterate over just keys"
},
{
"code": null,
"e": 567,
"s": 542,
"text": "iterate over just values"
},
{
"code": null,
"e": 611,
"s": 567,
"text": "iterate over both keys and values in one go"
},
{
"code": null,
"e": 724,
"s": 611,
"text": "First, let’s create an example dictionary that we’ll reference across the article to demonstrate a few concepts."
},
{
"code": null,
"e": 767,
"s": 724,
"text": "my_dict = {'a': 1, 'b': 2, 'c': 3, 'd': 4}"
},
{
"code": null,
"e": 883,
"s": 767,
"text": "In order to iterate only over keys you need to call keys() method that returns a new view of the dictionary’s keys."
},
{
"code": null,
"e": 936,
"s": 883,
"text": "for key in my_dict.keys(): print(key)# Outputabcd"
},
{
"code": null,
"e": 1087,
"s": 936,
"text": "In order to iterate over the values of the dictionary, you simply need to call values() method that returns a new view containing dictionary’s values."
},
{
"code": null,
"e": 1146,
"s": 1087,
"text": "for value in my_dict.values(): print(value)# Output1234"
},
{
"code": null,
"e": 1325,
"s": 1146,
"text": "Now in case you need to iterate over both keys and values in one go, you can call items(). The method will return a tuple containing the key value pairs in the form (key, value)."
},
{
"code": null,
"e": 1472,
"s": 1325,
"text": "for key, value in my_dict.items(): print(f'Key: {key}, Value: {value}')# OutputKey: a, Value: 1Key: b, Value: 2Key: c, Value: 3Key: d, Value: 4"
},
{
"code": null,
"e": 1742,
"s": 1472,
"text": "Note that the objects returned by the three methods explored above, return view objects. In other words, these objects provide a dynamic view of the entries in the dictionary that means that once an entry gets modified, the view object will reflect this change as well."
},
{
"code": null,
"e": 1897,
"s": 1742,
"text": "The notation in Python 2 is slightly different than Python 3. In order to iterate over keys and values you need to call iteritems() method as shown below:"
},
{
"code": null,
"e": 2048,
"s": 1897,
"text": "for key, value in my_dict.iteritems(): print(f'Key: {key}, Value: {value}')# OutputKey: a, Value: 1Key: b, Value: 2Key: c, Value: 3Key: d, Value: 4"
},
{
"code": null,
"e": 2324,
"s": 2048,
"text": "In today’s article we discussed how to iterate over keys and values independently or at the same time, in one go using the methods keys(), values() and items(). Additionally, we discussed how to iterate over keys and values using the Python 2 notation and iteritems() method."
},
{
"code": null,
"e": 2441,
"s": 2324,
"text": "Become a member and read every story on Medium. Your membership fee directly supports me and other writers you read."
}
] |
Datagram in Python - GeeksforGeeks | 21 Jan, 2019
Datagram is a type of wireless communication between two endpoints and it requires the ip address and port number to establish connection. Datagram uses UDP(User Datagram Protocol), it converts user data into small packets or chunks of data so that it can be sent over a network in a continuous manner. In UDP type of connection data packets have no records at the sender’s end, it just sends by the user then it’s upon receiver that it wants to accept data packets or not. UDP type of connection we use in video conferencing or in video calls.
Let’s learn how to send some data on localhost network and receive the same data as a receiver on the same machine to demonstrate the use of datagram.
To know the IP configurations of your computer, use this command on terminal:
ipconfig
Code #1: For sender’s end.
# importing socket moduleimport socket UDP_IP = "localhost"UDP_PORT = 8080MESSAGE = "GeeksforGeeks" print ("message:", MESSAGE) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)sock.sendto(bytes(MESSAGE, "utf-8"), (UDP_IP, UDP_PORT))
Output:
message: Geeksforgeeks
Explanation:
Specify your IPv4 address in the place of UDP_IP and remember port number 8080 is your localhost port number so you need to specify the same.
socket.socket() uses AF_INET in order to use the IPv4 address of your system and SOCK_DGRAM is a type of protocol that we are using for communication.
Use sock object to call function sendto(), then pass argument of tuple containing IP address and port number.
Step #2: At the receiver’s end.
# importing socket moduleimport socket UDP_IP = "localhost"UDP_PORT = 8080 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)sock.bind((UDP_IP, UDP_PORT)) while True: # buffer size is 1024 bytes data, addr = sock.recvfrom(1024) print ("Received message:", data)
Output:
Received Message: b'Geeksforgeeks'
Explanation:
We need to specify the localhost IPaddress in UDP_IP and use the socket.socket() function same as above.
bind the parameter to the socket object so that anything receive at these port address will we catch at this end. In the loop, define the buffer size as 1024, As message is not big, it is sufficient buffer size.
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
How to drop one or multiple columns in Pandas Dataframe
How To Convert Python Dictionary To JSON?
Check if element exists in list in Python
Defaultdict in Python
Python | os.path.join() method
Python | Get unique values from a list
Selecting rows in pandas DataFrame based on conditions
Create a directory in Python
Python | Pandas dataframe.groupby() | [
{
"code": null,
"e": 24292,
"s": 24264,
"text": "\n21 Jan, 2019"
},
{
"code": null,
"e": 24837,
"s": 24292,
"text": "Datagram is a type of wireless communication between two endpoints and it requires the ip address and port number to establish connection. Datagram uses UDP(User Datagram Protocol), it converts user data into small packets or chunks of data so that it can be sent over a network in a continuous manner. In UDP type of connection data packets have no records at the sender’s end, it just sends by the user then it’s upon receiver that it wants to accept data packets or not. UDP type of connection we use in video conferencing or in video calls."
},
{
"code": null,
"e": 24988,
"s": 24837,
"text": "Let’s learn how to send some data on localhost network and receive the same data as a receiver on the same machine to demonstrate the use of datagram."
},
{
"code": null,
"e": 25066,
"s": 24988,
"text": "To know the IP configurations of your computer, use this command on terminal:"
},
{
"code": null,
"e": 25075,
"s": 25066,
"text": "ipconfig"
},
{
"code": null,
"e": 25102,
"s": 25075,
"text": "Code #1: For sender’s end."
},
{
"code": "# importing socket moduleimport socket UDP_IP = \"localhost\"UDP_PORT = 8080MESSAGE = \"GeeksforGeeks\" print (\"message:\", MESSAGE) sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)sock.sendto(bytes(MESSAGE, \"utf-8\"), (UDP_IP, UDP_PORT))",
"e": 25345,
"s": 25102,
"text": null
},
{
"code": null,
"e": 25353,
"s": 25345,
"text": "Output:"
},
{
"code": null,
"e": 25376,
"s": 25353,
"text": "message: Geeksforgeeks"
},
{
"code": null,
"e": 25389,
"s": 25376,
"text": "Explanation:"
},
{
"code": null,
"e": 25531,
"s": 25389,
"text": "Specify your IPv4 address in the place of UDP_IP and remember port number 8080 is your localhost port number so you need to specify the same."
},
{
"code": null,
"e": 25682,
"s": 25531,
"text": "socket.socket() uses AF_INET in order to use the IPv4 address of your system and SOCK_DGRAM is a type of protocol that we are using for communication."
},
{
"code": null,
"e": 25792,
"s": 25682,
"text": "Use sock object to call function sendto(), then pass argument of tuple containing IP address and port number."
},
{
"code": null,
"e": 25824,
"s": 25792,
"text": "Step #2: At the receiver’s end."
},
{
"code": "# importing socket moduleimport socket UDP_IP = \"localhost\"UDP_PORT = 8080 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)sock.bind((UDP_IP, UDP_PORT)) while True: # buffer size is 1024 bytes data, addr = sock.recvfrom(1024) print (\"Received message:\", data)",
"e": 26104,
"s": 25824,
"text": null
},
{
"code": null,
"e": 26112,
"s": 26104,
"text": "Output:"
},
{
"code": null,
"e": 26147,
"s": 26112,
"text": "Received Message: b'Geeksforgeeks'"
},
{
"code": null,
"e": 26160,
"s": 26147,
"text": "Explanation:"
},
{
"code": null,
"e": 26265,
"s": 26160,
"text": "We need to specify the localhost IPaddress in UDP_IP and use the socket.socket() function same as above."
},
{
"code": null,
"e": 26477,
"s": 26265,
"text": "bind the parameter to the socket object so that anything receive at these port address will we catch at this end. In the loop, define the buffer size as 1024, As message is not big, it is sufficient buffer size."
},
{
"code": null,
"e": 26484,
"s": 26477,
"text": "Python"
},
{
"code": null,
"e": 26582,
"s": 26484,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26614,
"s": 26582,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 26670,
"s": 26614,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 26712,
"s": 26670,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 26754,
"s": 26712,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 26776,
"s": 26754,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 26807,
"s": 26776,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 26846,
"s": 26807,
"text": "Python | Get unique values from a list"
},
{
"code": null,
"e": 26901,
"s": 26846,
"text": "Selecting rows in pandas DataFrame based on conditions"
},
{
"code": null,
"e": 26930,
"s": 26901,
"text": "Create a directory in Python"
}
] |
Quickly extract all links from a web page using the browser console | by Phil Gorman | Towards Data Science | Extracting and cleaning data from websites and documents is my bread and butter and I have really enjoyed learning how to systematically extract data from multiple web pages and even multiple websites using Python and R. But sometimes a project only needs a small amount of data, from just one page on a website.
Previously when a case like this arose, I would still fire up my Python IDE or RStudio, write and execute a script to extract this information. This is using a sledgehammer to crack a nut. Regular old JavaScript is powerful enough to extract information from a single web page, and the JavaScript in question can be run in the browser’s developer console.
In this example, I am extracting all links from a web page, as this is a task I regularly perform on web pages. However, this code would work equally well to extract any other text element types in HTML documents, with a few small changes. When this code runs, it opens a new tab in the browser and outputs a table containing the text of each hyperlink and the link itself, so there is some context to what each link is pointing to.
Literally any browser made in the past 10 years.
The bit of code I’ll be providing further down the page.
That’s it!
Open up your browser (yes, this even works in Internet Explorer if you’re a glutton for punishment) and navigate to the page from which you’d like to extract links.
I’m using the Select Committee inquiries list from the 2017 Parliament page as an example — it is a page with a massive amount of links that, as a grouping, may be useful to a lot of people.
Now we just need to open up the developer console and run the code. To open the developer console, you can either hit F12 or right-click the page and select “Inspect” or ”Inspect Element” depending on your browser of choice. This will open up the console, into which you can type or copy and paste snippets of code.
This is the code snippet you will need to place into the console:
var x = document.querySelectorAll("a");var myarray = []for (var i=0; i<x.length; i++){var nametext = x[i].textContent;var cleantext = nametext.replace(/\s+/g, ' ').trim();var cleanlink = x[i].href;myarray.push([cleantext,cleanlink]);};function make_table() { var table = '<table><thead><th>Name</th><th>Links</th></thead><tbody>'; for (var i=0; i<myarray.length; i++) { table += '<tr><td>'+ myarray[i][0] + '</td><td>'+myarray[i][1]+'</td></tr>'; }; var w = window.open("");w.document.write(table); }make_table()
Then just hit enter (or the run button in IE. But really, why are you using IE?!). This will open up a new tab in your browser with a table containing all the link text and hyperlinks from your chosen web page.
This table can then be copied and pasted into a spreadsheet or document to be used as you please.
Here is a breakdown of the code and what each aspect does.
var x = document.querySelectorAll("a");var myarray = []
Here we are finding all of the “a” elements on the page (a elements are links) and assigning them to the variable x. Then we create an array variable, but leave it empty for now.
for (var i=0; i<x.length; i++){var nametext = x[i].textContent;var cleantext = nametext.replace(/\s+/g, ' ').trim();var cleanlink = x[i].href;myarray.push([cleantext,cleanlink]);};
We then loop over all the “a” elements in x, and for each element, we try to find the text content of the element and the link.
For the text content, we replace the white space with single spaces and trim the text, as there may be large swathes of white space that would make our table unreadable.
function make_table() { var table = '<table><thead><th>Name</th><th>Links</th></thead><tbody>'; for (var i=0; i<myarray.length; i++) { table += '<tr><td>'+ myarray[i][0] + '</td><td>'+myarray[i][1]+'</td></tr>'; }; var w = window.open("");w.document.write(table); }make_table()
We then make the table using the function “make_table()”. This creates a variable, table, with the beginnings of a HTML table and the table headers. We then use a for loop to add table rows containing the link text and hyperlinks.
We then open a new window using “window.open()” and write the HTML table to that window using “document.write()”.
There is a drawback to the current code — it will take ALL of the links on a page. This means all the links in the menus, any jump links that take you to different points on the current page, the contact, T&Cs, sitemap links at the base of the page, etc...
You could be more specific and look for all “a” elements within a certain area of the web page. In the case of the committee inquiries page above we just need to change the first line of code to make the querySelector look at a specific area of the page — perhaps just the a-z listing in the centre of the page.
To do this, right-click the area of the page from which you’d like to take the links and click “Inspect”. You should be able to then see what that element of the page is called. In this case, the element is a “div” with the class “a-to-z-listing”. We can simply change the first line to look for all “a” elements in the element with class “a-to-z-listing”:
var x = document.querySelectorAll(".a-to-z-listing a");
I’m still going to use Python and R in the vast majority of my web scraping activity, but it is helpful to have a quick and easy way to extract information from a web page without the need to open any other applications.
What would you use this code for? Is there a more elegant way to construct this code? Please let me know in the responses below. | [
{
"code": null,
"e": 484,
"s": 171,
"text": "Extracting and cleaning data from websites and documents is my bread and butter and I have really enjoyed learning how to systematically extract data from multiple web pages and even multiple websites using Python and R. But sometimes a project only needs a small amount of data, from just one page on a website."
},
{
"code": null,
"e": 840,
"s": 484,
"text": "Previously when a case like this arose, I would still fire up my Python IDE or RStudio, write and execute a script to extract this information. This is using a sledgehammer to crack a nut. Regular old JavaScript is powerful enough to extract information from a single web page, and the JavaScript in question can be run in the browser’s developer console."
},
{
"code": null,
"e": 1273,
"s": 840,
"text": "In this example, I am extracting all links from a web page, as this is a task I regularly perform on web pages. However, this code would work equally well to extract any other text element types in HTML documents, with a few small changes. When this code runs, it opens a new tab in the browser and outputs a table containing the text of each hyperlink and the link itself, so there is some context to what each link is pointing to."
},
{
"code": null,
"e": 1322,
"s": 1273,
"text": "Literally any browser made in the past 10 years."
},
{
"code": null,
"e": 1379,
"s": 1322,
"text": "The bit of code I’ll be providing further down the page."
},
{
"code": null,
"e": 1390,
"s": 1379,
"text": "That’s it!"
},
{
"code": null,
"e": 1555,
"s": 1390,
"text": "Open up your browser (yes, this even works in Internet Explorer if you’re a glutton for punishment) and navigate to the page from which you’d like to extract links."
},
{
"code": null,
"e": 1746,
"s": 1555,
"text": "I’m using the Select Committee inquiries list from the 2017 Parliament page as an example — it is a page with a massive amount of links that, as a grouping, may be useful to a lot of people."
},
{
"code": null,
"e": 2062,
"s": 1746,
"text": "Now we just need to open up the developer console and run the code. To open the developer console, you can either hit F12 or right-click the page and select “Inspect” or ”Inspect Element” depending on your browser of choice. This will open up the console, into which you can type or copy and paste snippets of code."
},
{
"code": null,
"e": 2128,
"s": 2062,
"text": "This is the code snippet you will need to place into the console:"
},
{
"code": null,
"e": 2664,
"s": 2128,
"text": "var x = document.querySelectorAll(\"a\");var myarray = []for (var i=0; i<x.length; i++){var nametext = x[i].textContent;var cleantext = nametext.replace(/\\s+/g, ' ').trim();var cleanlink = x[i].href;myarray.push([cleantext,cleanlink]);};function make_table() { var table = '<table><thead><th>Name</th><th>Links</th></thead><tbody>'; for (var i=0; i<myarray.length; i++) { table += '<tr><td>'+ myarray[i][0] + '</td><td>'+myarray[i][1]+'</td></tr>'; }; var w = window.open(\"\");w.document.write(table); }make_table()"
},
{
"code": null,
"e": 2875,
"s": 2664,
"text": "Then just hit enter (or the run button in IE. But really, why are you using IE?!). This will open up a new tab in your browser with a table containing all the link text and hyperlinks from your chosen web page."
},
{
"code": null,
"e": 2973,
"s": 2875,
"text": "This table can then be copied and pasted into a spreadsheet or document to be used as you please."
},
{
"code": null,
"e": 3032,
"s": 2973,
"text": "Here is a breakdown of the code and what each aspect does."
},
{
"code": null,
"e": 3088,
"s": 3032,
"text": "var x = document.querySelectorAll(\"a\");var myarray = []"
},
{
"code": null,
"e": 3267,
"s": 3088,
"text": "Here we are finding all of the “a” elements on the page (a elements are links) and assigning them to the variable x. Then we create an array variable, but leave it empty for now."
},
{
"code": null,
"e": 3448,
"s": 3267,
"text": "for (var i=0; i<x.length; i++){var nametext = x[i].textContent;var cleantext = nametext.replace(/\\s+/g, ' ').trim();var cleanlink = x[i].href;myarray.push([cleantext,cleanlink]);};"
},
{
"code": null,
"e": 3576,
"s": 3448,
"text": "We then loop over all the “a” elements in x, and for each element, we try to find the text content of the element and the link."
},
{
"code": null,
"e": 3746,
"s": 3576,
"text": "For the text content, we replace the white space with single spaces and trim the text, as there may be large swathes of white space that would make our table unreadable."
},
{
"code": null,
"e": 4047,
"s": 3746,
"text": "function make_table() { var table = '<table><thead><th>Name</th><th>Links</th></thead><tbody>'; for (var i=0; i<myarray.length; i++) { table += '<tr><td>'+ myarray[i][0] + '</td><td>'+myarray[i][1]+'</td></tr>'; }; var w = window.open(\"\");w.document.write(table); }make_table()"
},
{
"code": null,
"e": 4278,
"s": 4047,
"text": "We then make the table using the function “make_table()”. This creates a variable, table, with the beginnings of a HTML table and the table headers. We then use a for loop to add table rows containing the link text and hyperlinks."
},
{
"code": null,
"e": 4392,
"s": 4278,
"text": "We then open a new window using “window.open()” and write the HTML table to that window using “document.write()”."
},
{
"code": null,
"e": 4649,
"s": 4392,
"text": "There is a drawback to the current code — it will take ALL of the links on a page. This means all the links in the menus, any jump links that take you to different points on the current page, the contact, T&Cs, sitemap links at the base of the page, etc..."
},
{
"code": null,
"e": 4961,
"s": 4649,
"text": "You could be more specific and look for all “a” elements within a certain area of the web page. In the case of the committee inquiries page above we just need to change the first line of code to make the querySelector look at a specific area of the page — perhaps just the a-z listing in the centre of the page."
},
{
"code": null,
"e": 5318,
"s": 4961,
"text": "To do this, right-click the area of the page from which you’d like to take the links and click “Inspect”. You should be able to then see what that element of the page is called. In this case, the element is a “div” with the class “a-to-z-listing”. We can simply change the first line to look for all “a” elements in the element with class “a-to-z-listing”:"
},
{
"code": null,
"e": 5374,
"s": 5318,
"text": "var x = document.querySelectorAll(\".a-to-z-listing a\");"
},
{
"code": null,
"e": 5595,
"s": 5374,
"text": "I’m still going to use Python and R in the vast majority of my web scraping activity, but it is helpful to have a quick and easy way to extract information from a web page without the need to open any other applications."
}
] |
PHP | ImagickDraw line() Function - GeeksforGeeks | 10 Oct, 2021
The ImagickDraw::line() function is an inbuilt function in Imagick library of PHP which is used to draw a line. This function draw the line using the current stroke color, stroke opacity, and stroke width.
Syntax:
bool ImagickDraw::line( $sx, $sy, $ex, $ey )
Parameters: This function accepts four parameters as mentioned above and described below:
$sx: This parameter takes the value of starting x coordinate.
$sy: This parameter takes the value of starting y coordinate.
$ex: This parameter takes the value of ending x coordinate.
$ey: This parameter takes the value of ending y coordinate.
Return Value: This function returns TRUE on success.Below program illustrate the ImagickDraw::line() function in PHP:Program:
php
<?php // Create an Imagick object$draw = new \ImagickDraw(); // Function to fill color$draw->setFillColor('red'); // Function to draw line$draw->line(10, 30, 180, 200); // Create Imagick object$imagick = new \Imagick(); // Create new image of given size$imagick->newImage(300, 300, 'white'); // Set the image format$imagick->setImageFormat("png"); // Function to draw the image$imagick->drawImage($draw); header("Content-Type: image/png"); // Display the output image echo $imagick->getImageBlob();?>
Output:
Reference: http://php.net/manual/en/imagickdraw.line.php
saurabh1990aror
gabaa406
Image-Processing
PHP-function
PHP-Imagick
PHP
Web Technologies
PHP
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
How to Insert Form Data into Database using PHP ?
How to execute PHP code using command line ?
How to pop an alert message box using PHP ?
PHP in_array() Function
How to receive JSON POST with PHP ?
Express.js express.Router() Function
Installation of Node.js on Linux
How to set input type date in dd-mm-yyyy format using HTML ?
Differences between Functional Components and Class Components in React
How to insert spaces/tabs in text using HTML/CSS? | [
{
"code": null,
"e": 40527,
"s": 40499,
"text": "\n10 Oct, 2021"
},
{
"code": null,
"e": 40734,
"s": 40527,
"text": "The ImagickDraw::line() function is an inbuilt function in Imagick library of PHP which is used to draw a line. This function draw the line using the current stroke color, stroke opacity, and stroke width. "
},
{
"code": null,
"e": 40744,
"s": 40734,
"text": "Syntax: "
},
{
"code": null,
"e": 40789,
"s": 40744,
"text": "bool ImagickDraw::line( $sx, $sy, $ex, $ey )"
},
{
"code": null,
"e": 40881,
"s": 40789,
"text": "Parameters: This function accepts four parameters as mentioned above and described below: "
},
{
"code": null,
"e": 40943,
"s": 40881,
"text": "$sx: This parameter takes the value of starting x coordinate."
},
{
"code": null,
"e": 41005,
"s": 40943,
"text": "$sy: This parameter takes the value of starting y coordinate."
},
{
"code": null,
"e": 41065,
"s": 41005,
"text": "$ex: This parameter takes the value of ending x coordinate."
},
{
"code": null,
"e": 41125,
"s": 41065,
"text": "$ey: This parameter takes the value of ending y coordinate."
},
{
"code": null,
"e": 41253,
"s": 41125,
"text": "Return Value: This function returns TRUE on success.Below program illustrate the ImagickDraw::line() function in PHP:Program: "
},
{
"code": null,
"e": 41257,
"s": 41253,
"text": "php"
},
{
"code": "<?php // Create an Imagick object$draw = new \\ImagickDraw(); // Function to fill color$draw->setFillColor('red'); // Function to draw line$draw->line(10, 30, 180, 200); // Create Imagick object$imagick = new \\Imagick(); // Create new image of given size$imagick->newImage(300, 300, 'white'); // Set the image format$imagick->setImageFormat(\"png\"); // Function to draw the image$imagick->drawImage($draw); header(\"Content-Type: image/png\"); // Display the output image echo $imagick->getImageBlob();?>",
"e": 41760,
"s": 41257,
"text": null
},
{
"code": null,
"e": 41769,
"s": 41760,
"text": "Output: "
},
{
"code": null,
"e": 41827,
"s": 41769,
"text": "Reference: http://php.net/manual/en/imagickdraw.line.php "
},
{
"code": null,
"e": 41843,
"s": 41827,
"text": "saurabh1990aror"
},
{
"code": null,
"e": 41852,
"s": 41843,
"text": "gabaa406"
},
{
"code": null,
"e": 41869,
"s": 41852,
"text": "Image-Processing"
},
{
"code": null,
"e": 41882,
"s": 41869,
"text": "PHP-function"
},
{
"code": null,
"e": 41894,
"s": 41882,
"text": "PHP-Imagick"
},
{
"code": null,
"e": 41898,
"s": 41894,
"text": "PHP"
},
{
"code": null,
"e": 41915,
"s": 41898,
"text": "Web Technologies"
},
{
"code": null,
"e": 41919,
"s": 41915,
"text": "PHP"
},
{
"code": null,
"e": 42017,
"s": 41919,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 42026,
"s": 42017,
"text": "Comments"
},
{
"code": null,
"e": 42039,
"s": 42026,
"text": "Old Comments"
},
{
"code": null,
"e": 42089,
"s": 42039,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 42134,
"s": 42089,
"text": "How to execute PHP code using command line ?"
},
{
"code": null,
"e": 42178,
"s": 42134,
"text": "How to pop an alert message box using PHP ?"
},
{
"code": null,
"e": 42202,
"s": 42178,
"text": "PHP in_array() Function"
},
{
"code": null,
"e": 42238,
"s": 42202,
"text": "How to receive JSON POST with PHP ?"
},
{
"code": null,
"e": 42275,
"s": 42238,
"text": "Express.js express.Router() Function"
},
{
"code": null,
"e": 42308,
"s": 42275,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 42369,
"s": 42308,
"text": "How to set input type date in dd-mm-yyyy format using HTML ?"
},
{
"code": null,
"e": 42441,
"s": 42369,
"text": "Differences between Functional Components and Class Components in React"
}
] |
Pendulum: Probably The Best Python DateTime Library | by Christopher Tao | Towards Data Science | The datetime module is one of the most important build-in modules in Python. It is very flexible and powerful by providing many out-of-the-box solutions to real programming problems. For example, the timedelta is one of my favourite tools.
However, I have to say that there are also some limitations of the datetime module. For example, when we deal with timezones, it usually reveals the shortage. Sometimes, we have to introduce some 3rd party libraries as supplements. Also, some aspects in the datetime module are not very intuitive or commonly used in other programming languages.
In this article, I’ll introduce a 3rd party library called Pendulum which will resolve all of the issues of the build-in datetime module.
It is not uncommon to use some 3rd party libraries such as pytz to solve some issues that Python datetime is not good at. However, we still need to import the datetime module and use it as essential, because we need to use it to instantiate datetime objects.
Let me show you why Pendulum is a drop-in replacement. Firstly, we need to install it using pip.
pip install pendulum
The library's name is a bit long, so I would recommend importing it with an alias.
import pendulum as pdl
Although pd is a shorter abbreviation, but I would reserve it for Pandas. Don’t want to create any confusion.
Let’s create a datetime object using Pendulum, and have a look at its object type.
from datetime import datetimedt = pdl.datetime(2021, 11, 6)isinstance(dt, datetime)
There is no magic. It is only because Pendulum inherits the Python datetime object. So, we don’t need to worry about using some original features from the datetime module. Literally, a Pendulum datetime object is a Python datetime object.
The most impressive feature of the Pendulum library will be the timezones. This is also one of the critical issues of the built-in datetime module. Before Python 3.9, we have to involve pytz if we want to use the IANA timezones.
With the Pendulum library, we can create a datetime object with a timezone as easy as such.
dt_melbourne = pdl.datetime(2021, 11, 6, tz='Australia/Melbourne')dt_brisbane = pdl.datetime(2021, 11, 6, tz='Australia/Queensland')print(dt_melbourne)print(dt_brisbane)
In the above example, we have created two objects for the same time. However, the timezones are different. Pendulum also allows us to compare the times easily.
dt_melbourne.diff(dt_brisbane).in_hours()
How easy it is! Comparing two datetime objects with different timezones and getting the exact result!
If we need to define multiple datetime objects and want to re-use the timezone string, we can create a timezone object and pass it to the datetime constructor.
my_timezone = pdl.timezone('Australia/Melbourne')dt_melbourne = pdl.datetime(2021, 11, 6, tz=my_timezone)print(dt_melbourne)print(dt_melbourne.timezone.name)
One more cool feature is to return the time to a different time zone. For example, it is midnight in Melbourne, then what’s the time in Brisbane?
Parsing a datetime is probably the most common use case in programming. Python datetime module does that very well. However, compared to most other programming languages, Python uses a different format %Y%m%d.
Pendulum allows us to use the common format codes as follows.
pdl.from_format('2021-11-06 22:00:00', 'YYYY-MM-DD HH:mm:ss')
Also, it fully supports the RFC 3339 and ISO 8601 formats, as well as some other common formats. That means we don’t have to specify the format codes to parse a string into datetime.
pdl.parse('2021-11-01 22:00:00')
Pendulum also integrates many common datetime extensions such as dateutil. If we want the library to fall back on the dateutil parser, we can pass the flag strict=False.
pdl.parse('21-11-06', strict=False)
Apart from these, Pendulum supports much more formats on the fly. For example, the datetime with numbers only.
pdl.parse('20211106')
And this one is very interesting, specifying the year, the week number and the day of that week, Pendulum gives you the correct datetime.
pdl.parse('2021-W44-6')
If we want to have a date object or a time object particularly, just specify exact=True, which is much easier than the Python datetime module.
pdl.parse('20211106', exact=True)pdl.parse('22:00:00', exact=True)
After parsing strings into datetime objects, the next important thing will be to output the datetime into strings with formats.
First of all, let’s have a datetime object. Since Pendulum inherits Python datetime, we can use all the methods such as now().
dt = pdl.now()
Then, let me pick up several examples of “to string” methods from Pendulum to see how easy it is to output datetime with formats out-of-the-box.
dt.to_date_string() # with date onlydt.to_time_string() # with time onlydt.to_formatted_date_string() # month_abbr date, yeardt.to_day_datetime_string() # day, month_abbr date, year hh:mm am/pmdt.to_iso8601_string() # to ISO 9601 standarddt.to_atom_string() # to Atom formatdt.to_cookie_string() # to cookie style format
Of course, we can use the format code to customise the output string, and the format code is more intuitive.
dt.format('DD MMMM, YYYY dddd HH:mm:ss A')
Another cool stuff is that we can easily add some irrelevant strings into the format string and have them escape from the format.
dt.format('[Hi! Today is] DD MMMM, YYYY dddd HH:mm:ss A')
In the built-in Python datetime module, the timedelta tool does the comparing jobs very well. However, Pendulum can even improve it by providing some more human-friendly output when comparing two datetime objects.
For example, the method diff_for_humans() will compare a datetime object with the current time and return a very human-friendly output.
dt1 = pdl.datetime(2021, 1, 1)dt1.diff_for_humans()dt2 = pdl.datetime(2021, 11, 7, 1)dt2.diff_for_humans()
One of the aspects that the built-in Python datetime could improve is finding the relative datetime based on a given one. For example, when we want to find the last day of the current month, we have to use relativedelta from the datetutil module.
from dateutil.relativedelta import relativedeltadatetime.datetime(2013, 2, 21) + relativedelta(day=31)
Also, the code is not quite readable because we are using day=31 as the argument, although it does the tricks when the month has less than 31 days.
While in Pendulum, it can’t be easier.
pdl.now().start_of('day') # find the start time of the daypdl.now().start_of('month')pdl.now().end_of('day')pdl.now().end_of('month')
Another inconvenience of the built-in datetime module is to find a day of a week. For example, if we want to find the date of next Monday, this is probably the easiest way to do so.
from datetime import datetime, timedeltadatetime.now() + timedelta(days=(0-datetime.now().weekday()+7)%7)
It does the job, but with poor readability. A developer will need to spend some time to understand what is the logic for this line of code.
With Pendulum, it is as easy as such.
pdl.now().next(pdl.MONDAY)
We are even not bothered with thinking about whether it should be 0 or 1 to indicate Monday because Pendulum uses enumeration for that.
Similarly, we can use previous() method to find the previous Tuesday as follows. Also, we can keep the time part by setting the argument keep_time=True.
pdl.now().previous(pdl.TUESDAY)pdl.now().previous(pdl.TUESDAY, keep_time=True)
There are more “sugars” that are hidden in this library. Just a few more examples, such as getting yesterday or tomorrow.
print(pdl.yesterday())print(pdl.today())print(pdl.tomorrow())
It is also very easy to output the datetime with locale for different cultures and languages.
pdl.now().format('dddd DD MMMM YYYY', locale='zh')
One more example. If a person was born on Jan 1st, 1988, what’s the age of this person?
pdl.datetime(1988, 1, 1).age
There is more interesting stuff that can be found in the official documentation.
pendulum.eustace.io
In this article, I have introduced the 3rd party Python library Pendulum. It is a drop-in replacement for the built-in datetime module of Python. By using this library, many issues that the datetime module can solve, such as finding relative dates, can be solved easily now. More importantly, Pendulum provided neat and clean APIs to improve the readability of our code and these solutions are more intuitive.
medium.com
If you feel my articles are helpful, please consider joining Medium Membership to support me and thousands of other writers! (Click the link above) | [
{
"code": null,
"e": 411,
"s": 171,
"text": "The datetime module is one of the most important build-in modules in Python. It is very flexible and powerful by providing many out-of-the-box solutions to real programming problems. For example, the timedelta is one of my favourite tools."
},
{
"code": null,
"e": 757,
"s": 411,
"text": "However, I have to say that there are also some limitations of the datetime module. For example, when we deal with timezones, it usually reveals the shortage. Sometimes, we have to introduce some 3rd party libraries as supplements. Also, some aspects in the datetime module are not very intuitive or commonly used in other programming languages."
},
{
"code": null,
"e": 895,
"s": 757,
"text": "In this article, I’ll introduce a 3rd party library called Pendulum which will resolve all of the issues of the build-in datetime module."
},
{
"code": null,
"e": 1154,
"s": 895,
"text": "It is not uncommon to use some 3rd party libraries such as pytz to solve some issues that Python datetime is not good at. However, we still need to import the datetime module and use it as essential, because we need to use it to instantiate datetime objects."
},
{
"code": null,
"e": 1251,
"s": 1154,
"text": "Let me show you why Pendulum is a drop-in replacement. Firstly, we need to install it using pip."
},
{
"code": null,
"e": 1272,
"s": 1251,
"text": "pip install pendulum"
},
{
"code": null,
"e": 1355,
"s": 1272,
"text": "The library's name is a bit long, so I would recommend importing it with an alias."
},
{
"code": null,
"e": 1378,
"s": 1355,
"text": "import pendulum as pdl"
},
{
"code": null,
"e": 1488,
"s": 1378,
"text": "Although pd is a shorter abbreviation, but I would reserve it for Pandas. Don’t want to create any confusion."
},
{
"code": null,
"e": 1571,
"s": 1488,
"text": "Let’s create a datetime object using Pendulum, and have a look at its object type."
},
{
"code": null,
"e": 1655,
"s": 1571,
"text": "from datetime import datetimedt = pdl.datetime(2021, 11, 6)isinstance(dt, datetime)"
},
{
"code": null,
"e": 1894,
"s": 1655,
"text": "There is no magic. It is only because Pendulum inherits the Python datetime object. So, we don’t need to worry about using some original features from the datetime module. Literally, a Pendulum datetime object is a Python datetime object."
},
{
"code": null,
"e": 2123,
"s": 1894,
"text": "The most impressive feature of the Pendulum library will be the timezones. This is also one of the critical issues of the built-in datetime module. Before Python 3.9, we have to involve pytz if we want to use the IANA timezones."
},
{
"code": null,
"e": 2215,
"s": 2123,
"text": "With the Pendulum library, we can create a datetime object with a timezone as easy as such."
},
{
"code": null,
"e": 2385,
"s": 2215,
"text": "dt_melbourne = pdl.datetime(2021, 11, 6, tz='Australia/Melbourne')dt_brisbane = pdl.datetime(2021, 11, 6, tz='Australia/Queensland')print(dt_melbourne)print(dt_brisbane)"
},
{
"code": null,
"e": 2545,
"s": 2385,
"text": "In the above example, we have created two objects for the same time. However, the timezones are different. Pendulum also allows us to compare the times easily."
},
{
"code": null,
"e": 2587,
"s": 2545,
"text": "dt_melbourne.diff(dt_brisbane).in_hours()"
},
{
"code": null,
"e": 2689,
"s": 2587,
"text": "How easy it is! Comparing two datetime objects with different timezones and getting the exact result!"
},
{
"code": null,
"e": 2849,
"s": 2689,
"text": "If we need to define multiple datetime objects and want to re-use the timezone string, we can create a timezone object and pass it to the datetime constructor."
},
{
"code": null,
"e": 3007,
"s": 2849,
"text": "my_timezone = pdl.timezone('Australia/Melbourne')dt_melbourne = pdl.datetime(2021, 11, 6, tz=my_timezone)print(dt_melbourne)print(dt_melbourne.timezone.name)"
},
{
"code": null,
"e": 3153,
"s": 3007,
"text": "One more cool feature is to return the time to a different time zone. For example, it is midnight in Melbourne, then what’s the time in Brisbane?"
},
{
"code": null,
"e": 3363,
"s": 3153,
"text": "Parsing a datetime is probably the most common use case in programming. Python datetime module does that very well. However, compared to most other programming languages, Python uses a different format %Y%m%d."
},
{
"code": null,
"e": 3425,
"s": 3363,
"text": "Pendulum allows us to use the common format codes as follows."
},
{
"code": null,
"e": 3487,
"s": 3425,
"text": "pdl.from_format('2021-11-06 22:00:00', 'YYYY-MM-DD HH:mm:ss')"
},
{
"code": null,
"e": 3670,
"s": 3487,
"text": "Also, it fully supports the RFC 3339 and ISO 8601 formats, as well as some other common formats. That means we don’t have to specify the format codes to parse a string into datetime."
},
{
"code": null,
"e": 3703,
"s": 3670,
"text": "pdl.parse('2021-11-01 22:00:00')"
},
{
"code": null,
"e": 3873,
"s": 3703,
"text": "Pendulum also integrates many common datetime extensions such as dateutil. If we want the library to fall back on the dateutil parser, we can pass the flag strict=False."
},
{
"code": null,
"e": 3909,
"s": 3873,
"text": "pdl.parse('21-11-06', strict=False)"
},
{
"code": null,
"e": 4020,
"s": 3909,
"text": "Apart from these, Pendulum supports much more formats on the fly. For example, the datetime with numbers only."
},
{
"code": null,
"e": 4042,
"s": 4020,
"text": "pdl.parse('20211106')"
},
{
"code": null,
"e": 4180,
"s": 4042,
"text": "And this one is very interesting, specifying the year, the week number and the day of that week, Pendulum gives you the correct datetime."
},
{
"code": null,
"e": 4204,
"s": 4180,
"text": "pdl.parse('2021-W44-6')"
},
{
"code": null,
"e": 4347,
"s": 4204,
"text": "If we want to have a date object or a time object particularly, just specify exact=True, which is much easier than the Python datetime module."
},
{
"code": null,
"e": 4414,
"s": 4347,
"text": "pdl.parse('20211106', exact=True)pdl.parse('22:00:00', exact=True)"
},
{
"code": null,
"e": 4542,
"s": 4414,
"text": "After parsing strings into datetime objects, the next important thing will be to output the datetime into strings with formats."
},
{
"code": null,
"e": 4669,
"s": 4542,
"text": "First of all, let’s have a datetime object. Since Pendulum inherits Python datetime, we can use all the methods such as now()."
},
{
"code": null,
"e": 4684,
"s": 4669,
"text": "dt = pdl.now()"
},
{
"code": null,
"e": 4829,
"s": 4684,
"text": "Then, let me pick up several examples of “to string” methods from Pendulum to see how easy it is to output datetime with formats out-of-the-box."
},
{
"code": null,
"e": 5156,
"s": 4829,
"text": "dt.to_date_string() # with date onlydt.to_time_string() # with time onlydt.to_formatted_date_string() # month_abbr date, yeardt.to_day_datetime_string() # day, month_abbr date, year hh:mm am/pmdt.to_iso8601_string() # to ISO 9601 standarddt.to_atom_string() # to Atom formatdt.to_cookie_string() # to cookie style format"
},
{
"code": null,
"e": 5265,
"s": 5156,
"text": "Of course, we can use the format code to customise the output string, and the format code is more intuitive."
},
{
"code": null,
"e": 5308,
"s": 5265,
"text": "dt.format('DD MMMM, YYYY dddd HH:mm:ss A')"
},
{
"code": null,
"e": 5438,
"s": 5308,
"text": "Another cool stuff is that we can easily add some irrelevant strings into the format string and have them escape from the format."
},
{
"code": null,
"e": 5496,
"s": 5438,
"text": "dt.format('[Hi! Today is] DD MMMM, YYYY dddd HH:mm:ss A')"
},
{
"code": null,
"e": 5710,
"s": 5496,
"text": "In the built-in Python datetime module, the timedelta tool does the comparing jobs very well. However, Pendulum can even improve it by providing some more human-friendly output when comparing two datetime objects."
},
{
"code": null,
"e": 5846,
"s": 5710,
"text": "For example, the method diff_for_humans() will compare a datetime object with the current time and return a very human-friendly output."
},
{
"code": null,
"e": 5953,
"s": 5846,
"text": "dt1 = pdl.datetime(2021, 1, 1)dt1.diff_for_humans()dt2 = pdl.datetime(2021, 11, 7, 1)dt2.diff_for_humans()"
},
{
"code": null,
"e": 6200,
"s": 5953,
"text": "One of the aspects that the built-in Python datetime could improve is finding the relative datetime based on a given one. For example, when we want to find the last day of the current month, we have to use relativedelta from the datetutil module."
},
{
"code": null,
"e": 6303,
"s": 6200,
"text": "from dateutil.relativedelta import relativedeltadatetime.datetime(2013, 2, 21) + relativedelta(day=31)"
},
{
"code": null,
"e": 6451,
"s": 6303,
"text": "Also, the code is not quite readable because we are using day=31 as the argument, although it does the tricks when the month has less than 31 days."
},
{
"code": null,
"e": 6490,
"s": 6451,
"text": "While in Pendulum, it can’t be easier."
},
{
"code": null,
"e": 6625,
"s": 6490,
"text": "pdl.now().start_of('day') # find the start time of the daypdl.now().start_of('month')pdl.now().end_of('day')pdl.now().end_of('month')"
},
{
"code": null,
"e": 6807,
"s": 6625,
"text": "Another inconvenience of the built-in datetime module is to find a day of a week. For example, if we want to find the date of next Monday, this is probably the easiest way to do so."
},
{
"code": null,
"e": 6913,
"s": 6807,
"text": "from datetime import datetime, timedeltadatetime.now() + timedelta(days=(0-datetime.now().weekday()+7)%7)"
},
{
"code": null,
"e": 7053,
"s": 6913,
"text": "It does the job, but with poor readability. A developer will need to spend some time to understand what is the logic for this line of code."
},
{
"code": null,
"e": 7091,
"s": 7053,
"text": "With Pendulum, it is as easy as such."
},
{
"code": null,
"e": 7118,
"s": 7091,
"text": "pdl.now().next(pdl.MONDAY)"
},
{
"code": null,
"e": 7254,
"s": 7118,
"text": "We are even not bothered with thinking about whether it should be 0 or 1 to indicate Monday because Pendulum uses enumeration for that."
},
{
"code": null,
"e": 7407,
"s": 7254,
"text": "Similarly, we can use previous() method to find the previous Tuesday as follows. Also, we can keep the time part by setting the argument keep_time=True."
},
{
"code": null,
"e": 7486,
"s": 7407,
"text": "pdl.now().previous(pdl.TUESDAY)pdl.now().previous(pdl.TUESDAY, keep_time=True)"
},
{
"code": null,
"e": 7608,
"s": 7486,
"text": "There are more “sugars” that are hidden in this library. Just a few more examples, such as getting yesterday or tomorrow."
},
{
"code": null,
"e": 7670,
"s": 7608,
"text": "print(pdl.yesterday())print(pdl.today())print(pdl.tomorrow())"
},
{
"code": null,
"e": 7764,
"s": 7670,
"text": "It is also very easy to output the datetime with locale for different cultures and languages."
},
{
"code": null,
"e": 7815,
"s": 7764,
"text": "pdl.now().format('dddd DD MMMM YYYY', locale='zh')"
},
{
"code": null,
"e": 7903,
"s": 7815,
"text": "One more example. If a person was born on Jan 1st, 1988, what’s the age of this person?"
},
{
"code": null,
"e": 7932,
"s": 7903,
"text": "pdl.datetime(1988, 1, 1).age"
},
{
"code": null,
"e": 8013,
"s": 7932,
"text": "There is more interesting stuff that can be found in the official documentation."
},
{
"code": null,
"e": 8033,
"s": 8013,
"text": "pendulum.eustace.io"
},
{
"code": null,
"e": 8443,
"s": 8033,
"text": "In this article, I have introduced the 3rd party Python library Pendulum. It is a drop-in replacement for the built-in datetime module of Python. By using this library, many issues that the datetime module can solve, such as finding relative dates, can be solved easily now. More importantly, Pendulum provided neat and clean APIs to improve the readability of our code and these solutions are more intuitive."
},
{
"code": null,
"e": 8454,
"s": 8443,
"text": "medium.com"
}
] |
MongoDB query to filter object where all elements from nested array match the condition | For this, use aggregate(). Let us first create a collection with documents −
> db.demo418.insertOne(
... {
... "details":[
... {
... "CountryName":"US",
... "Marks":45
... },
... {
... "CountryName":"US",
... "Marks":56
... },
...
... ],
... "Name":"John"
... }
... );
{
"acknowledged" : true,
"insertedId" : ObjectId("5e724324b912067e57771ae6")
}
> db.demo418.insertOne(
... {
... "details":[
... {
... "CountryName":"US",
... "Marks":78
... },
... {
... "CountryName":"UK",
... "Marks":97
... },
...
... ],
... "Name":"Mike"
... }
... );
{
"acknowledged" : true,
"insertedId" : ObjectId("5e724325b912067e57771ae7")
}
Display all documents from a collection with the help of find() method −
> db.demo418.find();
This will produce the following output −
{ "_id" : ObjectId("5e724324b912067e57771ae6"), "details" : [ { "CountryName" : "US", "Marks" : 45 }, { "CountryName" : "US", "Marks" : 56 } ], "Name" : "John" }
{ "_id" : ObjectId("5e724325b912067e57771ae7"), "details" : [ { "CountryName" : "US", "Marks" : 78 }, { "CountryName" : "UK", "Marks" : 97 } ], "Name" : "Mike" }
Following is the query to filter object where all elements from nested array match the condition −
> db.demo418.aggregate([
... {$unwind:"$details"},
... { $group: {
... _id: '$_id',
... details : { $first: '$details' },
... alldetails: { $sum: 1 },
... alldetailsmatch: { $sum: { $cond: [ { $eq: [ '$details.CountryName', "US" ] }, 1, 0 ] } },
... Name: { $first: '$Name' }
... }},
... { $project: {
... _id: 1,
... Name: 1,
... details: 1,
... arrayValue: { $cond: [ { $eq: [ '$alldetails', '$alldetailsmatch' ] }, 1, 0 ] }
... }},
... { $match: { 'arrayValue' : 1 } }
... ])
This will produce the following output −
{ "_id" : ObjectId("5e724324b912067e57771ae6"), "details" : { "CountryName" : "US", "Marks" : 45 }, "Name" : "John", "arrayValue" : 1 } | [
{
"code": null,
"e": 1139,
"s": 1062,
"text": "For this, use aggregate(). Let us first create a collection with documents −"
},
{
"code": null,
"e": 1909,
"s": 1139,
"text": "> db.demo418.insertOne(\n... {\n... \"details\":[\n... {\n... \"CountryName\":\"US\",\n... \"Marks\":45\n... },\n... {\n... \"CountryName\":\"US\",\n... \"Marks\":56\n... },\n...\n... ],\n... \"Name\":\"John\"\n... }\n... );\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e724324b912067e57771ae6\")\n}\n> db.demo418.insertOne(\n... {\n... \"details\":[\n... {\n... \"CountryName\":\"US\",\n... \"Marks\":78\n... },\n... {\n... \"CountryName\":\"UK\",\n... \"Marks\":97\n... },\n...\n... ],\n... \"Name\":\"Mike\"\n... }\n... );\n{\n \"acknowledged\" : true,\n \"insertedId\" : ObjectId(\"5e724325b912067e57771ae7\")\n}"
},
{
"code": null,
"e": 1982,
"s": 1909,
"text": "Display all documents from a collection with the help of find() method −"
},
{
"code": null,
"e": 2003,
"s": 1982,
"text": "> db.demo418.find();"
},
{
"code": null,
"e": 2044,
"s": 2003,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2368,
"s": 2044,
"text": "{ \"_id\" : ObjectId(\"5e724324b912067e57771ae6\"), \"details\" : [ { \"CountryName\" : \"US\", \"Marks\" : 45 }, { \"CountryName\" : \"US\", \"Marks\" : 56 } ], \"Name\" : \"John\" }\n{ \"_id\" : ObjectId(\"5e724325b912067e57771ae7\"), \"details\" : [ { \"CountryName\" : \"US\", \"Marks\" : 78 }, { \"CountryName\" : \"UK\", \"Marks\" : 97 } ], \"Name\" : \"Mike\" }"
},
{
"code": null,
"e": 2467,
"s": 2368,
"text": "Following is the query to filter object where all elements from nested array match the condition −"
},
{
"code": null,
"e": 2991,
"s": 2467,
"text": "> db.demo418.aggregate([\n... {$unwind:\"$details\"},\n... { $group: {\n... _id: '$_id',\n... details : { $first: '$details' },\n... alldetails: { $sum: 1 },\n... alldetailsmatch: { $sum: { $cond: [ { $eq: [ '$details.CountryName', \"US\" ] }, 1, 0 ] } },\n... Name: { $first: '$Name' }\n... }},\n... { $project: {\n... _id: 1,\n... Name: 1,\n... details: 1,\n... arrayValue: { $cond: [ { $eq: [ '$alldetails', '$alldetailsmatch' ] }, 1, 0 ] }\n... }},\n... { $match: { 'arrayValue' : 1 } }\n... ])"
},
{
"code": null,
"e": 3032,
"s": 2991,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 3168,
"s": 3032,
"text": "{ \"_id\" : ObjectId(\"5e724324b912067e57771ae6\"), \"details\" : { \"CountryName\" : \"US\", \"Marks\" : 45 }, \"Name\" : \"John\", \"arrayValue\" : 1 }"
}
] |
Beautifulsoup Installation - Python - GeeksforGeeks | 05 Oct, 2021
Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work. The latest Version of Beautifulsoup is v4.9.3 as of now.
Python
Pip
To install Beautifulsoup on Windows, Linux, or any operating system, one would need pip package. To check how to install pip on your operating system, check out – PIP Installation – Windows || Linux. Now, run a simple command,
pip install beautifulsoup4
Wait and relax, Beautifulsoup would be installed shortly.
One can install beautifulsoup, using source code directly, install beautifulsoup tarball from here – download the Beautiful Soup 4 source tarball after downloading cd into the directory and run,
Python setup.py install
To check whether the installation is complete or not, let’s try implementing it using python
singhaniket
how-to-install
Python-BS3
How To
Installation Guide
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Align Text in HTML?
How to Install OpenCV for Python on Windows?
Java Tutorial
How to Install FFmpeg on Windows?
How to filter object array based on attributes?
Installation of Node.js on Linux
How to Install OpenCV for Python on Windows?
How to Install FFmpeg on Windows?
How to Install and Run Apache Kafka on Windows?
How to Install Pygame on Windows ? | [
{
"code": null,
"e": 24973,
"s": 24945,
"text": "\n05 Oct, 2021"
},
{
"code": null,
"e": 25281,
"s": 24973,
"text": "Beautiful Soup is a Python library for pulling data out of HTML and XML files. It works with your favorite parser to provide idiomatic ways of navigating, searching, and modifying the parse tree. It commonly saves programmers hours or days of work. The latest Version of Beautifulsoup is v4.9.3 as of now. "
},
{
"code": null,
"e": 25288,
"s": 25281,
"text": "Python"
},
{
"code": null,
"e": 25294,
"s": 25288,
"text": "Pip "
},
{
"code": null,
"e": 25525,
"s": 25296,
"text": "To install Beautifulsoup on Windows, Linux, or any operating system, one would need pip package. To check how to install pip on your operating system, check out – PIP Installation – Windows || Linux. Now, run a simple command, "
},
{
"code": null,
"e": 25552,
"s": 25525,
"text": "pip install beautifulsoup4"
},
{
"code": null,
"e": 25611,
"s": 25552,
"text": "Wait and relax, Beautifulsoup would be installed shortly. "
},
{
"code": null,
"e": 25810,
"s": 25613,
"text": "One can install beautifulsoup, using source code directly, install beautifulsoup tarball from here – download the Beautiful Soup 4 source tarball after downloading cd into the directory and run, "
},
{
"code": null,
"e": 25834,
"s": 25810,
"text": "Python setup.py install"
},
{
"code": null,
"e": 25931,
"s": 25836,
"text": "To check whether the installation is complete or not, let’s try implementing it using python "
},
{
"code": null,
"e": 25945,
"s": 25933,
"text": "singhaniket"
},
{
"code": null,
"e": 25960,
"s": 25945,
"text": "how-to-install"
},
{
"code": null,
"e": 25971,
"s": 25960,
"text": "Python-BS3"
},
{
"code": null,
"e": 25978,
"s": 25971,
"text": "How To"
},
{
"code": null,
"e": 25997,
"s": 25978,
"text": "Installation Guide"
},
{
"code": null,
"e": 26004,
"s": 25997,
"text": "Python"
},
{
"code": null,
"e": 26102,
"s": 26004,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26129,
"s": 26102,
"text": "How to Align Text in HTML?"
},
{
"code": null,
"e": 26174,
"s": 26129,
"text": "How to Install OpenCV for Python on Windows?"
},
{
"code": null,
"e": 26188,
"s": 26174,
"text": "Java Tutorial"
},
{
"code": null,
"e": 26222,
"s": 26188,
"text": "How to Install FFmpeg on Windows?"
},
{
"code": null,
"e": 26270,
"s": 26222,
"text": "How to filter object array based on attributes?"
},
{
"code": null,
"e": 26303,
"s": 26270,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 26348,
"s": 26303,
"text": "How to Install OpenCV for Python on Windows?"
},
{
"code": null,
"e": 26382,
"s": 26348,
"text": "How to Install FFmpeg on Windows?"
},
{
"code": null,
"e": 26430,
"s": 26382,
"text": "How to Install and Run Apache Kafka on Windows?"
}
] |
How to Install and Setup Visual Studio for ASP.NET? - GeeksforGeeks | 06 Oct, 2021
Visual Studio is an Integrated Development Environment(IDE) developed by Microsoft to develop GUI(Graphical User Interface), Web applications, console, web apps, mobile apps, cloud, and web services, etc. To install and use Visual Studio for the commercial purpose one must buy a license from Microsoft. For learning (non-commercial) purpose, Microsoft provided a free Visual Studio Community Version. We will use the Visual Studio Community Version 2019. The latest version of Visual Studio makes the whole process very easy for ASP.NET applications. There may be some variations in the steps for installing and setting up the Visual Studio IDE. So we recommend installing the latest version of Visual Studio.
Step 1: Download the Visual Studio Community Version 2019
Step 2: Run the .exe file and follow the instructions to install Visual Studio Community Version on the system.
Step 3: Select ASP.NET and web development from the options and click to install in bottom right corner as shown below. We can also select .NET desktop development option for windows forms and console applications etc. Here we are selecting both the options. We can also modify it after installation.
Step 4: Click on launch and it will be prompted to sign in for the first time. The sign-in step is optional so it can be skipped. The dialog box will appear for the first time only and ask to choose the Development Settings and color theme. Once select required options, click on Start Visual Studio option. This step is optional in some versions.
Step 5: To create a new ASP.NET Web application, Go to File –> New –>Project like as shown below:
Step 6: As soon as we select the project, we will notice the different options of the Project. We can filter them according to our choice. We can see the 3 filters(Language, Platform, Project Type) on the right side of the search bar in the below given screenshot. Here we are not using the filters. We are simply choosing ASP.NET Web Application(.NET Framework) and click Next. We can see the options of C#, Windows, and Library below the chosen project. There are two choices as we can also find the ASP.NET Web Application(.NET Framework) using VB(Visual Basic).
Step 7: The next step is to configure the project. Here, we have to choose the Project Name and Solution name and click on Create Button. We can also change the location of the project. A project name is a subset of the solution name. We can put a different name for the solution. In other words, the solution is like a container for projects.
We are putting Project name and Solution name as GeeksforGeeks as shown in the below screenshot.
Step 8: Here, we have to choose the type of the ASP.NET Web Application. We are creating a web application so first, choose the project type as Empty to understand a simple application. Then choose the Web-Forms which will add the basic folders to create Web Forms Application. After that click Create button.
In the below image, in the right-hand side, the Solution Explorer is open by default. There we can see a file Global.asax.cs which is a common file for the entire application. This file contains specific information related to the application and used to initialize application specific variables to their default values.
Step 9: Now add a Web Form file to the project “GeeksforGeeks” which contains the web-specific code for the project. Just Right click on GeeksforGeeks in the Solution Explorer. Select Add and then select Web Form from the menu as shown below.
It will prompt for the name of the Web Form. We are putting the name as TestingWebForm and click OK.
The default code for the TestingWebForm is shown as below:
Step 10: Now write a sample code in TestingWebForm.aspx file which will display “Hello Geeks!” as an output. The explanation of code will be discussed further.
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="TestingWebForm.aspx.cs" Inherits="GeeksforGeeks.TestingWebForm" %> <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"><head runat="server"> <title></title></head><body> <form id="form1" runat="server"> <div> <%Response.Write("Hello Geeks!"); %> </div> </form></body></html>
Now to execute the code, click on the Run button as shown in below screenshot. For the first time, we may need to set up the browser configuration.
Finally the Output:
CSharp ASP-NET
how-to-install
C#
How To
Installation Guide
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Top 50 C# Interview Questions & Answers
Extension Method in C#
HashSet in C# with Examples
Partial Classes in C#
C# | Inheritance
How to Install PIP on Windows ?
How to Find the Wi-Fi Password Using CMD in Windows?
How to install Jupyter Notebook on Windows?
How to Align Text in HTML?
How to Install OpenCV for Python on Windows? | [
{
"code": null,
"e": 23935,
"s": 23907,
"text": "\n06 Oct, 2021"
},
{
"code": null,
"e": 24646,
"s": 23935,
"text": "Visual Studio is an Integrated Development Environment(IDE) developed by Microsoft to develop GUI(Graphical User Interface), Web applications, console, web apps, mobile apps, cloud, and web services, etc. To install and use Visual Studio for the commercial purpose one must buy a license from Microsoft. For learning (non-commercial) purpose, Microsoft provided a free Visual Studio Community Version. We will use the Visual Studio Community Version 2019. The latest version of Visual Studio makes the whole process very easy for ASP.NET applications. There may be some variations in the steps for installing and setting up the Visual Studio IDE. So we recommend installing the latest version of Visual Studio."
},
{
"code": null,
"e": 24704,
"s": 24646,
"text": "Step 1: Download the Visual Studio Community Version 2019"
},
{
"code": null,
"e": 24816,
"s": 24704,
"text": "Step 2: Run the .exe file and follow the instructions to install Visual Studio Community Version on the system."
},
{
"code": null,
"e": 25117,
"s": 24816,
"text": "Step 3: Select ASP.NET and web development from the options and click to install in bottom right corner as shown below. We can also select .NET desktop development option for windows forms and console applications etc. Here we are selecting both the options. We can also modify it after installation."
},
{
"code": null,
"e": 25465,
"s": 25117,
"text": "Step 4: Click on launch and it will be prompted to sign in for the first time. The sign-in step is optional so it can be skipped. The dialog box will appear for the first time only and ask to choose the Development Settings and color theme. Once select required options, click on Start Visual Studio option. This step is optional in some versions."
},
{
"code": null,
"e": 25563,
"s": 25465,
"text": "Step 5: To create a new ASP.NET Web application, Go to File –> New –>Project like as shown below:"
},
{
"code": null,
"e": 26129,
"s": 25563,
"text": "Step 6: As soon as we select the project, we will notice the different options of the Project. We can filter them according to our choice. We can see the 3 filters(Language, Platform, Project Type) on the right side of the search bar in the below given screenshot. Here we are not using the filters. We are simply choosing ASP.NET Web Application(.NET Framework) and click Next. We can see the options of C#, Windows, and Library below the chosen project. There are two choices as we can also find the ASP.NET Web Application(.NET Framework) using VB(Visual Basic)."
},
{
"code": null,
"e": 26473,
"s": 26129,
"text": "Step 7: The next step is to configure the project. Here, we have to choose the Project Name and Solution name and click on Create Button. We can also change the location of the project. A project name is a subset of the solution name. We can put a different name for the solution. In other words, the solution is like a container for projects."
},
{
"code": null,
"e": 26570,
"s": 26473,
"text": "We are putting Project name and Solution name as GeeksforGeeks as shown in the below screenshot."
},
{
"code": null,
"e": 26880,
"s": 26570,
"text": "Step 8: Here, we have to choose the type of the ASP.NET Web Application. We are creating a web application so first, choose the project type as Empty to understand a simple application. Then choose the Web-Forms which will add the basic folders to create Web Forms Application. After that click Create button."
},
{
"code": null,
"e": 27202,
"s": 26880,
"text": "In the below image, in the right-hand side, the Solution Explorer is open by default. There we can see a file Global.asax.cs which is a common file for the entire application. This file contains specific information related to the application and used to initialize application specific variables to their default values."
},
{
"code": null,
"e": 27445,
"s": 27202,
"text": "Step 9: Now add a Web Form file to the project “GeeksforGeeks” which contains the web-specific code for the project. Just Right click on GeeksforGeeks in the Solution Explorer. Select Add and then select Web Form from the menu as shown below."
},
{
"code": null,
"e": 27546,
"s": 27445,
"text": "It will prompt for the name of the Web Form. We are putting the name as TestingWebForm and click OK."
},
{
"code": null,
"e": 27605,
"s": 27546,
"text": "The default code for the TestingWebForm is shown as below:"
},
{
"code": null,
"e": 27765,
"s": 27605,
"text": "Step 10: Now write a sample code in TestingWebForm.aspx file which will display “Hello Geeks!” as an output. The explanation of code will be discussed further."
},
{
"code": "<%@ Page Language=\"C#\" AutoEventWireup=\"true\" CodeBehind=\"TestingWebForm.aspx.cs\" Inherits=\"GeeksforGeeks.TestingWebForm\" %> <!DOCTYPE html> <html xmlns=\"http://www.w3.org/1999/xhtml\"><head runat=\"server\"> <title></title></head><body> <form id=\"form1\" runat=\"server\"> <div> <%Response.Write(\"Hello Geeks!\"); %> </div> </form></body></html>",
"e": 28141,
"s": 27765,
"text": null
},
{
"code": null,
"e": 28289,
"s": 28141,
"text": "Now to execute the code, click on the Run button as shown in below screenshot. For the first time, we may need to set up the browser configuration."
},
{
"code": null,
"e": 28309,
"s": 28289,
"text": "Finally the Output:"
},
{
"code": null,
"e": 28324,
"s": 28309,
"text": "CSharp ASP-NET"
},
{
"code": null,
"e": 28339,
"s": 28324,
"text": "how-to-install"
},
{
"code": null,
"e": 28342,
"s": 28339,
"text": "C#"
},
{
"code": null,
"e": 28349,
"s": 28342,
"text": "How To"
},
{
"code": null,
"e": 28368,
"s": 28349,
"text": "Installation Guide"
},
{
"code": null,
"e": 28466,
"s": 28368,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28475,
"s": 28466,
"text": "Comments"
},
{
"code": null,
"e": 28488,
"s": 28475,
"text": "Old Comments"
},
{
"code": null,
"e": 28528,
"s": 28488,
"text": "Top 50 C# Interview Questions & Answers"
},
{
"code": null,
"e": 28551,
"s": 28528,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 28579,
"s": 28551,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 28601,
"s": 28579,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 28618,
"s": 28601,
"text": "C# | Inheritance"
},
{
"code": null,
"e": 28650,
"s": 28618,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 28703,
"s": 28650,
"text": "How to Find the Wi-Fi Password Using CMD in Windows?"
},
{
"code": null,
"e": 28747,
"s": 28703,
"text": "How to install Jupyter Notebook on Windows?"
},
{
"code": null,
"e": 28774,
"s": 28747,
"text": "How to Align Text in HTML?"
}
] |
How to sort array of strings by their lengths following longest to shortest pattern in Java | At first, let us create and array of strings:
String[] strArr = { "ABCD", "AB", "ABCDEFG", "ABC", "A", "ABCDE", "ABCDEF", "ABCDEFGHIJ" }
Now, for longest to shortest pattern, for example ABCDEFGHIJ, ABCDEFG, ABCDEF, etc.; get the length of both the string arrays and work them like this:
Arrays.sort(strArr, (str1, str2) → str2.length() - str1.length());
The following is an example to sort array of strings by their lengths with longest to shortest pattern in Java:
import java.util.Arrays;
public class Demo {
public static void main(String[] args) {
String[] strArr = { "ABCD", "AB", "ABCDEFG", "ABC", "A", "ABCDE", "ABCDEF", "ABCDEFGHIJ" };
System.out.println("Sorting array on the basis of their lengths (longest to shortest) = ");
Arrays.sort(strArr, (str1, str2) → str2.length() - str1.length());
Arrays.asList(strArr).forEach(System.out::println);
}
}
Sorting array on the basis of their lengths (longest to shortest) =
ABCDEFGHIJ
ABCDEFG
ABCDEF
ABCDE
ABCD
ABC
AB
A | [
{
"code": null,
"e": 1108,
"s": 1062,
"text": "At first, let us create and array of strings:"
},
{
"code": null,
"e": 1199,
"s": 1108,
"text": "String[] strArr = { \"ABCD\", \"AB\", \"ABCDEFG\", \"ABC\", \"A\", \"ABCDE\", \"ABCDEF\", \"ABCDEFGHIJ\" }"
},
{
"code": null,
"e": 1350,
"s": 1199,
"text": "Now, for longest to shortest pattern, for example ABCDEFGHIJ, ABCDEFG, ABCDEF, etc.; get the length of both the string arrays and work them like this:"
},
{
"code": null,
"e": 1417,
"s": 1350,
"text": "Arrays.sort(strArr, (str1, str2) → str2.length() - str1.length());"
},
{
"code": null,
"e": 1529,
"s": 1417,
"text": "The following is an example to sort array of strings by their lengths with longest to shortest pattern in Java:"
},
{
"code": null,
"e": 1952,
"s": 1529,
"text": "import java.util.Arrays;\npublic class Demo {\n public static void main(String[] args) {\n String[] strArr = { \"ABCD\", \"AB\", \"ABCDEFG\", \"ABC\", \"A\", \"ABCDE\", \"ABCDEF\", \"ABCDEFGHIJ\" };\n System.out.println(\"Sorting array on the basis of their lengths (longest to shortest) = \");\n Arrays.sort(strArr, (str1, str2) → str2.length() - str1.length());\n Arrays.asList(strArr).forEach(System.out::println);\n }\n}"
},
{
"code": null,
"e": 2066,
"s": 1952,
"text": "Sorting array on the basis of their lengths (longest to shortest) =\nABCDEFGHIJ\nABCDEFG\nABCDEF\nABCDE\nABCD\nABC\nAB\nA"
}
] |
Strategies for Global Optimization | by Syed Sadat Nazrul | Towards Data Science | Local and global optimization is usually known and somewhat ignored once we leave high school calculus. For a quick review, take the cover image of this blog. For a given function, there are multiple points where there are dips across space. Each dip is a minimum. However, you can see that only one point is the deepest, known as the global minimum. All other points are local minima. The first section, I will go over calculating this global value for a known function. However, once we leave the realm of this known function of high school math, most optimization problems deal with local optimizations only. Even this, I will go over and provide reasons why. Quick note, there are plenty of other strategies out that I have not mentioned in this blog. However, I am hoping that it gets you interested enough to explore more options.
Let’s review some basic high school calculus. First, going over this simple univariate function:
Graphically, we can see that the global minimum is around -3. However, let’s try to use calculus as we know the function of this plot.
We see that f(-3)=-22.75, the smallest possible value of f(x). We can even go beyond 2D and enter the realm of multivariate calculus for solving this problem.
Most of the time when dealing with data science, we do not have access to the function to perform any calculus. Usually, f(x) is a system, where we can enter the variable, x, and gain some output, y. One possible solution is to perform a stochastic gradient descent, where we iteratively go down the slope till we hit a minimum.
Now let’s implement this problem in python, assuming f to be out black box function.
#Unknown Functionf = lambda x:(1/4*x**4)+(1/3*x**3)-(3*x**2)-7def next_step(x,f,step_size): y=f(x) #Left Point x_left=x-step_size y_left=f(x_left) diff_left=y_left-y #Right Point x_right=x+step_size y_right=f(x_right) diff_right=y_right-y #Comparison if diff_right<diff_left: return x_right, y_right, diff_right else: return x_left, y_left, diff_leftdef gradient_descent(f,start,step_size=0.01,tol=0.1, debug=False): x=start diff=9999999 while abs(diff)>tol: x, y, diff=next_step(x,f,step_size) if debug: print("Current Point: (%.2f,%.2f), Change: %.2f"%(x,y,diff)) print("Minimum Point: (%.2f,%.2f)"%(x,y))
Now that we have our function, let’s try searching from x=4.
gradient_descent(f, start=4)#Minimum Point: (2.65,-9.54)
This gives us a minimum point of 2.65 . Now, let’s try the same for another starting point, x=-4.
gradient_descent(f, start=-4)#Minimum Point: (-3.51,-20.43)
Using x=-5, we get our global minimum at -3.51 . This is when finding global and local minimum becomes tricky. Our final result is dependent on the starting point.
As it is an unknown function, we do not know:
The ideal start point
The ideal step size
The ideal domain (we cannot traverse an infinitely large space of x)
One possible solution is the use of Simulated Annealing, which gives us a reasonable probability of hitting the global minimum.
The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Both are attributes of the material that depend on its thermodynamic free energy. Heating and cooling the material affects both the temperature and the thermodynamic free energy. The simulation of annealing can be used to find an approximation of a global minimum for a function with a large number of variables.
This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the global optimal solution.
In general, the simulated annealing algorithms work as follows:
Let x = x0
For k = 0 through kmax (exclusive):- T decreased at each step- Pick a random neighbour, x_new ← neighbour(x)- If P(E(x), E(x_new), T) ≥ random(0, 1): x ← x_new
Output: the final state x
Here is the Python implementation from our specific function:
import numpy as np#Unknown Functionf = lambda x:(1/4*x**4)+(1/3*x**3)-(3*x**2)-7def acceptance_probability(E, E_new, T): return np.exp(-(E-E_new)/T)def random_neighbour(x): return x += np.random.uniform(-1,1)def simulated_annealing(f, steps): x = np.random.random() E = f(x) print("x=%.2f, fmin=%.2f"%(x, E)) for k in range(steps): T = T*0.9 x = random_neighbour(x) E_new = f(x) P = acceptance_probability(E, E_new, T) if P > np.random.random(): E = E_new print("x=%.4f, fmin=%.4f, Prob.=%.4f"%(x,E,P)) return Esimulated_annealing(f,20)
Running this equation, we get the following output:
x=0.62, fmin=-8.04x=-0.3446, fmin=-7.3664, Prob.=0.4753x=-0.8717, fmin=-9.3559, Prob.=11.6601x=-0.8329, fmin=-9.1534, Prob.=0.7575x=-1.6213, fmin=-14.5791, Prob.=3903.7178x=-2.3907, fmin=-20.5342, Prob.=23982.6510x=-1.8220, fmin=-20.5342, Prob.=0.0003x=-1.1582, fmin=-20.5342, Prob.=0.0000x=-0.2298, fmin=-20.5342, Prob.=0.0000x=-0.8731, fmin=-20.5342, Prob.=0.0000x=-1.8032, fmin=-20.5342, Prob.=0.0000x=-2.1873, fmin=-20.5342, Prob.=0.0110x=-1.8673, fmin=-20.5342, Prob.=0.0000x=-2.7618, fmin=-22.3598, Prob.=1315.6210x=-2.3266, fmin=-22.3598, Prob.=0.0001x=-2.5017, fmin=-22.3598, Prob.=0.0036x=-2.6164, fmin=-22.3598, Prob.=0.0466x=-1.7016, fmin=-22.3598, Prob.=0.0000x=-1.7248, fmin=-22.3598, Prob.=0.0000x=-1.6569, fmin=-22.3598, Prob.=0.0000x=-1.5051, fmin=-22.3598, Prob.=0.0000
As seen above, the value converges to -22.75. While this method doesn’t guarantee a global optimum, it is reasonably close, given that we are not restricting our domain. Also notice that, although the initial value was much closer to the local minimum, it still managed to find the global minimum.
This section has more to do with time complexity than accuracy. Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems, so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial. For example, if we write simple recursive solution for Fibonacci Numbers, we get exponential time complexity and if we optimize it by storing solutions of subproblems, time complexity reduces to linear.
Let’s use the Fibonacci sequence as an example:
def f(n): if n <= 1: return n else: return f(n-1)+f(n-2)
The complexity of this problem is O(2^n). Seems insanely complex if you ask me! However, let’s analyze the problem a bit better.
If you look at a recursion tree for n=6, there are a lot of repetitions. Just count the number of values for f(3). In Dynamic Programming, we will store the value in memory and not re-calculate if called upon again. Here is the Dynamic Programming version of Fibonacci:
store = {}store[0] = 0store[1] = 1def f(n): store[n] = f(n-1)+f(n-2) return store[n]
Using the new method, the complexity of each step is O(1). This allows us to explore the entire domain of n to find the optimal solution with minimal time complexity. | [
{
"code": null,
"e": 1008,
"s": 171,
"text": "Local and global optimization is usually known and somewhat ignored once we leave high school calculus. For a quick review, take the cover image of this blog. For a given function, there are multiple points where there are dips across space. Each dip is a minimum. However, you can see that only one point is the deepest, known as the global minimum. All other points are local minima. The first section, I will go over calculating this global value for a known function. However, once we leave the realm of this known function of high school math, most optimization problems deal with local optimizations only. Even this, I will go over and provide reasons why. Quick note, there are plenty of other strategies out that I have not mentioned in this blog. However, I am hoping that it gets you interested enough to explore more options."
},
{
"code": null,
"e": 1105,
"s": 1008,
"text": "Let’s review some basic high school calculus. First, going over this simple univariate function:"
},
{
"code": null,
"e": 1240,
"s": 1105,
"text": "Graphically, we can see that the global minimum is around -3. However, let’s try to use calculus as we know the function of this plot."
},
{
"code": null,
"e": 1399,
"s": 1240,
"text": "We see that f(-3)=-22.75, the smallest possible value of f(x). We can even go beyond 2D and enter the realm of multivariate calculus for solving this problem."
},
{
"code": null,
"e": 1728,
"s": 1399,
"text": "Most of the time when dealing with data science, we do not have access to the function to perform any calculus. Usually, f(x) is a system, where we can enter the variable, x, and gain some output, y. One possible solution is to perform a stochastic gradient descent, where we iteratively go down the slope till we hit a minimum."
},
{
"code": null,
"e": 1813,
"s": 1728,
"text": "Now let’s implement this problem in python, assuming f to be out black box function."
},
{
"code": null,
"e": 2454,
"s": 1813,
"text": "#Unknown Functionf = lambda x:(1/4*x**4)+(1/3*x**3)-(3*x**2)-7def next_step(x,f,step_size): y=f(x) #Left Point x_left=x-step_size y_left=f(x_left) diff_left=y_left-y #Right Point x_right=x+step_size y_right=f(x_right) diff_right=y_right-y #Comparison if diff_right<diff_left: return x_right, y_right, diff_right else: return x_left, y_left, diff_leftdef gradient_descent(f,start,step_size=0.01,tol=0.1, debug=False): x=start diff=9999999 while abs(diff)>tol: x, y, diff=next_step(x,f,step_size) if debug: print(\"Current Point: (%.2f,%.2f), Change: %.2f\"%(x,y,diff)) print(\"Minimum Point: (%.2f,%.2f)\"%(x,y))"
},
{
"code": null,
"e": 2515,
"s": 2454,
"text": "Now that we have our function, let’s try searching from x=4."
},
{
"code": null,
"e": 2572,
"s": 2515,
"text": "gradient_descent(f, start=4)#Minimum Point: (2.65,-9.54)"
},
{
"code": null,
"e": 2670,
"s": 2572,
"text": "This gives us a minimum point of 2.65 . Now, let’s try the same for another starting point, x=-4."
},
{
"code": null,
"e": 2730,
"s": 2670,
"text": "gradient_descent(f, start=-4)#Minimum Point: (-3.51,-20.43)"
},
{
"code": null,
"e": 2894,
"s": 2730,
"text": "Using x=-5, we get our global minimum at -3.51 . This is when finding global and local minimum becomes tricky. Our final result is dependent on the starting point."
},
{
"code": null,
"e": 2940,
"s": 2894,
"text": "As it is an unknown function, we do not know:"
},
{
"code": null,
"e": 2962,
"s": 2940,
"text": "The ideal start point"
},
{
"code": null,
"e": 2982,
"s": 2962,
"text": "The ideal step size"
},
{
"code": null,
"e": 3051,
"s": 2982,
"text": "The ideal domain (we cannot traverse an infinitely large space of x)"
},
{
"code": null,
"e": 3179,
"s": 3051,
"text": "One possible solution is the use of Simulated Annealing, which gives us a reasonable probability of hitting the global minimum."
},
{
"code": null,
"e": 3682,
"s": 3179,
"text": "The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. Both are attributes of the material that depend on its thermodynamic free energy. Heating and cooling the material affects both the temperature and the thermodynamic free energy. The simulation of annealing can be used to find an approximation of a global minimum for a function with a large number of variables."
},
{
"code": null,
"e": 4025,
"s": 3682,
"text": "This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions is a fundamental property of metaheuristics because it allows for a more extensive search for the global optimal solution."
},
{
"code": null,
"e": 4089,
"s": 4025,
"text": "In general, the simulated annealing algorithms work as follows:"
},
{
"code": null,
"e": 4100,
"s": 4089,
"text": "Let x = x0"
},
{
"code": null,
"e": 4260,
"s": 4100,
"text": "For k = 0 through kmax (exclusive):- T decreased at each step- Pick a random neighbour, x_new ← neighbour(x)- If P(E(x), E(x_new), T) ≥ random(0, 1): x ← x_new"
},
{
"code": null,
"e": 4286,
"s": 4260,
"text": "Output: the final state x"
},
{
"code": null,
"e": 4348,
"s": 4286,
"text": "Here is the Python implementation from our specific function:"
},
{
"code": null,
"e": 4959,
"s": 4348,
"text": "import numpy as np#Unknown Functionf = lambda x:(1/4*x**4)+(1/3*x**3)-(3*x**2)-7def acceptance_probability(E, E_new, T): return np.exp(-(E-E_new)/T)def random_neighbour(x): return x += np.random.uniform(-1,1)def simulated_annealing(f, steps): x = np.random.random() E = f(x) print(\"x=%.2f, fmin=%.2f\"%(x, E)) for k in range(steps): T = T*0.9 x = random_neighbour(x) E_new = f(x) P = acceptance_probability(E, E_new, T) if P > np.random.random(): E = E_new print(\"x=%.4f, fmin=%.4f, Prob.=%.4f\"%(x,E,P)) return Esimulated_annealing(f,20)"
},
{
"code": null,
"e": 5011,
"s": 4959,
"text": "Running this equation, we get the following output:"
},
{
"code": null,
"e": 5798,
"s": 5011,
"text": "x=0.62, fmin=-8.04x=-0.3446, fmin=-7.3664, Prob.=0.4753x=-0.8717, fmin=-9.3559, Prob.=11.6601x=-0.8329, fmin=-9.1534, Prob.=0.7575x=-1.6213, fmin=-14.5791, Prob.=3903.7178x=-2.3907, fmin=-20.5342, Prob.=23982.6510x=-1.8220, fmin=-20.5342, Prob.=0.0003x=-1.1582, fmin=-20.5342, Prob.=0.0000x=-0.2298, fmin=-20.5342, Prob.=0.0000x=-0.8731, fmin=-20.5342, Prob.=0.0000x=-1.8032, fmin=-20.5342, Prob.=0.0000x=-2.1873, fmin=-20.5342, Prob.=0.0110x=-1.8673, fmin=-20.5342, Prob.=0.0000x=-2.7618, fmin=-22.3598, Prob.=1315.6210x=-2.3266, fmin=-22.3598, Prob.=0.0001x=-2.5017, fmin=-22.3598, Prob.=0.0036x=-2.6164, fmin=-22.3598, Prob.=0.0466x=-1.7016, fmin=-22.3598, Prob.=0.0000x=-1.7248, fmin=-22.3598, Prob.=0.0000x=-1.6569, fmin=-22.3598, Prob.=0.0000x=-1.5051, fmin=-22.3598, Prob.=0.0000"
},
{
"code": null,
"e": 6096,
"s": 5798,
"text": "As seen above, the value converges to -22.75. While this method doesn’t guarantee a global optimum, it is reasonably close, given that we are not restricting our domain. Also notice that, although the initial value was much closer to the local minimum, it still managed to find the global minimum."
},
{
"code": null,
"e": 6755,
"s": 6096,
"text": "This section has more to do with time complexity than accuracy. Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for same inputs, we can optimize it using Dynamic Programming. The idea is to simply store the results of subproblems, so that we do not have to re-compute them when needed later. This simple optimization reduces time complexities from exponential to polynomial. For example, if we write simple recursive solution for Fibonacci Numbers, we get exponential time complexity and if we optimize it by storing solutions of subproblems, time complexity reduces to linear."
},
{
"code": null,
"e": 6803,
"s": 6755,
"text": "Let’s use the Fibonacci sequence as an example:"
},
{
"code": null,
"e": 6880,
"s": 6803,
"text": "def f(n): if n <= 1: return n else: return f(n-1)+f(n-2)"
},
{
"code": null,
"e": 7009,
"s": 6880,
"text": "The complexity of this problem is O(2^n). Seems insanely complex if you ask me! However, let’s analyze the problem a bit better."
},
{
"code": null,
"e": 7279,
"s": 7009,
"text": "If you look at a recursion tree for n=6, there are a lot of repetitions. Just count the number of values for f(3). In Dynamic Programming, we will store the value in memory and not re-calculate if called upon again. Here is the Dynamic Programming version of Fibonacci:"
},
{
"code": null,
"e": 7370,
"s": 7279,
"text": "store = {}store[0] = 0store[1] = 1def f(n): store[n] = f(n-1)+f(n-2) return store[n]"
}
] |
What is a remote interface in java? | A Remote interface is available in the java.rmi package it is a marking/tagging interface, it is used with remote method invocation(RMI).
RMI is a mechanism that allows an object residing in one system (JVM) to access/invoke an object running on another JVM.
To it is a marking interface, to mark an object of a class remote, you need to implement this interface.
To create a remote interface −
Create an interface that extends the predefined interface Remote which belongs to the package or, implement the Remote interface with the class, which you need to make remote.
Create an interface that extends the predefined interface Remote which belongs to the package or, implement the Remote interface with the class, which you need to make remote.
Declare all the business methods that can be invoked by the client in this interface.
Declare all the business methods that can be invoked by the client in this interface.
Since there is a chance of network issues during remote calls, an exception named RemoteException may occur; throw it.
Since there is a chance of network issues during remote calls, an exception named RemoteException may occur; throw it.
import java.rmi.Remote;
import java.rmi.RemoteException;
// Creating Remote class for our application
public class RemoteExample implements Remote {
}
Or,
import java.rmi.Remote;
import java.rmi.RemoteException;
// Creating Remote interface for our application
public interface Hello extends Remote {
void printMsg() throws RemoteException;
} | [
{
"code": null,
"e": 1200,
"s": 1062,
"text": "A Remote interface is available in the java.rmi package it is a marking/tagging interface, it is used with remote method invocation(RMI)."
},
{
"code": null,
"e": 1321,
"s": 1200,
"text": "RMI is a mechanism that allows an object residing in one system (JVM) to access/invoke an object running on another JVM."
},
{
"code": null,
"e": 1426,
"s": 1321,
"text": "To it is a marking interface, to mark an object of a class remote, you need to implement this interface."
},
{
"code": null,
"e": 1457,
"s": 1426,
"text": "To create a remote interface −"
},
{
"code": null,
"e": 1633,
"s": 1457,
"text": "Create an interface that extends the predefined interface Remote which belongs to the package or, implement the Remote interface with the class, which you need to make remote."
},
{
"code": null,
"e": 1809,
"s": 1633,
"text": "Create an interface that extends the predefined interface Remote which belongs to the package or, implement the Remote interface with the class, which you need to make remote."
},
{
"code": null,
"e": 1895,
"s": 1809,
"text": "Declare all the business methods that can be invoked by the client in this interface."
},
{
"code": null,
"e": 1981,
"s": 1895,
"text": "Declare all the business methods that can be invoked by the client in this interface."
},
{
"code": null,
"e": 2100,
"s": 1981,
"text": "Since there is a chance of network issues during remote calls, an exception named RemoteException may occur; throw it."
},
{
"code": null,
"e": 2219,
"s": 2100,
"text": "Since there is a chance of network issues during remote calls, an exception named RemoteException may occur; throw it."
},
{
"code": null,
"e": 2565,
"s": 2219,
"text": "import java.rmi.Remote;\nimport java.rmi.RemoteException;\n// Creating Remote class for our application\npublic class RemoteExample implements Remote {\n}\nOr,\nimport java.rmi.Remote;\nimport java.rmi.RemoteException;\n// Creating Remote interface for our application\npublic interface Hello extends Remote {\n void printMsg() throws RemoteException;\n}"
}
] |
C++ Stream Classes Structure | In C++ stream refers to the stream of characters that are transferred between the program thread and i/o.
Stream classes in C++ are used to input and output operations on files and io devices. These classes have specific features and to handle input and output of the program.
The iostream.h library holds all the stream classes in the C++ programming language.
Let's see the hierarchy and learn about them,
Now, let’s learn about the classes of the iostream library.
ios class − This class is the base class for all stream classes. The streams can be input or output streams. This class defines members that are independent of how the templates of the class are defined.
istream Class − The istream class handles the input stream in c++ programming language. These input stream objects are used to read and interpret the input as a sequence of characters. The cin handles the input.
ostream class − The ostream class handles the output stream in c++ programming language. These output stream objects are used to write data as a sequence of characters on the screen. cout and puts handle the out streams in c++ programming language.
COUT
Live Demo
#include <iostream>
using namespace std;
int main(){
cout<<"This output is printed on screen";
}
Output
This output is printed on screen
PUTS
Live Demo
#include <iostream>
using namespace std;
int main(){
puts("This output is printed using puts");
}
Output
This output is printed using puts
CIN
Live Demo
#include <iostream>
using namespace std;
int main(){
int no;
cout<<"Enter a number ";
cin>>no;
cout<<"Number entered using cin is "<
Output
Enter a number 3453
Number entered using cin is 3453
gets
#include <iostream>
using namespace std;
int main(){
char ch[10];
puts("Enter a character array");
gets(ch);
puts("The character array entered using gets is : ");
puts(ch);
}
Output
Enter a character array
thdgf
The character array entered using gets is :
thdgf | [
{
"code": null,
"e": 1168,
"s": 1062,
"text": "In C++ stream refers to the stream of characters that are transferred between the program thread and i/o."
},
{
"code": null,
"e": 1339,
"s": 1168,
"text": "Stream classes in C++ are used to input and output operations on files and io devices. These classes have specific features and to handle input and output of the program."
},
{
"code": null,
"e": 1424,
"s": 1339,
"text": "The iostream.h library holds all the stream classes in the C++ programming language."
},
{
"code": null,
"e": 1470,
"s": 1424,
"text": "Let's see the hierarchy and learn about them,"
},
{
"code": null,
"e": 1530,
"s": 1470,
"text": "Now, let’s learn about the classes of the iostream library."
},
{
"code": null,
"e": 1734,
"s": 1530,
"text": "ios class − This class is the base class for all stream classes. The streams can be input or output streams. This class defines members that are independent of how the templates of the class are defined."
},
{
"code": null,
"e": 1946,
"s": 1734,
"text": "istream Class − The istream class handles the input stream in c++ programming language. These input stream objects are used to read and interpret the input as a sequence of characters. The cin handles the input."
},
{
"code": null,
"e": 2195,
"s": 1946,
"text": "ostream class − The ostream class handles the output stream in c++ programming language. These output stream objects are used to write data as a sequence of characters on the screen. cout and puts handle the out streams in c++ programming language."
},
{
"code": null,
"e": 2200,
"s": 2195,
"text": "COUT"
},
{
"code": null,
"e": 2211,
"s": 2200,
"text": " Live Demo"
},
{
"code": null,
"e": 2311,
"s": 2211,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n cout<<\"This output is printed on screen\";\n}"
},
{
"code": null,
"e": 2318,
"s": 2311,
"text": "Output"
},
{
"code": null,
"e": 2351,
"s": 2318,
"text": "This output is printed on screen"
},
{
"code": null,
"e": 2356,
"s": 2351,
"text": "PUTS"
},
{
"code": null,
"e": 2367,
"s": 2356,
"text": " Live Demo"
},
{
"code": null,
"e": 2468,
"s": 2367,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n puts(\"This output is printed using puts\");\n}"
},
{
"code": null,
"e": 2475,
"s": 2468,
"text": "Output"
},
{
"code": null,
"e": 2509,
"s": 2475,
"text": "This output is printed using puts"
},
{
"code": null,
"e": 2513,
"s": 2509,
"text": "CIN"
},
{
"code": null,
"e": 2524,
"s": 2513,
"text": " Live Demo"
},
{
"code": null,
"e": 2669,
"s": 2524,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n int no;\n cout<<\"Enter a number \";\n cin>>no;\n cout<<\"Number entered using cin is \"<"
},
{
"code": null,
"e": 2676,
"s": 2669,
"text": "Output"
},
{
"code": null,
"e": 2729,
"s": 2676,
"text": "Enter a number 3453\nNumber entered using cin is 3453"
},
{
"code": null,
"e": 2734,
"s": 2729,
"text": "gets"
},
{
"code": null,
"e": 2924,
"s": 2734,
"text": "#include <iostream>\nusing namespace std;\nint main(){\n char ch[10];\n puts(\"Enter a character array\");\n gets(ch);\n puts(\"The character array entered using gets is : \");\n puts(ch);\n}"
},
{
"code": null,
"e": 2931,
"s": 2924,
"text": "Output"
},
{
"code": null,
"e": 3011,
"s": 2931,
"text": "Enter a character array\nthdgf\nThe character array entered using gets is :\nthdgf"
}
] |
Hands-on: Predict Customer Churn. Long story short — in this article we... | by flo.tausend | Towards Data Science | Long story short — in this article we want to get our hands dirty: building a model that identifies our beloved customers with the intention to leave us in the near future. We do this by implementing a predictive model with the help of python. Don’t expect a perfect model, but expect something you can use in your own company / project today!
Before we start, let’s briefly recap what churn actually is: Churn quantifies the number of customers who have unsubscribed or canceled their service contract. Customers turning their back to your service or product are no fun for any business. It is very expensive to win them back once lost, not even thinking that they will not do the best word to mouth marketing if unsatisfied. Learn all about the basics of customer churn in one of my previous articles. Now let’s get started!
The basic layer for predicting future customer churn is data from the past. We look at data from customers that already have churned (response) and their characteristics / behaviour (predictors) before the churn happened. By fitting a statistical model that relates the predictors to the response, we will try to predict the response for existing customers. This method belongs to the supervised learning category, just in case you needed one more buzzing expression. In practice we conduct the following steps to make these precise predictions:
Use Case / Business CaseStep one is actually understanding the business or use case with the desired outcome. Only by understanding the final objective we can build a model that is actually of use. In our case the objective is reducing customer churn by identifying potential churn candidates beforehand, and take proactive actions to make them stay.Data collection & cleaningWith understanding the context it is possible to identify the right data sources, cleansing the data sets and preparing for feature selection or engineering. It sounds quite simple, but this is likely the hardest part. The predicting model is only as good as the data source. And especially Startups or small companies have often trouble to find enough data to train the model adequately.Feature selection & engineeringWith the third step we decide which features we want to include in our model and prepare the cleansed data to be used for the machine learning algorithm to predict customer churn.ModellingWith the prepared data we are ready to feed our model. But to make good predictions, we firstly need to find the right model (selection) and secondly need to evaluate that the algorithm actually works. While this usually takes a few iterations, we will keep this quite simple and stop as soon as the results fit our needs.Insights and ActionsLast but not least we have to evaluate and interpret the outcomes. What does it mean and what actions can we derive from the results? Because predicting customer churn is only half of the part and many people forget that by just predicting, they can still leave. In our case we actually want to make them stop leaving.
Use Case / Business CaseStep one is actually understanding the business or use case with the desired outcome. Only by understanding the final objective we can build a model that is actually of use. In our case the objective is reducing customer churn by identifying potential churn candidates beforehand, and take proactive actions to make them stay.
Data collection & cleaningWith understanding the context it is possible to identify the right data sources, cleansing the data sets and preparing for feature selection or engineering. It sounds quite simple, but this is likely the hardest part. The predicting model is only as good as the data source. And especially Startups or small companies have often trouble to find enough data to train the model adequately.
Feature selection & engineeringWith the third step we decide which features we want to include in our model and prepare the cleansed data to be used for the machine learning algorithm to predict customer churn.
ModellingWith the prepared data we are ready to feed our model. But to make good predictions, we firstly need to find the right model (selection) and secondly need to evaluate that the algorithm actually works. While this usually takes a few iterations, we will keep this quite simple and stop as soon as the results fit our needs.
Insights and ActionsLast but not least we have to evaluate and interpret the outcomes. What does it mean and what actions can we derive from the results? Because predicting customer churn is only half of the part and many people forget that by just predicting, they can still leave. In our case we actually want to make them stop leaving.
To predict if a customer will churn or not, we are working with Python and it’s amazing open source libraries. First of all we use Jupyter Notebook, that is an open-source application for live coding and it allows us to tell a story with the code. Furthermore we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation. To make data exploration more graspable, we use plotly to visualise some of our insights. Finally with scikit-learn we will split our dataset and train our predictive model.
One of the most valuable assets a company has is data. As data is rarely shared publicly, we take an available dataset you can find on IBMs website as well as on other pages like Kaggle: Telcom Customer Churn Dataset. The raw dataset contains more than 7000 entries. All entries have several features and of course a column stating if the customer has churned or not.To better understand the data we will first load it into pandas and explore it with the help of some very basic commands.
import numpy as npimport pandas as pd import matplotlib.pyplot as plt import seaborn as snsimport warningswarnings.filterwarnings("ignore")from pylab import rcParams%matplotlib inline# Loading the CSV with pandasdata = pd.read_csv('..Customer Churn/Telco-Customer-Churn.csv')
This section is kept quite short as you can learn more about general data exploration in better tutorials. Nevertheless to get initial insights and learn what story you can tell with the data, data exploration makes definitely sense. By using the python functions data.head(5) and “data.shape” we get a general overview of the dataset.
In detail we have a look at the target feature, the actual “Churn”. Therefore we plot it accordingly and see that 26,5% Of the total amount of customer churn. This is important to know so we have the same proportion of Churned Customers to Non-Churned Customers in our training data.
# Data to plotsizes = data['Churn'].value_counts(sort = True)colors = ["grey","purple"] rcParams['figure.figsize'] = 5,5# Plotplt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=270,)plt.title('Percentage of Churn in Dataset')plt.show()
Be aware that the better we prepare our data for the machine learning model, the better our prediction will be. We can have to most advanced algorithm, but if our training data sucks, our result will suck too. For that reason data scientists spend so much time on preparing the data. And as data preprocessing takes a lot of time, but is not the focus here, we will work through a few transformations exemplary.
Dropping irrelevant dataThere may be data included that is not needed to improve our results. Best is that to identify by logic thinking or by creating a correlation matrix. In this data set we have the customerID for example. As it does not influence our predicted outcome, we drop the column with the pandas “drop()” function.
Dropping irrelevant dataThere may be data included that is not needed to improve our results. Best is that to identify by logic thinking or by creating a correlation matrix. In this data set we have the customerID for example. As it does not influence our predicted outcome, we drop the column with the pandas “drop()” function.
data.drop(['customerID'], axis=1, inplace=True)
2. Missing ValuesFurthermore it is important to handle missing data. The values can be identified by the “.isnull()” function in pandas for example. After identifying the null values it depends on each case if it makes sense to fill the missing value for example with the mean, median or the mode, or in case there is enough training data drop the entry completely. In the data set we are working with there is a very unusual case — there are no null values. Lucky us for today, but important to know usually we have to handle this.
3. Converting Numerical Features From ObjectFrom our data exploration (in this case “data.dtypes()”) we can see that the the columns MonthlyCharges and TotalCharges are numbers, but actually in the object format. Why is this bad? Our machine learning model can only work with actual numeric data. Therefore with the “to_numeric” function we can change the format and prepare the data for our machine learning model.
data['TotalCharges'] = pd.to_numeric(data['TotalCharges'])
4. Categorical data into numerical dataAs we cannot calculate anything with string values, we have to convert these values intro numeric ones. A simple example in the Telcom dataset is the gender. By using the Pandas function “get_dummies()” two columns will replace the gender column with “gender_Female” and “gender_Male”.
Additionally to that we could use the “get_dummies()” function for all the categorical variables in the dataset. This is a powerful function, but it may be disturbing to have so many additional columns.
5. Splitting the datasetFirst our model needs to be trained, second our model needs to be tested. Therefore it is best to have two different dataset. As for now we only have one, it is very common to split the data accordingly. X is the data with the independent variables, Y is the data with the dependent variable. The test size variable determines in which ratio the data will be split. It is quite common to do this in a 80 Training / 20 Test ratio.
data["Churn"] = data["Churn"].astype(int)Y = data["Churn"].valuesX = data.drop(labels = ["Churn"],axis = 1)# Create Train & Test Datafrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)
Logistic Regression is one of the most used machine learning algorithm and mainly used when the dependent variable (here churn 1 or churn 0) is categorical. The independent variables in contrary can be categorical or numerical. Please note that of course it makes sense to understand the theory behind the model in detail, but in this case our goal is to make use of the predictions we won’t go through this in this article.
Step 1. Let’s Import the model we want to use from sci-kit learnStep 2. We make an instance of the ModelStep 3. Is training the model on the training data set and storing the information learned from the data
from sklearn.linear_model import LogisticRegressionmodel = LogisticRegression()result = model.fit(X_train, y_train)
With the trained model we can now predict if a customer churned or not for our test dataset. The results are saved in “prediction_test” and afterwards the accuracy score is measured and printed.
from sklearn import metricsprediction_test = model.predict(X_test)# Print the prediction accuracyprint (metrics.accuracy_score(y_test, prediction_test))
The score show us that in 80% of the cases our model predicted the right outcome for our binary classification problem. That’s considered quite good for a first run, especially when we look which impact each variable has and if that makes sense. So with the final objective to reduce churn and take the right preventing actions in time, we want to know which independent variables have to most influence on our predicted outcome. Therefore we set the coefficients in our model to zero and look at the weights of each variable.
# To get the weights of all the variablesweights = pd.Series(model.coef_[0], index=X.columns.values)weights.sort_values(ascending = False)
It can be observed that some variables have a positive relation to our predicted variable and some have a negative relation. A positive value has a positive impact on our predicted variable. A good example is “Contract_Month-to-month”: The positive relation to churn means that having this type of contract also increases the probability of a customer to churn. On the other hand that “Contract_Two year” is in a highly negative relation to the predicted variable, which means that customers with this type of contract are very unlikely to churn. But we can also see that some variables do not make sense in the first point. “Fiber_Optic” is on top position in terms of a positive impact on churn. While we would expect that this makes a customer stay, as it provides him with fast internet, our model says different. Here it is important to dig deeper and get some context for the data.
One — Talk to your team.We did not only find out which customers are likely to churn, but also which features have the most impact on a customer leaving. Therefore it is highly recommended to share these insights with your customer success team and adapt their focus. Only if the team knows where to put emphasis on, the team is able to to guide a customer to features that make him/her stick around longer. Speak openly, discuss options and make sure you understand the complete context. In many cases the customer success or support team is able to give additional qualitative insights that might underline your findings or totally tell a different story to your findings.
Two — Engage with the customers likely to churn.Yes, there is the story that you should let sleeping dogs lie. But in the case of potential churn this is bullshit. Being quiet and hoping that your customer does not leave will 100% end up in a churn sooner or later. Instead don’t be scared, go out and engage with your customers. There are many options but the best one is the most obvious one: Talk to them.
Look at their profile, identify characteristics and analyse past interactions with your product and then simply talk to them. Ask for feedback, communicate latest developments that might be from interest or educate them on new product features. Approach customers likely to churn, but make sure that you come up with relevant things that may fit their individual needs. It will create a feeling of being understood and bind them to you and your business.
**************************************************************
Articles related to this one:
Predictive Maintenance: ML vs Rule Based
Hands-on: Customer Segmentation
Applied Machine Learning For Improved Startup Valuation
Hands-on: Setup Your Data Environment With Docker
Eliminating Churn is Growth Hacking 2.0
Misleading with Data & Statistics
***************************************************************
Most of what I write about keeps me busy in our own Startup investory.io. Looking forward to hearing your experience :) | [
{
"code": null,
"e": 516,
"s": 172,
"text": "Long story short — in this article we want to get our hands dirty: building a model that identifies our beloved customers with the intention to leave us in the near future. We do this by implementing a predictive model with the help of python. Don’t expect a perfect model, but expect something you can use in your own company / project today!"
},
{
"code": null,
"e": 999,
"s": 516,
"text": "Before we start, let’s briefly recap what churn actually is: Churn quantifies the number of customers who have unsubscribed or canceled their service contract. Customers turning their back to your service or product are no fun for any business. It is very expensive to win them back once lost, not even thinking that they will not do the best word to mouth marketing if unsatisfied. Learn all about the basics of customer churn in one of my previous articles. Now let’s get started!"
},
{
"code": null,
"e": 1545,
"s": 999,
"text": "The basic layer for predicting future customer churn is data from the past. We look at data from customers that already have churned (response) and their characteristics / behaviour (predictors) before the churn happened. By fitting a statistical model that relates the predictors to the response, we will try to predict the response for existing customers. This method belongs to the supervised learning category, just in case you needed one more buzzing expression. In practice we conduct the following steps to make these precise predictions:"
},
{
"code": null,
"e": 3189,
"s": 1545,
"text": "Use Case / Business CaseStep one is actually understanding the business or use case with the desired outcome. Only by understanding the final objective we can build a model that is actually of use. In our case the objective is reducing customer churn by identifying potential churn candidates beforehand, and take proactive actions to make them stay.Data collection & cleaningWith understanding the context it is possible to identify the right data sources, cleansing the data sets and preparing for feature selection or engineering. It sounds quite simple, but this is likely the hardest part. The predicting model is only as good as the data source. And especially Startups or small companies have often trouble to find enough data to train the model adequately.Feature selection & engineeringWith the third step we decide which features we want to include in our model and prepare the cleansed data to be used for the machine learning algorithm to predict customer churn.ModellingWith the prepared data we are ready to feed our model. But to make good predictions, we firstly need to find the right model (selection) and secondly need to evaluate that the algorithm actually works. While this usually takes a few iterations, we will keep this quite simple and stop as soon as the results fit our needs.Insights and ActionsLast but not least we have to evaluate and interpret the outcomes. What does it mean and what actions can we derive from the results? Because predicting customer churn is only half of the part and many people forget that by just predicting, they can still leave. In our case we actually want to make them stop leaving."
},
{
"code": null,
"e": 3540,
"s": 3189,
"text": "Use Case / Business CaseStep one is actually understanding the business or use case with the desired outcome. Only by understanding the final objective we can build a model that is actually of use. In our case the objective is reducing customer churn by identifying potential churn candidates beforehand, and take proactive actions to make them stay."
},
{
"code": null,
"e": 3955,
"s": 3540,
"text": "Data collection & cleaningWith understanding the context it is possible to identify the right data sources, cleansing the data sets and preparing for feature selection or engineering. It sounds quite simple, but this is likely the hardest part. The predicting model is only as good as the data source. And especially Startups or small companies have often trouble to find enough data to train the model adequately."
},
{
"code": null,
"e": 4166,
"s": 3955,
"text": "Feature selection & engineeringWith the third step we decide which features we want to include in our model and prepare the cleansed data to be used for the machine learning algorithm to predict customer churn."
},
{
"code": null,
"e": 4498,
"s": 4166,
"text": "ModellingWith the prepared data we are ready to feed our model. But to make good predictions, we firstly need to find the right model (selection) and secondly need to evaluate that the algorithm actually works. While this usually takes a few iterations, we will keep this quite simple and stop as soon as the results fit our needs."
},
{
"code": null,
"e": 4837,
"s": 4498,
"text": "Insights and ActionsLast but not least we have to evaluate and interpret the outcomes. What does it mean and what actions can we derive from the results? Because predicting customer churn is only half of the part and many people forget that by just predicting, they can still leave. In our case we actually want to make them stop leaving."
},
{
"code": null,
"e": 5380,
"s": 4837,
"text": "To predict if a customer will churn or not, we are working with Python and it’s amazing open source libraries. First of all we use Jupyter Notebook, that is an open-source application for live coding and it allows us to tell a story with the code. Furthermore we import Pandas, which puts our data in an easy-to-use structure for data analysis and data transformation. To make data exploration more graspable, we use plotly to visualise some of our insights. Finally with scikit-learn we will split our dataset and train our predictive model."
},
{
"code": null,
"e": 5869,
"s": 5380,
"text": "One of the most valuable assets a company has is data. As data is rarely shared publicly, we take an available dataset you can find on IBMs website as well as on other pages like Kaggle: Telcom Customer Churn Dataset. The raw dataset contains more than 7000 entries. All entries have several features and of course a column stating if the customer has churned or not.To better understand the data we will first load it into pandas and explore it with the help of some very basic commands."
},
{
"code": null,
"e": 6145,
"s": 5869,
"text": "import numpy as npimport pandas as pd import matplotlib.pyplot as plt import seaborn as snsimport warningswarnings.filterwarnings(\"ignore\")from pylab import rcParams%matplotlib inline# Loading the CSV with pandasdata = pd.read_csv('..Customer Churn/Telco-Customer-Churn.csv')"
},
{
"code": null,
"e": 6481,
"s": 6145,
"text": "This section is kept quite short as you can learn more about general data exploration in better tutorials. Nevertheless to get initial insights and learn what story you can tell with the data, data exploration makes definitely sense. By using the python functions data.head(5) and “data.shape” we get a general overview of the dataset."
},
{
"code": null,
"e": 6765,
"s": 6481,
"text": "In detail we have a look at the target feature, the actual “Churn”. Therefore we plot it accordingly and see that 26,5% Of the total amount of customer churn. This is important to know so we have the same proportion of Churned Customers to Non-Churned Customers in our training data."
},
{
"code": null,
"e": 7062,
"s": 6765,
"text": "# Data to plotsizes = data['Churn'].value_counts(sort = True)colors = [\"grey\",\"purple\"] rcParams['figure.figsize'] = 5,5# Plotplt.pie(sizes, explode=explode, labels=labels, colors=colors, autopct='%1.1f%%', shadow=True, startangle=270,)plt.title('Percentage of Churn in Dataset')plt.show()"
},
{
"code": null,
"e": 7474,
"s": 7062,
"text": "Be aware that the better we prepare our data for the machine learning model, the better our prediction will be. We can have to most advanced algorithm, but if our training data sucks, our result will suck too. For that reason data scientists spend so much time on preparing the data. And as data preprocessing takes a lot of time, but is not the focus here, we will work through a few transformations exemplary."
},
{
"code": null,
"e": 7803,
"s": 7474,
"text": "Dropping irrelevant dataThere may be data included that is not needed to improve our results. Best is that to identify by logic thinking or by creating a correlation matrix. In this data set we have the customerID for example. As it does not influence our predicted outcome, we drop the column with the pandas “drop()” function."
},
{
"code": null,
"e": 8132,
"s": 7803,
"text": "Dropping irrelevant dataThere may be data included that is not needed to improve our results. Best is that to identify by logic thinking or by creating a correlation matrix. In this data set we have the customerID for example. As it does not influence our predicted outcome, we drop the column with the pandas “drop()” function."
},
{
"code": null,
"e": 8180,
"s": 8132,
"text": "data.drop(['customerID'], axis=1, inplace=True)"
},
{
"code": null,
"e": 8713,
"s": 8180,
"text": "2. Missing ValuesFurthermore it is important to handle missing data. The values can be identified by the “.isnull()” function in pandas for example. After identifying the null values it depends on each case if it makes sense to fill the missing value for example with the mean, median or the mode, or in case there is enough training data drop the entry completely. In the data set we are working with there is a very unusual case — there are no null values. Lucky us for today, but important to know usually we have to handle this."
},
{
"code": null,
"e": 9129,
"s": 8713,
"text": "3. Converting Numerical Features From ObjectFrom our data exploration (in this case “data.dtypes()”) we can see that the the columns MonthlyCharges and TotalCharges are numbers, but actually in the object format. Why is this bad? Our machine learning model can only work with actual numeric data. Therefore with the “to_numeric” function we can change the format and prepare the data for our machine learning model."
},
{
"code": null,
"e": 9188,
"s": 9129,
"text": "data['TotalCharges'] = pd.to_numeric(data['TotalCharges'])"
},
{
"code": null,
"e": 9513,
"s": 9188,
"text": "4. Categorical data into numerical dataAs we cannot calculate anything with string values, we have to convert these values intro numeric ones. A simple example in the Telcom dataset is the gender. By using the Pandas function “get_dummies()” two columns will replace the gender column with “gender_Female” and “gender_Male”."
},
{
"code": null,
"e": 9716,
"s": 9513,
"text": "Additionally to that we could use the “get_dummies()” function for all the categorical variables in the dataset. This is a powerful function, but it may be disturbing to have so many additional columns."
},
{
"code": null,
"e": 10170,
"s": 9716,
"text": "5. Splitting the datasetFirst our model needs to be trained, second our model needs to be tested. Therefore it is best to have two different dataset. As for now we only have one, it is very common to split the data accordingly. X is the data with the independent variables, Y is the data with the dependent variable. The test size variable determines in which ratio the data will be split. It is quite common to do this in a 80 Training / 20 Test ratio."
},
{
"code": null,
"e": 10446,
"s": 10170,
"text": "data[\"Churn\"] = data[\"Churn\"].astype(int)Y = data[\"Churn\"].valuesX = data.drop(labels = [\"Churn\"],axis = 1)# Create Train & Test Datafrom sklearn.model_selection import train_test_splitX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)"
},
{
"code": null,
"e": 10871,
"s": 10446,
"text": "Logistic Regression is one of the most used machine learning algorithm and mainly used when the dependent variable (here churn 1 or churn 0) is categorical. The independent variables in contrary can be categorical or numerical. Please note that of course it makes sense to understand the theory behind the model in detail, but in this case our goal is to make use of the predictions we won’t go through this in this article."
},
{
"code": null,
"e": 11080,
"s": 10871,
"text": "Step 1. Let’s Import the model we want to use from sci-kit learnStep 2. We make an instance of the ModelStep 3. Is training the model on the training data set and storing the information learned from the data"
},
{
"code": null,
"e": 11196,
"s": 11080,
"text": "from sklearn.linear_model import LogisticRegressionmodel = LogisticRegression()result = model.fit(X_train, y_train)"
},
{
"code": null,
"e": 11391,
"s": 11196,
"text": "With the trained model we can now predict if a customer churned or not for our test dataset. The results are saved in “prediction_test” and afterwards the accuracy score is measured and printed."
},
{
"code": null,
"e": 11544,
"s": 11391,
"text": "from sklearn import metricsprediction_test = model.predict(X_test)# Print the prediction accuracyprint (metrics.accuracy_score(y_test, prediction_test))"
},
{
"code": null,
"e": 12071,
"s": 11544,
"text": "The score show us that in 80% of the cases our model predicted the right outcome for our binary classification problem. That’s considered quite good for a first run, especially when we look which impact each variable has and if that makes sense. So with the final objective to reduce churn and take the right preventing actions in time, we want to know which independent variables have to most influence on our predicted outcome. Therefore we set the coefficients in our model to zero and look at the weights of each variable."
},
{
"code": null,
"e": 12210,
"s": 12071,
"text": "# To get the weights of all the variablesweights = pd.Series(model.coef_[0], index=X.columns.values)weights.sort_values(ascending = False)"
},
{
"code": null,
"e": 13098,
"s": 12210,
"text": "It can be observed that some variables have a positive relation to our predicted variable and some have a negative relation. A positive value has a positive impact on our predicted variable. A good example is “Contract_Month-to-month”: The positive relation to churn means that having this type of contract also increases the probability of a customer to churn. On the other hand that “Contract_Two year” is in a highly negative relation to the predicted variable, which means that customers with this type of contract are very unlikely to churn. But we can also see that some variables do not make sense in the first point. “Fiber_Optic” is on top position in terms of a positive impact on churn. While we would expect that this makes a customer stay, as it provides him with fast internet, our model says different. Here it is important to dig deeper and get some context for the data."
},
{
"code": null,
"e": 13773,
"s": 13098,
"text": "One — Talk to your team.We did not only find out which customers are likely to churn, but also which features have the most impact on a customer leaving. Therefore it is highly recommended to share these insights with your customer success team and adapt their focus. Only if the team knows where to put emphasis on, the team is able to to guide a customer to features that make him/her stick around longer. Speak openly, discuss options and make sure you understand the complete context. In many cases the customer success or support team is able to give additional qualitative insights that might underline your findings or totally tell a different story to your findings."
},
{
"code": null,
"e": 14182,
"s": 13773,
"text": "Two — Engage with the customers likely to churn.Yes, there is the story that you should let sleeping dogs lie. But in the case of potential churn this is bullshit. Being quiet and hoping that your customer does not leave will 100% end up in a churn sooner or later. Instead don’t be scared, go out and engage with your customers. There are many options but the best one is the most obvious one: Talk to them."
},
{
"code": null,
"e": 14637,
"s": 14182,
"text": "Look at their profile, identify characteristics and analyse past interactions with your product and then simply talk to them. Ask for feedback, communicate latest developments that might be from interest or educate them on new product features. Approach customers likely to churn, but make sure that you come up with relevant things that may fit their individual needs. It will create a feeling of being understood and bind them to you and your business."
},
{
"code": null,
"e": 14700,
"s": 14637,
"text": "**************************************************************"
},
{
"code": null,
"e": 14730,
"s": 14700,
"text": "Articles related to this one:"
},
{
"code": null,
"e": 14771,
"s": 14730,
"text": "Predictive Maintenance: ML vs Rule Based"
},
{
"code": null,
"e": 14803,
"s": 14771,
"text": "Hands-on: Customer Segmentation"
},
{
"code": null,
"e": 14859,
"s": 14803,
"text": "Applied Machine Learning For Improved Startup Valuation"
},
{
"code": null,
"e": 14909,
"s": 14859,
"text": "Hands-on: Setup Your Data Environment With Docker"
},
{
"code": null,
"e": 14949,
"s": 14909,
"text": "Eliminating Churn is Growth Hacking 2.0"
},
{
"code": null,
"e": 14983,
"s": 14949,
"text": "Misleading with Data & Statistics"
},
{
"code": null,
"e": 15047,
"s": 14983,
"text": "***************************************************************"
}
] |
Find N Unique Integers Sum up to Zero in C++ | Suppose we have an integer n. We have to return any array that contains n unique integers, such that they add up to 0. So if input is n = 5, then one possible output will be [-7, -1, 1, 3, 4]
To solve this, we will follow these steps −
take an array A as final answer, and take x := 0
for i in range 0 to n – 2A[i] = (i + 1)x := x + i + 1
A[i] = (i + 1)
x := x + i + 1
A[n – 1] = x
return A
Let us see the following implementation to get better understanding −
Live Demo
#include <bits/stdc++.h>
using namespace std;
void print_vector(vector<int> v){
cout << "[";
for(int i = 0; i<v.size(); i++){
cout << v[i] << ", ";
}
cout << "]"<<endl;
}
class Solution {
public:
vector<int> sumZero(int n) {
vector <int> ans(n);
int x = 0;
for(int i = 0; i < n - 1; i++){
ans[i] = (i + 1);
x += (i + 1);
}
ans[n - 1] = -x;
return ans;
}
};
main(){
Solution ob;
print_vector(ob.sumZero(10)) ;
}
10
[1, 2, 3, 4, 5, 6, 7, 8, 9, -45, ] | [
{
"code": null,
"e": 1254,
"s": 1062,
"text": "Suppose we have an integer n. We have to return any array that contains n unique integers, such that they add up to 0. So if input is n = 5, then one possible output will be [-7, -1, 1, 3, 4]"
},
{
"code": null,
"e": 1298,
"s": 1254,
"text": "To solve this, we will follow these steps −"
},
{
"code": null,
"e": 1347,
"s": 1298,
"text": "take an array A as final answer, and take x := 0"
},
{
"code": null,
"e": 1401,
"s": 1347,
"text": "for i in range 0 to n – 2A[i] = (i + 1)x := x + i + 1"
},
{
"code": null,
"e": 1416,
"s": 1401,
"text": "A[i] = (i + 1)"
},
{
"code": null,
"e": 1431,
"s": 1416,
"text": "x := x + i + 1"
},
{
"code": null,
"e": 1444,
"s": 1431,
"text": "A[n – 1] = x"
},
{
"code": null,
"e": 1453,
"s": 1444,
"text": "return A"
},
{
"code": null,
"e": 1523,
"s": 1453,
"text": "Let us see the following implementation to get better understanding −"
},
{
"code": null,
"e": 1534,
"s": 1523,
"text": " Live Demo"
},
{
"code": null,
"e": 2062,
"s": 1534,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nvoid print_vector(vector<int> v){\n cout << \"[\";\n for(int i = 0; i<v.size(); i++){\n cout << v[i] << \", \";\n }\n cout << \"]\"<<endl;\n}\nclass Solution {\n public:\n vector<int> sumZero(int n) {\n vector <int> ans(n);\n int x = 0;\n for(int i = 0; i < n - 1; i++){\n ans[i] = (i + 1);\n x += (i + 1);\n }\n ans[n - 1] = -x;\n return ans;\n }\n};\nmain(){\n Solution ob;\n print_vector(ob.sumZero(10)) ;\n}"
},
{
"code": null,
"e": 2065,
"s": 2062,
"text": "10"
},
{
"code": null,
"e": 2100,
"s": 2065,
"text": "[1, 2, 3, 4, 5, 6, 7, 8, 9, -45, ]"
}
] |
AngularJS - Quick Guide | AngularJS is an open source web application framework. It was originally developed in 2009 by Misko Hevery and Adam Abrons. It is now maintained by Google. Its latest version is 1.4.3.
Definition of AngularJS as put by its official documentation is as follows −
AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology.
AngularJS is a powerful JavaScript based development framework to create RICH Internet Application(RIA).
AngularJS is a powerful JavaScript based development framework to create RICH Internet Application(RIA).
AngularJS provides developers options to write client side application (using JavaScript) in a clean MVC(Model View Controller) way.
AngularJS provides developers options to write client side application (using JavaScript) in a clean MVC(Model View Controller) way.
Application written in AngularJS is cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser.
Application written in AngularJS is cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser.
AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache License version 2.0.
AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache License version 2.0.
Overall, AngularJS is a framework to build large scale and high performance web application while keeping them as easy-to-maintain.
Following are most important core features of AngularJS −
Data-binding − It is the automatic synchronization of data between model and view components.
Data-binding − It is the automatic synchronization of data between model and view components.
Scope − These are objects that refer to the model. They act as a glue between controller and view.
Scope − These are objects that refer to the model. They act as a glue between controller and view.
Controller − These are JavaScript functions that are bound to a particular scope.
Controller − These are JavaScript functions that are bound to a particular scope.
Services − AngularJS come with several built-in services for example $https: to make a XMLHttpRequests. These are singleton objects which are instantiated only once in app.
Services − AngularJS come with several built-in services for example $https: to make a XMLHttpRequests. These are singleton objects which are instantiated only once in app.
Filters − These select a subset of items from an array and returns a new array.
Filters − These select a subset of items from an array and returns a new array.
Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel...)
Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel...)
Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using "partials".
Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using "partials".
Routing − It is concept of switching views.
Routing − It is concept of switching views.
Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model-View-ViewModel). The Angular JS team refers it humorously as Model View Whatever.
Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model-View-ViewModel). The Angular JS team refers it humorously as Model View Whatever.
Deep Linking − Deep linking allows you to encode the state of application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state.
Deep Linking − Deep linking allows you to encode the state of application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state.
Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test.
Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test.
Following diagram depicts some important parts of AngularJS which we will discuss in detail in the subsequent chapters.
AngularJS provides capability to create Single Page Application in a very clean and maintainable way.
AngularJS provides capability to create Single Page Application in a very clean and maintainable way.
AngularJS provides data binding capability to HTML thus giving user a rich and responsive experience
AngularJS provides data binding capability to HTML thus giving user a rich and responsive experience
AngularJS code is unit testable.
AngularJS code is unit testable.
AngularJS uses dependency injection and make use of separation of concerns.
AngularJS uses dependency injection and make use of separation of concerns.
AngularJS provides reusable components.
AngularJS provides reusable components.
With AngularJS, developer write less code and get more functionality.
With AngularJS, developer write less code and get more functionality.
In AngularJS, views are pure html pages, and controllers written in JavaScript do the business processing.
In AngularJS, views are pure html pages, and controllers written in JavaScript do the business processing.
On top of everything, AngularJS applications can run on all major browsers and smart phones including Android and iOS based phones/tablets.
Though AngularJS comes with lots of plus points but same time we should consider the following points −
Not Secure − Being JavaScript only framework, application written in AngularJS are not safe. Server side authentication and authorization is must to keep an application secure.
Not Secure − Being JavaScript only framework, application written in AngularJS are not safe. Server side authentication and authorization is must to keep an application secure.
Not degradable − If your application user disables JavaScript then user will just see the basic page and nothing more.
Not degradable − If your application user disables JavaScript then user will just see the basic page and nothing more.
The AngularJS framework can be divided into following three major parts −
ng-app − This directive defines and links an AngularJS application to HTML.
ng-app − This directive defines and links an AngularJS application to HTML.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-bind − This directive binds the AngularJS Application data to HTML tags.
ng-bind − This directive binds the AngularJS Application data to HTML tags.
In this chapter we will discuss about how to set up AngularJS library to be used in web application development. We will also briefly study the directory structure and its contents.
When you open the link https://angularjs.org/, you will see there are two options to download AngularJS library −
View on GitHub − Click on this button to go to GitHub and get all of the latest scripts.
View on GitHub − Click on this button to go to GitHub and get all of the latest scripts.
Download AngularJS 1 − Or click on this button, a screen as below would be seen −
Download AngularJS 1 − Or click on this button, a screen as below would be seen −
This screen gives various options of using Angular JS as follows −
Downloading and hosting files locally
There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.
We can also go with the minified, uncompressed or zipped version.
CDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded.
This screen gives various options of using Angular JS as follows −
Downloading and hosting files locally
There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.
We can also go with the minified, uncompressed or zipped version.
Downloading and hosting files locally
There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.
There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.
We can also go with the minified, uncompressed or zipped version.
We can also go with the minified, uncompressed or zipped version.
CDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded.
CDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded.
Try the new angularJS 2 − Click on this button to download Angular JS beta 2 version.This version is very fast, mobile supported and flexible compare to legacy and latest of AngularJS 1
Try the new angularJS 2 − Click on this button to download Angular JS beta 2 version.This version is very fast, mobile supported and flexible compare to legacy and latest of AngularJS 1
Now let us write a simple example using AngularJS library. Let us create an HTML file myfirstexample.html as below −
<!doctype html>
<html>
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.5.2/angular.min.js"></script>
</head>
<body ng-app = "myapp">
<div ng-controller = "HelloController" >
<h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>
</div>
<script>
angular.module("myapp", [])
.controller("HelloController", function($scope) {
$scope.helloTo = {};
$scope.helloTo.title = "AngularJS";
});
</script>
</body>
</html>
Following sections describe the above code in detail −
We have included the AngularJS JavaScript file in the HTML page so we can use AngularJS −
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js"></script>
</head>
If you want to update into latest version of Angular JS, use the following script source
or else Check the latest version of AngularJS on their official website.
<head>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.5.2/angular.min.js"></script>
</head>
Next we tell what part of the HTML contains the AngularJS app. This done by adding the ng-app attribute to the root HTML element of the AngularJS app. You can either add it to html element or body element as shown below −
<body ng-app = "myapp">
</body>
The view is this part −
<div ng-controller = "HelloController" >
<h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>
</div>
ng-controller tells AngularJS what controller to use with this view. helloTo.title tells AngularJS to write the "model" value named helloTo.title to the HTML at this location.
The controller part is −
<script>
angular.module("myapp", [])
.controller("HelloController", function($scope) {
$scope.helloTo = {};
$scope.helloTo.title = "AngularJS";
});
</script>
This code registers a controller function named HelloController in the angular module named myapp. We will study more about modules and controllers in their respective chapters. The controller function is registered in angular via the angular.module(...).controller(...) function call.
The $scope parameter passed to the controller function is the model. The controller function adds a helloTo JavaScript object, and in that object it adds a title field.
Save the above code as myfirstexample.html and open it in any browser. You will see an output as below −
Welcome AngularJS to the world of Tutorialspoint!
When the page is loaded in the browser, following things happen −
HTML document is loaded into the browser, and evaluated by the browser. AngularJS JavaScript file is loaded, the angular global object is created. Next, JavaScript which registers controller functions is executed.
HTML document is loaded into the browser, and evaluated by the browser. AngularJS JavaScript file is loaded, the angular global object is created. Next, JavaScript which registers controller functions is executed.
Next AngularJS scans through the HTML to look for AngularJS apps and views. Once view is located, it connects that view to the corresponding controller function.
Next AngularJS scans through the HTML to look for AngularJS apps and views. Once view is located, it connects that view to the corresponding controller function.
Next, AngularJS executes the controller functions. It then renders the views with data from the model populated by the controller. The page is now ready.
Next, AngularJS executes the controller functions. It then renders the views with data from the model populated by the controller. The page is now ready.
Model View Controller or MVC as it is popularly called, is a software design pattern for developing web applications. A Model View Controller pattern is made up of the following three parts −
Model − It is the lowest level of the pattern responsible for maintaining data.
Model − It is the lowest level of the pattern responsible for maintaining data.
View − It is responsible for displaying all or a portion of the data to the user.
View − It is responsible for displaying all or a portion of the data to the user.
Controller − It is a software Code that controls the interactions between the Model and View.
Controller − It is a software Code that controls the interactions between the Model and View.
MVC is popular because it isolates the application logic from the user interface layer and supports separation of concerns. The controller receives all requests for the application and then works with the model to prepare any data needed by the view. The view then uses the data prepared by the controller to generate a final presentable response. The MVC abstraction can be graphically represented as follows.
The model is responsible for managing application data. It responds to the request from view and to the instructions from controller to update itself.
A presentation of data in a particular format, triggered by the controller's decision to present the data. They are script-based template systems such as JSP, ASP, PHP and very easy to integrate with AJAX technology.
The controller responds to user input and performs interactions on the data model objects. The controller receives input, validates it, and then performs business operations that modify the state of the data model.
AngularJS is a MVC based framework. In the coming chapters, we will see how AngularJS uses MVC methodology.
Before we start with creating actual HelloWorld application using AngularJS, let us see what are the actual parts of a AngularJS application. An AngularJS application consists of following three important parts −
ng-app − This directive defines and links an AngularJS application to HTML.
ng-app − This directive defines and links an AngularJS application to HTML.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-bind − This directive binds the AngularJS Application data to HTML tags.
ng-bind − This directive binds the AngularJS Application data to HTML tags.
Being a pure JavaScript framework, It can be added using <Script> tag.
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<div ng-app = "">
...
</div>
<p>Enter your Name: <input type = "text" ng-model = "name"></p>
<p>Hello <span ng-bind = "name"></span>!</p>
Use above mentioned three steps in an HTML page.
testAngularJS.htm
<html>
<head>
<title>AngularJS First Application</title>
</head>
<body>
<h1>Sample Application</h1>
<div ng-app = "">
<p>Enter your Name: <input type = "text" ng-model = "name"></p>
<p>Hello <span ng-bind = "name"></span>!</p>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. Enter your name and see the result.
Enter your Name:
Hello !
ng-app directive indicates the start of AngularJS application.
ng-app directive indicates the start of AngularJS application.
ng-model directive then creates a model variable named "name" which can be used with the html page and within the div having ng-app directive.
ng-model directive then creates a model variable named "name" which can be used with the html page and within the div having ng-app directive.
ng-bind then uses the name model to be displayed in the html span tag whenever user input something in the text box.
ng-bind then uses the name model to be displayed in the html span tag whenever user input something in the text box.
Closing</div> tag indicates the end of AngularJS application.
Closing</div> tag indicates the end of AngularJS application.
AngularJS directives are used to extend HTML. These are special attributes starting with ng- prefix. We're going to discuss following directives −
ng-app − This directive starts an AngularJS Application.
ng-app − This directive starts an AngularJS Application.
ng-init − This directive initializes application data.
ng-init − This directive initializes application data.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-model − This directive binds the values of AngularJS application data to HTML input controls.
ng-repeat − This directive repeats html elements for each item in a collection.
ng-repeat − This directive repeats html elements for each item in a collection.
ng-app directive starts an AngularJS Application. It defines the root element. It automatically initializes or bootstraps the application when web page containing AngularJS Application is loaded. It is also used to load various AngularJS modules in AngularJS Application. In following example, we've defined a default AngularJS application using ng-app attribute of a div element.
<div ng-app = "">
...
</div>
ng-init directive initializes an AngularJS Application data. It is used to put values to the variables to be used in the application. In following example, we'll initialize an array of countries. We're using JSON syntax to define array of countries.
<div ng-app = "" ng-init = "countries = [{locale:'en-US',name:'United States'},
{locale:'en-GB',name:'United Kingdom'}, {locale:'en-FR',name:'France'}]">
...
</div>
This directive binds the values of AngularJS application data to HTML input controls. In following example, we've defined a model named "name".
<div ng-app = "">
...
<p>Enter your Name: <input type = "text" ng-model = "name"></p>
</div>
ng-repeat directive repeats html elements for each item in a collection. In following example, we've iterated over array of countries.
<div ng-app = "">
...
<p>List of Countries with locale:</p>
<ol>
<li ng-repeat = "country in countries">
{{ 'Country: ' + country.name + ', Locale: ' + country.locale }}
</li>
</ol>
</div>
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>AngularJS Directives</title>
</head>
<body>
<h1>Sample Application</h1>
<div ng-app = "" ng-init = "countries = [{locale:'en-US',name:'United States'},
{locale:'en-GB',name:'United Kingdom'}, {locale:'en-FR',name:'France'}]">
<p>Enter your Name: <input type = "text" ng-model = "name"></p>
<p>Hello <span ng-bind = "name"></span>!</p>
<p>List of Countries with locale:</p>
<ol>
<li ng-repeat = "country in countries">
{{ 'Country: ' + country.name + ', Locale: ' + country.locale }}
</li>
</ol>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. Enter your name and see the result.
Enter your Name:
Hello !
List of Countries with locale:
{{ 'Country: ' + country.name + ', Locale: ' + country.locale }}
{{ 'Country: ' + country.name + ', Locale: ' + country.locale }}
Expressions are used to bind application data to html. Expressions are written inside double braces like {{ expression}}. Expressions behaves in same way as ng-bind directives. AngularJS application expressions are pure javascript expressions and outputs the data where they are used.
<p>Expense on Books : {{cost * quantity}} Rs</p>
<p>Hello {{student.firstname + " " + student.lastname}}!</p>
<p>Roll No: {{student.rollno}}</p>
<p>Marks(Math): {{marks[3]}}</p>
Following example will showcase all the above mentioned expressions.
<html>
<head>
<title>AngularJS Expressions</title>
</head>
<body>
<h1>Sample Application</h1>
<div ng-app = "" ng-init = "quantity = 1;cost = 30;
student = {firstname:'Mahesh',lastname:'Parashar',rollno:101};
marks = [80,90,75,73,60]">
<p>Hello {{student.firstname + " " + student.lastname}}!</p>
<p>Expense on Books : {{cost * quantity}} Rs</p>
<p>Roll No: {{student.rollno}}</p>
<p>Marks(Math): {{marks[3]}}</p>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Hello {{student.firstname + " " + student.lastname}}!
Expense on Books : {{cost * quantity}} Rs
Roll No: {{student.rollno}}
Marks(Math): {{marks[3]}}
AngularJS application mainly relies on controllers to control the flow of data in the application. A controller is defined using ng-controller directive. A controller is a JavaScript object containing attributes/properties and functions. Each controller accepts $scope as a parameter which refers to the application/module that controller is to control.
<div ng-app = "" ng-controller = "studentController">
...
</div>
Here we've declared a controller studentController using ng-controller directive. As a next step we'll define the studentController as follows −
<script>
function studentController($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
}
</script>
studentController defined as a JavaScript object with $scope as argument.
studentController defined as a JavaScript object with $scope as argument.
$scope refers to application which is to use the studentController object.
$scope refers to application which is to use the studentController object.
$scope.student is property of studentController object.
$scope.student is property of studentController object.
firstName and lastName are two properties of $scope.student object. We've passed the default values to them.
firstName and lastName are two properties of $scope.student object. We've passed the default values to them.
fullName is the function of $scope.student object whose task is to return the combined name.
fullName is the function of $scope.student object whose task is to return the combined name.
In fullName function we're getting the student object and then return the combined name.
In fullName function we're getting the student object and then return the combined name.
As a note, we can also define the controller object in separate JS file and refer that file in the html page.
As a note, we can also define the controller object in separate JS file and refer that file in the html page.
Now we can use studentController's student property using ng-model or using expressions as follows.
Enter first name: <input type = "text" ng-model = "student.firstName"><br>
Enter last name: <input type = "text" ng-model = "student.lastName"><br>
<br>
You are entering: {{student.fullName()}}
We've bounded student.firstName and student.lastname to two input boxes.
We've bounded student.firstName and student.lastname to two input boxes.
We've bounded student.fullName() to HTML.
We've bounded student.fullName() to HTML.
Now whenever you type anything in first name and last name input boxes, you can see the full name getting updated automatically.
Now whenever you type anything in first name and last name input boxes, you can see the full name getting updated automatically.
Following example will showcase use of controller.
<html>
<head>
<title>Angular JS Controller</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
Enter first name: <input type = "text" ng-model = "student.firstName"><br>
<br>
Enter last name: <input type = "text" ng-model = "student.lastName"><br>
<br>
You are entering: {{student.fullName()}}
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('studentController', function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Filters are used to change modify the data and can be clubbed in expression or directives using pipe character. Following is the list of commonly used filters.
uppercase
converts a text to upper case text.
lowercase
converts a text to lower case text.
currency
formats text in a currency format.
filter
filter the array to a subset of it based on provided criteria.
orderby
orders the array based on provided criteria.
Add uppercase filter to an expression using pipe character. Here we've added uppercase filter to print student name in all capital letters.
Enter first name:<input type = "text" ng-model = "student.firstName">
Enter last name: <input type = "text" ng-model = "student.lastName">
Name in Upper Case: {{student.fullName() | uppercase}}
Add lowercase filter to an expression using pipe character. Here we've added lowercase filter to print student name in all lowercase letters.
Enter first name:<input type = "text" ng-model = "student.firstName">
Enter last name: <input type = "text" ng-model = "student.lastName">
Name in Lower Case: {{student.fullName() | lowercase}}
Add currency filter to an expression returning number using pipe character. Here we've added currency filter to print fees using currency format.
Enter fees: <input type = "text" ng-model = "student.fees">
fees: {{student.fees | currency}}
To display only required subjects, we've used subjectName as filter.
Enter subject: <input type = "text" ng-model = "subjectName">
Subject:
<ul>
<li ng-repeat = "subject in student.subjects | filter: subjectName">
{{ subject.name + ', marks:' + subject.marks }}
</li>
</ul>
To order subjects by marks, we've used orderBy marks.
Subject:
<ul>
<li ng-repeat = "subject in student.subjects | orderBy:'marks'">
{{ subject.name + ', marks:' + subject.marks }}
</li>
</ul>
Following example will showcase all the above mentioned filters.
<html>
<head>
<title>Angular JS Filters</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
<table border = "0">
<tr>
<td>Enter first name:</td>
<td><input type = "text" ng-model = "student.firstName"></td>
</tr>
<tr>
<td>Enter last name: </td>
<td><input type = "text" ng-model = "student.lastName"></td>
</tr>
<tr>
<td>Enter fees: </td>
<td><input type = "text" ng-model = "student.fees"></td>
</tr>
<tr>
<td>Enter subject: </td>
<td><input type = "text" ng-model = "subjectName"></td>
</tr>
</table>
<br/>
<table border = "0">
<tr>
<td>Name in Upper Case: </td><td>{{student.fullName() | uppercase}}</td>
</tr>
<tr>
<td>Name in Lower Case: </td><td>{{student.fullName() | lowercase}}</td>
</tr>
<tr>
<td>fees: </td><td>{{student.fees | currency}}
</td>
</tr>
<tr>
<td>Subject:</td>
<td>
<ul>
<li ng-repeat = "subject in student.subjects | filter: subjectName |orderBy:'marks'">
{{ subject.name + ', marks:' + subject.marks }}
</li>
</ul>
</td>
</tr>
</table>
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('studentController', function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fees:500,
subjects:[
{name:'Physics',marks:70},
{name:'Chemistry',marks:80},
{name:'Math',marks:65}
],
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
{{ subject.name + ', marks:' + subject.marks }}
Table data is normally repeatable by nature. ng-repeat directive can be used to draw table easily. Following example states the use of ng-repeat directive to draw a table.
<table>
<tr>
<th>Name</th>
<th>Marks</th>
</tr>
<tr ng-repeat = "subject in student.subjects">
<td>{{ subject.name }}</td>
<td>{{ subject.marks }}</td>
</tr>
</table>
Table can be styled using CSS Styling.
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
Following example will showcase all the above mentioned directive.
<html>
<head>
<title>Angular JS Table</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
<table border = "0">
<tr>
<td>Enter first name:</td>
<td><input type = "text" ng-model = "student.firstName"></td>
</tr>
<tr>
<td>Enter last name: </td>
<td>
<input type = "text" ng-model = "student.lastName">
</td>
</tr>
<tr>
<td>Name: </td>
<td>{{student.fullName()}}</td>
</tr>
<tr>
<td>Subject:</td>
<td>
<table>
<tr>
<th>Name</th>.
<th>Marks</th>
</tr>
<tr ng-repeat = "subject in student.subjects">
<td>{{ subject.name }}</td>
<td>{{ subject.marks }}</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('studentController', function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fees:500,
subjects:[
{name:'Physics',marks:70},
{name:'Chemistry',marks:80},
{name:'Math',marks:65},
{name:'English',marks:75},
{name:'Hindi',marks:67}
],
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Following directives can be used to bind application data to attributes of HTML DOM Elements.
ng-disabled
disables a given control.
ng-show
shows a given control.
ng-hide
hides a given control.
ng-click
represents a AngularJS click event.
Add ng-disabled attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation.
<input type = "checkbox" ng-model = "enableDisableButton">Disable Button
<button ng-disabled = "enableDisableButton">Click Me!</button>
Add ng-show attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation.
<input type = "checkbox" ng-model = "showHide1">Show Button
<button ng-show = "showHide1">Click Me!</button>
Add ng-hide attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation.
<input type = "checkbox" ng-model = "showHide2">Hide Button
<button ng-hide = "showHide2">Click Me!</button>
Add ng-click attribute to a HTML button and update a model. Bind the model to html and see the variation.
<p>Total click: {{ clickCounter }}</p>
<button ng-click = "clickCounter = clickCounter + 1">Click Me!</button>
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>AngularJS HTML DOM</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "">
<table border = "0">
<tr>
<td><input type = "checkbox" ng-model = "enableDisableButton">Disable Button</td>
<td><button ng-disabled = "enableDisableButton">Click Me!</button></td>
</tr>
<tr>
<td><input type = "checkbox" ng-model = "showHide1">Show Button</td>
<td><button ng-show = "showHide1">Click Me!</button></td>
</tr>
<tr>
<td><input type = "checkbox" ng-model = "showHide2">Hide Button</td>
<td><button ng-hide = "showHide2">Click Me!</button></td>
</tr>
<tr>
<td><p>Total click: {{ clickCounter }}</p></td>
<td><button ng-click = "clickCounter = clickCounter + 1">Click Me!</button></td>
</tr>
</table>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Total click: {{ clickCounter }}
AngularJS supports modular approach. Modules are used to separate logics say services, controllers, application etc. and keep the code clean. We define modules in separate js files and name them as per the module.js file. In this example we're going to create two modules.
Application Module − used to initialize an application with controller(s).
Application Module − used to initialize an application with controller(s).
Controller Module − used to define the controller.
Controller Module − used to define the controller.
var mainApp = angular.module("mainApp", []);
Here we've declared an application mainApp module using angular.module function. We've passed an empty array to it. This array generally contains dependent modules.
mainApp.controller("studentController", function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fees:500,
subjects:[
{name:'Physics',marks:70},
{name:'Chemistry',marks:80},
{name:'Math',marks:65},
{name:'English',marks:75},
{name:'Hindi',marks:67}
],
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
Here we've declared a controller studentController module using mainApp.controller function.
<div ng-app = "mainApp" ng-controller = "studentController">
...
<script src = "mainApp.js"></script>
<script src = "studentController.js"></script>
</div>
Here we've used application module using ng-app directive and controller using ng-controller directive. We've imported mainApp.js and studentController.js in the main html page.
Following example will showcase all the above mentioned modules.
<html>
<head>
<title>Angular JS Modules</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<script src = "/angularjs/src/module/mainApp.js"></script>
<script src = "/angularjs/src/module/studentController.js"></script>
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
<table border = "0">
<tr>
<td>Enter first name:</td>
<td><input type = "text" ng-model = "student.firstName"></td>
</tr>
<tr>
<td>Enter last name: </td>
<td><input type = "text" ng-model = "student.lastName"></td>
</tr>
<tr>
<td>Name: </td>
<td>{{student.fullName()}}</td>
</tr>
<tr>
<td>Subject:</td>
<td>
<table>
<tr>
<th>Name</th>
<th>Marks</th>
</tr>
<tr ng-repeat = "subject in student.subjects">
<td>{{ subject.name }}</td>
<td>{{ subject.marks }}</td>
</tr>
</table>
</td>
</tr>
</table>
</div>
</body>
</html>
var mainApp = angular.module("mainApp", []);
mainApp.controller("studentController", function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fees:500,
subjects:[
{name:'Physics',marks:70},
{name:'Chemistry',marks:80},
{name:'Math',marks:65},
{name:'English',marks:75},
{name:'Hindi',marks:67}
],
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
Open textAngularJS.htm in a web browser. See the result.
AngularJS enriches form filling and validation. We can use ng-click to handle AngularJS click on button and use $dirty and $invalid flags to do the validations in seemless way. Use novalidate with a form declaration to disable any browser specific validation. Forms controls makes heavy use of Angular events. Let's have a quick look on events first.
AngularJS provides multiple events which can be associated with the HTML controls. For example ng-click is normally associated with button. Following are supported events in Angular JS.
ng-click
ng-dbl-click
ng-mousedown
ng-mouseup
ng-mouseenter
ng-mouseleave
ng-mousemove
ng-mouseover
ng-keydown
ng-keyup
ng-keypress
ng-change
Reset data of a form using on-click directive of a button.
<input name = "firstname" type = "text" ng-model = "firstName" required>
<input name = "lastname" type = "text" ng-model = "lastName" required>
<input name = "email" type = "email" ng-model = "email" required>
<button ng-click = "reset()">Reset</button>
<script>
function studentController($scope) {
$scope.reset = function() {
$scope.firstName = "Mahesh";
$scope.lastName = "Parashar";
$scope.email = "[email protected]";
}
$scope.reset();
}
</script>
Following can be used to track error.
$dirty − states that value has been changed.
$dirty − states that value has been changed.
$invalid − states that value entered is invalid.
$invalid − states that value entered is invalid.
$error − states the exact error.
$error − states the exact error.
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>Angular JS Forms</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
<form name = "studentForm" novalidate>
<table border = "0">
<tr>
<td>Enter first name:</td>
<td><input name = "firstname" type = "text" ng-model = "firstName" required>
<span style = "color:red" ng-show = "studentForm.firstname.$dirty && studentForm.firstname.$invalid">
<span ng-show = "studentForm.firstname.$error.required">First Name is required.</span>
</span>
</td>
</tr>
<tr>
<td>Enter last name: </td>
<td><input name = "lastname" type = "text" ng-model = "lastName" required>
<span style = "color:red" ng-show = "studentForm.lastname.$dirty && studentForm.lastname.$invalid">
<span ng-show = "studentForm.lastname.$error.required">Last Name is required.</span>
</span>
</td>
</tr>
<tr>
<td>Email: </td><td><input name = "email" type = "email" ng-model = "email" length = "100" required>
<span style = "color:red" ng-show = "studentForm.email.$dirty && studentForm.email.$invalid">
<span ng-show = "studentForm.email.$error.required">Email is required.</span>
<span ng-show = "studentForm.email.$error.email">Invalid email address.</span>
</span>
</td>
</tr>
<tr>
<td>
<button ng-click = "reset()">Reset</button>
</td>
<td>
<button ng-disabled = "studentForm.firstname.$dirty &&
studentForm.firstname.$invalid || studentForm.lastname.$dirty &&
studentForm.lastname.$invalid || studentForm.email.$dirty &&
studentForm.email.$invalid" ng-click="submit()">Submit</button>
</td>
</tr>
</table>
</form>
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('studentController', function($scope) {
$scope.reset = function() {
$scope.firstName = "Mahesh";
$scope.lastName = "Parashar";
$scope.email = "[email protected]";
}
$scope.reset();
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
HTML does not support embedding html pages within html page. To achieve this functionality following ways are used −
Using Ajax − Make a server call to get the corresponding html page and set it in innerHTML of html control.
Using Ajax − Make a server call to get the corresponding html page and set it in innerHTML of html control.
Using Server Side Includes − JSP, PHP and other web side server technologies can include html pages within a dynamic page.
Using Server Side Includes − JSP, PHP and other web side server technologies can include html pages within a dynamic page.
Using AngularJS, we can embed HTML pages within a HTML page using ng-include directive.
<div ng-app = "" ng-controller = "studentController">
<div ng-include = "'main.htm'"></div>
<div ng-include = "'subjects.htm'"></div>
</div>
<html>
<head>
<title>Angular JS Includes</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "studentController">
<div ng-include = "'/angularjs/src/include/main.htm'"></div>
<div ng-include = "'/angularjs/src/include/subjects.htm'"></div>
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('studentController', function($scope) {
$scope.student = {
firstName: "Mahesh",
lastName: "Parashar",
fees:500,
subjects:[
{name:'Physics',marks:70},
{name:'Chemistry',marks:80},
{name:'Math',marks:65},
{name:'English',marks:75},
{name:'Hindi',marks:67}
],
fullName: function() {
var studentObject;
studentObject = $scope.student;
return studentObject.firstName + " " + studentObject.lastName;
}
};
});
</script>
</body>
</html>
<table border = "0">
<tr>
<td>Enter first name:</td>
<td><input type = "text" ng-model = "student.firstName"></td>
</tr>
<tr>
<td>Enter last name: </td>
<td><input type = "text" ng-model = "student.lastName"></td>
</tr>
<tr>
<td>Name: </td>
<td>{{student.fullName()}}</td>
</tr>
</table>
<p>Subjects:</p>
<table>
<tr>
<th>Name</th>
<th>Marks</th>
</tr>
<tr ng-repeat = "subject in student.subjects">
<td>{{ subject.name }}</td>
<td>{{ subject.marks }}</td>
</tr>
</table>
To run this example, you need to deploy textAngularJS.htm, main.htm and subjects.htm to a webserver. Open textAngularJS.htm using url of your server in a web browser. See the result.
AngularJS provides $https: control which works as a service to read data from the server. The server makes a database call to get the desired records. AngularJS needs data in JSON format. Once the data is ready, $https: can be used to get the data from server in the following manner −
function studentController($scope,$https:) {
var url = "data.txt";
$https:.get(url).success( function(response) {
$scope.students = response;
});
}
Here, the file data.txt contains student records. $https: service makes an ajax call and sets response to its property students. students model can be used to draw tables in HTML.
[
{
"Name" : "Mahesh Parashar",
"RollNo" : 101,
"Percentage" : "80%"
},
{
"Name" : "Dinkar Kad",
"RollNo" : 201,
"Percentage" : "70%"
},
{
"Name" : "Robert",
"RollNo" : 191,
"Percentage" : "75%"
},
{
"Name" : "Julian Joe",
"RollNo" : 111,
"Percentage" : "77%"
}
]
<html>
<head>
<title>Angular JS Includes</title>
<style>
table, th , td {
border: 1px solid grey;
border-collapse: collapse;
padding: 5px;
}
table tr:nth-child(odd) {
background-color: #f2f2f2;
}
table tr:nth-child(even) {
background-color: #ffffff;
}
</style>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "" ng-controller = "studentController">
<table>
<tr>
<th>Name</th>
<th>Roll No</th>
<th>Percentage</th>
</tr>
<tr ng-repeat = "student in students">
<td>{{ student.Name }}</td>
<td>{{ student.RollNo }}</td>
<td>{{ student.Percentage }}</td>
</tr>
</table>
</div>
<script>
function studentController($scope,$http) {
var url = "data.txt";
$http.get(url).then( function(response) {
$scope.students = response.data;
});
}
</script>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js">
</script>
</body>
</html>
To execute this example, you need to deploy testAngularJS.htm and data.txt file to a web server. Open the file testAngularJS.htm using the URL of your server in a web browser and see the result.
AngularJS supports Single Page Application via multiple views on a single page. To do this AngularJS has provided ng-view and ng-template directives and $routeProvider services.
ng-view tag simply creates a place holder where a corresponding view (html or ng-template view) can be placed based on the configuration.
Define a div with ng-view within the main module.
<div ng-app = "mainApp">
...
<div ng-view></div>
</div>
ng-template directive is used to create an html view using script tag. It contains "id" attribute which is used by $routeProvider to map a view with a controller.
Define a script block with type as ng-template within the main module.
<div ng-app = "mainApp">
...
<script type = "text/ng-template" id = "addStudent.htm">
<h2> Add Student </h2>
{{message}}
</script>
</div>
$routeProvider is the key service which set the configuration of urls, map them with the corresponding html page or ng-template, and attach a controller with the same.
Define a script block with main module and set the routing configuration.
var mainApp = angular.module("mainApp", ['ngRoute']);
mainApp.config(['$routeProvider', function($routeProvider) {
$routeProvider
.when('/addStudent', {
templateUrl: 'addStudent.htm', controller: 'AddStudentController'
})
.when('/viewStudents', {
templateUrl: 'viewStudents.htm', controller: 'ViewStudentsController'
})
.otherwise ({
redirectTo: '/addStudent'
});
}]);
Following are the important points to be considered in above example.
$routeProvider is defined as a function under config of mainApp module using key as '$routeProvider'.
$routeProvider is defined as a function under config of mainApp module using key as '$routeProvider'.
$routeProvider.when defines a url "/addStudent" which then is mapped to "addStudent.htm". addStudent.htm should be present in the same path as main html page.If htm page is not defined then ng-template to be used with id="addStudent.htm". We've used ng-template.
$routeProvider.when defines a url "/addStudent" which then is mapped to "addStudent.htm". addStudent.htm should be present in the same path as main html page.If htm page is not defined then ng-template to be used with id="addStudent.htm". We've used ng-template.
"otherwise" is used to set the default view.
"otherwise" is used to set the default view.
"controller" is used to set the corresponding controller for the view.
"controller" is used to set the corresponding controller for the view.
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>Angular JS Views</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js"></script>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular-route.min.js">
</script>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp">
<p><a href = "#addStudent">Add Student</a></p>
<p><a href = "#viewStudents">View Students</a></p>
<div ng-view></div>
<script type = "text/ng-template" id = "addStudent.htm">
<h2> Add Student </h2>
{{message}}
</script>
<script type = "text/ng-template" id = "viewStudents.htm">
<h2> View Students </h2>
{{message}}
</script>
</div>
<script>
var mainApp = angular.module("mainApp", ['ngRoute']);
mainApp.config(['$routeProvider', function($routeProvider) {
$routeProvider
.when('/addStudent', {
templateUrl: 'addStudent.htm',
controller: 'AddStudentController'
})
.when('/viewStudents', {
templateUrl: 'viewStudents.htm',
controller: 'ViewStudentsController'
})
.otherwise({
redirectTo: '/addStudent'
});
}]);
mainApp.controller('AddStudentController', function($scope) {
$scope.message = "This page will be used to display add student form";
});
mainApp.controller('ViewStudentsController', function($scope) {
$scope.message = "This page will be used to display all the students";
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Add Student
View Students
Scope is a special javascript object which plays the role of joining controller with the views. Scope contains the model data. In controllers, model data is accessed via $scope object.
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller("shapeController", function($scope) {
$scope.message = "In shape controller";
$scope.type = "Shape";
});
</script>
Following are the important points to be considered in above example.
$scope is passed as first argument to controller during its constructor definition.
$scope is passed as first argument to controller during its constructor definition.
$scope.message and $scope.type are the models which are to be used in the HTML page.
$scope.message and $scope.type are the models which are to be used in the HTML page.
We've set values to models which will be reflected in the application module whose controller is shapeController.
We've set values to models which will be reflected in the application module whose controller is shapeController.
We can define functions as well in $scope.
We can define functions as well in $scope.
Scope are controllers specific. If we defines nested controllers then child controller will inherit the scope of its parent controller.
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller("shapeController", function($scope) {
$scope.message = "In shape controller";
$scope.type = "Shape";
});
mainApp.controller("circleController", function($scope) {
$scope.message = "In circle controller";
});
</script>
Following are the important points to be considered in above example.
We've set values to models in shapeController.
We've set values to models in shapeController.
We've overridden message in child controller circleController. When "message" is used within module of controller circleController, the overridden message will be used.
We've overridden message in child controller circleController. When "message" is used within module of controller circleController, the overridden message will be used.
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>Angular JS Forms</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "shapeController">
<p>{{message}} <br/> {{type}} </p>
<div ng-controller = "circleController">
<p>{{message}} <br/> {{type}} </p>
</div>
<div ng-controller = "squareController">
<p>{{message}} <br/> {{type}} </p>
</div>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller("shapeController", function($scope) {
$scope.message = "In shape controller";
$scope.type = "Shape";
});
mainApp.controller("circleController", function($scope) {
$scope.message = "In circle controller";
});
mainApp.controller("squareController", function($scope) {
$scope.message = "In square controller";
$scope.type = "Square";
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
{{message}} {{type}}
{{message}} {{type}}
{{message}} {{type}}
AngularJS supports the concepts of "Separation of Concerns" using services architecture. Services are javascript functions and are responsible to do a specific tasks only. This makes them an individual entity which is maintainable and testable. Controllers, filters can call them as on requirement basis. Services are normally injected using dependency injection mechanism of AngularJS.
AngularJS provides many inbuilt services for example, $https:, $route, $window, $location etc. Each service is responsible for a specific task for example, $https: is used to make ajax call to get the server data. $route is used to define the routing information and so on. Inbuilt services are always prefixed with $ symbol.
There are two ways to create a service.
factory
service
Using factory method, we first define a factory and then assign method to it.
var mainApp = angular.module("mainApp", []);
mainApp.factory('MathService', function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b
}
return factory;
});
Using service method, we define a service and then assign method to it. We've also injected an already available service to it.
mainApp.service('CalcService', function(MathService) {
this.square = function(a) {
return MathService.multiply(a,a);
}
});
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>Angular JS Services</title>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "CalcController">
<p>Enter a number: <input type = "number" ng-model = "number" /></p>
<button ng-click = "square()">X<sup>2</sup></button>
<p>Result: {{result}}</p>
</div>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.factory('MathService', function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b
}
return factory;
});
mainApp.service('CalcService', function(MathService) {
this.square = function(a) {
return MathService.multiply(a,a);
}
});
mainApp.controller('CalcController', function($scope, CalcService) {
$scope.square = function() {
$scope.result = CalcService.square($scope.number);
}
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Enter a number:
X2
Result: {{result}}
Dependency Injection is a software design pattern in which components are given their dependencies instead of hard coding them within the component. This relieves a component from locating the dependency and makes dependencies configurable. This helps in making components reusable, maintainable and testable.
AngularJS provides a supreme Dependency Injection mechanism. It provides following core components which can be injected into each other as dependencies.
value
factory
service
provider
constant
value is simple javascript object and it is used to pass values to controller during config phase.
//define a module
var mainApp = angular.module("mainApp", []);
//create a value object as "defaultInput" and pass it a data.
mainApp.value("defaultInput", 5);
...
//inject the value in the controller using its name "defaultInput"
mainApp.controller('CalcController', function($scope, CalcService, defaultInput) {
$scope.number = defaultInput;
$scope.result = CalcService.square($scope.number);
$scope.square = function() {
$scope.result = CalcService.square($scope.number);
}
});
factory is a function which is used to return value. It creates value on demand whenever a service or controller requires. It normally uses a factory function to calculate and return the value.
//define a module
var mainApp = angular.module("mainApp", []);
//create a factory "MathService" which provides a method multiply to return multiplication of two numbers
mainApp.factory('MathService', function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b
}
return factory;
});
//inject the factory "MathService" in a service to utilize the multiply method of factory.
mainApp.service('CalcService', function(MathService) {
this.square = function(a) {
return MathService.multiply(a,a);
}
});
...
service is a singleton javascript object containing a set of functions to perform certain tasks. Services are defined using service() functions and then injected into controllers.
//define a module
var mainApp = angular.module("mainApp", []);
...
//create a service which defines a method square to return square of a number.
mainApp.service('CalcService', function(MathService) {
this.square = function(a) {
return MathService.multiply(a,a);
}
});
//inject the service "CalcService" into the controller
mainApp.controller('CalcController', function($scope, CalcService, defaultInput) {
$scope.number = defaultInput;
$scope.result = CalcService.square($scope.number);
$scope.square = function() {
$scope.result = CalcService.square($scope.number);
}
});
provider is used by AngularJS internally to create services, factory etc. during config phase(phase during which AngularJS bootstraps itself). Below mention script can be used to create MathService that we've created earlier. Provider is a special factory method with a method get() which is used to return the value/service/factory.
//define a module
var mainApp = angular.module("mainApp", []);
...
//create a service using provider which defines a method square to return square of a number.
mainApp.config(function($provide) {
$provide.provider('MathService', function() {
this.$get = function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b;
}
return factory;
};
});
});
constants are used to pass values at config phase considering the fact that value can not be used to be passed during config phase.
mainApp.constant("configParam", "constant value");
Following example will showcase all the above mentioned directives.
<html>
<head>
<title>AngularJS Dependency Injection</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "CalcController">
<p>Enter a number: <input type = "number" ng-model = "number" /></p>
<button ng-click = "square()">X<sup>2</sup></button>
<p>Result: {{result}}</p>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.config(function($provide) {
$provide.provider('MathService', function() {
this.$get = function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b;
}
return factory;
};
});
});
mainApp.value("defaultInput", 5);
mainApp.factory('MathService', function() {
var factory = {};
factory.multiply = function(a, b) {
return a * b;
}
return factory;
});
mainApp.service('CalcService', function(MathService) {
this.square = function(a) {
return MathService.multiply(a,a);
}
});
mainApp.controller('CalcController', function($scope, CalcService, defaultInput) {
$scope.number = defaultInput;
$scope.result = CalcService.square($scope.number);
$scope.square = function() {
$scope.result = CalcService.square($scope.number);
}
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
Enter a number:
X2
Result: {{result}}
Custom directives are used in AngularJS to extend the functionality of HTML. Custom directives are defined using "directive" function. A custom directive simply replaces the element for which it is activated. AngularJS application during bootstrap finds the matching elements and do one time activity using its compile() method of the custom directive then process the element using link() method of the custom directive based on the scope of the directive. AngularJS provides support to create custom directives for following type of elements.
Element directives − Directive activates when a matching element is encountered.
Element directives − Directive activates when a matching element is encountered.
Attribute − Directive activates when a matching attribute is encountered.
Attribute − Directive activates when a matching attribute is encountered.
CSS − Directive activates when a matching css style is encountered.
CSS − Directive activates when a matching css style is encountered.
Comment − Directive activates when a matching comment is encountered.
Comment − Directive activates when a matching comment is encountered.
Define custom html tags.
<student name = "Mahesh"></student><br/>
<student name = "Piyush"></student>
Define custom directive to handle above custom html tags.
var mainApp = angular.module("mainApp", []);
//Create a directive, first parameter is the html element to be attached.
//We are attaching student html tag.
//This directive will be activated as soon as any student element is encountered in html
mainApp.directive('student', function() {
//define the directive object
var directive = {};
//restrict = E, signifies that directive is Element directive
directive.restrict = 'E';
//template replaces the complete element with its text.
directive.template = "Student: <b>{{student.name}}</b> ,
Roll No: <b>{{student.rollno}}</b>";
//scope is used to distinguish each student element based on criteria.
directive.scope = {
student : "=name"
}
//compile is called during application initialization. AngularJS calls
it once when html page is loaded.
directive.compile = function(element, attributes) {
element.css("border", "1px solid #cccccc");
//linkFunction is linked with each element with scope to get the element specific data.
var linkFunction = function($scope, element, attributes) {
element.html("Student: <b>"+$scope.student.name +"</b> ,
Roll No: <b>"+$scope.student.rollno+"</b><br/>");
element.css("background-color", "#ff00ff");
}
return linkFunction;
}
return directive;
});
Define controller to update the scope for directive. Here we are using name attribute's value as scope's child.
mainApp.controller('StudentController', function($scope) {
$scope.Mahesh = {};
$scope.Mahesh.name = "Mahesh Parashar";
$scope.Mahesh.rollno = 1;
$scope.Piyush = {};
$scope.Piyush.name = "Piyush Parashar";
$scope.Piyush.rollno = 2;
});
<html>
<head>
<title>Angular JS Custom Directives</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "StudentController">
<student name = "Mahesh"></student><br/>
<student name = "Piyush"></student>
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.directive('student', function() {
var directive = {};
directive.restrict = 'E';
directive.template = "Student: <b>{{student.name}}</b> ,
Roll No: <b>{{student.rollno}}</b>";
directive.scope = {
student : "=name"
}
directive.compile = function(element, attributes) {
element.css("border", "1px solid #cccccc");
var linkFunction = function($scope, element, attributes) {
element.html("Student: <b>"+$scope.student.name +"</b> ,
Roll No: <b>"+$scope.student.rollno+"</b><br/>");
element.css("background-color", "#ff00ff");
}
return linkFunction;
}
return directive;
});
mainApp.controller('StudentController', function($scope) {
$scope.Mahesh = {};
$scope.Mahesh.name = "Mahesh Parashar";
$scope.Mahesh.rollno = 1;
$scope.Piyush = {};
$scope.Piyush.name = "Piyush Parashar";
$scope.Piyush.rollno = 2;
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
AngularJS supports inbuilt internationalization for three types of filters currency, date and numbers. We only need to incorporate corresponding js according to locale of the country. By default it handles the locale of the browser. For example, to use Danish locale, use following script.
<script src = "https://code.angularjs.org/1.2.5/i18n/angular-locale_da-dk.js">
</script>
<html>
<head>
<title>Angular JS Forms</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "StudentController">
{{fees | currency }} <br/><br/>
{{admissiondate | date }} <br/><br/>
{{rollno | number }}
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<script src = "https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js">
</script>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('StudentController', function($scope) {
$scope.fees = 100;
$scope.admissiondate = new Date();
$scope.rollno = 123.45;
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
<html>
<head>
<title>Angular JS Forms</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "StudentController">
{{fees | currency }} <br/><br/>
{{admissiondate | date }} <br/><br/>
{{rollno | number }}
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<!-- <script src = "https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js">
</script> -->
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('StudentController', function($scope) {
$scope.fees = 100;
$scope.admissiondate = new Date();
$scope.rollno = 123.45;
});
</script>
</body>
</html>
Open textAngularJS.htm in a web browser. See the result.
AngularJS supports inbuilt internationalization for three types of filters : Currency, Date, and Numbers. We only need to incorporate corresponding java script according to locale of the country. By default, it considers the locale of the browser. For example, for Danish locale, use the following script −
<script src = "https://code.angularjs.org/1.2.5/i18n/angular-locale_da-dk.js">
</script>
<html>
<head>
<title>Angular JS Forms</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "StudentController">
{{fees | currency }} <br/><br/>
{{admissiondate | date }} <br/><br/>
{{rollno | number }}
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<script src = "https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js">
</script>
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('StudentController', function($scope) {
$scope.fees = 100;
$scope.admissiondate = new Date();
$scope.rollno = 123.45;
});
</script>
</body>
</html>
Open the file testAngularJS.htm in a web browser and see the result.
<html>
<head>
<title>Angular JS Forms</title>
</head>
<body>
<h2>AngularJS Sample Application</h2>
<div ng-app = "mainApp" ng-controller = "StudentController">
{{fees | currency }} <br/><br/>
{{admissiondate | date }} <br/><br/>
{{rollno | number }}
</div>
<script src = "https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js">
</script>
<!-- <script src = "https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js">
</script> -->
<script>
var mainApp = angular.module("mainApp", []);
mainApp.controller('StudentController', function($scope) {
$scope.fees = 100;
$scope.admissiondate = new Date();
$scope.rollno = 123.45;
});
</script>
</body>
</html>
Open the file testAngularJS.htm in a web browser and see the result.
16 Lectures
1.5 hours
Anadi Sharma
40 Lectures
2.5 hours
Skillbakerystudios
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2884,
"s": 2699,
"text": "AngularJS is an open source web application framework. It was originally developed in 2009 by Misko Hevery and Adam Abrons. It is now maintained by Google. Its latest version is 1.4.3."
},
{
"code": null,
"e": 2961,
"s": 2884,
"text": "Definition of AngularJS as put by its official documentation is as follows −"
},
{
"code": null,
"e": 3363,
"s": 2961,
"text": "AngularJS is a structural framework for dynamic web apps. It lets you use HTML as your template language and lets you extend HTML's syntax to express your application's components clearly and succinctly. Angular's data binding and dependency injection eliminate much of the code you currently have to write. And it all happens within the browser, making it an ideal partner with any server technology."
},
{
"code": null,
"e": 3468,
"s": 3363,
"text": "AngularJS is a powerful JavaScript based development framework to create RICH Internet Application(RIA)."
},
{
"code": null,
"e": 3573,
"s": 3468,
"text": "AngularJS is a powerful JavaScript based development framework to create RICH Internet Application(RIA)."
},
{
"code": null,
"e": 3706,
"s": 3573,
"text": "AngularJS provides developers options to write client side application (using JavaScript) in a clean MVC(Model View Controller) way."
},
{
"code": null,
"e": 3839,
"s": 3706,
"text": "AngularJS provides developers options to write client side application (using JavaScript) in a clean MVC(Model View Controller) way."
},
{
"code": null,
"e": 3975,
"s": 3839,
"text": "Application written in AngularJS is cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser."
},
{
"code": null,
"e": 4111,
"s": 3975,
"text": "Application written in AngularJS is cross-browser compliant. AngularJS automatically handles JavaScript code suitable for each browser."
},
{
"code": null,
"e": 4261,
"s": 4111,
"text": "AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache License version 2.0."
},
{
"code": null,
"e": 4411,
"s": 4261,
"text": "AngularJS is open source, completely free, and used by thousands of developers around the world. It is licensed under the Apache License version 2.0."
},
{
"code": null,
"e": 4543,
"s": 4411,
"text": "Overall, AngularJS is a framework to build large scale and high performance web application while keeping them as easy-to-maintain."
},
{
"code": null,
"e": 4601,
"s": 4543,
"text": "Following are most important core features of AngularJS −"
},
{
"code": null,
"e": 4695,
"s": 4601,
"text": "Data-binding − It is the automatic synchronization of data between model and view components."
},
{
"code": null,
"e": 4789,
"s": 4695,
"text": "Data-binding − It is the automatic synchronization of data between model and view components."
},
{
"code": null,
"e": 4888,
"s": 4789,
"text": "Scope − These are objects that refer to the model. They act as a glue between controller and view."
},
{
"code": null,
"e": 4987,
"s": 4888,
"text": "Scope − These are objects that refer to the model. They act as a glue between controller and view."
},
{
"code": null,
"e": 5069,
"s": 4987,
"text": "Controller − These are JavaScript functions that are bound to a particular scope."
},
{
"code": null,
"e": 5151,
"s": 5069,
"text": "Controller − These are JavaScript functions that are bound to a particular scope."
},
{
"code": null,
"e": 5324,
"s": 5151,
"text": "Services − AngularJS come with several built-in services for example $https: to make a XMLHttpRequests. These are singleton objects which are instantiated only once in app."
},
{
"code": null,
"e": 5497,
"s": 5324,
"text": "Services − AngularJS come with several built-in services for example $https: to make a XMLHttpRequests. These are singleton objects which are instantiated only once in app."
},
{
"code": null,
"e": 5577,
"s": 5497,
"text": "Filters − These select a subset of items from an array and returns a new array."
},
{
"code": null,
"e": 5657,
"s": 5577,
"text": "Filters − These select a subset of items from an array and returns a new array."
},
{
"code": null,
"e": 5891,
"s": 5657,
"text": "Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel...)"
},
{
"code": null,
"e": 6125,
"s": 5891,
"text": "Directives − Directives are markers on DOM elements (such as elements, attributes, css, and more). These can be used to create custom HTML tags that serve as new, custom widgets. AngularJS has built-in directives (ngBind, ngModel...)"
},
{
"code": null,
"e": 6306,
"s": 6125,
"text": "Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using \"partials\"."
},
{
"code": null,
"e": 6487,
"s": 6306,
"text": "Templates − These are the rendered view with information from the controller and model. These can be a single file (like index.html) or multiple views in one page using \"partials\"."
},
{
"code": null,
"e": 6531,
"s": 6487,
"text": "Routing − It is concept of switching views."
},
{
"code": null,
"e": 6575,
"s": 6531,
"text": "Routing − It is concept of switching views."
},
{
"code": null,
"e": 6928,
"s": 6575,
"text": "Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model-View-ViewModel). The Angular JS team refers it humorously as Model View Whatever."
},
{
"code": null,
"e": 7281,
"s": 6928,
"text": "Model View Whatever − MVC is a design pattern for dividing an application into different parts (called Model, View and Controller), each with distinct responsibilities. AngularJS does not implement MVC in the traditional sense, but rather something closer to MVVM (Model-View-ViewModel). The Angular JS team refers it humorously as Model View Whatever."
},
{
"code": null,
"e": 7465,
"s": 7281,
"text": "Deep Linking − Deep linking allows you to encode the state of application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state."
},
{
"code": null,
"e": 7649,
"s": 7465,
"text": "Deep Linking − Deep linking allows you to encode the state of application in the URL so that it can be bookmarked. The application can then be restored from the URL to the same state."
},
{
"code": null,
"e": 7820,
"s": 7649,
"text": "Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test."
},
{
"code": null,
"e": 7991,
"s": 7820,
"text": "Dependency Injection − AngularJS has a built-in dependency injection subsystem that helps the developer by making the application easier to develop, understand, and test."
},
{
"code": null,
"e": 8111,
"s": 7991,
"text": "Following diagram depicts some important parts of AngularJS which we will discuss in detail in the subsequent chapters."
},
{
"code": null,
"e": 8213,
"s": 8111,
"text": "AngularJS provides capability to create Single Page Application in a very clean and maintainable way."
},
{
"code": null,
"e": 8315,
"s": 8213,
"text": "AngularJS provides capability to create Single Page Application in a very clean and maintainable way."
},
{
"code": null,
"e": 8416,
"s": 8315,
"text": "AngularJS provides data binding capability to HTML thus giving user a rich and responsive experience"
},
{
"code": null,
"e": 8517,
"s": 8416,
"text": "AngularJS provides data binding capability to HTML thus giving user a rich and responsive experience"
},
{
"code": null,
"e": 8550,
"s": 8517,
"text": "AngularJS code is unit testable."
},
{
"code": null,
"e": 8583,
"s": 8550,
"text": "AngularJS code is unit testable."
},
{
"code": null,
"e": 8659,
"s": 8583,
"text": "AngularJS uses dependency injection and make use of separation of concerns."
},
{
"code": null,
"e": 8735,
"s": 8659,
"text": "AngularJS uses dependency injection and make use of separation of concerns."
},
{
"code": null,
"e": 8775,
"s": 8735,
"text": "AngularJS provides reusable components."
},
{
"code": null,
"e": 8815,
"s": 8775,
"text": "AngularJS provides reusable components."
},
{
"code": null,
"e": 8885,
"s": 8815,
"text": "With AngularJS, developer write less code and get more functionality."
},
{
"code": null,
"e": 8955,
"s": 8885,
"text": "With AngularJS, developer write less code and get more functionality."
},
{
"code": null,
"e": 9062,
"s": 8955,
"text": "In AngularJS, views are pure html pages, and controllers written in JavaScript do the business processing."
},
{
"code": null,
"e": 9169,
"s": 9062,
"text": "In AngularJS, views are pure html pages, and controllers written in JavaScript do the business processing."
},
{
"code": null,
"e": 9309,
"s": 9169,
"text": "On top of everything, AngularJS applications can run on all major browsers and smart phones including Android and iOS based phones/tablets."
},
{
"code": null,
"e": 9413,
"s": 9309,
"text": "Though AngularJS comes with lots of plus points but same time we should consider the following points −"
},
{
"code": null,
"e": 9590,
"s": 9413,
"text": "Not Secure − Being JavaScript only framework, application written in AngularJS are not safe. Server side authentication and authorization is must to keep an application secure."
},
{
"code": null,
"e": 9767,
"s": 9590,
"text": "Not Secure − Being JavaScript only framework, application written in AngularJS are not safe. Server side authentication and authorization is must to keep an application secure."
},
{
"code": null,
"e": 9886,
"s": 9767,
"text": "Not degradable − If your application user disables JavaScript then user will just see the basic page and nothing more."
},
{
"code": null,
"e": 10005,
"s": 9886,
"text": "Not degradable − If your application user disables JavaScript then user will just see the basic page and nothing more."
},
{
"code": null,
"e": 10079,
"s": 10005,
"text": "The AngularJS framework can be divided into following three major parts −"
},
{
"code": null,
"e": 10155,
"s": 10079,
"text": "ng-app − This directive defines and links an AngularJS application to HTML."
},
{
"code": null,
"e": 10231,
"s": 10155,
"text": "ng-app − This directive defines and links an AngularJS application to HTML."
},
{
"code": null,
"e": 10328,
"s": 10231,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 10425,
"s": 10328,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 10501,
"s": 10425,
"text": "ng-bind − This directive binds the AngularJS Application data to HTML tags."
},
{
"code": null,
"e": 10577,
"s": 10501,
"text": "ng-bind − This directive binds the AngularJS Application data to HTML tags."
},
{
"code": null,
"e": 10759,
"s": 10577,
"text": "In this chapter we will discuss about how to set up AngularJS library to be used in web application development. We will also briefly study the directory structure and its contents."
},
{
"code": null,
"e": 10873,
"s": 10759,
"text": "When you open the link https://angularjs.org/, you will see there are two options to download AngularJS library −"
},
{
"code": null,
"e": 10962,
"s": 10873,
"text": "View on GitHub − Click on this button to go to GitHub and get all of the latest scripts."
},
{
"code": null,
"e": 11051,
"s": 10962,
"text": "View on GitHub − Click on this button to go to GitHub and get all of the latest scripts."
},
{
"code": null,
"e": 11134,
"s": 11051,
"text": "Download AngularJS 1 − Or click on this button, a screen as below would be seen − "
},
{
"code": null,
"e": 11217,
"s": 11134,
"text": "Download AngularJS 1 − Or click on this button, a screen as below would be seen − "
},
{
"code": null,
"e": 11974,
"s": 11217,
"text": "This screen gives various options of using Angular JS as follows −\n\nDownloading and hosting files locally\n\nThere are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.\nWe can also go with the minified, uncompressed or zipped version.\n\n\nCDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded.\n\n"
},
{
"code": null,
"e": 12041,
"s": 11974,
"text": "This screen gives various options of using Angular JS as follows −"
},
{
"code": null,
"e": 12303,
"s": 12041,
"text": "Downloading and hosting files locally\n\nThere are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version.\nWe can also go with the minified, uncompressed or zipped version.\n\n"
},
{
"code": null,
"e": 12341,
"s": 12303,
"text": "Downloading and hosting files locally"
},
{
"code": null,
"e": 12496,
"s": 12341,
"text": "There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version."
},
{
"code": null,
"e": 12651,
"s": 12496,
"text": "There are two different options legacy and latest. The names itself are self descriptive. legacy has version less than 1.2.x and latest has 1.5.x version."
},
{
"code": null,
"e": 12717,
"s": 12651,
"text": "We can also go with the minified, uncompressed or zipped version."
},
{
"code": null,
"e": 12783,
"s": 12717,
"text": "We can also go with the minified, uncompressed or zipped version."
},
{
"code": null,
"e": 13208,
"s": 12783,
"text": "CDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded."
},
{
"code": null,
"e": 13633,
"s": 13208,
"text": "CDN access − You also have access to a CDN. The CDN will give you access around the world to regional data centers that in this case, Google host. This means using CDN moves the responsibility of hosting files from your own servers to a series of external ones. This also offers an advantage that if the visitor to your webpage has already downloaded a copy of AngularJS from the same CDN, it won't have to be re-downloaded."
},
{
"code": null,
"e": 13819,
"s": 13633,
"text": "Try the new angularJS 2 − Click on this button to download Angular JS beta 2 version.This version is very fast, mobile supported and flexible compare to legacy and latest of AngularJS 1"
},
{
"code": null,
"e": 14005,
"s": 13819,
"text": "Try the new angularJS 2 − Click on this button to download Angular JS beta 2 version.This version is very fast, mobile supported and flexible compare to legacy and latest of AngularJS 1"
},
{
"code": null,
"e": 14122,
"s": 14005,
"text": "Now let us write a simple example using AngularJS library. Let us create an HTML file myfirstexample.html as below −"
},
{
"code": null,
"e": 14709,
"s": 14122,
"text": "<!doctype html>\n<html>\n \n <head>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.5.2/angular.min.js\"></script>\n </head>\n \n <body ng-app = \"myapp\">\n \n <div ng-controller = \"HelloController\" >\n <h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>\n </div>\n \n <script>\n angular.module(\"myapp\", [])\n \n .controller(\"HelloController\", function($scope) {\n $scope.helloTo = {};\n $scope.helloTo.title = \"AngularJS\";\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 14764,
"s": 14709,
"text": "Following sections describe the above code in detail −"
},
{
"code": null,
"e": 14854,
"s": 14764,
"text": "We have included the AngularJS JavaScript file in the HTML page so we can use AngularJS −"
},
{
"code": null,
"e": 14967,
"s": 14854,
"text": "<head>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.4.8/angular.min.js\"></script>\n</head>"
},
{
"code": null,
"e": 15130,
"s": 14967,
"text": "If you want to update into latest version of Angular JS, use the following script source \nor else Check the latest version of AngularJS on their official website."
},
{
"code": null,
"e": 15243,
"s": 15130,
"text": "<head>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.5.2/angular.min.js\"></script>\n</head>"
},
{
"code": null,
"e": 15465,
"s": 15243,
"text": "Next we tell what part of the HTML contains the AngularJS app. This done by adding the ng-app attribute to the root HTML element of the AngularJS app. You can either add it to html element or body element as shown below −"
},
{
"code": null,
"e": 15497,
"s": 15465,
"text": "<body ng-app = \"myapp\">\n</body>"
},
{
"code": null,
"e": 15521,
"s": 15497,
"text": "The view is this part −"
},
{
"code": null,
"e": 15639,
"s": 15521,
"text": "<div ng-controller = \"HelloController\" >\n <h2>Welcome {{helloTo.title}} to the world of Tutorialspoint!</h2>\n</div>"
},
{
"code": null,
"e": 15815,
"s": 15639,
"text": "ng-controller tells AngularJS what controller to use with this view. helloTo.title tells AngularJS to write the \"model\" value named helloTo.title to the HTML at this location."
},
{
"code": null,
"e": 15840,
"s": 15815,
"text": "The controller part is −"
},
{
"code": null,
"e": 16023,
"s": 15840,
"text": "<script>\n angular.module(\"myapp\", [])\n \n .controller(\"HelloController\", function($scope) {\n $scope.helloTo = {};\n $scope.helloTo.title = \"AngularJS\";\n });\n</script>"
},
{
"code": null,
"e": 16309,
"s": 16023,
"text": "This code registers a controller function named HelloController in the angular module named myapp. We will study more about modules and controllers in their respective chapters. The controller function is registered in angular via the angular.module(...).controller(...) function call."
},
{
"code": null,
"e": 16478,
"s": 16309,
"text": "The $scope parameter passed to the controller function is the model. The controller function adds a helloTo JavaScript object, and in that object it adds a title field."
},
{
"code": null,
"e": 16583,
"s": 16478,
"text": "Save the above code as myfirstexample.html and open it in any browser. You will see an output as below −"
},
{
"code": null,
"e": 16634,
"s": 16583,
"text": "Welcome AngularJS to the world of Tutorialspoint!\n"
},
{
"code": null,
"e": 16700,
"s": 16634,
"text": "When the page is loaded in the browser, following things happen −"
},
{
"code": null,
"e": 16914,
"s": 16700,
"text": "HTML document is loaded into the browser, and evaluated by the browser. AngularJS JavaScript file is loaded, the angular global object is created. Next, JavaScript which registers controller functions is executed."
},
{
"code": null,
"e": 17128,
"s": 16914,
"text": "HTML document is loaded into the browser, and evaluated by the browser. AngularJS JavaScript file is loaded, the angular global object is created. Next, JavaScript which registers controller functions is executed."
},
{
"code": null,
"e": 17290,
"s": 17128,
"text": "Next AngularJS scans through the HTML to look for AngularJS apps and views. Once view is located, it connects that view to the corresponding controller function."
},
{
"code": null,
"e": 17452,
"s": 17290,
"text": "Next AngularJS scans through the HTML to look for AngularJS apps and views. Once view is located, it connects that view to the corresponding controller function."
},
{
"code": null,
"e": 17606,
"s": 17452,
"text": "Next, AngularJS executes the controller functions. It then renders the views with data from the model populated by the controller. The page is now ready."
},
{
"code": null,
"e": 17760,
"s": 17606,
"text": "Next, AngularJS executes the controller functions. It then renders the views with data from the model populated by the controller. The page is now ready."
},
{
"code": null,
"e": 17952,
"s": 17760,
"text": "Model View Controller or MVC as it is popularly called, is a software design pattern for developing web applications. A Model View Controller pattern is made up of the following three parts −"
},
{
"code": null,
"e": 18032,
"s": 17952,
"text": "Model − It is the lowest level of the pattern responsible for maintaining data."
},
{
"code": null,
"e": 18112,
"s": 18032,
"text": "Model − It is the lowest level of the pattern responsible for maintaining data."
},
{
"code": null,
"e": 18194,
"s": 18112,
"text": "View − It is responsible for displaying all or a portion of the data to the user."
},
{
"code": null,
"e": 18276,
"s": 18194,
"text": "View − It is responsible for displaying all or a portion of the data to the user."
},
{
"code": null,
"e": 18370,
"s": 18276,
"text": "Controller − It is a software Code that controls the interactions between the Model and View."
},
{
"code": null,
"e": 18464,
"s": 18370,
"text": "Controller − It is a software Code that controls the interactions between the Model and View."
},
{
"code": null,
"e": 18875,
"s": 18464,
"text": "MVC is popular because it isolates the application logic from the user interface layer and supports separation of concerns. The controller receives all requests for the application and then works with the model to prepare any data needed by the view. The view then uses the data prepared by the controller to generate a final presentable response. The MVC abstraction can be graphically represented as follows."
},
{
"code": null,
"e": 19026,
"s": 18875,
"text": "The model is responsible for managing application data. It responds to the request from view and to the instructions from controller to update itself."
},
{
"code": null,
"e": 19243,
"s": 19026,
"text": "A presentation of data in a particular format, triggered by the controller's decision to present the data. They are script-based template systems such as JSP, ASP, PHP and very easy to integrate with AJAX technology."
},
{
"code": null,
"e": 19458,
"s": 19243,
"text": "The controller responds to user input and performs interactions on the data model objects. The controller receives input, validates it, and then performs business operations that modify the state of the data model."
},
{
"code": null,
"e": 19566,
"s": 19458,
"text": "AngularJS is a MVC based framework. In the coming chapters, we will see how AngularJS uses MVC methodology."
},
{
"code": null,
"e": 19779,
"s": 19566,
"text": "Before we start with creating actual HelloWorld application using AngularJS, let us see what are the actual parts of a AngularJS application. An AngularJS application consists of following three important parts −"
},
{
"code": null,
"e": 19855,
"s": 19779,
"text": "ng-app − This directive defines and links an AngularJS application to HTML."
},
{
"code": null,
"e": 19931,
"s": 19855,
"text": "ng-app − This directive defines and links an AngularJS application to HTML."
},
{
"code": null,
"e": 20028,
"s": 19931,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 20125,
"s": 20028,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 20201,
"s": 20125,
"text": "ng-bind − This directive binds the AngularJS Application data to HTML tags."
},
{
"code": null,
"e": 20277,
"s": 20201,
"text": "ng-bind − This directive binds the AngularJS Application data to HTML tags."
},
{
"code": null,
"e": 20348,
"s": 20277,
"text": "Being a pure JavaScript framework, It can be added using <Script> tag."
},
{
"code": null,
"e": 20445,
"s": 20348,
"text": "<script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n</script>"
},
{
"code": null,
"e": 20477,
"s": 20445,
"text": "<div ng-app = \"\">\n ...\n</div>"
},
{
"code": null,
"e": 20541,
"s": 20477,
"text": "<p>Enter your Name: <input type = \"text\" ng-model = \"name\"></p>"
},
{
"code": null,
"e": 20586,
"s": 20541,
"text": "<p>Hello <span ng-bind = \"name\"></span>!</p>"
},
{
"code": null,
"e": 20635,
"s": 20586,
"text": "Use above mentioned three steps in an HTML page."
},
{
"code": null,
"e": 20653,
"s": 20635,
"text": "testAngularJS.htm"
},
{
"code": null,
"e": 21091,
"s": 20653,
"text": "<html>\n <head>\n <title>AngularJS First Application</title>\n </head>\n \n <body>\n <h1>Sample Application</h1>\n \n <div ng-app = \"\">\n <p>Enter your Name: <input type = \"text\" ng-model = \"name\"></p>\n <p>Hello <span ng-bind = \"name\"></span>!</p>\n </div>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 21168,
"s": 21091,
"text": "Open textAngularJS.htm in a web browser. Enter your name and see the result."
},
{
"code": null,
"e": 21186,
"s": 21168,
"text": "Enter your Name: "
},
{
"code": null,
"e": 21194,
"s": 21186,
"text": "Hello !"
},
{
"code": null,
"e": 21257,
"s": 21194,
"text": "ng-app directive indicates the start of AngularJS application."
},
{
"code": null,
"e": 21320,
"s": 21257,
"text": "ng-app directive indicates the start of AngularJS application."
},
{
"code": null,
"e": 21463,
"s": 21320,
"text": "ng-model directive then creates a model variable named \"name\" which can be used with the html page and within the div having ng-app directive."
},
{
"code": null,
"e": 21606,
"s": 21463,
"text": "ng-model directive then creates a model variable named \"name\" which can be used with the html page and within the div having ng-app directive."
},
{
"code": null,
"e": 21723,
"s": 21606,
"text": "ng-bind then uses the name model to be displayed in the html span tag whenever user input something in the text box."
},
{
"code": null,
"e": 21840,
"s": 21723,
"text": "ng-bind then uses the name model to be displayed in the html span tag whenever user input something in the text box."
},
{
"code": null,
"e": 21902,
"s": 21840,
"text": "Closing</div> tag indicates the end of AngularJS application."
},
{
"code": null,
"e": 21964,
"s": 21902,
"text": "Closing</div> tag indicates the end of AngularJS application."
},
{
"code": null,
"e": 22111,
"s": 21964,
"text": "AngularJS directives are used to extend HTML. These are special attributes starting with ng- prefix. We're going to discuss following directives −"
},
{
"code": null,
"e": 22168,
"s": 22111,
"text": "ng-app − This directive starts an AngularJS Application."
},
{
"code": null,
"e": 22225,
"s": 22168,
"text": "ng-app − This directive starts an AngularJS Application."
},
{
"code": null,
"e": 22280,
"s": 22225,
"text": "ng-init − This directive initializes application data."
},
{
"code": null,
"e": 22335,
"s": 22280,
"text": "ng-init − This directive initializes application data."
},
{
"code": null,
"e": 22432,
"s": 22335,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 22529,
"s": 22432,
"text": "ng-model − This directive binds the values of AngularJS application data to HTML input controls."
},
{
"code": null,
"e": 22609,
"s": 22529,
"text": "ng-repeat − This directive repeats html elements for each item in a collection."
},
{
"code": null,
"e": 22689,
"s": 22609,
"text": "ng-repeat − This directive repeats html elements for each item in a collection."
},
{
"code": null,
"e": 23070,
"s": 22689,
"text": "ng-app directive starts an AngularJS Application. It defines the root element. It automatically initializes or bootstraps the application when web page containing AngularJS Application is loaded. It is also used to load various AngularJS modules in AngularJS Application. In following example, we've defined a default AngularJS application using ng-app attribute of a div element."
},
{
"code": null,
"e": 23102,
"s": 23070,
"text": "<div ng-app = \"\">\n ...\n</div>"
},
{
"code": null,
"e": 23352,
"s": 23102,
"text": "ng-init directive initializes an AngularJS Application data. It is used to put values to the variables to be used in the application. In following example, we'll initialize an array of countries. We're using JSON syntax to define array of countries."
},
{
"code": null,
"e": 23524,
"s": 23352,
"text": "<div ng-app = \"\" ng-init = \"countries = [{locale:'en-US',name:'United States'}, \n {locale:'en-GB',name:'United Kingdom'}, {locale:'en-FR',name:'France'}]\">\n ...\n</div>"
},
{
"code": null,
"e": 23668,
"s": 23524,
"text": "This directive binds the values of AngularJS application data to HTML input controls. In following example, we've defined a model named \"name\"."
},
{
"code": null,
"e": 23767,
"s": 23668,
"text": "<div ng-app = \"\">\n ...\n <p>Enter your Name: <input type = \"text\" ng-model = \"name\"></p>\n</div>"
},
{
"code": null,
"e": 23902,
"s": 23767,
"text": "ng-repeat directive repeats html elements for each item in a collection. In following example, we've iterated over array of countries."
},
{
"code": null,
"e": 24128,
"s": 23902,
"text": "<div ng-app = \"\">\n ...\n <p>List of Countries with locale:</p>\n \n <ol>\n <li ng-repeat = \"country in countries\">\n {{ 'Country: ' + country.name + ', Locale: ' + country.locale }}\n </li>\n </ol>\n</div>"
},
{
"code": null,
"e": 24196,
"s": 24128,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 25007,
"s": 24196,
"text": "<html>\n <head>\n <title>AngularJS Directives</title>\n </head>\n \n <body>\n <h1>Sample Application</h1>\n \n <div ng-app = \"\" ng-init = \"countries = [{locale:'en-US',name:'United States'}, \n {locale:'en-GB',name:'United Kingdom'}, {locale:'en-FR',name:'France'}]\"> \n <p>Enter your Name: <input type = \"text\" ng-model = \"name\"></p>\n <p>Hello <span ng-bind = \"name\"></span>!</p>\n <p>List of Countries with locale:</p>\n \n <ol>\n <li ng-repeat = \"country in countries\">\n {{ 'Country: ' + country.name + ', Locale: ' + country.locale }}\n </li>\n </ol>\n </div>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 25084,
"s": 25007,
"text": "Open textAngularJS.htm in a web browser. Enter your name and see the result."
},
{
"code": null,
"e": 25102,
"s": 25084,
"text": "Enter your Name: "
},
{
"code": null,
"e": 25110,
"s": 25102,
"text": "Hello !"
},
{
"code": null,
"e": 25141,
"s": 25110,
"text": "List of Countries with locale:"
},
{
"code": null,
"e": 25225,
"s": 25141,
"text": "\n\n {{ 'Country: ' + country.name + ', Locale: ' + country.locale }}\n \n"
},
{
"code": null,
"e": 25307,
"s": 25225,
"text": "\n {{ 'Country: ' + country.name + ', Locale: ' + country.locale }}\n "
},
{
"code": null,
"e": 25592,
"s": 25307,
"text": "Expressions are used to bind application data to html. Expressions are written inside double braces like {{ expression}}. Expressions behaves in same way as ng-bind directives. AngularJS application expressions are pure javascript expressions and outputs the data where they are used."
},
{
"code": null,
"e": 25641,
"s": 25592,
"text": "<p>Expense on Books : {{cost * quantity}} Rs</p>"
},
{
"code": null,
"e": 25702,
"s": 25641,
"text": "<p>Hello {{student.firstname + \" \" + student.lastname}}!</p>"
},
{
"code": null,
"e": 25737,
"s": 25702,
"text": "<p>Roll No: {{student.rollno}}</p>"
},
{
"code": null,
"e": 25770,
"s": 25737,
"text": "<p>Marks(Math): {{marks[3]}}</p>"
},
{
"code": null,
"e": 25839,
"s": 25770,
"text": "Following example will showcase all the above mentioned expressions."
},
{
"code": null,
"e": 26501,
"s": 25839,
"text": "<html>\n <head>\n <title>AngularJS Expressions</title>\n </head>\n \n <body>\n <h1>Sample Application</h1>\n \n <div ng-app = \"\" ng-init = \"quantity = 1;cost = 30; \n student = {firstname:'Mahesh',lastname:'Parashar',rollno:101};\n marks = [80,90,75,73,60]\">\n <p>Hello {{student.firstname + \" \" + student.lastname}}!</p>\n <p>Expense on Books : {{cost * quantity}} Rs</p>\n <p>Roll No: {{student.rollno}}</p>\n <p>Marks(Math): {{marks[3]}}</p>\n </div>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 26558,
"s": 26501,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 26612,
"s": 26558,
"text": "Hello {{student.firstname + \" \" + student.lastname}}!"
},
{
"code": null,
"e": 26654,
"s": 26612,
"text": "Expense on Books : {{cost * quantity}} Rs"
},
{
"code": null,
"e": 26682,
"s": 26654,
"text": "Roll No: {{student.rollno}}"
},
{
"code": null,
"e": 26708,
"s": 26682,
"text": "Marks(Math): {{marks[3]}}"
},
{
"code": null,
"e": 27062,
"s": 26708,
"text": "AngularJS application mainly relies on controllers to control the flow of data in the application. A controller is defined using ng-controller directive. A controller is a JavaScript object containing attributes/properties and functions. Each controller accepts $scope as a parameter which refers to the application/module that controller is to control."
},
{
"code": null,
"e": 27130,
"s": 27062,
"text": "<div ng-app = \"\" ng-controller = \"studentController\">\n ...\n</div>"
},
{
"code": null,
"e": 27275,
"s": 27130,
"text": "Here we've declared a controller studentController using ng-controller directive. As a next step we'll define the studentController as follows −"
},
{
"code": null,
"e": 27637,
"s": 27275,
"text": "<script>\n function studentController($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n }\n</script>"
},
{
"code": null,
"e": 27711,
"s": 27637,
"text": "studentController defined as a JavaScript object with $scope as argument."
},
{
"code": null,
"e": 27785,
"s": 27711,
"text": "studentController defined as a JavaScript object with $scope as argument."
},
{
"code": null,
"e": 27860,
"s": 27785,
"text": "$scope refers to application which is to use the studentController object."
},
{
"code": null,
"e": 27935,
"s": 27860,
"text": "$scope refers to application which is to use the studentController object."
},
{
"code": null,
"e": 27991,
"s": 27935,
"text": "$scope.student is property of studentController object."
},
{
"code": null,
"e": 28047,
"s": 27991,
"text": "$scope.student is property of studentController object."
},
{
"code": null,
"e": 28156,
"s": 28047,
"text": "firstName and lastName are two properties of $scope.student object. We've passed the default values to them."
},
{
"code": null,
"e": 28265,
"s": 28156,
"text": "firstName and lastName are two properties of $scope.student object. We've passed the default values to them."
},
{
"code": null,
"e": 28358,
"s": 28265,
"text": "fullName is the function of $scope.student object whose task is to return the combined name."
},
{
"code": null,
"e": 28451,
"s": 28358,
"text": "fullName is the function of $scope.student object whose task is to return the combined name."
},
{
"code": null,
"e": 28540,
"s": 28451,
"text": "In fullName function we're getting the student object and then return the combined name."
},
{
"code": null,
"e": 28629,
"s": 28540,
"text": "In fullName function we're getting the student object and then return the combined name."
},
{
"code": null,
"e": 28739,
"s": 28629,
"text": "As a note, we can also define the controller object in separate JS file and refer that file in the html page."
},
{
"code": null,
"e": 28849,
"s": 28739,
"text": "As a note, we can also define the controller object in separate JS file and refer that file in the html page."
},
{
"code": null,
"e": 28949,
"s": 28849,
"text": "Now we can use studentController's student property using ng-model or using expressions as follows."
},
{
"code": null,
"e": 29143,
"s": 28949,
"text": "Enter first name: <input type = \"text\" ng-model = \"student.firstName\"><br>\nEnter last name: <input type = \"text\" ng-model = \"student.lastName\"><br>\n<br>\nYou are entering: {{student.fullName()}}"
},
{
"code": null,
"e": 29216,
"s": 29143,
"text": "We've bounded student.firstName and student.lastname to two input boxes."
},
{
"code": null,
"e": 29289,
"s": 29216,
"text": "We've bounded student.firstName and student.lastname to two input boxes."
},
{
"code": null,
"e": 29331,
"s": 29289,
"text": "We've bounded student.fullName() to HTML."
},
{
"code": null,
"e": 29373,
"s": 29331,
"text": "We've bounded student.fullName() to HTML."
},
{
"code": null,
"e": 29502,
"s": 29373,
"text": "Now whenever you type anything in first name and last name input boxes, you can see the full name getting updated automatically."
},
{
"code": null,
"e": 29631,
"s": 29502,
"text": "Now whenever you type anything in first name and last name input boxes, you can see the full name getting updated automatically."
},
{
"code": null,
"e": 29682,
"s": 29631,
"text": "Following example will showcase use of controller."
},
{
"code": null,
"e": 30818,
"s": 29682,
"text": "<html>\n <head>\n <title>Angular JS Controller</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n Enter first name: <input type = \"text\" ng-model = \"student.firstName\"><br>\n <br>\n Enter last name: <input type = \"text\" ng-model = \"student.lastName\"><br>\n <br>\n You are entering: {{student.fullName()}}\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('studentController', function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 30875,
"s": 30818,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 31035,
"s": 30875,
"text": "Filters are used to change modify the data and can be clubbed in expression or directives using pipe character. Following is the list of commonly used filters."
},
{
"code": null,
"e": 31045,
"s": 31035,
"text": "uppercase"
},
{
"code": null,
"e": 31081,
"s": 31045,
"text": "converts a text to upper case text."
},
{
"code": null,
"e": 31091,
"s": 31081,
"text": "lowercase"
},
{
"code": null,
"e": 31127,
"s": 31091,
"text": "converts a text to lower case text."
},
{
"code": null,
"e": 31136,
"s": 31127,
"text": "currency"
},
{
"code": null,
"e": 31171,
"s": 31136,
"text": "formats text in a currency format."
},
{
"code": null,
"e": 31178,
"s": 31171,
"text": "filter"
},
{
"code": null,
"e": 31241,
"s": 31178,
"text": "filter the array to a subset of it based on provided criteria."
},
{
"code": null,
"e": 31249,
"s": 31241,
"text": "orderby"
},
{
"code": null,
"e": 31294,
"s": 31249,
"text": "orders the array based on provided criteria."
},
{
"code": null,
"e": 31434,
"s": 31294,
"text": "Add uppercase filter to an expression using pipe character. Here we've added uppercase filter to print student name in all capital letters."
},
{
"code": null,
"e": 31628,
"s": 31434,
"text": "Enter first name:<input type = \"text\" ng-model = \"student.firstName\">\nEnter last name: <input type = \"text\" ng-model = \"student.lastName\">\nName in Upper Case: {{student.fullName() | uppercase}}"
},
{
"code": null,
"e": 31770,
"s": 31628,
"text": "Add lowercase filter to an expression using pipe character. Here we've added lowercase filter to print student name in all lowercase letters."
},
{
"code": null,
"e": 31964,
"s": 31770,
"text": "Enter first name:<input type = \"text\" ng-model = \"student.firstName\">\nEnter last name: <input type = \"text\" ng-model = \"student.lastName\">\nName in Lower Case: {{student.fullName() | lowercase}}"
},
{
"code": null,
"e": 32110,
"s": 31964,
"text": "Add currency filter to an expression returning number using pipe character. Here we've added currency filter to print fees using currency format."
},
{
"code": null,
"e": 32204,
"s": 32110,
"text": "Enter fees: <input type = \"text\" ng-model = \"student.fees\">\nfees: {{student.fees | currency}}"
},
{
"code": null,
"e": 32273,
"s": 32204,
"text": "To display only required subjects, we've used subjectName as filter."
},
{
"code": null,
"e": 32490,
"s": 32273,
"text": "Enter subject: <input type = \"text\" ng-model = \"subjectName\">\nSubject:\n<ul>\n <li ng-repeat = \"subject in student.subjects | filter: subjectName\">\n {{ subject.name + ', marks:' + subject.marks }}\n </li>\n</ul>"
},
{
"code": null,
"e": 32544,
"s": 32490,
"text": "To order subjects by marks, we've used orderBy marks."
},
{
"code": null,
"e": 32695,
"s": 32544,
"text": "Subject:\n<ul>\n <li ng-repeat = \"subject in student.subjects | orderBy:'marks'\">\n {{ subject.name + ', marks:' + subject.marks }}\n </li>\n</ul>"
},
{
"code": null,
"e": 32760,
"s": 32695,
"text": "Following example will showcase all the above mentioned filters."
},
{
"code": null,
"e": 35311,
"s": 32760,
"text": "<html>\n <head>\n <title>Angular JS Filters</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n <table border = \"0\">\n <tr>\n <td>Enter first name:</td>\n <td><input type = \"text\" ng-model = \"student.firstName\"></td>\n </tr>\n <tr>\n <td>Enter last name: </td>\n <td><input type = \"text\" ng-model = \"student.lastName\"></td>\n </tr>\n <tr>\n <td>Enter fees: </td>\n <td><input type = \"text\" ng-model = \"student.fees\"></td>\n </tr>\n <tr>\n <td>Enter subject: </td>\n <td><input type = \"text\" ng-model = \"subjectName\"></td>\n </tr>\n </table>\n <br/>\n \n <table border = \"0\">\n <tr>\n <td>Name in Upper Case: </td><td>{{student.fullName() | uppercase}}</td>\n </tr>\n <tr>\n <td>Name in Lower Case: </td><td>{{student.fullName() | lowercase}}</td>\n </tr>\n <tr>\n <td>fees: </td><td>{{student.fees | currency}}\n </td>\n </tr>\n <tr>\n <td>Subject:</td>\n <td>\n <ul>\n <li ng-repeat = \"subject in student.subjects | filter: subjectName |orderBy:'marks'\">\n {{ subject.name + ', marks:' + subject.marks }}\n </li>\n </ul>\n </td>\n </tr>\n </table>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('studentController', function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n fees:500,\n \n subjects:[\n {name:'Physics',marks:70},\n {name:'Chemistry',marks:80},\n {name:'Math',marks:65}\n ],\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 35368,
"s": 35311,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 35427,
"s": 35368,
"text": "\n {{ subject.name + ', marks:' + subject.marks }}\n "
},
{
"code": null,
"e": 35599,
"s": 35427,
"text": "Table data is normally repeatable by nature. ng-repeat directive can be used to draw table easily. Following example states the use of ng-repeat directive to draw a table."
},
{
"code": null,
"e": 35806,
"s": 35599,
"text": "<table>\n <tr>\n <th>Name</th>\n <th>Marks</th>\n </tr>\n \n <tr ng-repeat = \"subject in student.subjects\">\n <td>{{ subject.name }}</td>\n <td>{{ subject.marks }}</td>\n </tr>\n</table>"
},
{
"code": null,
"e": 35845,
"s": 35806,
"text": "Table can be styled using CSS Styling."
},
{
"code": null,
"e": 36110,
"s": 35845,
"text": "<style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n \n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n\n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n</style>"
},
{
"code": null,
"e": 36177,
"s": 36110,
"text": "Following example will showcase all the above mentioned directive."
},
{
"code": null,
"e": 38725,
"s": 36177,
"text": "<html>\n <head>\n <title>Angular JS Table</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\"></script>\n \n <style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n\n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n\n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n </style>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n \n <table border = \"0\">\n <tr>\n <td>Enter first name:</td>\n <td><input type = \"text\" ng-model = \"student.firstName\"></td>\n </tr>\n <tr>\n <td>Enter last name: </td>\n <td>\n <input type = \"text\" ng-model = \"student.lastName\">\n </td>\n </tr>\n <tr>\n <td>Name: </td>\n <td>{{student.fullName()}}</td>\n </tr>\n <tr>\n <td>Subject:</td>\n \n <td>\n <table>\n <tr>\n <th>Name</th>.\n <th>Marks</th>\n </tr>\n <tr ng-repeat = \"subject in student.subjects\">\n <td>{{ subject.name }}</td>\n <td>{{ subject.marks }}</td>\n </tr>\n </table>\n </td>\n </tr>\n </table>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('studentController', function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n fees:500,\n \n subjects:[\n {name:'Physics',marks:70},\n {name:'Chemistry',marks:80},\n {name:'Math',marks:65},\n {name:'English',marks:75},\n {name:'Hindi',marks:67}\n ],\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 38782,
"s": 38725,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 38876,
"s": 38782,
"text": "Following directives can be used to bind application data to attributes of HTML DOM Elements."
},
{
"code": null,
"e": 38888,
"s": 38876,
"text": "ng-disabled"
},
{
"code": null,
"e": 38914,
"s": 38888,
"text": "disables a given control."
},
{
"code": null,
"e": 38922,
"s": 38914,
"text": "ng-show"
},
{
"code": null,
"e": 38945,
"s": 38922,
"text": "shows a given control."
},
{
"code": null,
"e": 38953,
"s": 38945,
"text": "ng-hide"
},
{
"code": null,
"e": 38976,
"s": 38953,
"text": "hides a given control."
},
{
"code": null,
"e": 38985,
"s": 38976,
"text": "ng-click"
},
{
"code": null,
"e": 39021,
"s": 38985,
"text": "represents a AngularJS click event."
},
{
"code": null,
"e": 39138,
"s": 39021,
"text": "Add ng-disabled attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation."
},
{
"code": null,
"e": 39274,
"s": 39138,
"text": "<input type = \"checkbox\" ng-model = \"enableDisableButton\">Disable Button\n<button ng-disabled = \"enableDisableButton\">Click Me!</button>"
},
{
"code": null,
"e": 39387,
"s": 39274,
"text": "Add ng-show attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation."
},
{
"code": null,
"e": 39496,
"s": 39387,
"text": "<input type = \"checkbox\" ng-model = \"showHide1\">Show Button\n<button ng-show = \"showHide1\">Click Me!</button>"
},
{
"code": null,
"e": 39609,
"s": 39496,
"text": "Add ng-hide attribute to a HTML button and pass it a model. Bind the model to an checkbox and see the variation."
},
{
"code": null,
"e": 39718,
"s": 39609,
"text": "<input type = \"checkbox\" ng-model = \"showHide2\">Hide Button\n<button ng-hide = \"showHide2\">Click Me!</button>"
},
{
"code": null,
"e": 39824,
"s": 39718,
"text": "Add ng-click attribute to a HTML button and update a model. Bind the model to html and see the variation."
},
{
"code": null,
"e": 39935,
"s": 39824,
"text": "<p>Total click: {{ clickCounter }}</p>\n<button ng-click = \"clickCounter = clickCounter + 1\">Click Me!</button>"
},
{
"code": null,
"e": 40003,
"s": 39935,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 41199,
"s": 40003,
"text": "<html>\n <head>\n <title>AngularJS HTML DOM</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"\">\n <table border = \"0\">\n <tr>\n <td><input type = \"checkbox\" ng-model = \"enableDisableButton\">Disable Button</td>\n <td><button ng-disabled = \"enableDisableButton\">Click Me!</button></td>\n </tr>\n \n <tr>\n <td><input type = \"checkbox\" ng-model = \"showHide1\">Show Button</td>\n <td><button ng-show = \"showHide1\">Click Me!</button></td>\n </tr>\n \n <tr>\n <td><input type = \"checkbox\" ng-model = \"showHide2\">Hide Button</td>\n <td><button ng-hide = \"showHide2\">Click Me!</button></td>\n </tr>\n \n <tr>\n <td><p>Total click: {{ clickCounter }}</p></td>\n <td><button ng-click = \"clickCounter = clickCounter + 1\">Click Me!</button></td>\n </tr>\n </table>\n </div>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 41256,
"s": 41199,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 41288,
"s": 41256,
"text": "Total click: {{ clickCounter }}"
},
{
"code": null,
"e": 41561,
"s": 41288,
"text": "AngularJS supports modular approach. Modules are used to separate logics say services, controllers, application etc. and keep the code clean. We define modules in separate js files and name them as per the module.js file. In this example we're going to create two modules."
},
{
"code": null,
"e": 41636,
"s": 41561,
"text": "Application Module − used to initialize an application with controller(s)."
},
{
"code": null,
"e": 41711,
"s": 41636,
"text": "Application Module − used to initialize an application with controller(s)."
},
{
"code": null,
"e": 41762,
"s": 41711,
"text": "Controller Module − used to define the controller."
},
{
"code": null,
"e": 41813,
"s": 41762,
"text": "Controller Module − used to define the controller."
},
{
"code": null,
"e": 41859,
"s": 41813,
"text": "var mainApp = angular.module(\"mainApp\", []);\n"
},
{
"code": null,
"e": 42024,
"s": 41859,
"text": "Here we've declared an application mainApp module using angular.module function. We've passed an empty array to it. This array generally contains dependent modules."
},
{
"code": null,
"e": 42580,
"s": 42024,
"text": "mainApp.controller(\"studentController\", function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n fees:500,\n \n subjects:[\n {name:'Physics',marks:70},\n {name:'Chemistry',marks:80},\n {name:'Math',marks:65},\n {name:'English',marks:75},\n {name:'Hindi',marks:67}\n ],\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n});"
},
{
"code": null,
"e": 42673,
"s": 42580,
"text": "Here we've declared a controller studentController module using mainApp.controller function."
},
{
"code": null,
"e": 42840,
"s": 42673,
"text": "<div ng-app = \"mainApp\" ng-controller = \"studentController\">\n ...\n <script src = \"mainApp.js\"></script>\n <script src = \"studentController.js\"></script>\n\t\n</div>"
},
{
"code": null,
"e": 43018,
"s": 42840,
"text": "Here we've used application module using ng-app directive and controller using ng-controller directive. We've imported mainApp.js and studentController.js in the main html page."
},
{
"code": null,
"e": 43083,
"s": 43018,
"text": "Following example will showcase all the above mentioned modules."
},
{
"code": null,
"e": 44938,
"s": 43083,
"text": "<html>\n <head>\n <title>Angular JS Modules</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\"></script>\n <script src = \"/angularjs/src/module/mainApp.js\"></script>\n <script src = \"/angularjs/src/module/studentController.js\"></script>\n \n <style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n \n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n \n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n </style>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n \n <table border = \"0\">\n <tr>\n <td>Enter first name:</td>\n <td><input type = \"text\" ng-model = \"student.firstName\"></td>\n </tr>\n \n <tr>\n <td>Enter last name: </td>\n <td><input type = \"text\" ng-model = \"student.lastName\"></td>\n </tr>\n \n <tr>\n <td>Name: </td>\n <td>{{student.fullName()}}</td>\n </tr>\n \n <tr>\n <td>Subject:</td>\n \n <td>\n <table>\n <tr>\n <th>Name</th>\n <th>Marks</th>\n </tr>\n <tr ng-repeat = \"subject in student.subjects\">\n <td>{{ subject.name }}</td>\n <td>{{ subject.marks }}</td>\n </tr>\n </table>\n </td>\n </tr>\n </table>\n </div>\n \n </body>\n</html>"
},
{
"code": null,
"e": 44983,
"s": 44938,
"text": "var mainApp = angular.module(\"mainApp\", []);"
},
{
"code": null,
"e": 45539,
"s": 44983,
"text": "mainApp.controller(\"studentController\", function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n fees:500,\n \n subjects:[\n {name:'Physics',marks:70},\n {name:'Chemistry',marks:80},\n {name:'Math',marks:65},\n {name:'English',marks:75},\n {name:'Hindi',marks:67}\n ],\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n});"
},
{
"code": null,
"e": 45596,
"s": 45539,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 45947,
"s": 45596,
"text": "AngularJS enriches form filling and validation. We can use ng-click to handle AngularJS click on button and use $dirty and $invalid flags to do the validations in seemless way. Use novalidate with a form declaration to disable any browser specific validation. Forms controls makes heavy use of Angular events. Let's have a quick look on events first."
},
{
"code": null,
"e": 46133,
"s": 45947,
"text": "AngularJS provides multiple events which can be associated with the HTML controls. For example ng-click is normally associated with button. Following are supported events in Angular JS."
},
{
"code": null,
"e": 46142,
"s": 46133,
"text": "ng-click"
},
{
"code": null,
"e": 46155,
"s": 46142,
"text": "ng-dbl-click"
},
{
"code": null,
"e": 46168,
"s": 46155,
"text": "ng-mousedown"
},
{
"code": null,
"e": 46179,
"s": 46168,
"text": "ng-mouseup"
},
{
"code": null,
"e": 46193,
"s": 46179,
"text": "ng-mouseenter"
},
{
"code": null,
"e": 46207,
"s": 46193,
"text": "ng-mouseleave"
},
{
"code": null,
"e": 46220,
"s": 46207,
"text": "ng-mousemove"
},
{
"code": null,
"e": 46233,
"s": 46220,
"text": "ng-mouseover"
},
{
"code": null,
"e": 46244,
"s": 46233,
"text": "ng-keydown"
},
{
"code": null,
"e": 46253,
"s": 46244,
"text": "ng-keyup"
},
{
"code": null,
"e": 46265,
"s": 46253,
"text": "ng-keypress"
},
{
"code": null,
"e": 46275,
"s": 46265,
"text": "ng-change"
},
{
"code": null,
"e": 46334,
"s": 46275,
"text": "Reset data of a form using on-click directive of a button."
},
{
"code": null,
"e": 46866,
"s": 46334,
"text": "<input name = \"firstname\" type = \"text\" ng-model = \"firstName\" required>\n<input name = \"lastname\" type = \"text\" ng-model = \"lastName\" required>\n<input name = \"email\" type = \"email\" ng-model = \"email\" required>\n<button ng-click = \"reset()\">Reset</button>\n\n<script>\n function studentController($scope) { \n $scope.reset = function() {\n $scope.firstName = \"Mahesh\";\n $scope.lastName = \"Parashar\";\n $scope.email = \"[email protected]\";\n } \n \n $scope.reset();\n }\n</script>"
},
{
"code": null,
"e": 46904,
"s": 46866,
"text": "Following can be used to track error."
},
{
"code": null,
"e": 46949,
"s": 46904,
"text": "$dirty − states that value has been changed."
},
{
"code": null,
"e": 46994,
"s": 46949,
"text": "$dirty − states that value has been changed."
},
{
"code": null,
"e": 47043,
"s": 46994,
"text": "$invalid − states that value entered is invalid."
},
{
"code": null,
"e": 47092,
"s": 47043,
"text": "$invalid − states that value entered is invalid."
},
{
"code": null,
"e": 47125,
"s": 47092,
"text": "$error − states the exact error."
},
{
"code": null,
"e": 47158,
"s": 47125,
"text": "$error − states the exact error."
},
{
"code": null,
"e": 47226,
"s": 47158,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 50552,
"s": 47226,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\"></script>\n \n <style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n \n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n \n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n </style>\n \n </head>\n <body>\n \n <h2>AngularJS Sample Application</h2>\n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n \n <form name = \"studentForm\" novalidate>\n <table border = \"0\">\n <tr>\n <td>Enter first name:</td>\n <td><input name = \"firstname\" type = \"text\" ng-model = \"firstName\" required>\n <span style = \"color:red\" ng-show = \"studentForm.firstname.$dirty && studentForm.firstname.$invalid\">\n <span ng-show = \"studentForm.firstname.$error.required\">First Name is required.</span>\n </span>\n </td>\n </tr>\n \n <tr>\n <td>Enter last name: </td>\n <td><input name = \"lastname\" type = \"text\" ng-model = \"lastName\" required>\n <span style = \"color:red\" ng-show = \"studentForm.lastname.$dirty && studentForm.lastname.$invalid\">\n <span ng-show = \"studentForm.lastname.$error.required\">Last Name is required.</span>\n </span>\n </td>\n </tr>\n \n <tr>\n <td>Email: </td><td><input name = \"email\" type = \"email\" ng-model = \"email\" length = \"100\" required>\n <span style = \"color:red\" ng-show = \"studentForm.email.$dirty && studentForm.email.$invalid\">\n <span ng-show = \"studentForm.email.$error.required\">Email is required.</span>\n <span ng-show = \"studentForm.email.$error.email\">Invalid email address.</span>\n </span>\n </td>\n </tr>\n \n <tr>\n <td>\n <button ng-click = \"reset()\">Reset</button>\n </td>\n <td>\n <button ng-disabled = \"studentForm.firstname.$dirty &&\n studentForm.firstname.$invalid || studentForm.lastname.$dirty &&\n studentForm.lastname.$invalid || studentForm.email.$dirty &&\n studentForm.email.$invalid\" ng-click=\"submit()\">Submit</button>\n </td>\n </tr>\n\t\t\t\t\t\n </table>\n </form>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('studentController', function($scope) {\n $scope.reset = function() {\n $scope.firstName = \"Mahesh\";\n $scope.lastName = \"Parashar\";\n $scope.email = \"[email protected]\";\n }\n \n $scope.reset();\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 50609,
"s": 50552,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 50726,
"s": 50609,
"text": "HTML does not support embedding html pages within html page. To achieve this functionality following ways are used −"
},
{
"code": null,
"e": 50834,
"s": 50726,
"text": "Using Ajax − Make a server call to get the corresponding html page and set it in innerHTML of html control."
},
{
"code": null,
"e": 50942,
"s": 50834,
"text": "Using Ajax − Make a server call to get the corresponding html page and set it in innerHTML of html control."
},
{
"code": null,
"e": 51065,
"s": 50942,
"text": "Using Server Side Includes − JSP, PHP and other web side server technologies can include html pages within a dynamic page."
},
{
"code": null,
"e": 51188,
"s": 51065,
"text": "Using Server Side Includes − JSP, PHP and other web side server technologies can include html pages within a dynamic page."
},
{
"code": null,
"e": 51276,
"s": 51188,
"text": "Using AngularJS, we can embed HTML pages within a HTML page using ng-include directive."
},
{
"code": null,
"e": 51423,
"s": 51276,
"text": "<div ng-app = \"\" ng-controller = \"studentController\">\n <div ng-include = \"'main.htm'\"></div>\n <div ng-include = \"'subjects.htm'\"></div>\n</div>"
},
{
"code": null,
"e": 53128,
"s": 51423,
"text": "<html>\n <head>\n <title>Angular JS Includes</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n <style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n \n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n \n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n </style>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"studentController\">\n <div ng-include = \"'/angularjs/src/include/main.htm'\"></div>\n <div ng-include = \"'/angularjs/src/include/subjects.htm'\"></div>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('studentController', function($scope) {\n $scope.student = {\n firstName: \"Mahesh\",\n lastName: \"Parashar\",\n fees:500,\n \n subjects:[\n {name:'Physics',marks:70},\n {name:'Chemistry',marks:80},\n {name:'Math',marks:65},\n {name:'English',marks:75},\n {name:'Hindi',marks:67}\n ],\n \n fullName: function() {\n var studentObject;\n studentObject = $scope.student;\n return studentObject.firstName + \" \" + studentObject.lastName;\n }\n };\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 53478,
"s": 53128,
"text": "<table border = \"0\">\n <tr>\n <td>Enter first name:</td>\n <td><input type = \"text\" ng-model = \"student.firstName\"></td>\n </tr>\n \n <tr>\n <td>Enter last name: </td>\n <td><input type = \"text\" ng-model = \"student.lastName\"></td>\n </tr>\n \n <tr>\n <td>Name: </td>\n <td>{{student.fullName()}}</td>\n </tr>\n</table>"
},
{
"code": null,
"e": 53702,
"s": 53478,
"text": "<p>Subjects:</p>\n<table>\n <tr>\n <th>Name</th>\n <th>Marks</th>\n </tr>\n \n <tr ng-repeat = \"subject in student.subjects\">\n <td>{{ subject.name }}</td>\n <td>{{ subject.marks }}</td>\n </tr>\n</table>"
},
{
"code": null,
"e": 53885,
"s": 53702,
"text": "To run this example, you need to deploy textAngularJS.htm, main.htm and subjects.htm to a webserver. Open textAngularJS.htm using url of your server in a web browser. See the result."
},
{
"code": null,
"e": 54171,
"s": 53885,
"text": "AngularJS provides $https: control which works as a service to read data from the server. The server makes a database call to get the desired records. AngularJS needs data in JSON format. Once the data is ready, $https: can be used to get the data from server in the following manner −"
},
{
"code": null,
"e": 54336,
"s": 54171,
"text": "function studentController($scope,$https:) {\n var url = \"data.txt\";\n\n $https:.get(url).success( function(response) {\n $scope.students = response; \n });\n}"
},
{
"code": null,
"e": 54516,
"s": 54336,
"text": "Here, the file data.txt contains student records. $https: service makes an ajax call and sets response to its property students. students model can be used to draw tables in HTML."
},
{
"code": null,
"e": 54882,
"s": 54516,
"text": "[\n {\n \"Name\" : \"Mahesh Parashar\",\n \"RollNo\" : 101,\n \"Percentage\" : \"80%\"\n },\n\t\n {\n \"Name\" : \"Dinkar Kad\",\n \"RollNo\" : 201,\n \"Percentage\" : \"70%\"\n },\n\t\n {\n \"Name\" : \"Robert\",\n \"RollNo\" : 191,\n \"Percentage\" : \"75%\"\n },\n\t\n {\n \"Name\" : \"Julian Joe\",\n \"RollNo\" : 111,\n \"Percentage\" : \"77%\"\n }\n]"
},
{
"code": null,
"e": 56232,
"s": 54882,
"text": "<html>\n <head>\n <title>Angular JS Includes</title>\n \n <style>\n table, th , td {\n border: 1px solid grey;\n border-collapse: collapse;\n padding: 5px;\n }\n \n table tr:nth-child(odd) {\n background-color: #f2f2f2;\n }\n \n table tr:nth-child(even) {\n background-color: #ffffff;\n }\n </style>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n <div ng-app = \"\" ng-controller = \"studentController\">\n \n <table>\n <tr>\n <th>Name</th>\n <th>Roll No</th>\n <th>Percentage</th>\n </tr>\n \n <tr ng-repeat = \"student in students\">\n <td>{{ student.Name }}</td>\n <td>{{ student.RollNo }}</td>\n <td>{{ student.Percentage }}</td>\n </tr>\n </table>\n </div>\n \n <script>\n function studentController($scope,$http) {\n var url = \"data.txt\";\n\n $http.get(url).then( function(response) {\n $scope.students = response.data;\n });\n }\n </script>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.2.15/angular.min.js\">\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 56427,
"s": 56232,
"text": "To execute this example, you need to deploy testAngularJS.htm and data.txt file to a web server. Open the file testAngularJS.htm using the URL of your server in a web browser and see the result."
},
{
"code": null,
"e": 56605,
"s": 56427,
"text": "AngularJS supports Single Page Application via multiple views on a single page. To do this AngularJS has provided ng-view and ng-template directives and $routeProvider services."
},
{
"code": null,
"e": 56743,
"s": 56605,
"text": "ng-view tag simply creates a place holder where a corresponding view (html or ng-template view) can be placed based on the configuration."
},
{
"code": null,
"e": 56793,
"s": 56743,
"text": "Define a div with ng-view within the main module."
},
{
"code": null,
"e": 56860,
"s": 56793,
"text": "<div ng-app = \"mainApp\">\n ...\n <div ng-view></div>\n\n</div> "
},
{
"code": null,
"e": 57023,
"s": 56860,
"text": "ng-template directive is used to create an html view using script tag. It contains \"id\" attribute which is used by $routeProvider to map a view with a controller."
},
{
"code": null,
"e": 57094,
"s": 57023,
"text": "Define a script block with type as ng-template within the main module."
},
{
"code": null,
"e": 57260,
"s": 57094,
"text": "<div ng-app = \"mainApp\">\n ...\n\t\n <script type = \"text/ng-template\" id = \"addStudent.htm\">\n <h2> Add Student </h2>\n {{message}}\n </script>\n\n</div> "
},
{
"code": null,
"e": 57428,
"s": 57260,
"text": "$routeProvider is the key service which set the configuration of urls, map them with the corresponding html page or ng-template, and attach a controller with the same."
},
{
"code": null,
"e": 57502,
"s": 57428,
"text": "Define a script block with main module and set the routing configuration."
},
{
"code": null,
"e": 57925,
"s": 57502,
"text": "var mainApp = angular.module(\"mainApp\", ['ngRoute']);\n\nmainApp.config(['$routeProvider', function($routeProvider) {\n $routeProvider\n \n .when('/addStudent', {\n templateUrl: 'addStudent.htm', controller: 'AddStudentController'\n })\n \n .when('/viewStudents', {\n templateUrl: 'viewStudents.htm', controller: 'ViewStudentsController'\n })\n \n .otherwise ({\n redirectTo: '/addStudent'\n });\n\t\n}]);"
},
{
"code": null,
"e": 57995,
"s": 57925,
"text": "Following are the important points to be considered in above example."
},
{
"code": null,
"e": 58097,
"s": 57995,
"text": "$routeProvider is defined as a function under config of mainApp module using key as '$routeProvider'."
},
{
"code": null,
"e": 58199,
"s": 58097,
"text": "$routeProvider is defined as a function under config of mainApp module using key as '$routeProvider'."
},
{
"code": null,
"e": 58462,
"s": 58199,
"text": "$routeProvider.when defines a url \"/addStudent\" which then is mapped to \"addStudent.htm\". addStudent.htm should be present in the same path as main html page.If htm page is not defined then ng-template to be used with id=\"addStudent.htm\". We've used ng-template."
},
{
"code": null,
"e": 58725,
"s": 58462,
"text": "$routeProvider.when defines a url \"/addStudent\" which then is mapped to \"addStudent.htm\". addStudent.htm should be present in the same path as main html page.If htm page is not defined then ng-template to be used with id=\"addStudent.htm\". We've used ng-template."
},
{
"code": null,
"e": 58770,
"s": 58725,
"text": "\"otherwise\" is used to set the default view."
},
{
"code": null,
"e": 58815,
"s": 58770,
"text": "\"otherwise\" is used to set the default view."
},
{
"code": null,
"e": 58886,
"s": 58815,
"text": "\"controller\" is used to set the corresponding controller for the view."
},
{
"code": null,
"e": 58957,
"s": 58886,
"text": "\"controller\" is used to set the corresponding controller for the view."
},
{
"code": null,
"e": 59025,
"s": 58957,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 60880,
"s": 59025,
"text": "<html>\n <head>\n <title>Angular JS Views</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\"></script>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular-route.min.js\">\n </script>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n <div ng-app = \"mainApp\">\n <p><a href = \"#addStudent\">Add Student</a></p>\n <p><a href = \"#viewStudents\">View Students</a></p>\n <div ng-view></div>\n \n <script type = \"text/ng-template\" id = \"addStudent.htm\">\n <h2> Add Student </h2>\n {{message}}\n </script>\n \n <script type = \"text/ng-template\" id = \"viewStudents.htm\">\n <h2> View Students </h2>\n {{message}}\n </script>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", ['ngRoute']);\n mainApp.config(['$routeProvider', function($routeProvider) {\n $routeProvider\n \n .when('/addStudent', {\n templateUrl: 'addStudent.htm',\n controller: 'AddStudentController'\n })\n \n .when('/viewStudents', {\n templateUrl: 'viewStudents.htm',\n controller: 'ViewStudentsController'\n })\n \n .otherwise({\n redirectTo: '/addStudent'\n });\n }]);\n \n mainApp.controller('AddStudentController', function($scope) {\n $scope.message = \"This page will be used to display add student form\";\n });\n \n mainApp.controller('ViewStudentsController', function($scope) {\n $scope.message = \"This page will be used to display all the students\";\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 60937,
"s": 60880,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 60949,
"s": 60937,
"text": "Add Student"
},
{
"code": null,
"e": 60963,
"s": 60949,
"text": "View Students"
},
{
"code": null,
"e": 61148,
"s": 60963,
"text": "Scope is a special javascript object which plays the role of joining controller with the views. Scope contains the model data. In controllers, model data is accessed via $scope object."
},
{
"code": null,
"e": 61361,
"s": 61148,
"text": "<script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller(\"shapeController\", function($scope) {\n $scope.message = \"In shape controller\";\n $scope.type = \"Shape\";\n });\n</script>"
},
{
"code": null,
"e": 61431,
"s": 61361,
"text": "Following are the important points to be considered in above example."
},
{
"code": null,
"e": 61515,
"s": 61431,
"text": "$scope is passed as first argument to controller during its constructor definition."
},
{
"code": null,
"e": 61599,
"s": 61515,
"text": "$scope is passed as first argument to controller during its constructor definition."
},
{
"code": null,
"e": 61684,
"s": 61599,
"text": "$scope.message and $scope.type are the models which are to be used in the HTML page."
},
{
"code": null,
"e": 61769,
"s": 61684,
"text": "$scope.message and $scope.type are the models which are to be used in the HTML page."
},
{
"code": null,
"e": 61883,
"s": 61769,
"text": "We've set values to models which will be reflected in the application module whose controller is shapeController."
},
{
"code": null,
"e": 61997,
"s": 61883,
"text": "We've set values to models which will be reflected in the application module whose controller is shapeController."
},
{
"code": null,
"e": 62040,
"s": 61997,
"text": "We can define functions as well in $scope."
},
{
"code": null,
"e": 62083,
"s": 62040,
"text": "We can define functions as well in $scope."
},
{
"code": null,
"e": 62219,
"s": 62083,
"text": "Scope are controllers specific. If we defines nested controllers then child controller will inherit the scope of its parent controller."
},
{
"code": null,
"e": 62553,
"s": 62219,
"text": "<script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller(\"shapeController\", function($scope) {\n $scope.message = \"In shape controller\";\n $scope.type = \"Shape\";\n });\n \n mainApp.controller(\"circleController\", function($scope) {\n $scope.message = \"In circle controller\";\n });\n\t\n</script>"
},
{
"code": null,
"e": 62623,
"s": 62553,
"text": "Following are the important points to be considered in above example."
},
{
"code": null,
"e": 62670,
"s": 62623,
"text": "We've set values to models in shapeController."
},
{
"code": null,
"e": 62717,
"s": 62670,
"text": "We've set values to models in shapeController."
},
{
"code": null,
"e": 62886,
"s": 62717,
"text": "We've overridden message in child controller circleController. When \"message\" is used within module of controller circleController, the overridden message will be used."
},
{
"code": null,
"e": 63055,
"s": 62886,
"text": "We've overridden message in child controller circleController. When \"message\" is used within module of controller circleController, the overridden message will be used."
},
{
"code": null,
"e": 63123,
"s": 63055,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 64355,
"s": 63123,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"shapeController\">\n <p>{{message}} <br/> {{type}} </p>\n \n <div ng-controller = \"circleController\">\n <p>{{message}} <br/> {{type}} </p>\n </div>\n \n <div ng-controller = \"squareController\">\n <p>{{message}} <br/> {{type}} </p>\n </div>\n\t\t\t\n </div>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller(\"shapeController\", function($scope) {\n $scope.message = \"In shape controller\";\n $scope.type = \"Shape\";\n });\n \n mainApp.controller(\"circleController\", function($scope) {\n $scope.message = \"In circle controller\";\n });\n \n mainApp.controller(\"squareController\", function($scope) {\n $scope.message = \"In square controller\";\n $scope.type = \"Square\";\n });\n\t\t\t\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 64412,
"s": 64355,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 64435,
"s": 64412,
"text": "{{message}} {{type}} "
},
{
"code": null,
"e": 64458,
"s": 64435,
"text": "{{message}} {{type}} "
},
{
"code": null,
"e": 64481,
"s": 64458,
"text": "{{message}} {{type}} "
},
{
"code": null,
"e": 64868,
"s": 64481,
"text": "AngularJS supports the concepts of \"Separation of Concerns\" using services architecture. Services are javascript functions and are responsible to do a specific tasks only. This makes them an individual entity which is maintainable and testable. Controllers, filters can call them as on requirement basis. Services are normally injected using dependency injection mechanism of AngularJS."
},
{
"code": null,
"e": 65194,
"s": 64868,
"text": "AngularJS provides many inbuilt services for example, $https:, $route, $window, $location etc. Each service is responsible for a specific task for example, $https: is used to make ajax call to get the server data. $route is used to define the routing information and so on. Inbuilt services are always prefixed with $ symbol."
},
{
"code": null,
"e": 65234,
"s": 65194,
"text": "There are two ways to create a service."
},
{
"code": null,
"e": 65242,
"s": 65234,
"text": "factory"
},
{
"code": null,
"e": 65250,
"s": 65242,
"text": "service"
},
{
"code": null,
"e": 65328,
"s": 65250,
"text": "Using factory method, we first define a factory and then assign method to it."
},
{
"code": null,
"e": 65533,
"s": 65328,
"text": "var mainApp = angular.module(\"mainApp\", []);\nmainApp.factory('MathService', function() {\n var factory = {};\n \n factory.multiply = function(a, b) {\n return a * b\n }\n \n return factory;\n}); "
},
{
"code": null,
"e": 65661,
"s": 65533,
"text": "Using service method, we define a service and then assign method to it. We've also injected an already available service to it."
},
{
"code": null,
"e": 65796,
"s": 65661,
"text": "mainApp.service('CalcService', function(MathService) {\n this.square = function(a) {\n return MathService.multiply(a,a);\n }\n});"
},
{
"code": null,
"e": 65864,
"s": 65796,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 67126,
"s": 65864,
"text": "<html>\n <head>\n <title>Angular JS Services</title>\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"CalcController\">\n <p>Enter a number: <input type = \"number\" ng-model = \"number\" /></p>\n <button ng-click = \"square()\">X<sup>2</sup></button>\n <p>Result: {{result}}</p>\n </div>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.factory('MathService', function() {\n var factory = {};\n \n factory.multiply = function(a, b) {\n return a * b\n }\n return factory;\n });\n \n mainApp.service('CalcService', function(MathService) {\n this.square = function(a) {\n return MathService.multiply(a,a);\n }\n });\n \n mainApp.controller('CalcController', function($scope, CalcService) {\n $scope.square = function() {\n $scope.result = CalcService.square($scope.number);\n }\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 67183,
"s": 67126,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 67204,
"s": 67183,
"text": "Enter a number: \nX2\n"
},
{
"code": null,
"e": 67223,
"s": 67204,
"text": "Result: {{result}}"
},
{
"code": null,
"e": 67533,
"s": 67223,
"text": "Dependency Injection is a software design pattern in which components are given their dependencies instead of hard coding them within the component. This relieves a component from locating the dependency and makes dependencies configurable. This helps in making components reusable, maintainable and testable."
},
{
"code": null,
"e": 67687,
"s": 67533,
"text": "AngularJS provides a supreme Dependency Injection mechanism. It provides following core components which can be injected into each other as dependencies."
},
{
"code": null,
"e": 67693,
"s": 67687,
"text": "value"
},
{
"code": null,
"e": 67701,
"s": 67693,
"text": "factory"
},
{
"code": null,
"e": 67709,
"s": 67701,
"text": "service"
},
{
"code": null,
"e": 67718,
"s": 67709,
"text": "provider"
},
{
"code": null,
"e": 67727,
"s": 67718,
"text": "constant"
},
{
"code": null,
"e": 67826,
"s": 67727,
"text": "value is simple javascript object and it is used to pass values to controller during config phase."
},
{
"code": null,
"e": 68330,
"s": 67826,
"text": "//define a module\nvar mainApp = angular.module(\"mainApp\", []);\n\n//create a value object as \"defaultInput\" and pass it a data.\nmainApp.value(\"defaultInput\", 5);\n...\n\n//inject the value in the controller using its name \"defaultInput\"\nmainApp.controller('CalcController', function($scope, CalcService, defaultInput) {\n $scope.number = defaultInput;\n $scope.result = CalcService.square($scope.number);\n \n $scope.square = function() {\n $scope.result = CalcService.square($scope.number);\n }\n});"
},
{
"code": null,
"e": 68524,
"s": 68330,
"text": "factory is a function which is used to return value. It creates value on demand whenever a service or controller requires. It normally uses a factory function to calculate and return the value."
},
{
"code": null,
"e": 69081,
"s": 68524,
"text": "//define a module\nvar mainApp = angular.module(\"mainApp\", []);\n\n//create a factory \"MathService\" which provides a method multiply to return multiplication of two numbers\nmainApp.factory('MathService', function() {\n var factory = {};\n \n factory.multiply = function(a, b) {\n return a * b\n }\n return factory;\n}); \n\n//inject the factory \"MathService\" in a service to utilize the multiply method of factory.\nmainApp.service('CalcService', function(MathService) {\n this.square = function(a) {\n return MathService.multiply(a,a);\n }\n});\n..."
},
{
"code": null,
"e": 69261,
"s": 69081,
"text": "service is a singleton javascript object containing a set of functions to perform certain tasks. Services are defined using service() functions and then injected into controllers."
},
{
"code": null,
"e": 69872,
"s": 69261,
"text": "//define a module\nvar mainApp = angular.module(\"mainApp\", []);\n...\n\n//create a service which defines a method square to return square of a number.\nmainApp.service('CalcService', function(MathService) {\n this.square = function(a) {\n return MathService.multiply(a,a); \n }\n});\n\n//inject the service \"CalcService\" into the controller\nmainApp.controller('CalcController', function($scope, CalcService, defaultInput) {\n $scope.number = defaultInput;\n $scope.result = CalcService.square($scope.number);\n \n $scope.square = function() {\n $scope.result = CalcService.square($scope.number);\n }\n});"
},
{
"code": null,
"e": 70206,
"s": 69872,
"text": "provider is used by AngularJS internally to create services, factory etc. during config phase(phase during which AngularJS bootstraps itself). Below mention script can be used to create MathService that we've created earlier. Provider is a special factory method with a method get() which is used to return the value/service/factory."
},
{
"code": null,
"e": 70651,
"s": 70206,
"text": "//define a module\nvar mainApp = angular.module(\"mainApp\", []);\n...\n\n//create a service using provider which defines a method square to return square of a number.\nmainApp.config(function($provide) {\n $provide.provider('MathService', function() {\n this.$get = function() {\n var factory = {}; \n \n factory.multiply = function(a, b) {\n return a * b; \n }\n return factory;\n };\n });\n});"
},
{
"code": null,
"e": 70783,
"s": 70651,
"text": "constants are used to pass values at config phase considering the fact that value can not be used to be passed during config phase."
},
{
"code": null,
"e": 70835,
"s": 70783,
"text": "mainApp.constant(\"configParam\", \"constant value\");\n"
},
{
"code": null,
"e": 70903,
"s": 70835,
"text": "Following example will showcase all the above mentioned directives."
},
{
"code": null,
"e": 72749,
"s": 70903,
"text": "<html>\n <head>\n <title>AngularJS Dependency Injection</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"CalcController\">\n <p>Enter a number: <input type = \"number\" ng-model = \"number\" /></p>\n <button ng-click = \"square()\">X<sup>2</sup></button>\n <p>Result: {{result}}</p>\n </div>\n \n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.config(function($provide) {\n $provide.provider('MathService', function() {\n this.$get = function() {\n var factory = {};\n \n factory.multiply = function(a, b) {\n return a * b;\n }\n return factory;\n };\n });\n });\n\t\t\t\n mainApp.value(\"defaultInput\", 5);\n \n mainApp.factory('MathService', function() {\n var factory = {};\n \n factory.multiply = function(a, b) {\n return a * b;\n }\n return factory;\n });\n \n mainApp.service('CalcService', function(MathService) {\n this.square = function(a) {\n return MathService.multiply(a,a);\n }\n });\n \n mainApp.controller('CalcController', function($scope, CalcService, defaultInput) {\n $scope.number = defaultInput;\n $scope.result = CalcService.square($scope.number);\n\n $scope.square = function() {\n $scope.result = CalcService.square($scope.number);\n }\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 72806,
"s": 72749,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 72827,
"s": 72806,
"text": "Enter a number: \nX2\n"
},
{
"code": null,
"e": 72846,
"s": 72827,
"text": "Result: {{result}}"
},
{
"code": null,
"e": 73391,
"s": 72846,
"text": "Custom directives are used in AngularJS to extend the functionality of HTML. Custom directives are defined using \"directive\" function. A custom directive simply replaces the element for which it is activated. AngularJS application during bootstrap finds the matching elements and do one time activity using its compile() method of the custom directive then process the element using link() method of the custom directive based on the scope of the directive. AngularJS provides support to create custom directives for following type of elements."
},
{
"code": null,
"e": 73472,
"s": 73391,
"text": "Element directives − Directive activates when a matching element is encountered."
},
{
"code": null,
"e": 73553,
"s": 73472,
"text": "Element directives − Directive activates when a matching element is encountered."
},
{
"code": null,
"e": 73627,
"s": 73553,
"text": "Attribute − Directive activates when a matching attribute is encountered."
},
{
"code": null,
"e": 73701,
"s": 73627,
"text": "Attribute − Directive activates when a matching attribute is encountered."
},
{
"code": null,
"e": 73769,
"s": 73701,
"text": "CSS − Directive activates when a matching css style is encountered."
},
{
"code": null,
"e": 73837,
"s": 73769,
"text": "CSS − Directive activates when a matching css style is encountered."
},
{
"code": null,
"e": 73907,
"s": 73837,
"text": "Comment − Directive activates when a matching comment is encountered."
},
{
"code": null,
"e": 73977,
"s": 73907,
"text": "Comment − Directive activates when a matching comment is encountered."
},
{
"code": null,
"e": 74002,
"s": 73977,
"text": "Define custom html tags."
},
{
"code": null,
"e": 74079,
"s": 74002,
"text": "<student name = \"Mahesh\"></student><br/>\n<student name = \"Piyush\"></student>"
},
{
"code": null,
"e": 74137,
"s": 74079,
"text": "Define custom directive to handle above custom html tags."
},
{
"code": null,
"e": 75525,
"s": 74137,
"text": "var mainApp = angular.module(\"mainApp\", []);\n\n//Create a directive, first parameter is the html element to be attached.\t \n//We are attaching student html tag. \n//This directive will be activated as soon as any student element is encountered in html\n\nmainApp.directive('student', function() {\n //define the directive object\n var directive = {};\n \n //restrict = E, signifies that directive is Element directive\n directive.restrict = 'E';\n \n //template replaces the complete element with its text. \n directive.template = \"Student: <b>{{student.name}}</b> , \n Roll No: <b>{{student.rollno}}</b>\";\n \n //scope is used to distinguish each student element based on criteria.\n directive.scope = {\n student : \"=name\"\n }\n \n //compile is called during application initialization. AngularJS calls \n it once when html page is loaded.\n\t\n directive.compile = function(element, attributes) {\n element.css(\"border\", \"1px solid #cccccc\");\n \n //linkFunction is linked with each element with scope to get the element specific data.\n var linkFunction = function($scope, element, attributes) {\n element.html(\"Student: <b>\"+$scope.student.name +\"</b> , \n Roll No: <b>\"+$scope.student.rollno+\"</b><br/>\");\n element.css(\"background-color\", \"#ff00ff\");\n }\n return linkFunction;\n }\n \n return directive;\n});"
},
{
"code": null,
"e": 75637,
"s": 75525,
"text": "Define controller to update the scope for directive. Here we are using name attribute's value as scope's child."
},
{
"code": null,
"e": 75896,
"s": 75637,
"text": "mainApp.controller('StudentController', function($scope) {\n $scope.Mahesh = {};\n $scope.Mahesh.name = \"Mahesh Parashar\";\n $scope.Mahesh.rollno = 1;\n \n $scope.Piyush = {};\n $scope.Piyush.name = \"Piyush Parashar\";\n $scope.Piyush.rollno = 2;\n});"
},
{
"code": null,
"e": 77685,
"s": 75896,
"text": "<html>\n <head>\n <title>Angular JS Custom Directives</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"StudentController\">\n <student name = \"Mahesh\"></student><br/>\n <student name = \"Piyush\"></student>\n </div>\n\t\t\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.directive('student', function() {\n var directive = {};\n directive.restrict = 'E';\n directive.template = \"Student: <b>{{student.name}}</b> , \n Roll No: <b>{{student.rollno}}</b>\";\n \n directive.scope = {\n student : \"=name\"\n }\n \n directive.compile = function(element, attributes) {\n element.css(\"border\", \"1px solid #cccccc\");\n \n var linkFunction = function($scope, element, attributes) {\n element.html(\"Student: <b>\"+$scope.student.name +\"</b> , \n Roll No: <b>\"+$scope.student.rollno+\"</b><br/>\");\n element.css(\"background-color\", \"#ff00ff\");\n }\n return linkFunction;\n }\n \n return directive;\n });\n \n mainApp.controller('StudentController', function($scope) {\n $scope.Mahesh = {};\n $scope.Mahesh.name = \"Mahesh Parashar\";\n $scope.Mahesh.rollno = 1;\n\n $scope.Piyush = {};\n $scope.Piyush.name = \"Piyush Parashar\";\n $scope.Piyush.rollno = 2;\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 77742,
"s": 77685,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 78032,
"s": 77742,
"text": "AngularJS supports inbuilt internationalization for three types of filters currency, date and numbers. We only need to incorporate corresponding js according to locale of the country. By default it handles the locale of the browser. For example, to use Danish locale, use following script."
},
{
"code": null,
"e": 78122,
"s": 78032,
"text": "<script src = \"https://code.angularjs.org/1.2.5/i18n/angular-locale_da-dk.js\">\n</script> "
},
{
"code": null,
"e": 78990,
"s": 78122,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"StudentController\">\n {{fees | currency }} <br/><br/>\n {{admissiondate | date }} <br/><br/>\n {{rollno | number }}\n </div>\n\t\t\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n <script src = \"https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js\">\n </script>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('StudentController', function($scope) {\n $scope.fees = 100;\n $scope.admissiondate = new Date();\n $scope.rollno = 123.45;\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 79047,
"s": 78990,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 79924,
"s": 79047,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"StudentController\">\n {{fees | currency }} <br/><br/>\n {{admissiondate | date }} <br/><br/>\n {{rollno | number }}\n </div>\n\t\t\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n <!-- <script src = \"https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js\">\n </script> -->\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('StudentController', function($scope) {\n $scope.fees = 100;\n $scope.admissiondate = new Date();\n $scope.rollno = 123.45;\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 79981,
"s": 79924,
"text": "Open textAngularJS.htm in a web browser. See the result."
},
{
"code": null,
"e": 80288,
"s": 79981,
"text": "AngularJS supports inbuilt internationalization for three types of filters : Currency, Date, and Numbers. We only need to incorporate corresponding java script according to locale of the country. By default, it considers the locale of the browser. For example, for Danish locale, use the following script −"
},
{
"code": null,
"e": 80378,
"s": 80288,
"text": "<script src = \"https://code.angularjs.org/1.2.5/i18n/angular-locale_da-dk.js\">\n</script> "
},
{
"code": null,
"e": 81246,
"s": 80378,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"StudentController\">\n {{fees | currency }} <br/><br/>\n {{admissiondate | date }} <br/><br/>\n {{rollno | number }}\n </div>\n\t\t\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n <script src = \"https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js\">\n </script>\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('StudentController', function($scope) {\n $scope.fees = 100;\n $scope.admissiondate = new Date();\n $scope.rollno = 123.45;\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 81315,
"s": 81246,
"text": "Open the file testAngularJS.htm in a web browser and see the result."
},
{
"code": null,
"e": 82192,
"s": 81315,
"text": "<html>\n <head>\n <title>Angular JS Forms</title>\n </head>\n \n <body>\n <h2>AngularJS Sample Application</h2>\n \n <div ng-app = \"mainApp\" ng-controller = \"StudentController\">\n {{fees | currency }} <br/><br/>\n {{admissiondate | date }} <br/><br/>\n {{rollno | number }}\n </div>\n\t\t\n <script src = \"https://ajax.googleapis.com/ajax/libs/angularjs/1.3.14/angular.min.js\">\n </script>\n <!-- <script src = \"https://code.angularjs.org/1.3.14/i18n/angular-locale_da-dk.js\">\n </script> -->\n \n <script>\n var mainApp = angular.module(\"mainApp\", []);\n \n mainApp.controller('StudentController', function($scope) {\n $scope.fees = 100;\n $scope.admissiondate = new Date();\n $scope.rollno = 123.45;\n });\n </script>\n \n </body>\n</html>"
},
{
"code": null,
"e": 82261,
"s": 82192,
"text": "Open the file testAngularJS.htm in a web browser and see the result."
},
{
"code": null,
"e": 82296,
"s": 82261,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 82310,
"s": 82296,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 82345,
"s": 82310,
"text": "\n 40 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 82365,
"s": 82345,
"text": " Skillbakerystudios"
},
{
"code": null,
"e": 82372,
"s": 82365,
"text": " Print"
},
{
"code": null,
"e": 82383,
"s": 82372,
"text": " Add Notes"
}
] |
Calculation of Serial and Non-Serial Schedules in DBMS - GeeksforGeeks | 10 Jan, 2022
The serial execution of a transaction is defined as Schedule. It is made up of a number of transactions each comprising of a number of instructions.
Types of Schedules –
Serial Schedules
Non-Serial Schedules
Calculation of Number of Schedules :Consider there to be N number of transactions t1, t2, t3, ...., tN with k1, k2, k3, ....., kN number of operations respectively.
Total Number of Schedules –(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)Number of Serial Schedules –It is all possible permutations of n transactions = N!Number of Non-Serial Schedules –Total Schedules = Serial Schedules + Non-Serial SchedulesNumber of Non-Serial Schedules = Total Number of Schedules – Number of Serial Schedules((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)
Total Number of Schedules –(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)
(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)
Number of Serial Schedules –It is all possible permutations of n transactions = N!
It is all possible permutations of n transactions = N!
Number of Non-Serial Schedules –Total Schedules = Serial Schedules + Non-Serial SchedulesNumber of Non-Serial Schedules = Total Number of Schedules – Number of Serial Schedules((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)
((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)
Example –Consider there to be three transactions with 1, 2, and 3 operations respectively.We have to find –
Total number of schedules possible.
Total number of serial schedules and non-serial schedules possible.
Solution –Total Number of Schedules,
= (1 + 2 + 3)! / (1! + 2! + 3!) = 6! / 12 = 60
Number of Serial Schedules,
= 3! = 6
Number of Non-Serial Schedules,
= Total Number of Schedules - Number of Serial Schedules
= 60 - 6
= 54
saronind93
DBMS
GATE CS
DBMS
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Second Normal Form (2NF)
Introduction of Relational Algebra in DBMS
Types of Functional dependencies in DBMS
KDD Process in Data Mining
Difference between File System and DBMS
Layers of OSI Model
Types of Operating Systems
TCP/IP Model
Page Replacement Algorithms in Operating Systems
Semaphores in Process Synchronization | [
{
"code": null,
"e": 24224,
"s": 24196,
"text": "\n10 Jan, 2022"
},
{
"code": null,
"e": 24373,
"s": 24224,
"text": "The serial execution of a transaction is defined as Schedule. It is made up of a number of transactions each comprising of a number of instructions."
},
{
"code": null,
"e": 24394,
"s": 24373,
"text": "Types of Schedules –"
},
{
"code": null,
"e": 24411,
"s": 24394,
"text": "Serial Schedules"
},
{
"code": null,
"e": 24432,
"s": 24411,
"text": "Non-Serial Schedules"
},
{
"code": null,
"e": 24597,
"s": 24432,
"text": "Calculation of Number of Schedules :Consider there to be N number of transactions t1, t2, t3, ...., tN with k1, k2, k3, ....., kN number of operations respectively."
},
{
"code": null,
"e": 25012,
"s": 24597,
"text": "Total Number of Schedules –(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)Number of Serial Schedules –It is all possible permutations of n transactions = N!Number of Non-Serial Schedules –Total Schedules = Serial Schedules + Non-Serial SchedulesNumber of Non-Serial Schedules = Total Number of Schedules – Number of Serial Schedules((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)"
},
{
"code": null,
"e": 25100,
"s": 25012,
"text": "Total Number of Schedules –(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)"
},
{
"code": null,
"e": 25161,
"s": 25100,
"text": "(N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)"
},
{
"code": null,
"e": 25244,
"s": 25161,
"text": "Number of Serial Schedules –It is all possible permutations of n transactions = N!"
},
{
"code": null,
"e": 25299,
"s": 25244,
"text": "It is all possible permutations of n transactions = N!"
},
{
"code": null,
"e": 25545,
"s": 25299,
"text": "Number of Non-Serial Schedules –Total Schedules = Serial Schedules + Non-Serial SchedulesNumber of Non-Serial Schedules = Total Number of Schedules – Number of Serial Schedules((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)"
},
{
"code": null,
"e": 25615,
"s": 25545,
"text": "((N1 + N2 + N3 + ..... + Nn)! / (N1! * N2! * N3! * ... * Nn!)) - (N!)"
},
{
"code": null,
"e": 25723,
"s": 25615,
"text": "Example –Consider there to be three transactions with 1, 2, and 3 operations respectively.We have to find –"
},
{
"code": null,
"e": 25759,
"s": 25723,
"text": "Total number of schedules possible."
},
{
"code": null,
"e": 25827,
"s": 25759,
"text": "Total number of serial schedules and non-serial schedules possible."
},
{
"code": null,
"e": 25864,
"s": 25827,
"text": "Solution –Total Number of Schedules,"
},
{
"code": null,
"e": 26056,
"s": 25864,
"text": "= (1 + 2 + 3)! / (1! + 2! + 3!) = 6! / 12 = 60\n\nNumber of Serial Schedules,\n= 3! = 6\n\nNumber of Non-Serial Schedules,\n= Total Number of Schedules - Number of Serial Schedules \n= 60 - 6 \n= 54 "
},
{
"code": null,
"e": 26067,
"s": 26056,
"text": "saronind93"
},
{
"code": null,
"e": 26072,
"s": 26067,
"text": "DBMS"
},
{
"code": null,
"e": 26080,
"s": 26072,
"text": "GATE CS"
},
{
"code": null,
"e": 26085,
"s": 26080,
"text": "DBMS"
},
{
"code": null,
"e": 26183,
"s": 26085,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26192,
"s": 26183,
"text": "Comments"
},
{
"code": null,
"e": 26205,
"s": 26192,
"text": "Old Comments"
},
{
"code": null,
"e": 26230,
"s": 26205,
"text": "Second Normal Form (2NF)"
},
{
"code": null,
"e": 26273,
"s": 26230,
"text": "Introduction of Relational Algebra in DBMS"
},
{
"code": null,
"e": 26314,
"s": 26273,
"text": "Types of Functional dependencies in DBMS"
},
{
"code": null,
"e": 26341,
"s": 26314,
"text": "KDD Process in Data Mining"
},
{
"code": null,
"e": 26381,
"s": 26341,
"text": "Difference between File System and DBMS"
},
{
"code": null,
"e": 26401,
"s": 26381,
"text": "Layers of OSI Model"
},
{
"code": null,
"e": 26428,
"s": 26401,
"text": "Types of Operating Systems"
},
{
"code": null,
"e": 26441,
"s": 26428,
"text": "TCP/IP Model"
},
{
"code": null,
"e": 26490,
"s": 26441,
"text": "Page Replacement Algorithms in Operating Systems"
}
] |
Types of Parallelism in Processing Execution | Data Parallelism means concurrent execution of the same task on each multiple computing core.
Let’s take an example, summing the contents of an array of size N. For a single-core system, one thread would simply sum the elements [0] . . . [N − 1]. For a dual-core system, however, thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1] and while thread B, running on core 1, could sum the elements [N/2] . . . [N − 1]. So the Two threads would be running in parallel on separate computing cores.
Task Parallelism means concurrent execution of the different task on multiple computing cores.
Consider again our example above, an example of task parallelism might involve two threads, each performing a unique statistical operation on the array of elements. Again The threads are operating in parallel on separate computing cores, but each is performing a unique operation.
Bit-level parallelism is a form of parallel computing which is based on increasing processor word size. In this type of parallelism, with increasing the word size reduces the number of instructions the processor must execute in order to perform an operation on variables whose sizes are greater than the length of the word.
E.g., consider a case where an 8-bit processor must add two 16-bit integers. First the 8 lower-order bits from each integer were must added by processor, then add the 8 higher-order bits, and then two instructions to complete a single operation. A processor with 16- bit would be able to complete the operation with single instruction.
Instruction-level parallelism means the simultaneous execution of multiple instructions from a program. While pipelining is a form of ILP, we must exploit it to achieve parallel execution of the instructions in the instruction stream.
for (i=1; i<=100; i= i+1)
y[i] = y[i] + x[i];
This is a parallel loop. Every iteration of the loop can overlap with any other iteration, although within each loop iteration there is little opportunity for overlap. | [
{
"code": null,
"e": 1156,
"s": 1062,
"text": "Data Parallelism means concurrent execution of the same task on each multiple computing core."
},
{
"code": null,
"e": 1574,
"s": 1156,
"text": "Let’s take an example, summing the contents of an array of size N. For a single-core system, one thread would simply sum the elements [0] . . . [N − 1]. For a dual-core system, however, thread A, running on core 0, could sum the elements [0] . . . [N/2 − 1] and while thread B, running on core 1, could sum the elements [N/2] . . . [N − 1]. So the Two threads would be running in parallel on separate computing cores."
},
{
"code": null,
"e": 1669,
"s": 1574,
"text": "Task Parallelism means concurrent execution of the different task on multiple computing cores."
},
{
"code": null,
"e": 1950,
"s": 1669,
"text": "Consider again our example above, an example of task parallelism might involve two threads, each performing a unique statistical operation on the array of elements. Again The threads are operating in parallel on separate computing cores, but each is performing a unique operation."
},
{
"code": null,
"e": 2274,
"s": 1950,
"text": "Bit-level parallelism is a form of parallel computing which is based on increasing processor word size. In this type of parallelism, with increasing the word size reduces the number of instructions the processor must execute in order to perform an operation on variables whose sizes are greater than the length of the word."
},
{
"code": null,
"e": 2610,
"s": 2274,
"text": "E.g., consider a case where an 8-bit processor must add two 16-bit integers. First the 8 lower-order bits from each integer were must added by processor, then add the 8 higher-order bits, and then two instructions to complete a single operation. A processor with 16- bit would be able to complete the operation with single instruction."
},
{
"code": null,
"e": 2845,
"s": 2610,
"text": "Instruction-level parallelism means the simultaneous execution of multiple instructions from a program. While pipelining is a form of ILP, we must exploit it to achieve parallel execution of the instructions in the instruction stream."
},
{
"code": null,
"e": 2894,
"s": 2845,
"text": "for (i=1; i<=100; i= i+1)\n y[i] = y[i] + x[i];"
},
{
"code": null,
"e": 3062,
"s": 2894,
"text": "This is a parallel loop. Every iteration of the loop can overlap with any other iteration, although within each loop iteration there is little opportunity for overlap."
}
] |
How to Build an Online Machine Learning App with Python | by M Khorasani | Towards Data Science | Machine learning is rapidly becoming as ubiquitous as data itself. Quite literally wherever there is an abundance of data, machine learning is somehow intertwined. And for good reason too. After all, what utility would data have if we were not able to use it to predict something about the future? Luckily there is a plethora of toolkits and frameworks that have made it rather simple to deploy ML in Python. Specifically, Sklearn has done a terrifically effective job at making ML accessible to developers.
Even still, there are many technical and STEM-associated individuals that would have plenty of use for ML but who lack the necessary coding skills to see it through. This specific audience would require a solely visual tool with a friendly user interface that would allow them to replicate exactly what others would do with code.
These days tech is dominated by the ‘as-a-service’ paradigm that seeks to offer literally anything as a service, including but not limited to software, databases, CAD, and so forth. There’s no reason to believe that machine learning is somehow exempt from this notion. As you have probably guessed by now, there is indeed the term MLaaS or in other words, machine learning as a service.
While there are quite a few MLaaS platforms out there, most of them appear to be priced quite heftily, with some setting you back upwards of $300 a month. Well truth be told, it doesn't have to be that way. There is in fact a way for you to render your own simplified MLaaS platform as an online web application. And in this tutorial, I will show you exactly how to do that.
But before we proceed any further, please take a look at the final ML app:
You can also test the ML App at the following link:
Link of the ML App
To implement our own version of MLaaS, we will be using the Sklearn library in Python. The novelty of this package is that it simplifies the complexity of training, and testing a variety of ML classifiers including but not limited to logistic regression, naive Bayes, support vector machine, decision tree, and K nearest neighbors.
To render our ML platform as a web app, we will be using Streamlit. Streamlit is a pure Python web framework that has all but closed the app development gap for Python programmers. It is a robust and scalable API with an exceptionally shallow learning curve, that has reduced development time from weeks to minutes.
To plot and visualize your confusion matrix, ROC curve, and perhaps other data, we will be using Plotly — my favorite library when it comes to visualizing interactive charts and visuals in a web application.
Pandas gives you the ability to manipulate, mutate, transform and visualize data in frames, all with a couple of lines of code. In this application, we will use Pandas to read/write our data from/into csv files and to manipulate our data frames based on selected parameters.
Please proceed by firing up Anaconda or any Python IDE of your choice and installing Sklearn as shown below:
pip install sklearn
Subsequently, import all of the required packages:
import streamlit as stimport pandas as pdfrom sklearn.model_selection import train_test_splitfrom sklearn.linear_model import LogisticRegressionfrom sklearn import metricsimport plotly.express as pximport plotly.graph_objects as goimport plotly.figure_factory as ffimport base64import os
In our ML app, we will create a logistic regression classifier, where we first upload a training dataset that we will use to train our model with. We will gauge the performance of our hyperparameters by visualizing the confusion matrix and ROC curve. And finally, when we are happy, we will upload an unlabeled test dataset to apply our ML model.
First, we will create a file upload function as shown below:
Then, we will create multiple widgets to select our feature columns, label column, hyperparameters, and advanced parameters as shown below:
Now we will use our training dataset and the selected parameters to train our model. Simultaneously we will visualize the confusion matrix, ROC curve, and other metrics to gauge the performance of our model:
And finally, once we are satisfied with the performance of our model, we will upload an unlabeled dataset to test our model. Once the dataset has been labeled using our classifier, we will allow the user to download the data as a csv file:
To run your ML app locally proceed by typing the following commands in Anaconda prompt. First, change your root directory to where your source code is saved:
cd C:/Users/...
Then type the following to run your app:
streamlit run file_name.py
And what’s an awesome ML app if you can’t deploy it to the cloud for the world to see? Before you contemplate AWS, Azure, GCP, or Heroku, consider Streamlit itself. With Streamlit’s one-click deployment deploying web apps has never been so simple.
First, push your code to a public GitHub repository and then head over to Streamlit sharing, sign up, and link your GitHub account as follows:
Select the relevant repository and script, and then click Deploy!
And there you have it, an elegant, highly interactive, cloud deployable free ML app at your service.
You can also test the ML App at the following link:
Link of the ML App
If you want to learn more about data visualization, Python, and app development, then feel free to check out the following (affiliate linked) courses:
www.coursera.org
www.coursera.org
www.coursera.org
The source code for this tutorial can be found in the following GitHub repository. | [
{
"code": null,
"e": 679,
"s": 171,
"text": "Machine learning is rapidly becoming as ubiquitous as data itself. Quite literally wherever there is an abundance of data, machine learning is somehow intertwined. And for good reason too. After all, what utility would data have if we were not able to use it to predict something about the future? Luckily there is a plethora of toolkits and frameworks that have made it rather simple to deploy ML in Python. Specifically, Sklearn has done a terrifically effective job at making ML accessible to developers."
},
{
"code": null,
"e": 1009,
"s": 679,
"text": "Even still, there are many technical and STEM-associated individuals that would have plenty of use for ML but who lack the necessary coding skills to see it through. This specific audience would require a solely visual tool with a friendly user interface that would allow them to replicate exactly what others would do with code."
},
{
"code": null,
"e": 1396,
"s": 1009,
"text": "These days tech is dominated by the ‘as-a-service’ paradigm that seeks to offer literally anything as a service, including but not limited to software, databases, CAD, and so forth. There’s no reason to believe that machine learning is somehow exempt from this notion. As you have probably guessed by now, there is indeed the term MLaaS or in other words, machine learning as a service."
},
{
"code": null,
"e": 1771,
"s": 1396,
"text": "While there are quite a few MLaaS platforms out there, most of them appear to be priced quite heftily, with some setting you back upwards of $300 a month. Well truth be told, it doesn't have to be that way. There is in fact a way for you to render your own simplified MLaaS platform as an online web application. And in this tutorial, I will show you exactly how to do that."
},
{
"code": null,
"e": 1846,
"s": 1771,
"text": "But before we proceed any further, please take a look at the final ML app:"
},
{
"code": null,
"e": 1898,
"s": 1846,
"text": "You can also test the ML App at the following link:"
},
{
"code": null,
"e": 1917,
"s": 1898,
"text": "Link of the ML App"
},
{
"code": null,
"e": 2249,
"s": 1917,
"text": "To implement our own version of MLaaS, we will be using the Sklearn library in Python. The novelty of this package is that it simplifies the complexity of training, and testing a variety of ML classifiers including but not limited to logistic regression, naive Bayes, support vector machine, decision tree, and K nearest neighbors."
},
{
"code": null,
"e": 2565,
"s": 2249,
"text": "To render our ML platform as a web app, we will be using Streamlit. Streamlit is a pure Python web framework that has all but closed the app development gap for Python programmers. It is a robust and scalable API with an exceptionally shallow learning curve, that has reduced development time from weeks to minutes."
},
{
"code": null,
"e": 2773,
"s": 2565,
"text": "To plot and visualize your confusion matrix, ROC curve, and perhaps other data, we will be using Plotly — my favorite library when it comes to visualizing interactive charts and visuals in a web application."
},
{
"code": null,
"e": 3048,
"s": 2773,
"text": "Pandas gives you the ability to manipulate, mutate, transform and visualize data in frames, all with a couple of lines of code. In this application, we will use Pandas to read/write our data from/into csv files and to manipulate our data frames based on selected parameters."
},
{
"code": null,
"e": 3157,
"s": 3048,
"text": "Please proceed by firing up Anaconda or any Python IDE of your choice and installing Sklearn as shown below:"
},
{
"code": null,
"e": 3177,
"s": 3157,
"text": "pip install sklearn"
},
{
"code": null,
"e": 3228,
"s": 3177,
"text": "Subsequently, import all of the required packages:"
},
{
"code": null,
"e": 3516,
"s": 3228,
"text": "import streamlit as stimport pandas as pdfrom sklearn.model_selection import train_test_splitfrom sklearn.linear_model import LogisticRegressionfrom sklearn import metricsimport plotly.express as pximport plotly.graph_objects as goimport plotly.figure_factory as ffimport base64import os"
},
{
"code": null,
"e": 3863,
"s": 3516,
"text": "In our ML app, we will create a logistic regression classifier, where we first upload a training dataset that we will use to train our model with. We will gauge the performance of our hyperparameters by visualizing the confusion matrix and ROC curve. And finally, when we are happy, we will upload an unlabeled test dataset to apply our ML model."
},
{
"code": null,
"e": 3924,
"s": 3863,
"text": "First, we will create a file upload function as shown below:"
},
{
"code": null,
"e": 4064,
"s": 3924,
"text": "Then, we will create multiple widgets to select our feature columns, label column, hyperparameters, and advanced parameters as shown below:"
},
{
"code": null,
"e": 4272,
"s": 4064,
"text": "Now we will use our training dataset and the selected parameters to train our model. Simultaneously we will visualize the confusion matrix, ROC curve, and other metrics to gauge the performance of our model:"
},
{
"code": null,
"e": 4512,
"s": 4272,
"text": "And finally, once we are satisfied with the performance of our model, we will upload an unlabeled dataset to test our model. Once the dataset has been labeled using our classifier, we will allow the user to download the data as a csv file:"
},
{
"code": null,
"e": 4670,
"s": 4512,
"text": "To run your ML app locally proceed by typing the following commands in Anaconda prompt. First, change your root directory to where your source code is saved:"
},
{
"code": null,
"e": 4686,
"s": 4670,
"text": "cd C:/Users/..."
},
{
"code": null,
"e": 4727,
"s": 4686,
"text": "Then type the following to run your app:"
},
{
"code": null,
"e": 4754,
"s": 4727,
"text": "streamlit run file_name.py"
},
{
"code": null,
"e": 5002,
"s": 4754,
"text": "And what’s an awesome ML app if you can’t deploy it to the cloud for the world to see? Before you contemplate AWS, Azure, GCP, or Heroku, consider Streamlit itself. With Streamlit’s one-click deployment deploying web apps has never been so simple."
},
{
"code": null,
"e": 5145,
"s": 5002,
"text": "First, push your code to a public GitHub repository and then head over to Streamlit sharing, sign up, and link your GitHub account as follows:"
},
{
"code": null,
"e": 5211,
"s": 5145,
"text": "Select the relevant repository and script, and then click Deploy!"
},
{
"code": null,
"e": 5312,
"s": 5211,
"text": "And there you have it, an elegant, highly interactive, cloud deployable free ML app at your service."
},
{
"code": null,
"e": 5364,
"s": 5312,
"text": "You can also test the ML App at the following link:"
},
{
"code": null,
"e": 5383,
"s": 5364,
"text": "Link of the ML App"
},
{
"code": null,
"e": 5534,
"s": 5383,
"text": "If you want to learn more about data visualization, Python, and app development, then feel free to check out the following (affiliate linked) courses:"
},
{
"code": null,
"e": 5551,
"s": 5534,
"text": "www.coursera.org"
},
{
"code": null,
"e": 5568,
"s": 5551,
"text": "www.coursera.org"
},
{
"code": null,
"e": 5585,
"s": 5568,
"text": "www.coursera.org"
}
] |
Java Variables | Variables are containers for storing data values.
In Java, there are different types of variables, for example:
String - stores text, such as "Hello". String values are
surrounded by double quotes
int - stores integers (whole numbers), without decimals, such as 123 or -123
float - stores floating point numbers, with decimals, such as 19.99 or -19.99
char - stores single characters, such as
'a' or 'B'. Char values are surrounded by single quotes
boolean - stores values with two states:
true or false
To create a variable, you must specify the type and assign it a value:
type variableName = value;
Where type is one of Java's types (such as int or String), and
variableName is the name of the variable (such as x or
name). The equal sign is used to assign values to the variable.
To create a variable that should store text, look at the following example:
Create a variable called name of type String and assign it the value "John":
String name = "John";
System.out.println(name);
Try it Yourself »
To create a variable that should store a number, look at the following example:
Create a variable called myNum of type int and assign it the value 15:
int myNum = 15;
System.out.println(myNum);
Try it Yourself »
You can also declare a variable without assigning the value, and assign the value later:
int myNum;
myNum = 15;
System.out.println(myNum);
Try it Yourself »
Note that if you assign a new value to an existing variable, it will overwrite the previous value:
Change the value of myNum from 15 to 20:
int myNum = 15;
myNum = 20; // myNum is now 20
System.out.println(myNum);
Try it Yourself »
If you don't want others (or yourself) to overwrite existing values, use the final keyword (this will declare the variable as "final" or "constant", which means unchangeable and read-only):
final int myNum = 15;
myNum = 20; // will generate an error: cannot assign a value to a final variable
Try it Yourself »
A demonstration of how to declare variables of other types:
int myNum = 5;
float myFloatNum = 5.99f;
char myLetter = 'D';
boolean myBool = true;
String myText = "Hello";
You will learn more about data types in the next section.
Create a variable named carName and assign the value Volvo to it.
= ;
Start the Exercise
We just launchedW3Schools videos
Get certifiedby completinga course today!
If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:
[email protected]
Your message has been sent to W3Schools. | [
{
"code": null,
"e": 51,
"s": 0,
"text": "Variables are containers for storing data values. "
},
{
"code": null,
"e": 113,
"s": 51,
"text": "In Java, there are different types of variables, for example:"
},
{
"code": null,
"e": 200,
"s": 113,
"text": "String - stores text, such as \"Hello\". String values are \n surrounded by double quotes"
},
{
"code": null,
"e": 277,
"s": 200,
"text": "int - stores integers (whole numbers), without decimals, such as 123 or -123"
},
{
"code": null,
"e": 355,
"s": 277,
"text": "float - stores floating point numbers, with decimals, such as 19.99 or -19.99"
},
{
"code": null,
"e": 454,
"s": 355,
"text": "char - stores single characters, such as \n 'a' or 'B'. Char values are surrounded by single quotes"
},
{
"code": null,
"e": 511,
"s": 454,
"text": "boolean - stores values with two states: \n true or false"
},
{
"code": null,
"e": 582,
"s": 511,
"text": "To create a variable, you must specify the type and assign it a value:"
},
{
"code": null,
"e": 610,
"s": 582,
"text": "type variableName = value;\n"
},
{
"code": null,
"e": 793,
"s": 610,
"text": "Where type is one of Java's types (such as int or String), and \nvariableName is the name of the variable (such as x or\nname). The equal sign is used to assign values to the variable."
},
{
"code": null,
"e": 869,
"s": 793,
"text": "To create a variable that should store text, look at the following example:"
},
{
"code": null,
"e": 946,
"s": 869,
"text": "Create a variable called name of type String and assign it the value \"John\":"
},
{
"code": null,
"e": 995,
"s": 946,
"text": "String name = \"John\";\nSystem.out.println(name);\n"
},
{
"code": null,
"e": 1015,
"s": 995,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1095,
"s": 1015,
"text": "To create a variable that should store a number, look at the following example:"
},
{
"code": null,
"e": 1166,
"s": 1095,
"text": "Create a variable called myNum of type int and assign it the value 15:"
},
{
"code": null,
"e": 1210,
"s": 1166,
"text": "int myNum = 15;\nSystem.out.println(myNum);\n"
},
{
"code": null,
"e": 1230,
"s": 1210,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1319,
"s": 1230,
"text": "You can also declare a variable without assigning the value, and assign the value later:"
},
{
"code": null,
"e": 1370,
"s": 1319,
"text": "int myNum;\nmyNum = 15;\nSystem.out.println(myNum);\n"
},
{
"code": null,
"e": 1390,
"s": 1370,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1489,
"s": 1390,
"text": "Note that if you assign a new value to an existing variable, it will overwrite the previous value:"
},
{
"code": null,
"e": 1530,
"s": 1489,
"text": "Change the value of myNum from 15 to 20:"
},
{
"code": null,
"e": 1606,
"s": 1530,
"text": "int myNum = 15;\nmyNum = 20; // myNum is now 20\nSystem.out.println(myNum);\n"
},
{
"code": null,
"e": 1626,
"s": 1606,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 1816,
"s": 1626,
"text": "If you don't want others (or yourself) to overwrite existing values, use the final keyword (this will declare the variable as \"final\" or \"constant\", which means unchangeable and read-only):"
},
{
"code": null,
"e": 1921,
"s": 1816,
"text": "final int myNum = 15;\nmyNum = 20; // will generate an error: cannot assign a value to a final variable\n"
},
{
"code": null,
"e": 1941,
"s": 1921,
"text": "\nTry it Yourself »\n"
},
{
"code": null,
"e": 2001,
"s": 1941,
"text": "A demonstration of how to declare variables of other types:"
},
{
"code": null,
"e": 2112,
"s": 2001,
"text": "int myNum = 5;\nfloat myFloatNum = 5.99f;\nchar myLetter = 'D';\nboolean myBool = true;\nString myText = \"Hello\";\n"
},
{
"code": null,
"e": 2170,
"s": 2112,
"text": "You will learn more about data types in the next section."
},
{
"code": null,
"e": 2236,
"s": 2170,
"text": "Create a variable named carName and assign the value Volvo to it."
},
{
"code": null,
"e": 2243,
"s": 2236,
"text": " = ;\n"
},
{
"code": null,
"e": 2262,
"s": 2243,
"text": "Start the Exercise"
},
{
"code": null,
"e": 2295,
"s": 2262,
"text": "We just launchedW3Schools videos"
},
{
"code": null,
"e": 2337,
"s": 2295,
"text": "Get certifiedby completinga course today!"
},
{
"code": null,
"e": 2444,
"s": 2337,
"text": "If you want to report an error, or if you want to make a suggestion, do not hesitate to send us an e-mail:"
},
{
"code": null,
"e": 2463,
"s": 2444,
"text": "[email protected]"
}
] |
Meme Text Generation with a Deep Convolutional Network in Keras & Tensorflow | by dylan wenzlau | Towards Data Science | The goal of this post is to describe end-to-end how to build a deep conv net for text generation, but in greater depth than some of the existing articles I’ve read. This will be a practical guide and while I suggest many best practices, I am not an expert in deep learning theory nor have I read every single relevant research paper. I’ll cover takeaways about data cleaning, training, model design, and prediction algorithms.
The raw dataset we’ll draw from is ~100M public meme captions by users of the Imgflip Meme Generator. To speed up training and reduce complexity of the model, we only use the 48 most popular memes and exactly 20,000 captions per meme, totaling 960,000 captions as training data. However, since we are building a generational model there will be one training example for each character in the caption, totaling ~45,000,000 training examples. Character-level generation rather than word-level was chosen here because memes tend to use spelling and grammar... uh... creatively. Also, character-level deep learning is a superset of word-level deep learning and can therefore achieve higher accuracy if you have enough data and your model design is sufficient to learn all the complexity. If you try the finished model below, you’ll also see that char-level can be more fun!
Below is what the training data looks if the first meme caption is “make all the memes”. I’m omitting the code for reading from the database and performing initial cleaning because it’s very standard and could be done in multiple ways.
training_data = [ ["000000061533 0 ", "m"], ["000000061533 0 m", "a"], ["000000061533 0 ma", "k"], ["000000061533 0 mak", "e"], ["000000061533 0 make", "|"], ["000000061533 1 make|", "a"], ["000000061533 1 make|a", "l"], ["000000061533 1 make|al", "l"], ["000000061533 1 make|all", " "], ["000000061533 1 make|all ", "t"], ["000000061533 1 make|all t", "h"], ["000000061533 1 make|all th", "e"], ["000000061533 1 make|all the", " "], ["000000061533 1 make|all the ", "m"], ["000000061533 1 make|all the m", "e"], ["000000061533 1 make|all the me", "m"], ["000000061533 1 make|all the mem", "e"], ["000000061533 1 make|all the meme", "s"], ["000000061533 1 make|all the memes", "|"], ... 45 million more rows here ...]# we'll need our feature text and labels as separate arrays latertexts = [row[0] for row in training_data]labels = [row[1] for row in training_data]
Like most things in machine learning, this is just a classification problem. We are classifying the text strings on the left into one of ~70 different buckets where the buckets are characters.
Let’s unpack the format.
The first 12 characters are the meme template ID. This allows the model to differentiate between the 48 distinct memes we’re feeding it. The string is left padded with zeros so all IDs are the same length.
The 0 or 1 is the index of the current text box being predicted, generally 0 is the top box and 1 is the bottom box, although many memes are more complex. The two spaces are just extra spacing to ensure the model can tell the box index apart from the template ID and meme text. Note: it is critical that our convolution kernel width (seen later in this post) is no wider than the 4 spaces plus the index character, aka ≤ 5.
After that is the text of the meme so far, with | used as the end-of-text-box character.
Finally, the last character by itself (the 2nd array item) is the next character in the sequence.
Several cleaning techniques were used on the data before training:
Trim leading and trailing whitespace and replace repeated whitespace (\s+) with a single space character.
Apply a minimum string length of 10 characters so we don’t generate boring one-word or one-letter memes.
Apply a maximum string length of 82 characters so we don’t generate super long memes and because the model will train faster. 82 is arbitrary, it just made the overall training strings about 100 characters.
Convert everything to lowercase to reduce the number of characters the model must learn, and because many memes are just all caps anyway.
Skip meme captions with non-ascii characters to reduce the complexity the model has to learn. This means that both our feature text and labels will come from a set of only ~70 characters, depending on which ascii characters the training data happens to include.
Skip meme captions containing the pipe character | since it’s our special end-of-text-box character.
Run the text through a language detection library and skip meme captions that are unlikely to be English. Improves quality of the text we generate since the model only has to learn one language, and identical character sequences can have meanings in multiple languages.
Skip duplicate meme captions we’ve already added to the training set to reduce the chance the model simply memorizes entire meme captions.
Our data is now ready to feed into a neural net!
That may or may not be a word. Fun fact: apparently we in deep learning are heathens for calling multi-dimensional arrays tensors rather than the mathematical term “holors”, which is a generalized tensor not requiring particular transformation properties. But whatever, tensor sounds cooler than holor ;)
First, here is the python import code for everything we’ll do below:
from keras import Sequentialfrom keras.preprocessing.sequence import pad_sequencesfrom keras.callbacks import ModelCheckpointfrom keras.layers import Dense, Dropout, GlobalMaxPooling1D, Conv1D, MaxPooling1D, Embeddingfrom keras.layers.normalization import BatchNormalizationimport numpy as npimport util # util is a custom file I wrote, see github link below
Neural nets operate on tensors of numbers (vectors/matrices/multi-dimensional arrays), so we need to restructure our text accordingly. Each of our training texts will be transformed into an array of integers (a rank 1 tensor) by replacing each character with its corresponding index from the array of ~70 unique characters found in the data. The order of the character array is arbitrary, but we choose to order it by character frequency so it stays roughly consistent when changing the amount of training data. Keras has a Tokenizer class which you can use for this (with char_level=True), but I wrote my own util functions because they were must faster than the Keras tokenizer.
# output: {' ': 1, '0': 2, 'e': 3, ... }char_to_int = util.map_char_to_int(texts)# output: [[2, 2, 27, 11, ...], ... ]sequences = util.texts_to_sequences(texts, char_to_int)labels = [char_to_int[char] for char in labels]
These are the characters our data contains in order of frequency:
0etoains|rhl1udmy2cg4p53wf6b897kv."!?j:x,*"z-q/&$)(#%+_@=>;<][~`^}{\
Next we will pad our sequences of integers with leading zeros so they are all the same length, since the model‘s tensor math requires the shape of each training example to be identical. (note: I could have used as low as 100 length here because our texts are only 100 characters, but I wanted all my pooling operations later on to be perfectly divisible by 2.)
SEQUENCE_LENGTH = 128data = pad_sequences(sequences, maxlen=SEQUENCE_LENGTH)
And finally we’ll shuffle our training data and split it into training and validation sets. Shuffling (randomizing the order) ensures that a particular subset of the data is not always the subset we use to validate accuracy. Splitting some data into a validation set allows us to gauge how well the model is performing on examples that we are not allowing it to use for training.
# randomize order of training dataindices = np.arange(data.shape[0])np.random.shuffle(indices)data = data[indices]labels = labels[indices]# validation set can be much smaller if we use a lot of datavalidation_ratio = 0.2 if data.shape[0] < 1000000 else 0.02num_validation_samples = int(validation_ratio * data.shape[0])x_train = data[:-num_validation_samples]y_train = labels[:-num_validation_samples]x_val = data[-num_validation_samples:]y_val = labels[-num_validation_samples:]
I chose to use a convolutional net because convolutions are simple and fast to train. I did briefly test a two layer LSTM, but the accuracy per time spent training was worse than the conv net, and predictions with even that small LSTM took longer than the age of the universe (okay, maybe it just felt that long). Generative Adversarial Networks (GANs) are beautiful creatures with massive potential, but using them for text generation is still in early stages and my first attempt at it was lackluster. Maybe that will be my next post...
Okay, here’s the code used to construct our conv net model in Keras:
EMBEDDING_DIM = 16model = Sequential()model.add(Embedding(len(char_to_int) + 1, EMBEDDING_DIM, input_length=SEQUENCE_LENGTH))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(GlobalMaxPooling1D())model.add(Dropout(0.25))model.add(Dense(1024, activation='relu'))model.add(BatchNormalization())model.add(Dropout(0.25))model.add(Dense(len(labels_index), activation='softmax'))model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])
Lots of things going on there. Here’s what all that code is doing:
First the model converts each input example from an array of 128 integers (each representing one text character) into a 128x16 matrix using a Keras Embedding. An embedding is a layer that learns an optimal way to convert each of our characters from being represented as an integer to instead being represented as an array of 16 floats like [0.02, ..., -0.91]. This allows the model to learn which characters are used similarly by embedding them near one another in 16-dimensional space, and ultimately increases the accuracy of the model’s predictions.
Next we add 5 convolutional layers each with a kernel size of 5, 1024 filters, and a ReLU activation. Conceptually, the first conv layer is learning how to construct words from characters, and later layers are learning to construct longer words and chains of words (n-grams), each more abstracted than the previous.
padding='same' is used to ensure the output dimensions of the layer are the same as the input dimensions, since otherwise a width 5 convolution would reduce the dimension of the layer by 2 for each side of the kernel.
1024 was chosen as the number of filters because it was a good tradeoff between training speed and model accuracy, determined by trial and error. For other datasets I would recommend starting with 128 filters and then increasing/decreasing it by a factor of two several times to see what happens. More filters generally means better model accuracy, but slower training, slower runtime predicting, and larger model size. However, if you have too little data or too many filters, your model may overfit and accuracy will plummet, in which case you should decrease the filters.
Kernel size of 5 was chosen after testing 2, 3, 5, and 7. Kernels of 2 and 3 did worse, and 7 was similar but slower due to needing to train 7/5 more parameters. In my research, other people have had success using kernel sizes from 3 to 7 in various combinations, but my takeaway is that a size 5 kernel usually performs decently on text data, and you can always experiment later on to squeeze out more accuracy for your particular dataset.
The ReLU activation was chosen because it is fast, simple, and very good for a huge variety of use cases. My takeaway from reading a few articles and research papers was that Leaky ReLU or other variations may give a slight improvement on some datasets, but it’s not guaranteed to be better, and it’s less likely to be noticeable on larger datasets.
Batch normalization is added after each conv layer so that the input parameters to the next layer are normalized based on the mean and variance for the given batch. This mechanism isn’t perfectly understood by deep learning engineers yet, but we know that normalizing input parameters improves training speed and becomes more important with deeper networks due to vanishing/exploding gradients. The original batch normalization paper had impressive results.
A bit of dropout is added after each conv layer to help prevent the layer from simply memorizing the data and overfitting. Dropout(0.25) randomly kills 25% of the parameters (sets them to zero).
MaxPooling1D(2) is added between each conv layer to “squeeze” our sequence of 128 characters in half into sequences of 64, 32, 16, and 8 characters in the following layers. Conceptually, this allows the convolutional filters to learn more abstract patterns from the text in the deeper layers, since our width 5 kernel will span twice as many characters after the dimensionality is reduced by 2X by each max pooling operation.
After all the conv layers we use a global max pooling layer, which is identical to the normal max pooling layers except that it automatically chooses how much to shrink the input size in order to match the size of our next layer. The final layers are just standard Dense (fully-connected) layers with 1024 neurons, and finally 70 neurons because our classifier needs to output the probability for each of our 70 different labels.
The model.compile step is pretty standard. The RMSprop optimizer is a decent all around optimizer and I didn’t experiment with changing it for this neural net. loss=sparse_categorical_crossentropy tells the model we want it to optimize for choosing the best category among a set of 2 or more categories (aka labels). The “sparse” part refers to the fact that our labels are integers between 0 and 70 rather than one-hot arrays each of length 70. Using one hot arrays for the labels takes WAY more memory, more time to process, and does not affect the model accuracy. Don’t use one hot labels!
Keras has a nice model.summary() function that lets us view our model:
_________________________________________________________________Layer (type) Output Shape Param #=================================================================embedding_1 (Embedding) (None, 128, 16) 1136_________________________________________________________________conv1d_1 (Conv1D) (None, 128, 1024) 82944_________________________________________________________________batch_normalization_1 (Batch (None, 128, 1024) 4096_________________________________________________________________max_pooling1d_1 (MaxPooling1 (None, 64, 1024) 0_________________________________________________________________dropout_1 (Dropout) (None, 64, 1024) 0_________________________________________________________________conv1d_2 (Conv1D) (None, 64, 1024) 5243904_________________________________________________________________batch_normalization_2 (Batch (None, 64, 1024) 4096_________________________________________________________________max_pooling1d_2 (MaxPooling1 (None, 32, 1024) 0_________________________________________________________________dropout_2 (Dropout) (None, 32, 1024) 0_________________________________________________________________conv1d_3 (Conv1D) (None, 32, 1024) 5243904_________________________________________________________________batch_normalization_3 (Batch (None, 32, 1024) 4096_________________________________________________________________max_pooling1d_3 (MaxPooling1 (None, 16, 1024) 0_________________________________________________________________dropout_3 (Dropout) (None, 16, 1024) 0_________________________________________________________________conv1d_4 (Conv1D) (None, 16, 1024) 5243904_________________________________________________________________batch_normalization_4 (Batch (None, 16, 1024) 4096_________________________________________________________________max_pooling1d_4 (MaxPooling1 (None, 8, 1024) 0_________________________________________________________________dropout_4 (Dropout) (None, 8, 1024) 0_________________________________________________________________conv1d_5 (Conv1D) (None, 8, 1024) 5243904_________________________________________________________________batch_normalization_5 (Batch (None, 8, 1024) 4096_________________________________________________________________global_max_pooling1d_1 (Glob (None, 1024) 0_________________________________________________________________dropout_5 (Dropout) (None, 1024) 0_________________________________________________________________dense_1 (Dense) (None, 1024) 1049600_________________________________________________________________batch_normalization_6 (Batch (None, 1024) 4096_________________________________________________________________dropout_6 (Dropout) (None, 1024) 0_________________________________________________________________dense_2 (Dense) (None, 70) 71750=================================================================Total params: 22,205,622Trainable params: 22,193,334Non-trainable params: 12,288_________________________________________________________________
The parameter counts are particularly useful if you’re not addicted to doing tensor shape multiplication in your head. When adjusting the hyperparameters we discussed above, it’s useful to keep an eye on the parameter count of the model, which roughly represents the model’s total amount of learning capacity.
Now we’re going to let the model train and use “checkpoints” to save the history and optimal model along the way so that we can check progress and make predictions using the latest model at any point during training.
# the path where you want to save all of this model's filesMODEL_PATH = '/home/ubuntu/imgflip/models/conv_model'# just make this large since you can stop training at any timeNUM_EPOCHS = 48# batch size below 256 will reduce training speed since# CPU (non-GPU) work must be done between each batchBATCH_SIZE = 256# callback to save the model whenever validation loss improvescheckpointer = ModelCheckpoint(filepath=MODEL_PATH + '/model.h5', verbose=1, save_best_only=True)# custom callback to save history and plots after each epochhistory_checkpointer = util.SaveHistoryCheckpoint(MODEL_PATH)# the main training function where all the magic happens!history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, callbacks=[checkpointer, history_checkpointer])
This is where you just sit and watch the magic number go up over a period of many hours...
Train on 44274928 samples, validate on 903569 samplesEpoch 1/4844274928/44274928 [==============================] - 16756s 378us/step - loss: 1.5516 - acc: 0.5443 - val_loss: 1.3723 - val_acc: 0.5891Epoch 00001: val_loss improved from inf to 1.37226, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 2/4844274928/44274928 [==============================] - 16767s 379us/step - loss: 1.4424 - acc: 0.5748 - val_loss: 1.3416 - val_acc: 0.5979Epoch 00002: val_loss improved from 1.37226 to 1.34157, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 3/4844274928/44274928 [==============================] - 16798s 379us/step - loss: 1.4192 - acc: 0.5815 - val_loss: 1.3239 - val_acc: 0.6036Epoch 00003: val_loss improved from 1.34157 to 1.32394, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 4/4844274928/44274928 [==============================] - 16798s 379us/step - loss: 1.4015 - acc: 0.5857 - val_loss: 1.3127 - val_acc: 0.6055Epoch 00004: val_loss improved from 1.32394 to 1.31274, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 5/48 1177344/44274928 [..............................] - ETA: 4:31:59 - loss: 1.3993 - acc: 0.5869
Fast forward and here are some shiny plots of our loss and accuracy at each epoch:
I’ve found that when training loss/accuracy is worse than validation loss/accuracy it’s a sign that the model is learning well and not overfitting.
As a side note if you’re using an AWS server for training, I found the optimal instance to be p3.2xlarge. This uses their fastest GPU as of April 2019 (Tesla V100), and the instance only has one GPU since our model cannot make use of multiple GPUs very efficiently. I did try using Keras’s multi_gpu_model but it requires making the batch size way larger to actually realize speed gains, which can mess with the model’s ability to converge, and it barely got 2X faster even when using 4 GPUs. The p3.8xlarge with 4 GPUs costs 4 times more, so for me it wasn’t worth it.
Okay so now we have a model that can output the probabilities for which character should come next in a meme caption, but how do we use that to actually create a full meme caption from scratch?
The basic premise is that we initialize a string with whichever meme we want to generate text for, and then we call model.predict once for each character until the model outputs the end-of-box-text character | as many times as there are text boxes in the meme. For the “X All The Y” meme seen above, the default number of text boxes is 2 and our initial text would be:
"000000061533 0 "
I tried a few different approaches for choosing the next character given the model’s output of 70 probabilities:
Pick the character with highest score each time. This is ultra boring because it chooses the exact same text every time for a given meme, and it uses the same words over and over across memes. It spit out “when you find out your friends are the best party” for the X All The Y meme over and over. It liked to use the words “best” and “party” a lot in other memes too.Give each character a probability of being chosen equal to the score the model gave it, but only if the score is above a certain threshold (≥ 10% of the highest score works well for this model). This means multiple characters can be chosen, but bias is given to higher scored characters. This method succeeded in adding variety, but longer phrases sometimes lacked cohesion. Here’s one from the Futurama Fry meme: “not sure if she said or just put out of my day”.Give each character an equal probability of being chosen, but only if it’s score is high enough (≥ 10% of the highest score works well for this model). Also, use beam search to keep a running list of N texts at any given time, and use the product of all the character scores instead of just the last character’s score. This takes up to N times longer to compute, but seems to improve sentence cohesion in some cases.
Pick the character with highest score each time. This is ultra boring because it chooses the exact same text every time for a given meme, and it uses the same words over and over across memes. It spit out “when you find out your friends are the best party” for the X All The Y meme over and over. It liked to use the words “best” and “party” a lot in other memes too.
Give each character a probability of being chosen equal to the score the model gave it, but only if the score is above a certain threshold (≥ 10% of the highest score works well for this model). This means multiple characters can be chosen, but bias is given to higher scored characters. This method succeeded in adding variety, but longer phrases sometimes lacked cohesion. Here’s one from the Futurama Fry meme: “not sure if she said or just put out of my day”.
Give each character an equal probability of being chosen, but only if it’s score is high enough (≥ 10% of the highest score works well for this model). Also, use beam search to keep a running list of N texts at any given time, and use the product of all the character scores instead of just the last character’s score. This takes up to N times longer to compute, but seems to improve sentence cohesion in some cases.
I’m currently using method #2 because it’s much faster than beam search and both methods give decent results. Below are some random examples:
You can play with the latest model yourself and generate from any of the 48 memes at imgflip.com/ai-meme.
The code for making runtime predictions using method #2 is below. The full implementation on Github is a generalized beam search algorithm so beam search can be enabled simply by increasing the beam width beyond 1.
# min score as percentage of the maximum score, not absoluteMIN_SCORE = 0.1int_to_char = {v: k for k, v in char_to_int.items()}def predict_meme_text(template_id, num_boxes, init_text = ''): template_id = str(template_id).zfill(12) final_text = '' for char_count in range(len(init_text), SEQUENCE_LENGTH): box_index = str(final_text.count('|')) texts = [template_id + ' ' + box_index + ' ' + final_text] sequences = util.texts_to_sequences(texts, char_to_int) data = pad_sequences(sequences, maxlen=SEQUENCE_LENGTH) predictions_list = model.predict(data) predictions = [] for j in range(0, len(predictions_list[0])): predictions.append({ 'text': final_text + int_to_char[j], 'score': predictions_list[0][j] }) predictions = sorted(predictions, key=lambda p: p['score'], reverse=True) top_predictions = [] top_score = predictions[0]['score'] rand_int = random.randint(int(MIN_SCORE * 1000), 1000) for prediction in predictions: # give each char a chance of being chosen based on its score if prediction['score'] >= rand_int / 1000 * top_score: top_predictions.append(prediction) random.shuffle(top_predictions) final_text = top_predictions[0]['text'] if char_count >= SEQUENCE_LENGTH - 1 or final_text.count('|') == num_boxes - 1: return final_text
You can view all the code including utility functions and a sample training data file on github. | [
{
"code": null,
"e": 598,
"s": 171,
"text": "The goal of this post is to describe end-to-end how to build a deep conv net for text generation, but in greater depth than some of the existing articles I’ve read. This will be a practical guide and while I suggest many best practices, I am not an expert in deep learning theory nor have I read every single relevant research paper. I’ll cover takeaways about data cleaning, training, model design, and prediction algorithms."
},
{
"code": null,
"e": 1468,
"s": 598,
"text": "The raw dataset we’ll draw from is ~100M public meme captions by users of the Imgflip Meme Generator. To speed up training and reduce complexity of the model, we only use the 48 most popular memes and exactly 20,000 captions per meme, totaling 960,000 captions as training data. However, since we are building a generational model there will be one training example for each character in the caption, totaling ~45,000,000 training examples. Character-level generation rather than word-level was chosen here because memes tend to use spelling and grammar... uh... creatively. Also, character-level deep learning is a superset of word-level deep learning and can therefore achieve higher accuracy if you have enough data and your model design is sufficient to learn all the complexity. If you try the finished model below, you’ll also see that char-level can be more fun!"
},
{
"code": null,
"e": 1704,
"s": 1468,
"text": "Below is what the training data looks if the first meme caption is “make all the memes”. I’m omitting the code for reading from the database and performing initial cleaning because it’s very standard and could be done in multiple ways."
},
{
"code": null,
"e": 2672,
"s": 1704,
"text": "training_data = [ [\"000000061533 0 \", \"m\"], [\"000000061533 0 m\", \"a\"], [\"000000061533 0 ma\", \"k\"], [\"000000061533 0 mak\", \"e\"], [\"000000061533 0 make\", \"|\"], [\"000000061533 1 make|\", \"a\"], [\"000000061533 1 make|a\", \"l\"], [\"000000061533 1 make|al\", \"l\"], [\"000000061533 1 make|all\", \" \"], [\"000000061533 1 make|all \", \"t\"], [\"000000061533 1 make|all t\", \"h\"], [\"000000061533 1 make|all th\", \"e\"], [\"000000061533 1 make|all the\", \" \"], [\"000000061533 1 make|all the \", \"m\"], [\"000000061533 1 make|all the m\", \"e\"], [\"000000061533 1 make|all the me\", \"m\"], [\"000000061533 1 make|all the mem\", \"e\"], [\"000000061533 1 make|all the meme\", \"s\"], [\"000000061533 1 make|all the memes\", \"|\"], ... 45 million more rows here ...]# we'll need our feature text and labels as separate arrays latertexts = [row[0] for row in training_data]labels = [row[1] for row in training_data]"
},
{
"code": null,
"e": 2865,
"s": 2672,
"text": "Like most things in machine learning, this is just a classification problem. We are classifying the text strings on the left into one of ~70 different buckets where the buckets are characters."
},
{
"code": null,
"e": 2890,
"s": 2865,
"text": "Let’s unpack the format."
},
{
"code": null,
"e": 3096,
"s": 2890,
"text": "The first 12 characters are the meme template ID. This allows the model to differentiate between the 48 distinct memes we’re feeding it. The string is left padded with zeros so all IDs are the same length."
},
{
"code": null,
"e": 3520,
"s": 3096,
"text": "The 0 or 1 is the index of the current text box being predicted, generally 0 is the top box and 1 is the bottom box, although many memes are more complex. The two spaces are just extra spacing to ensure the model can tell the box index apart from the template ID and meme text. Note: it is critical that our convolution kernel width (seen later in this post) is no wider than the 4 spaces plus the index character, aka ≤ 5."
},
{
"code": null,
"e": 3609,
"s": 3520,
"text": "After that is the text of the meme so far, with | used as the end-of-text-box character."
},
{
"code": null,
"e": 3707,
"s": 3609,
"text": "Finally, the last character by itself (the 2nd array item) is the next character in the sequence."
},
{
"code": null,
"e": 3774,
"s": 3707,
"text": "Several cleaning techniques were used on the data before training:"
},
{
"code": null,
"e": 3880,
"s": 3774,
"text": "Trim leading and trailing whitespace and replace repeated whitespace (\\s+) with a single space character."
},
{
"code": null,
"e": 3985,
"s": 3880,
"text": "Apply a minimum string length of 10 characters so we don’t generate boring one-word or one-letter memes."
},
{
"code": null,
"e": 4192,
"s": 3985,
"text": "Apply a maximum string length of 82 characters so we don’t generate super long memes and because the model will train faster. 82 is arbitrary, it just made the overall training strings about 100 characters."
},
{
"code": null,
"e": 4330,
"s": 4192,
"text": "Convert everything to lowercase to reduce the number of characters the model must learn, and because many memes are just all caps anyway."
},
{
"code": null,
"e": 4592,
"s": 4330,
"text": "Skip meme captions with non-ascii characters to reduce the complexity the model has to learn. This means that both our feature text and labels will come from a set of only ~70 characters, depending on which ascii characters the training data happens to include."
},
{
"code": null,
"e": 4693,
"s": 4592,
"text": "Skip meme captions containing the pipe character | since it’s our special end-of-text-box character."
},
{
"code": null,
"e": 4963,
"s": 4693,
"text": "Run the text through a language detection library and skip meme captions that are unlikely to be English. Improves quality of the text we generate since the model only has to learn one language, and identical character sequences can have meanings in multiple languages."
},
{
"code": null,
"e": 5102,
"s": 4963,
"text": "Skip duplicate meme captions we’ve already added to the training set to reduce the chance the model simply memorizes entire meme captions."
},
{
"code": null,
"e": 5151,
"s": 5102,
"text": "Our data is now ready to feed into a neural net!"
},
{
"code": null,
"e": 5456,
"s": 5151,
"text": "That may or may not be a word. Fun fact: apparently we in deep learning are heathens for calling multi-dimensional arrays tensors rather than the mathematical term “holors”, which is a generalized tensor not requiring particular transformation properties. But whatever, tensor sounds cooler than holor ;)"
},
{
"code": null,
"e": 5525,
"s": 5456,
"text": "First, here is the python import code for everything we’ll do below:"
},
{
"code": null,
"e": 5885,
"s": 5525,
"text": "from keras import Sequentialfrom keras.preprocessing.sequence import pad_sequencesfrom keras.callbacks import ModelCheckpointfrom keras.layers import Dense, Dropout, GlobalMaxPooling1D, Conv1D, MaxPooling1D, Embeddingfrom keras.layers.normalization import BatchNormalizationimport numpy as npimport util # util is a custom file I wrote, see github link below"
},
{
"code": null,
"e": 6566,
"s": 5885,
"text": "Neural nets operate on tensors of numbers (vectors/matrices/multi-dimensional arrays), so we need to restructure our text accordingly. Each of our training texts will be transformed into an array of integers (a rank 1 tensor) by replacing each character with its corresponding index from the array of ~70 unique characters found in the data. The order of the character array is arbitrary, but we choose to order it by character frequency so it stays roughly consistent when changing the amount of training data. Keras has a Tokenizer class which you can use for this (with char_level=True), but I wrote my own util functions because they were must faster than the Keras tokenizer."
},
{
"code": null,
"e": 6787,
"s": 6566,
"text": "# output: {' ': 1, '0': 2, 'e': 3, ... }char_to_int = util.map_char_to_int(texts)# output: [[2, 2, 27, 11, ...], ... ]sequences = util.texts_to_sequences(texts, char_to_int)labels = [char_to_int[char] for char in labels]"
},
{
"code": null,
"e": 6853,
"s": 6787,
"text": "These are the characters our data contains in order of frequency:"
},
{
"code": null,
"e": 6922,
"s": 6853,
"text": "0etoains|rhl1udmy2cg4p53wf6b897kv.\"!?j:x,*\"z-q/&$)(#%+_@=>;<][~`^}{\\"
},
{
"code": null,
"e": 7283,
"s": 6922,
"text": "Next we will pad our sequences of integers with leading zeros so they are all the same length, since the model‘s tensor math requires the shape of each training example to be identical. (note: I could have used as low as 100 length here because our texts are only 100 characters, but I wanted all my pooling operations later on to be perfectly divisible by 2.)"
},
{
"code": null,
"e": 7360,
"s": 7283,
"text": "SEQUENCE_LENGTH = 128data = pad_sequences(sequences, maxlen=SEQUENCE_LENGTH)"
},
{
"code": null,
"e": 7740,
"s": 7360,
"text": "And finally we’ll shuffle our training data and split it into training and validation sets. Shuffling (randomizing the order) ensures that a particular subset of the data is not always the subset we use to validate accuracy. Splitting some data into a validation set allows us to gauge how well the model is performing on examples that we are not allowing it to use for training."
},
{
"code": null,
"e": 8220,
"s": 7740,
"text": "# randomize order of training dataindices = np.arange(data.shape[0])np.random.shuffle(indices)data = data[indices]labels = labels[indices]# validation set can be much smaller if we use a lot of datavalidation_ratio = 0.2 if data.shape[0] < 1000000 else 0.02num_validation_samples = int(validation_ratio * data.shape[0])x_train = data[:-num_validation_samples]y_train = labels[:-num_validation_samples]x_val = data[-num_validation_samples:]y_val = labels[-num_validation_samples:]"
},
{
"code": null,
"e": 8759,
"s": 8220,
"text": "I chose to use a convolutional net because convolutions are simple and fast to train. I did briefly test a two layer LSTM, but the accuracy per time spent training was worse than the conv net, and predictions with even that small LSTM took longer than the age of the universe (okay, maybe it just felt that long). Generative Adversarial Networks (GANs) are beautiful creatures with massive potential, but using them for text generation is still in early stages and my first attempt at it was lackluster. Maybe that will be my next post..."
},
{
"code": null,
"e": 8828,
"s": 8759,
"text": "Okay, here’s the code used to construct our conv net model in Keras:"
},
{
"code": null,
"e": 9913,
"s": 8828,
"text": "EMBEDDING_DIM = 16model = Sequential()model.add(Embedding(len(char_to_int) + 1, EMBEDDING_DIM, input_length=SEQUENCE_LENGTH))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(MaxPooling1D(2))model.add(Dropout(0.25))model.add(Conv1D(1024, 5, activation='relu', padding='same'))model.add(BatchNormalization())model.add(GlobalMaxPooling1D())model.add(Dropout(0.25))model.add(Dense(1024, activation='relu'))model.add(BatchNormalization())model.add(Dropout(0.25))model.add(Dense(len(labels_index), activation='softmax'))model.compile(loss='sparse_categorical_crossentropy', optimizer='rmsprop', metrics=['acc'])"
},
{
"code": null,
"e": 9980,
"s": 9913,
"text": "Lots of things going on there. Here’s what all that code is doing:"
},
{
"code": null,
"e": 10533,
"s": 9980,
"text": "First the model converts each input example from an array of 128 integers (each representing one text character) into a 128x16 matrix using a Keras Embedding. An embedding is a layer that learns an optimal way to convert each of our characters from being represented as an integer to instead being represented as an array of 16 floats like [0.02, ..., -0.91]. This allows the model to learn which characters are used similarly by embedding them near one another in 16-dimensional space, and ultimately increases the accuracy of the model’s predictions."
},
{
"code": null,
"e": 10849,
"s": 10533,
"text": "Next we add 5 convolutional layers each with a kernel size of 5, 1024 filters, and a ReLU activation. Conceptually, the first conv layer is learning how to construct words from characters, and later layers are learning to construct longer words and chains of words (n-grams), each more abstracted than the previous."
},
{
"code": null,
"e": 11067,
"s": 10849,
"text": "padding='same' is used to ensure the output dimensions of the layer are the same as the input dimensions, since otherwise a width 5 convolution would reduce the dimension of the layer by 2 for each side of the kernel."
},
{
"code": null,
"e": 11642,
"s": 11067,
"text": "1024 was chosen as the number of filters because it was a good tradeoff between training speed and model accuracy, determined by trial and error. For other datasets I would recommend starting with 128 filters and then increasing/decreasing it by a factor of two several times to see what happens. More filters generally means better model accuracy, but slower training, slower runtime predicting, and larger model size. However, if you have too little data or too many filters, your model may overfit and accuracy will plummet, in which case you should decrease the filters."
},
{
"code": null,
"e": 12083,
"s": 11642,
"text": "Kernel size of 5 was chosen after testing 2, 3, 5, and 7. Kernels of 2 and 3 did worse, and 7 was similar but slower due to needing to train 7/5 more parameters. In my research, other people have had success using kernel sizes from 3 to 7 in various combinations, but my takeaway is that a size 5 kernel usually performs decently on text data, and you can always experiment later on to squeeze out more accuracy for your particular dataset."
},
{
"code": null,
"e": 12433,
"s": 12083,
"text": "The ReLU activation was chosen because it is fast, simple, and very good for a huge variety of use cases. My takeaway from reading a few articles and research papers was that Leaky ReLU or other variations may give a slight improvement on some datasets, but it’s not guaranteed to be better, and it’s less likely to be noticeable on larger datasets."
},
{
"code": null,
"e": 12891,
"s": 12433,
"text": "Batch normalization is added after each conv layer so that the input parameters to the next layer are normalized based on the mean and variance for the given batch. This mechanism isn’t perfectly understood by deep learning engineers yet, but we know that normalizing input parameters improves training speed and becomes more important with deeper networks due to vanishing/exploding gradients. The original batch normalization paper had impressive results."
},
{
"code": null,
"e": 13086,
"s": 12891,
"text": "A bit of dropout is added after each conv layer to help prevent the layer from simply memorizing the data and overfitting. Dropout(0.25) randomly kills 25% of the parameters (sets them to zero)."
},
{
"code": null,
"e": 13512,
"s": 13086,
"text": "MaxPooling1D(2) is added between each conv layer to “squeeze” our sequence of 128 characters in half into sequences of 64, 32, 16, and 8 characters in the following layers. Conceptually, this allows the convolutional filters to learn more abstract patterns from the text in the deeper layers, since our width 5 kernel will span twice as many characters after the dimensionality is reduced by 2X by each max pooling operation."
},
{
"code": null,
"e": 13942,
"s": 13512,
"text": "After all the conv layers we use a global max pooling layer, which is identical to the normal max pooling layers except that it automatically chooses how much to shrink the input size in order to match the size of our next layer. The final layers are just standard Dense (fully-connected) layers with 1024 neurons, and finally 70 neurons because our classifier needs to output the probability for each of our 70 different labels."
},
{
"code": null,
"e": 14535,
"s": 13942,
"text": "The model.compile step is pretty standard. The RMSprop optimizer is a decent all around optimizer and I didn’t experiment with changing it for this neural net. loss=sparse_categorical_crossentropy tells the model we want it to optimize for choosing the best category among a set of 2 or more categories (aka labels). The “sparse” part refers to the fact that our labels are integers between 0 and 70 rather than one-hot arrays each of length 70. Using one hot arrays for the labels takes WAY more memory, more time to process, and does not affect the model accuracy. Don’t use one hot labels!"
},
{
"code": null,
"e": 14606,
"s": 14535,
"text": "Keras has a nice model.summary() function that lets us view our model:"
},
{
"code": null,
"e": 18028,
"s": 14606,
"text": "_________________________________________________________________Layer (type) Output Shape Param #=================================================================embedding_1 (Embedding) (None, 128, 16) 1136_________________________________________________________________conv1d_1 (Conv1D) (None, 128, 1024) 82944_________________________________________________________________batch_normalization_1 (Batch (None, 128, 1024) 4096_________________________________________________________________max_pooling1d_1 (MaxPooling1 (None, 64, 1024) 0_________________________________________________________________dropout_1 (Dropout) (None, 64, 1024) 0_________________________________________________________________conv1d_2 (Conv1D) (None, 64, 1024) 5243904_________________________________________________________________batch_normalization_2 (Batch (None, 64, 1024) 4096_________________________________________________________________max_pooling1d_2 (MaxPooling1 (None, 32, 1024) 0_________________________________________________________________dropout_2 (Dropout) (None, 32, 1024) 0_________________________________________________________________conv1d_3 (Conv1D) (None, 32, 1024) 5243904_________________________________________________________________batch_normalization_3 (Batch (None, 32, 1024) 4096_________________________________________________________________max_pooling1d_3 (MaxPooling1 (None, 16, 1024) 0_________________________________________________________________dropout_3 (Dropout) (None, 16, 1024) 0_________________________________________________________________conv1d_4 (Conv1D) (None, 16, 1024) 5243904_________________________________________________________________batch_normalization_4 (Batch (None, 16, 1024) 4096_________________________________________________________________max_pooling1d_4 (MaxPooling1 (None, 8, 1024) 0_________________________________________________________________dropout_4 (Dropout) (None, 8, 1024) 0_________________________________________________________________conv1d_5 (Conv1D) (None, 8, 1024) 5243904_________________________________________________________________batch_normalization_5 (Batch (None, 8, 1024) 4096_________________________________________________________________global_max_pooling1d_1 (Glob (None, 1024) 0_________________________________________________________________dropout_5 (Dropout) (None, 1024) 0_________________________________________________________________dense_1 (Dense) (None, 1024) 1049600_________________________________________________________________batch_normalization_6 (Batch (None, 1024) 4096_________________________________________________________________dropout_6 (Dropout) (None, 1024) 0_________________________________________________________________dense_2 (Dense) (None, 70) 71750=================================================================Total params: 22,205,622Trainable params: 22,193,334Non-trainable params: 12,288_________________________________________________________________"
},
{
"code": null,
"e": 18338,
"s": 18028,
"text": "The parameter counts are particularly useful if you’re not addicted to doing tensor shape multiplication in your head. When adjusting the hyperparameters we discussed above, it’s useful to keep an eye on the parameter count of the model, which roughly represents the model’s total amount of learning capacity."
},
{
"code": null,
"e": 18555,
"s": 18338,
"text": "Now we’re going to let the model train and use “checkpoints” to save the history and optimal model along the way so that we can check progress and make predictions using the latest model at any point during training."
},
{
"code": null,
"e": 19364,
"s": 18555,
"text": "# the path where you want to save all of this model's filesMODEL_PATH = '/home/ubuntu/imgflip/models/conv_model'# just make this large since you can stop training at any timeNUM_EPOCHS = 48# batch size below 256 will reduce training speed since# CPU (non-GPU) work must be done between each batchBATCH_SIZE = 256# callback to save the model whenever validation loss improvescheckpointer = ModelCheckpoint(filepath=MODEL_PATH + '/model.h5', verbose=1, save_best_only=True)# custom callback to save history and plots after each epochhistory_checkpointer = util.SaveHistoryCheckpoint(MODEL_PATH)# the main training function where all the magic happens!history = model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=NUM_EPOCHS, batch_size=BATCH_SIZE, callbacks=[checkpointer, history_checkpointer])"
},
{
"code": null,
"e": 19455,
"s": 19364,
"text": "This is where you just sit and watch the magic number go up over a period of many hours..."
},
{
"code": null,
"e": 20721,
"s": 19455,
"text": "Train on 44274928 samples, validate on 903569 samplesEpoch 1/4844274928/44274928 [==============================] - 16756s 378us/step - loss: 1.5516 - acc: 0.5443 - val_loss: 1.3723 - val_acc: 0.5891Epoch 00001: val_loss improved from inf to 1.37226, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 2/4844274928/44274928 [==============================] - 16767s 379us/step - loss: 1.4424 - acc: 0.5748 - val_loss: 1.3416 - val_acc: 0.5979Epoch 00002: val_loss improved from 1.37226 to 1.34157, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 3/4844274928/44274928 [==============================] - 16798s 379us/step - loss: 1.4192 - acc: 0.5815 - val_loss: 1.3239 - val_acc: 0.6036Epoch 00003: val_loss improved from 1.34157 to 1.32394, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 4/4844274928/44274928 [==============================] - 16798s 379us/step - loss: 1.4015 - acc: 0.5857 - val_loss: 1.3127 - val_acc: 0.6055Epoch 00004: val_loss improved from 1.32394 to 1.31274, saving model to /home/ubuntu/imgflip/models/gen_2019_04_04_03_28_00/model.h5Epoch 5/48 1177344/44274928 [..............................] - ETA: 4:31:59 - loss: 1.3993 - acc: 0.5869"
},
{
"code": null,
"e": 20804,
"s": 20721,
"text": "Fast forward and here are some shiny plots of our loss and accuracy at each epoch:"
},
{
"code": null,
"e": 20952,
"s": 20804,
"text": "I’ve found that when training loss/accuracy is worse than validation loss/accuracy it’s a sign that the model is learning well and not overfitting."
},
{
"code": null,
"e": 21522,
"s": 20952,
"text": "As a side note if you’re using an AWS server for training, I found the optimal instance to be p3.2xlarge. This uses their fastest GPU as of April 2019 (Tesla V100), and the instance only has one GPU since our model cannot make use of multiple GPUs very efficiently. I did try using Keras’s multi_gpu_model but it requires making the batch size way larger to actually realize speed gains, which can mess with the model’s ability to converge, and it barely got 2X faster even when using 4 GPUs. The p3.8xlarge with 4 GPUs costs 4 times more, so for me it wasn’t worth it."
},
{
"code": null,
"e": 21716,
"s": 21522,
"text": "Okay so now we have a model that can output the probabilities for which character should come next in a meme caption, but how do we use that to actually create a full meme caption from scratch?"
},
{
"code": null,
"e": 22085,
"s": 21716,
"text": "The basic premise is that we initialize a string with whichever meme we want to generate text for, and then we call model.predict once for each character until the model outputs the end-of-box-text character | as many times as there are text boxes in the meme. For the “X All The Y” meme seen above, the default number of text boxes is 2 and our initial text would be:"
},
{
"code": null,
"e": 22105,
"s": 22085,
"text": "\"000000061533 0 \""
},
{
"code": null,
"e": 22218,
"s": 22105,
"text": "I tried a few different approaches for choosing the next character given the model’s output of 70 probabilities:"
},
{
"code": null,
"e": 23465,
"s": 22218,
"text": "Pick the character with highest score each time. This is ultra boring because it chooses the exact same text every time for a given meme, and it uses the same words over and over across memes. It spit out “when you find out your friends are the best party” for the X All The Y meme over and over. It liked to use the words “best” and “party” a lot in other memes too.Give each character a probability of being chosen equal to the score the model gave it, but only if the score is above a certain threshold (≥ 10% of the highest score works well for this model). This means multiple characters can be chosen, but bias is given to higher scored characters. This method succeeded in adding variety, but longer phrases sometimes lacked cohesion. Here’s one from the Futurama Fry meme: “not sure if she said or just put out of my day”.Give each character an equal probability of being chosen, but only if it’s score is high enough (≥ 10% of the highest score works well for this model). Also, use beam search to keep a running list of N texts at any given time, and use the product of all the character scores instead of just the last character’s score. This takes up to N times longer to compute, but seems to improve sentence cohesion in some cases."
},
{
"code": null,
"e": 23833,
"s": 23465,
"text": "Pick the character with highest score each time. This is ultra boring because it chooses the exact same text every time for a given meme, and it uses the same words over and over across memes. It spit out “when you find out your friends are the best party” for the X All The Y meme over and over. It liked to use the words “best” and “party” a lot in other memes too."
},
{
"code": null,
"e": 24297,
"s": 23833,
"text": "Give each character a probability of being chosen equal to the score the model gave it, but only if the score is above a certain threshold (≥ 10% of the highest score works well for this model). This means multiple characters can be chosen, but bias is given to higher scored characters. This method succeeded in adding variety, but longer phrases sometimes lacked cohesion. Here’s one from the Futurama Fry meme: “not sure if she said or just put out of my day”."
},
{
"code": null,
"e": 24714,
"s": 24297,
"text": "Give each character an equal probability of being chosen, but only if it’s score is high enough (≥ 10% of the highest score works well for this model). Also, use beam search to keep a running list of N texts at any given time, and use the product of all the character scores instead of just the last character’s score. This takes up to N times longer to compute, but seems to improve sentence cohesion in some cases."
},
{
"code": null,
"e": 24856,
"s": 24714,
"text": "I’m currently using method #2 because it’s much faster than beam search and both methods give decent results. Below are some random examples:"
},
{
"code": null,
"e": 24962,
"s": 24856,
"text": "You can play with the latest model yourself and generate from any of the 48 memes at imgflip.com/ai-meme."
},
{
"code": null,
"e": 25177,
"s": 24962,
"text": "The code for making runtime predictions using method #2 is below. The full implementation on Github is a generalized beam search algorithm so beam search can be enabled simply by increasing the beam width beyond 1."
},
{
"code": null,
"e": 26520,
"s": 25177,
"text": "# min score as percentage of the maximum score, not absoluteMIN_SCORE = 0.1int_to_char = {v: k for k, v in char_to_int.items()}def predict_meme_text(template_id, num_boxes, init_text = ''): template_id = str(template_id).zfill(12) final_text = '' for char_count in range(len(init_text), SEQUENCE_LENGTH): box_index = str(final_text.count('|')) texts = [template_id + ' ' + box_index + ' ' + final_text] sequences = util.texts_to_sequences(texts, char_to_int) data = pad_sequences(sequences, maxlen=SEQUENCE_LENGTH) predictions_list = model.predict(data) predictions = [] for j in range(0, len(predictions_list[0])): predictions.append({ 'text': final_text + int_to_char[j], 'score': predictions_list[0][j] }) predictions = sorted(predictions, key=lambda p: p['score'], reverse=True) top_predictions = [] top_score = predictions[0]['score'] rand_int = random.randint(int(MIN_SCORE * 1000), 1000) for prediction in predictions: # give each char a chance of being chosen based on its score if prediction['score'] >= rand_int / 1000 * top_score: top_predictions.append(prediction) random.shuffle(top_predictions) final_text = top_predictions[0]['text'] if char_count >= SEQUENCE_LENGTH - 1 or final_text.count('|') == num_boxes - 1: return final_text"
}
] |
SVG - Interview Questions | Dear readers, these SVG Interview Questions have been designed specially to get you acquainted with the nature of questions you may encounter during your interview for the subject of SVG. As per my experience good interviewers hardly plan to ask any particular question during your interview, normally questions start with some basic concept of the subject and later they continue based on further discussion and what you answer −
SVG stands for Scalable Vector Graphics.
SVG is a XML based format to draw vector images. It is used to draw two − dimentional vector images.
Following are the core features of SVG −
SVG, Scalable Vector Graphics is an XML based language to define vector based graphics.
SVG, Scalable Vector Graphics is an XML based language to define vector based graphics.
SVG is intended to display images over the web.
SVG is intended to display images over the web.
Being vector images, SVG image never loses quality no matter how they are zoomed out or resized.
Being vector images, SVG image never loses quality no matter how they are zoomed out or resized.
SVG images supports interactivity and animation.
SVG images supports interactivity and animation.
SVG is a W3C standard.
SVG is a W3C standard.
Others image formats like raster images can also be clubbed with SVG images.
Others image formats like raster images can also be clubbed with SVG images.
SVG integrates well with XSLT and DOM of HTML.
SVG integrates well with XSLT and DOM of HTML.
Following are the advantages of using SVG images −
Use any text editor to create and edit SVG images.
Use any text editor to create and edit SVG images.
Being XML based, SVG images are searchable, indexable and can be scripted and compressed.
Being XML based, SVG images are searchable, indexable and can be scripted and compressed.
SVG images are highly scalable as they never loses quality no matter how they are zoomed out or resized.
SVG images are highly scalable as they never loses quality no matter how they are zoomed out or resized.
Good printing quality at any resolution.
Good printing quality at any resolution.
SVG is an Open Standard.
SVG is an Open Standard.
Following are the disadvantages of using SVG images −
Being text format size is larger then compared to binary formatted raster images.
Being text format size is larger then compared to binary formatted raster images.
Size can be big even for small image.
Size can be big even for small image.
'rect' tag of SVG is used to draw a rectangle.
'circle' tag of SVG is used to draw a circle.
'ellipse' tag of SVG is used to draw a ellipse.
'line' tag of SVG is used to draw a line.
'polygon' tag of SVG is used to draw a closed shape consisting of connected straight lines.
'polyline' tag of SVG is used to draw a open shape consisting of connected straight lines.
'path' tag of SVG is used to draw any path.
'text' tag of SVG is used to draw text.
'x' attribute of text tag of SVG represents the x axis cordinates of glyphs.
'y' attribute of text tag of SVG represents the y axis cordinates of glyphs.
'dx' attribute of text tag of SVG represents the shift along with x-axis.
'dy' attribute of text tag of SVG represents the shift along with y-axis.
'rotation' attribute of text tag of SVG sets the rotation to be applied to all glyphs.
'textlength' attribute of text tag of SVG sets the rendering length of the text.
'stroke' property defines color of text, line or outline of any element.
'stroke-width' property defines thickness of text, line or outline of any element.
'stroke-linecap' property defines different types of ending of a line or outline of any path.
'stroke-dasharray' property used to create dashed lines.
SVG uses <filter> element to define filters. <filter> element uses an id attribute to uniquely identify it.Filters are defined within <def> elements and then are referenced by graphics elements by their ids.
SVG provides a rich set of filters. Following is the list of the commonly used filters −
feBlend
feBlend
feColorMatrix
feColorMatrix
feComponentTransfer
feComponentTransfer
feComposite
feComposite
feConvolveMatrix
feConvolveMatrix
feDiffuseLighting
feDiffuseLighting
feDisplacementMap
feDisplacementMap
SVG uses <pattern> element to define patterns. Patterns are defined using <pattern> element and are used to fill graphics elements in tiled fashion.
Gradient refers to smooth transition of one color to another color within a shape. SVG provides two types of gradients −
Linear Gradients
Linear Gradients
Radial Gradients
Radial Gradients
Linear Gradients represents linear transition of one color to another from one direction to another. It is defined using <linearGradient> element.
Radial Gradients represents circular transition of one color to another from one direction to another. It is defined using <radialGradient> element.
Yes! SVG images can be made responsive to user actions. SVG supports pointer events, keyboard events and document events.
Yes! SVG supports JavaScript/ECMAScript functions. Script block is to be in CDATA block consider character data support in XML.
Yes! SVG elements support mouse events, keyboard events. We've used onClick event to call a javascript functions.
In javascript functions, document represents SVG document and can be used to get the SVG elements.
In javascript functions, event represents current event and can be used to get the target element on which event got raised.
<a> element is used to create hyperlink. "xlink:href" attribute is used to pass the IRI (Internationalized Resource Identifiers) which is complementary to URI (Uniform Resource Identifiers).
SVG image can be embedded using following ways −
using embed tag
using embed tag
using object tag
using object tag
using iframe
using iframe
'rect' tag of SVG is used to draw a rectangle. Following are the commonly used attributes −
x − x-axis co-ordinate of top left of the rectangle. Default is 0.
x − x-axis co-ordinate of top left of the rectangle. Default is 0.
y − y-axis co-ordinate of top left of the rectangle. Default is 0.
y − y-axis co-ordinate of top left of the rectangle. Default is 0.
width − width of the rectangle.
width − width of the rectangle.
height − height of the rectangle.
height − height of the rectangle.
rx − used to round the corner of the rounded rectangle.
rx − used to round the corner of the rounded rectangle.
ry − used to round the corner of the rounded rectangle.
ry − used to round the corner of the rounded rectangle.
Example −
<rect
x = "100" y = "30"
width = "300" height = "100"
style = "fill:rgb(121,0,121);stroke-width:3;stroke:rgb(0,0,0)" >
'circle' tag of SVG is used to draw a circle. Following are the commonly used attributes −
cx − x-axis co-ordinate of the center of the circle. Default is 0.
cx − x-axis co-ordinate of the center of the circle. Default is 0.
cy − y-axis co-ordinate of the center of the circle. Default is 0.
cy − y-axis co-ordinate of the center of the circle. Default is 0.
r − radius of the circle.
r − radius of the circle.
Example −
<circle
cx = "100" cy = "100" r = "50"
stroke = "black"
stroke-width = "3"
fill = "rgb(121,0,121)" >
'ellipse' tag of SVG is used to draw a ellipse. Following are the commonly used attributes −
cx − x-axis co-ordinate of the center of the ellipse. Default is 0.
cx − x-axis co-ordinate of the center of the ellipse. Default is 0.
cy − y-axis co-ordinate of the center of the ellipse. Default is 0.
cy − y-axis co-ordinate of the center of the ellipse. Default is 0.
rx − x-axis radius of the ellipse.
rx − x-axis radius of the ellipse.
ry − y-axis radius of the ellipse.
ry − y-axis radius of the ellipse.
Example −
<ellipse
cx = "100" cy = "100"
rx = "90" ry = "50"
stroke = "black"
stroke-width = "3"
fill = "rgb(121,0,121)">
'line' tag of SVG is used to draw a line. Following are the commonly used attributes −
x1 − x-axis co-ordinate of the start point. Default is 0.
x1 − x-axis co-ordinate of the start point. Default is 0.
y1 − y-axis co-ordinate of the start point. Default is 0.
y1 − y-axis co-ordinate of the start point. Default is 0.
x2 − x-axis co-ordinate of the end point. Default is 0.
x2 − x-axis co-ordinate of the end point. Default is 0.
y2 − y-axis co-ordinate of the end point. Default is 0.
y2 − y-axis co-ordinate of the end point. Default is 0.
Example −
<line
x1 = "20" y1 = "20"
x2 = "150" y2 = "150"
stroke = "black"
stroke-width = "3"
fill = "rgb(121,0,121)">
'polygon' tag of SVG is used to draw a polygon. Following is the commonly used attribute −
points - List of points to make up a polygon.
Example −
<polygon
points = "150,75 258,137.5 258,262.5 150,325 42,262.6 42,137.5"
stroke = "black"
stroke-width = "3"
fill = "rgb(121,0,121)">
'polyline' tag of SVG is used to draw a open ended polygon. Following is the commonly used attribute −
points − List of points to make up a polygon.
Example −
<polyline
points = "150,75 258,137.5 258,262.5 150,325 42,262.6 42,137.5"
stroke = "black"
stroke-width = "3"
fill = "none">
'path' tag of SVG is used to draw a free flow path. Following is the commonly used attribute −
d − path data,usually a set of commands like moveto, lineto etc.
Example −
<path
d = "M 100 100 L 300 100 L 200 300 z"
stroke = "black"
stroke-width = "3"
fill = "rgb(121,0,121)">
M command of path element move from one point to another point.
L command of path element creates a line.
H command of path element creates a horizontal line.
V command of path element creates a vertical line.
C command of path element creates a curve.
S command of path element creates a smooth curve.
Q command of path element creates a quadratic Bezier curve.
T command of path element creates a smooth quadratic Bezier curve.
A command of path element creates a elliptical arc.
Z command of path element closes the path.
When commands are in Upper case, these represents absolute path. In case their lower case commands are used, then relative path is used.
Further you can go through your past assignments you have done with the subject and make sure you are able to speak confidently on them. If you are fresher then interviewer does not expect you will answer very complex questions, rather you have to make your basics concepts very strong.
Second it really doesn't matter much if you could not answer few questions but it matters that whatever you answered, you must have answered with confidence. So just feel confident during your interview. We at tutorialspoint wish you best luck to have a good interviewer and all the very best for your future endeavor. Cheers :-)
45 Lectures
5.5 hours
DigiFisk (Programming Is Fun)
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2795,
"s": 2364,
"text": "Dear readers, these SVG Interview Questions have been designed specially to get you acquainted with the nature of questions you may encounter during your interview for the subject of SVG. As per my experience good interviewers hardly plan to ask any particular question during your interview, normally questions start with some basic concept of the subject and later they continue based on further discussion and what you answer −"
},
{
"code": null,
"e": 2836,
"s": 2795,
"text": "SVG stands for Scalable Vector Graphics."
},
{
"code": null,
"e": 2937,
"s": 2836,
"text": "SVG is a XML based format to draw vector images. It is used to draw two − dimentional vector images."
},
{
"code": null,
"e": 2978,
"s": 2937,
"text": "Following are the core features of SVG −"
},
{
"code": null,
"e": 3066,
"s": 2978,
"text": "SVG, Scalable Vector Graphics is an XML based language to define vector based graphics."
},
{
"code": null,
"e": 3154,
"s": 3066,
"text": "SVG, Scalable Vector Graphics is an XML based language to define vector based graphics."
},
{
"code": null,
"e": 3202,
"s": 3154,
"text": "SVG is intended to display images over the web."
},
{
"code": null,
"e": 3250,
"s": 3202,
"text": "SVG is intended to display images over the web."
},
{
"code": null,
"e": 3347,
"s": 3250,
"text": "Being vector images, SVG image never loses quality no matter how they are zoomed out or resized."
},
{
"code": null,
"e": 3444,
"s": 3347,
"text": "Being vector images, SVG image never loses quality no matter how they are zoomed out or resized."
},
{
"code": null,
"e": 3493,
"s": 3444,
"text": "SVG images supports interactivity and animation."
},
{
"code": null,
"e": 3542,
"s": 3493,
"text": "SVG images supports interactivity and animation."
},
{
"code": null,
"e": 3565,
"s": 3542,
"text": "SVG is a W3C standard."
},
{
"code": null,
"e": 3588,
"s": 3565,
"text": "SVG is a W3C standard."
},
{
"code": null,
"e": 3665,
"s": 3588,
"text": "Others image formats like raster images can also be clubbed with SVG images."
},
{
"code": null,
"e": 3742,
"s": 3665,
"text": "Others image formats like raster images can also be clubbed with SVG images."
},
{
"code": null,
"e": 3789,
"s": 3742,
"text": "SVG integrates well with XSLT and DOM of HTML."
},
{
"code": null,
"e": 3836,
"s": 3789,
"text": "SVG integrates well with XSLT and DOM of HTML."
},
{
"code": null,
"e": 3887,
"s": 3836,
"text": "Following are the advantages of using SVG images −"
},
{
"code": null,
"e": 3938,
"s": 3887,
"text": "Use any text editor to create and edit SVG images."
},
{
"code": null,
"e": 3989,
"s": 3938,
"text": "Use any text editor to create and edit SVG images."
},
{
"code": null,
"e": 4079,
"s": 3989,
"text": "Being XML based, SVG images are searchable, indexable and can be scripted and compressed."
},
{
"code": null,
"e": 4169,
"s": 4079,
"text": "Being XML based, SVG images are searchable, indexable and can be scripted and compressed."
},
{
"code": null,
"e": 4274,
"s": 4169,
"text": "SVG images are highly scalable as they never loses quality no matter how they are zoomed out or resized."
},
{
"code": null,
"e": 4379,
"s": 4274,
"text": "SVG images are highly scalable as they never loses quality no matter how they are zoomed out or resized."
},
{
"code": null,
"e": 4420,
"s": 4379,
"text": "Good printing quality at any resolution."
},
{
"code": null,
"e": 4461,
"s": 4420,
"text": "Good printing quality at any resolution."
},
{
"code": null,
"e": 4486,
"s": 4461,
"text": "SVG is an Open Standard."
},
{
"code": null,
"e": 4511,
"s": 4486,
"text": "SVG is an Open Standard."
},
{
"code": null,
"e": 4565,
"s": 4511,
"text": "Following are the disadvantages of using SVG images −"
},
{
"code": null,
"e": 4647,
"s": 4565,
"text": "Being text format size is larger then compared to binary formatted raster images."
},
{
"code": null,
"e": 4729,
"s": 4647,
"text": "Being text format size is larger then compared to binary formatted raster images."
},
{
"code": null,
"e": 4767,
"s": 4729,
"text": "Size can be big even for small image."
},
{
"code": null,
"e": 4805,
"s": 4767,
"text": "Size can be big even for small image."
},
{
"code": null,
"e": 4852,
"s": 4805,
"text": "'rect' tag of SVG is used to draw a rectangle."
},
{
"code": null,
"e": 4898,
"s": 4852,
"text": "'circle' tag of SVG is used to draw a circle."
},
{
"code": null,
"e": 4946,
"s": 4898,
"text": "'ellipse' tag of SVG is used to draw a ellipse."
},
{
"code": null,
"e": 4988,
"s": 4946,
"text": "'line' tag of SVG is used to draw a line."
},
{
"code": null,
"e": 5080,
"s": 4988,
"text": "'polygon' tag of SVG is used to draw a closed shape consisting of connected straight lines."
},
{
"code": null,
"e": 5171,
"s": 5080,
"text": "'polyline' tag of SVG is used to draw a open shape consisting of connected straight lines."
},
{
"code": null,
"e": 5215,
"s": 5171,
"text": "'path' tag of SVG is used to draw any path."
},
{
"code": null,
"e": 5255,
"s": 5215,
"text": "'text' tag of SVG is used to draw text."
},
{
"code": null,
"e": 5332,
"s": 5255,
"text": "'x' attribute of text tag of SVG represents the x axis cordinates of glyphs."
},
{
"code": null,
"e": 5409,
"s": 5332,
"text": "'y' attribute of text tag of SVG represents the y axis cordinates of glyphs."
},
{
"code": null,
"e": 5483,
"s": 5409,
"text": "'dx' attribute of text tag of SVG represents the shift along with x-axis."
},
{
"code": null,
"e": 5557,
"s": 5483,
"text": "'dy' attribute of text tag of SVG represents the shift along with y-axis."
},
{
"code": null,
"e": 5644,
"s": 5557,
"text": "'rotation' attribute of text tag of SVG sets the rotation to be applied to all glyphs."
},
{
"code": null,
"e": 5725,
"s": 5644,
"text": "'textlength' attribute of text tag of SVG sets the rendering length of the text."
},
{
"code": null,
"e": 5798,
"s": 5725,
"text": "'stroke' property defines color of text, line or outline of any element."
},
{
"code": null,
"e": 5881,
"s": 5798,
"text": "'stroke-width' property defines thickness of text, line or outline of any element."
},
{
"code": null,
"e": 5975,
"s": 5881,
"text": "'stroke-linecap' property defines different types of ending of a line or outline of any path."
},
{
"code": null,
"e": 6032,
"s": 5975,
"text": "'stroke-dasharray' property used to create dashed lines."
},
{
"code": null,
"e": 6240,
"s": 6032,
"text": "SVG uses <filter> element to define filters. <filter> element uses an id attribute to uniquely identify it.Filters are defined within <def> elements and then are referenced by graphics elements by their ids."
},
{
"code": null,
"e": 6329,
"s": 6240,
"text": "SVG provides a rich set of filters. Following is the list of the commonly used filters −"
},
{
"code": null,
"e": 6337,
"s": 6329,
"text": "feBlend"
},
{
"code": null,
"e": 6345,
"s": 6337,
"text": "feBlend"
},
{
"code": null,
"e": 6359,
"s": 6345,
"text": "feColorMatrix"
},
{
"code": null,
"e": 6373,
"s": 6359,
"text": "feColorMatrix"
},
{
"code": null,
"e": 6393,
"s": 6373,
"text": "feComponentTransfer"
},
{
"code": null,
"e": 6413,
"s": 6393,
"text": "feComponentTransfer"
},
{
"code": null,
"e": 6425,
"s": 6413,
"text": "feComposite"
},
{
"code": null,
"e": 6437,
"s": 6425,
"text": "feComposite"
},
{
"code": null,
"e": 6454,
"s": 6437,
"text": "feConvolveMatrix"
},
{
"code": null,
"e": 6471,
"s": 6454,
"text": "feConvolveMatrix"
},
{
"code": null,
"e": 6489,
"s": 6471,
"text": "feDiffuseLighting"
},
{
"code": null,
"e": 6507,
"s": 6489,
"text": "feDiffuseLighting"
},
{
"code": null,
"e": 6525,
"s": 6507,
"text": "feDisplacementMap"
},
{
"code": null,
"e": 6543,
"s": 6525,
"text": "feDisplacementMap"
},
{
"code": null,
"e": 6692,
"s": 6543,
"text": "SVG uses <pattern> element to define patterns. Patterns are defined using <pattern> element and are used to fill graphics elements in tiled fashion."
},
{
"code": null,
"e": 6813,
"s": 6692,
"text": "Gradient refers to smooth transition of one color to another color within a shape. SVG provides two types of gradients −"
},
{
"code": null,
"e": 6830,
"s": 6813,
"text": "Linear Gradients"
},
{
"code": null,
"e": 6847,
"s": 6830,
"text": "Linear Gradients"
},
{
"code": null,
"e": 6864,
"s": 6847,
"text": "Radial Gradients"
},
{
"code": null,
"e": 6881,
"s": 6864,
"text": "Radial Gradients"
},
{
"code": null,
"e": 7028,
"s": 6881,
"text": "Linear Gradients represents linear transition of one color to another from one direction to another. It is defined using <linearGradient> element."
},
{
"code": null,
"e": 7177,
"s": 7028,
"text": "Radial Gradients represents circular transition of one color to another from one direction to another. It is defined using <radialGradient> element."
},
{
"code": null,
"e": 7299,
"s": 7177,
"text": "Yes! SVG images can be made responsive to user actions. SVG supports pointer events, keyboard events and document events."
},
{
"code": null,
"e": 7427,
"s": 7299,
"text": "Yes! SVG supports JavaScript/ECMAScript functions. Script block is to be in CDATA block consider character data support in XML."
},
{
"code": null,
"e": 7541,
"s": 7427,
"text": "Yes! SVG elements support mouse events, keyboard events. We've used onClick event to call a javascript functions."
},
{
"code": null,
"e": 7640,
"s": 7541,
"text": "In javascript functions, document represents SVG document and can be used to get the SVG elements."
},
{
"code": null,
"e": 7765,
"s": 7640,
"text": "In javascript functions, event represents current event and can be used to get the target element on which event got raised."
},
{
"code": null,
"e": 7956,
"s": 7765,
"text": "<a> element is used to create hyperlink. \"xlink:href\" attribute is used to pass the IRI (Internationalized Resource Identifiers) which is complementary to URI (Uniform Resource Identifiers)."
},
{
"code": null,
"e": 8005,
"s": 7956,
"text": "SVG image can be embedded using following ways −"
},
{
"code": null,
"e": 8021,
"s": 8005,
"text": "using embed tag"
},
{
"code": null,
"e": 8037,
"s": 8021,
"text": "using embed tag"
},
{
"code": null,
"e": 8054,
"s": 8037,
"text": "using object tag"
},
{
"code": null,
"e": 8071,
"s": 8054,
"text": "using object tag"
},
{
"code": null,
"e": 8084,
"s": 8071,
"text": "using iframe"
},
{
"code": null,
"e": 8097,
"s": 8084,
"text": "using iframe"
},
{
"code": null,
"e": 8189,
"s": 8097,
"text": "'rect' tag of SVG is used to draw a rectangle. Following are the commonly used attributes −"
},
{
"code": null,
"e": 8256,
"s": 8189,
"text": "x − x-axis co-ordinate of top left of the rectangle. Default is 0."
},
{
"code": null,
"e": 8323,
"s": 8256,
"text": "x − x-axis co-ordinate of top left of the rectangle. Default is 0."
},
{
"code": null,
"e": 8390,
"s": 8323,
"text": "y − y-axis co-ordinate of top left of the rectangle. Default is 0."
},
{
"code": null,
"e": 8457,
"s": 8390,
"text": "y − y-axis co-ordinate of top left of the rectangle. Default is 0."
},
{
"code": null,
"e": 8489,
"s": 8457,
"text": "width − width of the rectangle."
},
{
"code": null,
"e": 8521,
"s": 8489,
"text": "width − width of the rectangle."
},
{
"code": null,
"e": 8555,
"s": 8521,
"text": "height − height of the rectangle."
},
{
"code": null,
"e": 8589,
"s": 8555,
"text": "height − height of the rectangle."
},
{
"code": null,
"e": 8645,
"s": 8589,
"text": "rx − used to round the corner of the rounded rectangle."
},
{
"code": null,
"e": 8701,
"s": 8645,
"text": "rx − used to round the corner of the rounded rectangle."
},
{
"code": null,
"e": 8757,
"s": 8701,
"text": "ry − used to round the corner of the rounded rectangle."
},
{
"code": null,
"e": 8813,
"s": 8757,
"text": "ry − used to round the corner of the rounded rectangle."
},
{
"code": null,
"e": 8823,
"s": 8813,
"text": "Example −"
},
{
"code": null,
"e": 8945,
"s": 8823,
"text": "<rect \nx = \"100\" y = \"30\" \nwidth = \"300\" height = \"100\" \nstyle = \"fill:rgb(121,0,121);stroke-width:3;stroke:rgb(0,0,0)\" >"
},
{
"code": null,
"e": 9036,
"s": 8945,
"text": "'circle' tag of SVG is used to draw a circle. Following are the commonly used attributes −"
},
{
"code": null,
"e": 9103,
"s": 9036,
"text": "cx − x-axis co-ordinate of the center of the circle. Default is 0."
},
{
"code": null,
"e": 9170,
"s": 9103,
"text": "cx − x-axis co-ordinate of the center of the circle. Default is 0."
},
{
"code": null,
"e": 9237,
"s": 9170,
"text": "cy − y-axis co-ordinate of the center of the circle. Default is 0."
},
{
"code": null,
"e": 9304,
"s": 9237,
"text": "cy − y-axis co-ordinate of the center of the circle. Default is 0."
},
{
"code": null,
"e": 9330,
"s": 9304,
"text": "r − radius of the circle."
},
{
"code": null,
"e": 9356,
"s": 9330,
"text": "r − radius of the circle."
},
{
"code": null,
"e": 9366,
"s": 9356,
"text": "Example −"
},
{
"code": null,
"e": 9471,
"s": 9366,
"text": "<circle \ncx = \"100\" cy = \"100\" r = \"50\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"rgb(121,0,121)\" >"
},
{
"code": null,
"e": 9564,
"s": 9471,
"text": "'ellipse' tag of SVG is used to draw a ellipse. Following are the commonly used attributes −"
},
{
"code": null,
"e": 9632,
"s": 9564,
"text": "cx − x-axis co-ordinate of the center of the ellipse. Default is 0."
},
{
"code": null,
"e": 9700,
"s": 9632,
"text": "cx − x-axis co-ordinate of the center of the ellipse. Default is 0."
},
{
"code": null,
"e": 9768,
"s": 9700,
"text": "cy − y-axis co-ordinate of the center of the ellipse. Default is 0."
},
{
"code": null,
"e": 9836,
"s": 9768,
"text": "cy − y-axis co-ordinate of the center of the ellipse. Default is 0."
},
{
"code": null,
"e": 9871,
"s": 9836,
"text": "rx − x-axis radius of the ellipse."
},
{
"code": null,
"e": 9906,
"s": 9871,
"text": "rx − x-axis radius of the ellipse."
},
{
"code": null,
"e": 9941,
"s": 9906,
"text": "ry − y-axis radius of the ellipse."
},
{
"code": null,
"e": 9976,
"s": 9941,
"text": "ry − y-axis radius of the ellipse."
},
{
"code": null,
"e": 9986,
"s": 9976,
"text": "Example −"
},
{
"code": null,
"e": 10103,
"s": 9986,
"text": "<ellipse \ncx = \"100\" cy = \"100\" \nrx = \"90\" ry = \"50\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"rgb(121,0,121)\">"
},
{
"code": null,
"e": 10190,
"s": 10103,
"text": "'line' tag of SVG is used to draw a line. Following are the commonly used attributes −"
},
{
"code": null,
"e": 10248,
"s": 10190,
"text": "x1 − x-axis co-ordinate of the start point. Default is 0."
},
{
"code": null,
"e": 10306,
"s": 10248,
"text": "x1 − x-axis co-ordinate of the start point. Default is 0."
},
{
"code": null,
"e": 10364,
"s": 10306,
"text": "y1 − y-axis co-ordinate of the start point. Default is 0."
},
{
"code": null,
"e": 10422,
"s": 10364,
"text": "y1 − y-axis co-ordinate of the start point. Default is 0."
},
{
"code": null,
"e": 10478,
"s": 10422,
"text": "x2 − x-axis co-ordinate of the end point. Default is 0."
},
{
"code": null,
"e": 10534,
"s": 10478,
"text": "x2 − x-axis co-ordinate of the end point. Default is 0."
},
{
"code": null,
"e": 10590,
"s": 10534,
"text": "y2 − y-axis co-ordinate of the end point. Default is 0."
},
{
"code": null,
"e": 10646,
"s": 10590,
"text": "y2 − y-axis co-ordinate of the end point. Default is 0."
},
{
"code": null,
"e": 10656,
"s": 10646,
"text": "Example −"
},
{
"code": null,
"e": 10770,
"s": 10656,
"text": "<line \nx1 = \"20\" y1 = \"20\" \nx2 = \"150\" y2 = \"150\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"rgb(121,0,121)\">"
},
{
"code": null,
"e": 10861,
"s": 10770,
"text": "'polygon' tag of SVG is used to draw a polygon. Following is the commonly used attribute −"
},
{
"code": null,
"e": 10907,
"s": 10861,
"text": "points - List of points to make up a polygon."
},
{
"code": null,
"e": 10917,
"s": 10907,
"text": "Example −"
},
{
"code": null,
"e": 11055,
"s": 10917,
"text": "<polygon \npoints = \"150,75 258,137.5 258,262.5 150,325 42,262.6 42,137.5\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"rgb(121,0,121)\">"
},
{
"code": null,
"e": 11158,
"s": 11055,
"text": "'polyline' tag of SVG is used to draw a open ended polygon. Following is the commonly used attribute −"
},
{
"code": null,
"e": 11204,
"s": 11158,
"text": "points − List of points to make up a polygon."
},
{
"code": null,
"e": 11214,
"s": 11204,
"text": "Example −"
},
{
"code": null,
"e": 11343,
"s": 11214,
"text": "<polyline \npoints = \"150,75 258,137.5 258,262.5 150,325 42,262.6 42,137.5\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"none\">"
},
{
"code": null,
"e": 11438,
"s": 11343,
"text": "'path' tag of SVG is used to draw a free flow path. Following is the commonly used attribute −"
},
{
"code": null,
"e": 11503,
"s": 11438,
"text": "d − path data,usually a set of commands like moveto, lineto etc."
},
{
"code": null,
"e": 11513,
"s": 11503,
"text": "Example −"
},
{
"code": null,
"e": 11622,
"s": 11513,
"text": "<path \nd = \"M 100 100 L 300 100 L 200 300 z\" \nstroke = \"black\" \nstroke-width = \"3\" \nfill = \"rgb(121,0,121)\">"
},
{
"code": null,
"e": 11686,
"s": 11622,
"text": "M command of path element move from one point to another point."
},
{
"code": null,
"e": 11728,
"s": 11686,
"text": "L command of path element creates a line."
},
{
"code": null,
"e": 11781,
"s": 11728,
"text": "H command of path element creates a horizontal line."
},
{
"code": null,
"e": 11832,
"s": 11781,
"text": "V command of path element creates a vertical line."
},
{
"code": null,
"e": 11875,
"s": 11832,
"text": "C command of path element creates a curve."
},
{
"code": null,
"e": 11925,
"s": 11875,
"text": "S command of path element creates a smooth curve."
},
{
"code": null,
"e": 11985,
"s": 11925,
"text": "Q command of path element creates a quadratic Bezier curve."
},
{
"code": null,
"e": 12052,
"s": 11985,
"text": "T command of path element creates a smooth quadratic Bezier curve."
},
{
"code": null,
"e": 12104,
"s": 12052,
"text": "A command of path element creates a elliptical arc."
},
{
"code": null,
"e": 12147,
"s": 12104,
"text": "Z command of path element closes the path."
},
{
"code": null,
"e": 12284,
"s": 12147,
"text": "When commands are in Upper case, these represents absolute path. In case their lower case commands are used, then relative path is used."
},
{
"code": null,
"e": 12571,
"s": 12284,
"text": "Further you can go through your past assignments you have done with the subject and make sure you are able to speak confidently on them. If you are fresher then interviewer does not expect you will answer very complex questions, rather you have to make your basics concepts very strong."
},
{
"code": null,
"e": 12901,
"s": 12571,
"text": "Second it really doesn't matter much if you could not answer few questions but it matters that whatever you answered, you must have answered with confidence. So just feel confident during your interview. We at tutorialspoint wish you best luck to have a good interviewer and all the very best for your future endeavor. Cheers :-)"
},
{
"code": null,
"e": 12936,
"s": 12901,
"text": "\n 45 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 12967,
"s": 12936,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 12974,
"s": 12967,
"text": " Print"
},
{
"code": null,
"e": 12985,
"s": 12974,
"text": " Add Notes"
}
] |
Detach interrupts from a source in Arduino | We have seen that in order to attach interrupts to a source, we use the .attachInterrupt() function, with the required arguments.
For example, for attaching the interrupts to a specific pin, we use
attachInterrupt(digitalPinToInterrupt(pin), ISR, mode);
In the same way, to detach the interrupt from a source, we can call the detachInterrupt() function. This will simply disable that particular interrupt. The recommended syntax for disabling pin interrupts is −
detachInterrupt(digitalPinToInterrupt(pin))
where pin is the pin number on which you wish to disable the interrupt. | [
{
"code": null,
"e": 1192,
"s": 1062,
"text": "We have seen that in order to attach interrupts to a source, we use the .attachInterrupt() function, with the required arguments."
},
{
"code": null,
"e": 1260,
"s": 1192,
"text": "For example, for attaching the interrupts to a specific pin, we use"
},
{
"code": null,
"e": 1316,
"s": 1260,
"text": "attachInterrupt(digitalPinToInterrupt(pin), ISR, mode);"
},
{
"code": null,
"e": 1525,
"s": 1316,
"text": "In the same way, to detach the interrupt from a source, we can call the detachInterrupt() function. This will simply disable that particular interrupt. The recommended syntax for disabling pin interrupts is −"
},
{
"code": null,
"e": 1569,
"s": 1525,
"text": "detachInterrupt(digitalPinToInterrupt(pin))"
},
{
"code": null,
"e": 1641,
"s": 1569,
"text": "where pin is the pin number on which you wish to disable the interrupt."
}
] |
Unconventional Deep Learning Techniques for Tabular Data | Deep Learning for Tabular Data | Medium | Towards Data Science | In recent years, Deep Learning has made huge strides in the fields of Computer Vision and Natural Language Processing. And as a result, deep learning techniques have often been confined to image data or sequential (text) data. What about tabular data? The traditional method of information storage and retrieval in many organizations is arguably the most important when it comes to business use cases. But data in our tables/dataframes seem to be content with the use of simple Multi-layer feedforward networks in the Deep Learning arena. Although one can argue that Recurrent Neural Networks (RNNs) are often used on tabular time series data, the applications of these methodologies on data without a time series component in it is very limited. In this blog post, we’ll look at the application of some deep learning techniques, usually used on image or text data, on non-time series tabular data in the decreasing level of conventionality.
Conventionally, autoencoders have been used for non-linear dimensionality reduction. Say we have a dataset where the number of features is way more than what we’d prefer it to be, we can use autoencoders to bring the feature set to the desired feature size through complex non-linear functions that we needn’t have to worry about! This is a more effective technique compared to linear dimensionality reduction methods like PCA (Principal Component Analysis) or other conventional non-linear techniques like LLE (Locally Linear Embeddings).
Autoencoders are trained on the training feature set without any labels, i.e., they try to predict as output whatever the input was. This would be a simple task if the hidden layers were wide enough to capture all of our input data. Thereby, the requirement, for a neural network to be an autoencoder, is to have at least one layer, the bottleneck layer, of lower dimension compared to the input or the output dimension. This is usually the embedding layer or the reduced feature set we want to use. The metric when training the network can be the usual mean squared error or mean absolute error. If the original data is x and the reconstructed output generated by the autoencoder is x_hat, we try to minimize
L(x, x_hat) = |x — x_hat|2
After training the encoder and decoder parts of the network together, we use only the encoder part of the model as our dimensionality reduction/feature extraction algorithm. The sample code in Keras is as follows:
from keras.layers import Dense, Dropoutfrom keras.models import Sequential, Modelfrom keras import metrics, InputMETRICS = [ metrics.RootMeanSquaredError(name='rms'), metrics.MeanAbsoluteError(name='mae')]ENCODING_DIM = 16 #Desired DimensionBATCH_SIZE = 64EPOCHS = 100def make_and_train_autoencoder(X_train, metrics=METRICS): len_input_output = X_train.shape[-1] input_ = Input(shape=(len_input_output,)) encoded = Dense(units=ENCODING_DIM*2, activation="relu")(input_) bottleneck = Dense(units=ENCODING_DIM, activation="relu")(encoded) decoded = Dense(units=ENCODING_DIM*2, activation="relu")(bottleneck) output = Dense(units=len_input_output, activation="linear")(decoded) #Training is performed on the entire autoencoder autoencoder = Model(inputs=input_, outputs=output) autoencoder.compile(optimizer='adam', loss='mean_squared_error', metrics=[metrics]) autoencoder.fit(X_train, X_train, batch_size=BATCH_SIZE, epochs=EPOCHS) #Use only the encoder part for dimensionality reduction encoder = Model(inputs=input_, outputs=bottleneck) return autoencoder, encoder
The inspiration for Denoising Autoencoders comes from the field of computer vision. As you can see above, they can be used to remove noise from the input data. Denoising Autoencoders (DAEs) can be used similarly on tabular data as most of the data collection processes inherently have some noise. This technique has proven to be key in many Kaggle competitions’ winning solutions (Porto Seguro’s Safe Driver Prediction). Unlike autoencoders, DAEs do not need to satisfy the requirement of having the encoding and decoding parts to the network. In other words, there is no bottleneck, i.e., it is simply a neural network trained to remove noise. Then the question is, how do we train the network?
As explained in the image above, we first corrupt our input data x. The corrupted data x̃ is usually obtained by adding Gaussian noise or by setting some of the input feature values to zero. This is our way of trying to mimic the noise in our datasets. We then pass x̃ through the DAE we have designed to get the reconstructed output x_hat with the same dimensions as the input x. The loss function is similar to that of a usual autoencoder. A DAE tries to minimize the difference between the output x_hat and the original data x, thereby giving it the ability to eliminate the influence of noise and extract features from the corrupted data. The sample code is as follows:
#Change mean and scale of the noise according to your datanoise = np.random.normal(loc=0, scale=0.5, size=X_train.shape)X_train_noisy = X_train + noiselen_input_output = X_train.shape[-1]def make_dae(metrics=METRICS): dae = Sequential([ Dense(units=len_input_output*2, activation="relu", input_shape=(len_input_output,)), Dropout(0.5), #Add dropout layers if required Dense(units=len_input_output*2, activation="relu"), Dense(units=len_input_output*2, activation="relu"), Dense(units=len_input_output, activation="linear"), ]) dae.compile( optimizer='adam', loss='mean_squared_error', metrics=[metrics] ) return daedae = make_dae()history = dae.fit( X_train_noisy, X_train, batch_size = BATCH_SIZE, epochs = EPOCHS )
We usually train the denoising autoencoder only on the training set. Once the model is trained, we can pass the original data x and the test set, say x` through the DAE to get a denoised version of the dataset.
Many of us might have come across the famous character-level language model for generating Shakespeare like text. What if we trained a language model on our training data (in CSV/txt format)? The primary question is, why would we want to do that? The simple answer — Data Imbalance. Most real-world datasets have a huge difference in the number of training examples for each class/label. Take for example fraud detection. Only about ~0.05% of all transactions are fraudulent. Thereby, we might want to generate more training examples of the minority class to tackle the imbalance problem.
By training a character-level language model on our minority class data (only feature values), we can generate records similar to the minority class just like we generate text by training the model on a set of poems. I won’t dive deep into the specifics of training and sampling from a character-level language model but I encourage you to go through Andrej Karpathy’s blog (or Week 1 of the Sequence Models course on Coursera).
Thankfully, there are libraries like gretel-synthetics that do the above-described job for us! Gretel’s library also has other options like enabling differential privacy in the synthetic datasets produced. Their blog posts are a great way to understand and learn about their library. The following sample code can be used to generate synthetic dataframes:
!pip install gretel-synthetics --upgradefrom pathlib import Pathfrom gretel_synthetics.batch import DataFrameBatchimport pandas as pdsource_df = pd.read_csv("diabetic_data.csv") #File Pathconfig_template = { "max_lines": 20000, # maximum lines of training data. Set to ``0`` to train on entire dataframe "max_line_len": 2048, # the max line length for input training data "epochs": 15, # Gretel recommends 15-50 epochs with GPU for best performance "vocab_size": 20000, # tokenizer model vocabulary size "gen_lines": 100, # the number of generated text lines "dp": True, # train with differential privacy enabled (privacy assurances, but reduced accuracy) "field_delimiter": ",", # Must be specified "overwrite": True, "checkpoint_dir": str(Path.cwd() / "checkpoints")}batcher = DataFrameBatch(df=source_df, config=config_template)batcher.create_training_data()batcher.train_all_batches()status = batcher.generate_all_batch_lines()synthetic_df = batcher.batches_to_df()synthetic_df.head(10)
All of the deep learning techniques discussed above fall in the category of self-supervised or unsupervised learning. These are often used as preprocessing steps before training the actual model for our classification or regression tasks. The effectiveness of these steps will depend on the data you have in hand and parameters like how long you want to train the language model or how much compression you want in your feature set.
Congratulations on making it to the end of the article! Hopefully, the article was able to equip you with new and innovative ways of handling your tabular data. So, the next time you think that XGBoost or LightGBM is all there is to tabular data, think again!
Stay tuned for another post on similar lines where I talk about the application of algorithms like Generative Adversarial Networks (GANs) and Latent Dirichlet Allocation (LDA) on tabular data. Join me in making tabular data great again!
Feel free to comment on the post or connect with me on LinkedIn. I look forward to your feedback and suggestions.
References:
[1] F. Chollet, The Keras Blog, https://blog.keras.io/building-autoencoders-in-keras.html.
[2] A. Watson, J. Myers, A. Steier, Gretel.ai on Medium, https://medium.com/gretel-ai.
[3] M. Jahrer, Kaggle, https://www.kaggle.com/c/porto-seguro-safe-driver-prediction/discussion/44629. | [
{
"code": null,
"e": 1114,
"s": 172,
"text": "In recent years, Deep Learning has made huge strides in the fields of Computer Vision and Natural Language Processing. And as a result, deep learning techniques have often been confined to image data or sequential (text) data. What about tabular data? The traditional method of information storage and retrieval in many organizations is arguably the most important when it comes to business use cases. But data in our tables/dataframes seem to be content with the use of simple Multi-layer feedforward networks in the Deep Learning arena. Although one can argue that Recurrent Neural Networks (RNNs) are often used on tabular time series data, the applications of these methodologies on data without a time series component in it is very limited. In this blog post, we’ll look at the application of some deep learning techniques, usually used on image or text data, on non-time series tabular data in the decreasing level of conventionality."
},
{
"code": null,
"e": 1654,
"s": 1114,
"text": "Conventionally, autoencoders have been used for non-linear dimensionality reduction. Say we have a dataset where the number of features is way more than what we’d prefer it to be, we can use autoencoders to bring the feature set to the desired feature size through complex non-linear functions that we needn’t have to worry about! This is a more effective technique compared to linear dimensionality reduction methods like PCA (Principal Component Analysis) or other conventional non-linear techniques like LLE (Locally Linear Embeddings)."
},
{
"code": null,
"e": 2364,
"s": 1654,
"text": "Autoencoders are trained on the training feature set without any labels, i.e., they try to predict as output whatever the input was. This would be a simple task if the hidden layers were wide enough to capture all of our input data. Thereby, the requirement, for a neural network to be an autoencoder, is to have at least one layer, the bottleneck layer, of lower dimension compared to the input or the output dimension. This is usually the embedding layer or the reduced feature set we want to use. The metric when training the network can be the usual mean squared error or mean absolute error. If the original data is x and the reconstructed output generated by the autoencoder is x_hat, we try to minimize"
},
{
"code": null,
"e": 2391,
"s": 2364,
"text": "L(x, x_hat) = |x — x_hat|2"
},
{
"code": null,
"e": 2605,
"s": 2391,
"text": "After training the encoder and decoder parts of the network together, we use only the encoder part of the model as our dimensionality reduction/feature extraction algorithm. The sample code in Keras is as follows:"
},
{
"code": null,
"e": 3844,
"s": 2605,
"text": "from keras.layers import Dense, Dropoutfrom keras.models import Sequential, Modelfrom keras import metrics, InputMETRICS = [ metrics.RootMeanSquaredError(name='rms'), metrics.MeanAbsoluteError(name='mae')]ENCODING_DIM = 16 #Desired DimensionBATCH_SIZE = 64EPOCHS = 100def make_and_train_autoencoder(X_train, metrics=METRICS): len_input_output = X_train.shape[-1] input_ = Input(shape=(len_input_output,)) encoded = Dense(units=ENCODING_DIM*2, activation=\"relu\")(input_) bottleneck = Dense(units=ENCODING_DIM, activation=\"relu\")(encoded) decoded = Dense(units=ENCODING_DIM*2, activation=\"relu\")(bottleneck) output = Dense(units=len_input_output, activation=\"linear\")(decoded) #Training is performed on the entire autoencoder autoencoder = Model(inputs=input_, outputs=output) autoencoder.compile(optimizer='adam', loss='mean_squared_error', metrics=[metrics]) autoencoder.fit(X_train, X_train, batch_size=BATCH_SIZE, epochs=EPOCHS) #Use only the encoder part for dimensionality reduction encoder = Model(inputs=input_, outputs=bottleneck) return autoencoder, encoder"
},
{
"code": null,
"e": 4540,
"s": 3844,
"text": "The inspiration for Denoising Autoencoders comes from the field of computer vision. As you can see above, they can be used to remove noise from the input data. Denoising Autoencoders (DAEs) can be used similarly on tabular data as most of the data collection processes inherently have some noise. This technique has proven to be key in many Kaggle competitions’ winning solutions (Porto Seguro’s Safe Driver Prediction). Unlike autoencoders, DAEs do not need to satisfy the requirement of having the encoding and decoding parts to the network. In other words, there is no bottleneck, i.e., it is simply a neural network trained to remove noise. Then the question is, how do we train the network?"
},
{
"code": null,
"e": 5214,
"s": 4540,
"text": "As explained in the image above, we first corrupt our input data x. The corrupted data x̃ is usually obtained by adding Gaussian noise or by setting some of the input feature values to zero. This is our way of trying to mimic the noise in our datasets. We then pass x̃ through the DAE we have designed to get the reconstructed output x_hat with the same dimensions as the input x. The loss function is similar to that of a usual autoencoder. A DAE tries to minimize the difference between the output x_hat and the original data x, thereby giving it the ability to eliminate the influence of noise and extract features from the corrupted data. The sample code is as follows:"
},
{
"code": null,
"e": 6033,
"s": 5214,
"text": "#Change mean and scale of the noise according to your datanoise = np.random.normal(loc=0, scale=0.5, size=X_train.shape)X_train_noisy = X_train + noiselen_input_output = X_train.shape[-1]def make_dae(metrics=METRICS): dae = Sequential([ Dense(units=len_input_output*2, activation=\"relu\", input_shape=(len_input_output,)), Dropout(0.5), #Add dropout layers if required Dense(units=len_input_output*2, activation=\"relu\"), Dense(units=len_input_output*2, activation=\"relu\"), Dense(units=len_input_output, activation=\"linear\"), ]) dae.compile( optimizer='adam', loss='mean_squared_error', metrics=[metrics] ) return daedae = make_dae()history = dae.fit( X_train_noisy, X_train, batch_size = BATCH_SIZE, epochs = EPOCHS )"
},
{
"code": null,
"e": 6244,
"s": 6033,
"text": "We usually train the denoising autoencoder only on the training set. Once the model is trained, we can pass the original data x and the test set, say x` through the DAE to get a denoised version of the dataset."
},
{
"code": null,
"e": 6833,
"s": 6244,
"text": "Many of us might have come across the famous character-level language model for generating Shakespeare like text. What if we trained a language model on our training data (in CSV/txt format)? The primary question is, why would we want to do that? The simple answer — Data Imbalance. Most real-world datasets have a huge difference in the number of training examples for each class/label. Take for example fraud detection. Only about ~0.05% of all transactions are fraudulent. Thereby, we might want to generate more training examples of the minority class to tackle the imbalance problem."
},
{
"code": null,
"e": 7262,
"s": 6833,
"text": "By training a character-level language model on our minority class data (only feature values), we can generate records similar to the minority class just like we generate text by training the model on a set of poems. I won’t dive deep into the specifics of training and sampling from a character-level language model but I encourage you to go through Andrej Karpathy’s blog (or Week 1 of the Sequence Models course on Coursera)."
},
{
"code": null,
"e": 7618,
"s": 7262,
"text": "Thankfully, there are libraries like gretel-synthetics that do the above-described job for us! Gretel’s library also has other options like enabling differential privacy in the synthetic datasets produced. Their blog posts are a great way to understand and learn about their library. The following sample code can be used to generate synthetic dataframes:"
},
{
"code": null,
"e": 8673,
"s": 7618,
"text": "!pip install gretel-synthetics --upgradefrom pathlib import Pathfrom gretel_synthetics.batch import DataFrameBatchimport pandas as pdsource_df = pd.read_csv(\"diabetic_data.csv\") #File Pathconfig_template = { \"max_lines\": 20000, # maximum lines of training data. Set to ``0`` to train on entire dataframe \"max_line_len\": 2048, # the max line length for input training data \"epochs\": 15, # Gretel recommends 15-50 epochs with GPU for best performance \"vocab_size\": 20000, # tokenizer model vocabulary size \"gen_lines\": 100, # the number of generated text lines \"dp\": True, # train with differential privacy enabled (privacy assurances, but reduced accuracy) \"field_delimiter\": \",\", # Must be specified \"overwrite\": True, \"checkpoint_dir\": str(Path.cwd() / \"checkpoints\")}batcher = DataFrameBatch(df=source_df, config=config_template)batcher.create_training_data()batcher.train_all_batches()status = batcher.generate_all_batch_lines()synthetic_df = batcher.batches_to_df()synthetic_df.head(10)"
},
{
"code": null,
"e": 9106,
"s": 8673,
"text": "All of the deep learning techniques discussed above fall in the category of self-supervised or unsupervised learning. These are often used as preprocessing steps before training the actual model for our classification or regression tasks. The effectiveness of these steps will depend on the data you have in hand and parameters like how long you want to train the language model or how much compression you want in your feature set."
},
{
"code": null,
"e": 9366,
"s": 9106,
"text": "Congratulations on making it to the end of the article! Hopefully, the article was able to equip you with new and innovative ways of handling your tabular data. So, the next time you think that XGBoost or LightGBM is all there is to tabular data, think again!"
},
{
"code": null,
"e": 9603,
"s": 9366,
"text": "Stay tuned for another post on similar lines where I talk about the application of algorithms like Generative Adversarial Networks (GANs) and Latent Dirichlet Allocation (LDA) on tabular data. Join me in making tabular data great again!"
},
{
"code": null,
"e": 9717,
"s": 9603,
"text": "Feel free to comment on the post or connect with me on LinkedIn. I look forward to your feedback and suggestions."
},
{
"code": null,
"e": 9729,
"s": 9717,
"text": "References:"
},
{
"code": null,
"e": 9820,
"s": 9729,
"text": "[1] F. Chollet, The Keras Blog, https://blog.keras.io/building-autoencoders-in-keras.html."
},
{
"code": null,
"e": 9907,
"s": 9820,
"text": "[2] A. Watson, J. Myers, A. Steier, Gretel.ai on Medium, https://medium.com/gretel-ai."
}
] |
Running Apache Flink with RocksDB on Azure Kubernetes Service | by Hunter Kempf | Towards Data Science | Recently I was looking into how to deploy an Apache Flink cluster that uses RocksDB as the backend state and found a lack of detailed documentation on the subject. I was able to piece together how to deploy this from the Flink documentation and some stack overflow posts but there wasn’t a clear how-to guide anywhere. My goal with this post is to show a step by step guide for how to do this as of Flink 1.11.0. This guide can be followed with minimal modifications for AWS deployments. I will note the changes.
Before you start on this guide it is expected that you have an Azure Subscription with AKS and a Storage Account with a blob store running. If you need help getting that up and running see the links section at the bottom of this article with links to the Azure guides for how to set those services up.
It is also expected that you have flink, the azure-cli and docker downloaded see the bottom for links to those if you don't have them installed.
Using Azure Blob storage or AWS S3 storage for Flink backend state requires an extra plugin jar that is not built into the base Flink docker image. To add the needed jar you first need to download the apache/flink-docker repo from GitHub.
github.com
Once you have downloaded the repo navigate to the folder with the version of Flink you want to deploy. In my case, I want to use 1.11 then if you plan on using scala code, navigate to the directory that has the version of scala you are using. I use scala 2.12 so for me that folder is scala_2.12-debian. If you use java or python I believe you can use either version. Inside that folder you should see two files: Dockerfile and docker-entrypoint.sh. The file we will be editing is Dockerfile. Open that up in a text editor and add the following lines just above the COPY docker-entrypoint.sh line.
For Azure Blob Storage:
# install Flink Azure FS Hadoop pluginRUN mkdir ./plugins/azure-fs-hadoopCOPY ./flink-azure-fs-hadoop-1.11.0.jar ./plugins/azure-fs-hadoop/
For AWS S3 you have 2 options Presto and Hadoop:
Presto:
# install Flink S3 FS Presto pluginRUN mkdir ./plugins/s3-fs-prestoCOPY ./flink-s3-fs-presto-1.11.0.jar ./plugins/s3-fs-presto/
Hadoop:
# install Flink S3 FS Hadoop pluginRUN mkdir ./plugins/s3-fs-hadoopCOPY ./flink-s3-fs-hadoop-1.11.0.jar ./plugins/s3-fs-hadoop/
Next from the Flink 1.11.0 folder navigate to the /opt/ directory and copy the jar you need for your backend to the same directory as the Dockerfile you edited. For Azure that jar is flink-azure-fs-hadoop-1.11.0.jar
Now Build and Push the Docker Image to DockerHub using the following command replacing hunterkempf with your DockerHub username and changing the tag to reflect the version of flink, scala and the plugin you added to it:
docker build --tag hunterkempf/flink:1.11.0_scala_2.12_fs_azure .docker push hunterkempf/flink
ci.apache.org
First, go to the Flink Kubernetes setup page and create the following .yaml files on your computer using a text editor and copying/pasting from the Appendix.
flink-configuration-configmap.yaml
jobmanager-service.yaml
jobmanager-session-deployment.yaml
taskmanager-session-deployment.yaml
flink-configuration-configmap.yaml
You will need to add the following lines to the flink-conf.yaml section:
This example is for a blob store named: flinkblob in a storage account named: flinkstorage. You will also need to get the Access Key to your storage account and paste it after the fs.azure.account.key.flinkstorage.blob.core.windows.net:
state.backend: rocksdbstate.checkpoints.dir: wasbs://[email protected]/flink/fs.azure.account.key.flinkstorage.blob.core.windows.net:
jobmanager-session-deployment.yaml and taskmanager-session-deployment.yaml
Replace the image: flink:1.11.0-scala_2.11 with the name of the docker image you created in part 1 for both jobmanager-session-deployment and taskmanager-session-deployment.
Connect to the AKS Cluster
Run these two commands to connect to your AKS Cluster
az aks install-cliaz aks get-credentials --resource-group myResourceGroup --name myAKSCluster
Deploy Flink to AKS
From the directory that you created the yaml files run the following commands
kubectl create -f flink-configuration-configmap.yamlkubectl create -f jobmanager-service.yamlkubectl create -f jobmanager-session-deployment.yamlkubectl create -f taskmanager-session-deployment.yaml
Now you should have a Flink Cluster running on AKS using Azure Blob Storage as a RocksDB Backend.
View Flink Dashboard
To access the Flink Dashboard run the following commands:
kubectl get pods
this will return something similar to what is below
NAME READY STATUS RESTARTS AGEflink-jobmanager-5f857bf45d-n4mbt 1/1 Running 0 38sflink-taskmanager-68854b7998-2zkd6 1/1 Running 0 31sflink-taskmanager-68854b7998-vvm5m 1/1 Running 0 31s
copy the name of the flink-jobmanager pod and use that for the following command replacing the 5f857bf45d-n4mbtwith your value
kubectl port-forward flink-jobmanager-5f857bf45d-n4mbt 8081:8081
Now open up http://localhost:8081 and you should see the Flink Dashboard
To check your configurations click on the Job Manager tab on the left
you should see something similar with your settings listed out in key value notation.
To test the Deployment you can upload one of the test jar’s from the Flink 1.11.0 folder located in the examples/streaming/ folder
Running the Streaming WordCount job took 1 second and returned no errors for me. If you get an Internal Server Error check the Job Manager Logs to see the error stack trace.
Here are some useful links for further reading | [
{
"code": null,
"e": 684,
"s": 171,
"text": "Recently I was looking into how to deploy an Apache Flink cluster that uses RocksDB as the backend state and found a lack of detailed documentation on the subject. I was able to piece together how to deploy this from the Flink documentation and some stack overflow posts but there wasn’t a clear how-to guide anywhere. My goal with this post is to show a step by step guide for how to do this as of Flink 1.11.0. This guide can be followed with minimal modifications for AWS deployments. I will note the changes."
},
{
"code": null,
"e": 986,
"s": 684,
"text": "Before you start on this guide it is expected that you have an Azure Subscription with AKS and a Storage Account with a blob store running. If you need help getting that up and running see the links section at the bottom of this article with links to the Azure guides for how to set those services up."
},
{
"code": null,
"e": 1131,
"s": 986,
"text": "It is also expected that you have flink, the azure-cli and docker downloaded see the bottom for links to those if you don't have them installed."
},
{
"code": null,
"e": 1370,
"s": 1131,
"text": "Using Azure Blob storage or AWS S3 storage for Flink backend state requires an extra plugin jar that is not built into the base Flink docker image. To add the needed jar you first need to download the apache/flink-docker repo from GitHub."
},
{
"code": null,
"e": 1381,
"s": 1370,
"text": "github.com"
},
{
"code": null,
"e": 1979,
"s": 1381,
"text": "Once you have downloaded the repo navigate to the folder with the version of Flink you want to deploy. In my case, I want to use 1.11 then if you plan on using scala code, navigate to the directory that has the version of scala you are using. I use scala 2.12 so for me that folder is scala_2.12-debian. If you use java or python I believe you can use either version. Inside that folder you should see two files: Dockerfile and docker-entrypoint.sh. The file we will be editing is Dockerfile. Open that up in a text editor and add the following lines just above the COPY docker-entrypoint.sh line."
},
{
"code": null,
"e": 2003,
"s": 1979,
"text": "For Azure Blob Storage:"
},
{
"code": null,
"e": 2143,
"s": 2003,
"text": "# install Flink Azure FS Hadoop pluginRUN mkdir ./plugins/azure-fs-hadoopCOPY ./flink-azure-fs-hadoop-1.11.0.jar ./plugins/azure-fs-hadoop/"
},
{
"code": null,
"e": 2192,
"s": 2143,
"text": "For AWS S3 you have 2 options Presto and Hadoop:"
},
{
"code": null,
"e": 2200,
"s": 2192,
"text": "Presto:"
},
{
"code": null,
"e": 2328,
"s": 2200,
"text": "# install Flink S3 FS Presto pluginRUN mkdir ./plugins/s3-fs-prestoCOPY ./flink-s3-fs-presto-1.11.0.jar ./plugins/s3-fs-presto/"
},
{
"code": null,
"e": 2336,
"s": 2328,
"text": "Hadoop:"
},
{
"code": null,
"e": 2464,
"s": 2336,
"text": "# install Flink S3 FS Hadoop pluginRUN mkdir ./plugins/s3-fs-hadoopCOPY ./flink-s3-fs-hadoop-1.11.0.jar ./plugins/s3-fs-hadoop/"
},
{
"code": null,
"e": 2680,
"s": 2464,
"text": "Next from the Flink 1.11.0 folder navigate to the /opt/ directory and copy the jar you need for your backend to the same directory as the Dockerfile you edited. For Azure that jar is flink-azure-fs-hadoop-1.11.0.jar"
},
{
"code": null,
"e": 2900,
"s": 2680,
"text": "Now Build and Push the Docker Image to DockerHub using the following command replacing hunterkempf with your DockerHub username and changing the tag to reflect the version of flink, scala and the plugin you added to it:"
},
{
"code": null,
"e": 2995,
"s": 2900,
"text": "docker build --tag hunterkempf/flink:1.11.0_scala_2.12_fs_azure .docker push hunterkempf/flink"
},
{
"code": null,
"e": 3009,
"s": 2995,
"text": "ci.apache.org"
},
{
"code": null,
"e": 3167,
"s": 3009,
"text": "First, go to the Flink Kubernetes setup page and create the following .yaml files on your computer using a text editor and copying/pasting from the Appendix."
},
{
"code": null,
"e": 3202,
"s": 3167,
"text": "flink-configuration-configmap.yaml"
},
{
"code": null,
"e": 3226,
"s": 3202,
"text": "jobmanager-service.yaml"
},
{
"code": null,
"e": 3261,
"s": 3226,
"text": "jobmanager-session-deployment.yaml"
},
{
"code": null,
"e": 3297,
"s": 3261,
"text": "taskmanager-session-deployment.yaml"
},
{
"code": null,
"e": 3332,
"s": 3297,
"text": "flink-configuration-configmap.yaml"
},
{
"code": null,
"e": 3405,
"s": 3332,
"text": "You will need to add the following lines to the flink-conf.yaml section:"
},
{
"code": null,
"e": 3642,
"s": 3405,
"text": "This example is for a blob store named: flinkblob in a storage account named: flinkstorage. You will also need to get the Access Key to your storage account and paste it after the fs.azure.account.key.flinkstorage.blob.core.windows.net:"
},
{
"code": null,
"e": 3804,
"s": 3642,
"text": "state.backend: rocksdbstate.checkpoints.dir: wasbs://[email protected]/flink/fs.azure.account.key.flinkstorage.blob.core.windows.net: "
},
{
"code": null,
"e": 3879,
"s": 3804,
"text": "jobmanager-session-deployment.yaml and taskmanager-session-deployment.yaml"
},
{
"code": null,
"e": 4053,
"s": 3879,
"text": "Replace the image: flink:1.11.0-scala_2.11 with the name of the docker image you created in part 1 for both jobmanager-session-deployment and taskmanager-session-deployment."
},
{
"code": null,
"e": 4080,
"s": 4053,
"text": "Connect to the AKS Cluster"
},
{
"code": null,
"e": 4134,
"s": 4080,
"text": "Run these two commands to connect to your AKS Cluster"
},
{
"code": null,
"e": 4228,
"s": 4134,
"text": "az aks install-cliaz aks get-credentials --resource-group myResourceGroup --name myAKSCluster"
},
{
"code": null,
"e": 4248,
"s": 4228,
"text": "Deploy Flink to AKS"
},
{
"code": null,
"e": 4326,
"s": 4248,
"text": "From the directory that you created the yaml files run the following commands"
},
{
"code": null,
"e": 4525,
"s": 4326,
"text": "kubectl create -f flink-configuration-configmap.yamlkubectl create -f jobmanager-service.yamlkubectl create -f jobmanager-session-deployment.yamlkubectl create -f taskmanager-session-deployment.yaml"
},
{
"code": null,
"e": 4623,
"s": 4525,
"text": "Now you should have a Flink Cluster running on AKS using Azure Blob Storage as a RocksDB Backend."
},
{
"code": null,
"e": 4644,
"s": 4623,
"text": "View Flink Dashboard"
},
{
"code": null,
"e": 4702,
"s": 4644,
"text": "To access the Flink Dashboard run the following commands:"
},
{
"code": null,
"e": 4719,
"s": 4702,
"text": "kubectl get pods"
},
{
"code": null,
"e": 4771,
"s": 4719,
"text": "this will return something similar to what is below"
},
{
"code": null,
"e": 5040,
"s": 4771,
"text": "NAME READY STATUS RESTARTS AGEflink-jobmanager-5f857bf45d-n4mbt 1/1 Running 0 38sflink-taskmanager-68854b7998-2zkd6 1/1 Running 0 31sflink-taskmanager-68854b7998-vvm5m 1/1 Running 0 31s"
},
{
"code": null,
"e": 5167,
"s": 5040,
"text": "copy the name of the flink-jobmanager pod and use that for the following command replacing the 5f857bf45d-n4mbtwith your value"
},
{
"code": null,
"e": 5232,
"s": 5167,
"text": "kubectl port-forward flink-jobmanager-5f857bf45d-n4mbt 8081:8081"
},
{
"code": null,
"e": 5305,
"s": 5232,
"text": "Now open up http://localhost:8081 and you should see the Flink Dashboard"
},
{
"code": null,
"e": 5375,
"s": 5305,
"text": "To check your configurations click on the Job Manager tab on the left"
},
{
"code": null,
"e": 5461,
"s": 5375,
"text": "you should see something similar with your settings listed out in key value notation."
},
{
"code": null,
"e": 5592,
"s": 5461,
"text": "To test the Deployment you can upload one of the test jar’s from the Flink 1.11.0 folder located in the examples/streaming/ folder"
},
{
"code": null,
"e": 5766,
"s": 5592,
"text": "Running the Streaming WordCount job took 1 second and returned no errors for me. If you get an Internal Server Error check the Job Manager Logs to see the error stack trace."
}
] |
Java Program to sort an array in alphabetical order | Let us first create a string array:
String[] strArr = new String[] { "r", "p", "v","y", "s", "q" };
Now, use the Arrays.sort() method to sort an array in alphabetical order. Here, we have set the order to be case insensitive:
Arrays.sort(strArr, String.CASE_INSENSITIVE_ORDER);
import java.util.Arrays;
public class Demo {
public static void main(String[] args) {
String[] strArr = new String[] { "r", "p", "v","y", "s", "q" };
System.out.println("Sort array in alphabetical order...");
Arrays.sort(strArr, String.CASE_INSENSITIVE_ORDER);
for (int a = 0; a < strArr.length; a++) {
System.out.println(strArr[a]);
}
}
}
Sort array in alphabetical order...
p
q
r
s
v
y | [
{
"code": null,
"e": 1098,
"s": 1062,
"text": "Let us first create a string array:"
},
{
"code": null,
"e": 1162,
"s": 1098,
"text": "String[] strArr = new String[] { \"r\", \"p\", \"v\",\"y\", \"s\", \"q\" };"
},
{
"code": null,
"e": 1288,
"s": 1162,
"text": "Now, use the Arrays.sort() method to sort an array in alphabetical order. Here, we have set the order to be case insensitive:"
},
{
"code": null,
"e": 1340,
"s": 1288,
"text": "Arrays.sort(strArr, String.CASE_INSENSITIVE_ORDER);"
},
{
"code": null,
"e": 1725,
"s": 1340,
"text": "import java.util.Arrays;\npublic class Demo {\n public static void main(String[] args) {\n String[] strArr = new String[] { \"r\", \"p\", \"v\",\"y\", \"s\", \"q\" };\n System.out.println(\"Sort array in alphabetical order...\");\n Arrays.sort(strArr, String.CASE_INSENSITIVE_ORDER);\n for (int a = 0; a < strArr.length; a++) {\n System.out.println(strArr[a]);\n }\n }\n}"
},
{
"code": null,
"e": 1773,
"s": 1725,
"text": "Sort array in alphabetical order...\np\nq\nr\ns\nv\ny"
}
] |
TypeScript Tutorial | TypeScript lets you write JavaScript the way you really want to. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. TypeScript is pure object oriented with classes, interfaces and statically typed like C# or Java. The popular JavaScript framework Angular 2.0 is written in TypeScript. Mastering TypeScript can help programmers to write object-oriented programs and have them compiled to JavaScript, both on server side and client side.
Programmers coming from Object Oriented world will find it easy to use TypeScript. With the knowledge of TypeScript, they can build web applications much faster, as TypeScript has good tooling support.
As a reader of this tutorial, you should have a good understanding of OOP concepts and basic JavaScript, to make the most of this tutorial.
We have provided Typescript Online Compiler which helps you to Edit and Execute the code directly from your browser. Try to click the icon to run the following Typescript code to print conventional "Hello, World!".
var message:string = "Hello World"
console.log(message)
On compiling, it will generate following JavaScript code.
//Generated by typescript 1.8.10
var message = "Hello World";
console.log(message);
45 Lectures
4 hours
Antonio Papa
41 Lectures
7 hours
Haider Malik
60 Lectures
2.5 hours
Skillbakerystudios
77 Lectures
8 hours
Sean Bradley
77 Lectures
3.5 hours
TELCOMA Global
19 Lectures
3 hours
Christopher Frewin
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2513,
"s": 2048,
"text": "TypeScript lets you write JavaScript the way you really want to. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. TypeScript is pure object oriented with classes, interfaces and statically typed like C# or Java. The popular JavaScript framework Angular 2.0 is written in TypeScript. Mastering TypeScript can help programmers to write object-oriented programs and have them compiled to JavaScript, both on server side and client side."
},
{
"code": null,
"e": 2715,
"s": 2513,
"text": "Programmers coming from Object Oriented world will find it easy to use TypeScript. With the knowledge of TypeScript, they can build web applications much faster, as TypeScript has good tooling support."
},
{
"code": null,
"e": 2855,
"s": 2715,
"text": "As a reader of this tutorial, you should have a good understanding of OOP concepts and basic JavaScript, to make the most of this tutorial."
},
{
"code": null,
"e": 3072,
"s": 2855,
"text": "We have provided Typescript Online Compiler which helps you to Edit and Execute the code directly from your browser. Try to click the icon to run the following Typescript code to print conventional \"Hello, World!\"."
},
{
"code": null,
"e": 3130,
"s": 3072,
"text": "var message:string = \"Hello World\" \nconsole.log(message)\n"
},
{
"code": null,
"e": 3188,
"s": 3130,
"text": "On compiling, it will generate following JavaScript code."
},
{
"code": null,
"e": 3273,
"s": 3188,
"text": "//Generated by typescript 1.8.10\nvar message = \"Hello World\";\nconsole.log(message);\n"
},
{
"code": null,
"e": 3306,
"s": 3273,
"text": "\n 45 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 3320,
"s": 3306,
"text": " Antonio Papa"
},
{
"code": null,
"e": 3353,
"s": 3320,
"text": "\n 41 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 3367,
"s": 3353,
"text": " Haider Malik"
},
{
"code": null,
"e": 3402,
"s": 3367,
"text": "\n 60 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 3422,
"s": 3402,
"text": " Skillbakerystudios"
},
{
"code": null,
"e": 3455,
"s": 3422,
"text": "\n 77 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3469,
"s": 3455,
"text": " Sean Bradley"
},
{
"code": null,
"e": 3504,
"s": 3469,
"text": "\n 77 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 3520,
"s": 3504,
"text": " TELCOMA Global"
},
{
"code": null,
"e": 3553,
"s": 3520,
"text": "\n 19 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 3573,
"s": 3553,
"text": " Christopher Frewin"
},
{
"code": null,
"e": 3580,
"s": 3573,
"text": " Print"
},
{
"code": null,
"e": 3591,
"s": 3580,
"text": " Add Notes"
}
] |
9 Pandas value_counts() tricks to improve your data analysis | by B. Chen | Towards Data Science | Data Scientists often spend most of their time exploring and preprocessing data. When it comes to data profiling and understand the data structure, Pandas value_counts() is one of the top favorites. The function returns a Series containing counts of unique values. The resulting Series can be sorted in descending or ascending order, including or excluding NA through parameter control.
In this article, we will explore the different use cases of Pandas value_counts(). You’ll learn how to use it to deal with the following common tasks.
Default parametersSort results in ascending orderSort results alphabeticallyInclude NA from the resultsShow result in a percentage countBin continuous data into discrete intervalsGroup by and call value_counts()Convert resulting Series into a DataFrameApply to a DataFrame
Default parameters
Sort results in ascending order
Sort results alphabetically
Include NA from the results
Show result in a percentage count
Bin continuous data into discrete intervals
Group by and call value_counts()
Convert resulting Series into a DataFrame
Apply to a DataFrame
Please check out Notebook for source code.
Visit Github Repo for other tutorials
Pandas value_counts() function returns a Series containing counts of unique values. By default, the resulting Series is in descending order without any NA values. For example, let’s get counts for the column “Embarked” from the Titanic dataset.
>>> df['Embarked'].value_counts()S 644C 168Q 77Name: Embarked, dtype: int64
The series returned by value_count() is in descending order by default. We can set the argument ascending to True for an ascending results.
>>> df['Embarked'].value_counts(ascending=True)Q 77C 168S 644Name: Embarked, dtype: int64
We have learned the argument ascending to get result sorted by the value counts ASC or DESC. In some cases, it’s more preferable to display our results in alphabetical order. This can be done by calling sort_index(ascending=True) after the value_counts(), for example
>>> df['Embarked'].value_counts(ascending=True).sort_index(ascending=True)C 168Q 77S 644Name: Embarked, dtype: int64
By default, rows that contain any NA values are omitted from the results. There is an argument dropna to configure it. We can set the value to False to include counts of rows with NA.
df['Embarked'].value_counts(dropna=False)S 644C 168Q 77NaN 2Name: Embarked, dtype: int64
When doing Exploratory Data Analysis, sometimes it can be more useful to see a percentage count of the unique values. This can be done by setting the argument normalize to True , for example:
df['Embarked'].value_counts(normalize=True)S 0.724409C 0.188976Q 0.086614Name: Embarked, dtype: float64
If we prefer the result to be formatted with a percent sign (%), we can set the Pandas display option as follows:
>>> pd.set_option('display.float_format', '{:.2%}'.format)>>> df['Embarked'].value_counts(normalize = True)S 72.44%C 18.90%Q 8.66%Name: Embarked, dtype: float64
By calling pd.set_option('display.float_format', '{:.2%}'.format), it updates Pandas default display settings and applies to all float values. To reset it, you can call pd.reset_option('display.float_format').
In addition, thanks for David B Rosen (PhD)’s advice. Instead of the Pandas display option, we can simply convert the result into a DataFrame and call style.format('{:.2%}') as follows:
df['Embarked'].value_counts(normalize = True).to_frame().style.format('{:.2%}')
If you want to learn more about Pandas Display options, you can check out:
towardsdatascience.com
Pandas value_counts() can be used to bin continuous data into discrete intervals with the bin argument. Similar to the Pandas cut() function, we can pass an integer or a list to the bin argument.
When an integer is passed to bin, the function will discretize continuous values into equal-sized bins, for example:
>>> df['Fare'].value_counts(bins=3)(-0.513, 170.776] 871(170.776, 341.553] 17(341.553, 512.329] 3Name: Fare, dtype: int64
When a list is passed to bin, the function will divide continuous values into custom groups, for example:
>>> df['Fare'].value_counts(bins=[-1, 20, 100, 550])(-1.001, 20.0] 515(20.0, 100.0] 323(100.0, 550.0] 53Name: Fare, dtype: int64
If you want to learn more about binning with Pandas cut(), you can check out:
towardsdatascience.com
Pandas groupby() allows us to split data into separate groups to perform computations for better analysis. One of the common use cases is to group by a certain column and then get the count of unique values of another column. For example, let’s group by “Embarked” column and get the count of different “Sex” values.
>>> df.groupby('Embarked')['Sex'].value_counts()Embarked Sex C male 95 female 73Q male 41 female 36S male 441 female 203Name: Sex, dtype: int64
If you want to learn more about groupby(), you can check out:
towardsdatascience.com
Pandas value_counts() returns a Series, including the previous example with MultiIndex. If we would like our result to be displayed as a DataFrame, we can call to_frame() after the value_count().
>>> df.groupby('Embarked')['Sex'].value_counts().to_frame()
So far, we have been applying value_counts() to a Pandas Series, there is an equivalent method in Pandas DataFrame. Pandas DataFrame.value_counts() returns a Series containing counts of unique rows in the DataFrame.
Let’s take a look at an example to better understand it:
df = pd.DataFrame({ 'num_legs': [2, 4, 4, 6], 'num_wings': [2, 0, 0, 0]}, index=['falcon', 'dog', 'cat', 'ant'])>>> df.value_counts()num_legs num_wings4 0 26 0 12 2 1dtype: int64
By calling value_counts() on df, it returns a MultiIndex Series with num_legs and num_wings as the index. From the result, we can find that 2 records have num_legs=4 and num_wing=0.
Similarly, we can call to_frame() to convert the result into a DataFrame
>>> df.value_counts().to_frame()
In this article, we have explored the different use cases of Pandas value_counts(). It’s very handy and one of the top favorite methods during Exploratory Data Analysis and Data Preprocessing.
I hope this article will help you to save time in learning Pandas. I recommend you to check out the documentation for the value_counts() API and to know about other things you can do.
Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning.
All Pandas json_normalize() you should know for flattening JSON
Using Pandas method chaining to improve code readability
How to do a Custom Sort on Pandas DataFrame
All the Pandas shift() you should know for data analysis
When to use Pandas transform() function
Pandas concat() tricks you should know
Difference between apply() and transform() in Pandas
All the Pandas merge() you should know
Working with datetime in Pandas DataFrame
Pandas read_csv() tricks you should know
4 tricks you should know to parse date columns with Pandas read_csv()
More tutorials can be found on Github | [
{
"code": null,
"e": 558,
"s": 171,
"text": "Data Scientists often spend most of their time exploring and preprocessing data. When it comes to data profiling and understand the data structure, Pandas value_counts() is one of the top favorites. The function returns a Series containing counts of unique values. The resulting Series can be sorted in descending or ascending order, including or excluding NA through parameter control."
},
{
"code": null,
"e": 709,
"s": 558,
"text": "In this article, we will explore the different use cases of Pandas value_counts(). You’ll learn how to use it to deal with the following common tasks."
},
{
"code": null,
"e": 982,
"s": 709,
"text": "Default parametersSort results in ascending orderSort results alphabeticallyInclude NA from the resultsShow result in a percentage countBin continuous data into discrete intervalsGroup by and call value_counts()Convert resulting Series into a DataFrameApply to a DataFrame"
},
{
"code": null,
"e": 1001,
"s": 982,
"text": "Default parameters"
},
{
"code": null,
"e": 1033,
"s": 1001,
"text": "Sort results in ascending order"
},
{
"code": null,
"e": 1061,
"s": 1033,
"text": "Sort results alphabetically"
},
{
"code": null,
"e": 1089,
"s": 1061,
"text": "Include NA from the results"
},
{
"code": null,
"e": 1123,
"s": 1089,
"text": "Show result in a percentage count"
},
{
"code": null,
"e": 1167,
"s": 1123,
"text": "Bin continuous data into discrete intervals"
},
{
"code": null,
"e": 1200,
"s": 1167,
"text": "Group by and call value_counts()"
},
{
"code": null,
"e": 1242,
"s": 1200,
"text": "Convert resulting Series into a DataFrame"
},
{
"code": null,
"e": 1263,
"s": 1242,
"text": "Apply to a DataFrame"
},
{
"code": null,
"e": 1306,
"s": 1263,
"text": "Please check out Notebook for source code."
},
{
"code": null,
"e": 1344,
"s": 1306,
"text": "Visit Github Repo for other tutorials"
},
{
"code": null,
"e": 1589,
"s": 1344,
"text": "Pandas value_counts() function returns a Series containing counts of unique values. By default, the resulting Series is in descending order without any NA values. For example, let’s get counts for the column “Embarked” from the Titanic dataset."
},
{
"code": null,
"e": 1675,
"s": 1589,
"text": ">>> df['Embarked'].value_counts()S 644C 168Q 77Name: Embarked, dtype: int64"
},
{
"code": null,
"e": 1815,
"s": 1675,
"text": "The series returned by value_count() is in descending order by default. We can set the argument ascending to True for an ascending results."
},
{
"code": null,
"e": 1915,
"s": 1815,
"text": ">>> df['Embarked'].value_counts(ascending=True)Q 77C 168S 644Name: Embarked, dtype: int64"
},
{
"code": null,
"e": 2183,
"s": 1915,
"text": "We have learned the argument ascending to get result sorted by the value counts ASC or DESC. In some cases, it’s more preferable to display our results in alphabetical order. This can be done by calling sort_index(ascending=True) after the value_counts(), for example"
},
{
"code": null,
"e": 2310,
"s": 2183,
"text": ">>> df['Embarked'].value_counts(ascending=True).sort_index(ascending=True)C 168Q 77S 644Name: Embarked, dtype: int64"
},
{
"code": null,
"e": 2494,
"s": 2310,
"text": "By default, rows that contain any NA values are omitted from the results. There is an argument dropna to configure it. We can set the value to False to include counts of rows with NA."
},
{
"code": null,
"e": 2604,
"s": 2494,
"text": "df['Embarked'].value_counts(dropna=False)S 644C 168Q 77NaN 2Name: Embarked, dtype: int64"
},
{
"code": null,
"e": 2796,
"s": 2604,
"text": "When doing Exploratory Data Analysis, sometimes it can be more useful to see a percentage count of the unique values. This can be done by setting the argument normalize to True , for example:"
},
{
"code": null,
"e": 2909,
"s": 2796,
"text": "df['Embarked'].value_counts(normalize=True)S 0.724409C 0.188976Q 0.086614Name: Embarked, dtype: float64"
},
{
"code": null,
"e": 3023,
"s": 2909,
"text": "If we prefer the result to be formatted with a percent sign (%), we can set the Pandas display option as follows:"
},
{
"code": null,
"e": 3191,
"s": 3023,
"text": ">>> pd.set_option('display.float_format', '{:.2%}'.format)>>> df['Embarked'].value_counts(normalize = True)S 72.44%C 18.90%Q 8.66%Name: Embarked, dtype: float64"
},
{
"code": null,
"e": 3401,
"s": 3191,
"text": "By calling pd.set_option('display.float_format', '{:.2%}'.format), it updates Pandas default display settings and applies to all float values. To reset it, you can call pd.reset_option('display.float_format')."
},
{
"code": null,
"e": 3587,
"s": 3401,
"text": "In addition, thanks for David B Rosen (PhD)’s advice. Instead of the Pandas display option, we can simply convert the result into a DataFrame and call style.format('{:.2%}') as follows:"
},
{
"code": null,
"e": 3667,
"s": 3587,
"text": "df['Embarked'].value_counts(normalize = True).to_frame().style.format('{:.2%}')"
},
{
"code": null,
"e": 3742,
"s": 3667,
"text": "If you want to learn more about Pandas Display options, you can check out:"
},
{
"code": null,
"e": 3765,
"s": 3742,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 3961,
"s": 3765,
"text": "Pandas value_counts() can be used to bin continuous data into discrete intervals with the bin argument. Similar to the Pandas cut() function, we can pass an integer or a list to the bin argument."
},
{
"code": null,
"e": 4078,
"s": 3961,
"text": "When an integer is passed to bin, the function will discretize continuous values into equal-sized bins, for example:"
},
{
"code": null,
"e": 4213,
"s": 4078,
"text": ">>> df['Fare'].value_counts(bins=3)(-0.513, 170.776] 871(170.776, 341.553] 17(341.553, 512.329] 3Name: Fare, dtype: int64"
},
{
"code": null,
"e": 4319,
"s": 4213,
"text": "When a list is passed to bin, the function will divide continuous values into custom groups, for example:"
},
{
"code": null,
"e": 4459,
"s": 4319,
"text": ">>> df['Fare'].value_counts(bins=[-1, 20, 100, 550])(-1.001, 20.0] 515(20.0, 100.0] 323(100.0, 550.0] 53Name: Fare, dtype: int64"
},
{
"code": null,
"e": 4537,
"s": 4459,
"text": "If you want to learn more about binning with Pandas cut(), you can check out:"
},
{
"code": null,
"e": 4560,
"s": 4537,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 4877,
"s": 4560,
"text": "Pandas groupby() allows us to split data into separate groups to perform computations for better analysis. One of the common use cases is to group by a certain column and then get the count of unique values of another column. For example, let’s group by “Embarked” column and get the count of different “Sex” values."
},
{
"code": null,
"e": 5103,
"s": 4877,
"text": ">>> df.groupby('Embarked')['Sex'].value_counts()Embarked Sex C male 95 female 73Q male 41 female 36S male 441 female 203Name: Sex, dtype: int64"
},
{
"code": null,
"e": 5165,
"s": 5103,
"text": "If you want to learn more about groupby(), you can check out:"
},
{
"code": null,
"e": 5188,
"s": 5165,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 5384,
"s": 5188,
"text": "Pandas value_counts() returns a Series, including the previous example with MultiIndex. If we would like our result to be displayed as a DataFrame, we can call to_frame() after the value_count()."
},
{
"code": null,
"e": 5444,
"s": 5384,
"text": ">>> df.groupby('Embarked')['Sex'].value_counts().to_frame()"
},
{
"code": null,
"e": 5660,
"s": 5444,
"text": "So far, we have been applying value_counts() to a Pandas Series, there is an equivalent method in Pandas DataFrame. Pandas DataFrame.value_counts() returns a Series containing counts of unique rows in the DataFrame."
},
{
"code": null,
"e": 5717,
"s": 5660,
"text": "Let’s take a look at an example to better understand it:"
},
{
"code": null,
"e": 5963,
"s": 5717,
"text": "df = pd.DataFrame({ 'num_legs': [2, 4, 4, 6], 'num_wings': [2, 0, 0, 0]}, index=['falcon', 'dog', 'cat', 'ant'])>>> df.value_counts()num_legs num_wings4 0 26 0 12 2 1dtype: int64"
},
{
"code": null,
"e": 6145,
"s": 5963,
"text": "By calling value_counts() on df, it returns a MultiIndex Series with num_legs and num_wings as the index. From the result, we can find that 2 records have num_legs=4 and num_wing=0."
},
{
"code": null,
"e": 6218,
"s": 6145,
"text": "Similarly, we can call to_frame() to convert the result into a DataFrame"
},
{
"code": null,
"e": 6251,
"s": 6218,
"text": ">>> df.value_counts().to_frame()"
},
{
"code": null,
"e": 6444,
"s": 6251,
"text": "In this article, we have explored the different use cases of Pandas value_counts(). It’s very handy and one of the top favorite methods during Exploratory Data Analysis and Data Preprocessing."
},
{
"code": null,
"e": 6628,
"s": 6444,
"text": "I hope this article will help you to save time in learning Pandas. I recommend you to check out the documentation for the value_counts() API and to know about other things you can do."
},
{
"code": null,
"e": 6780,
"s": 6628,
"text": "Thanks for reading. Please check out the notebook for the source code and stay tuned if you are interested in the practical aspect of machine learning."
},
{
"code": null,
"e": 6844,
"s": 6780,
"text": "All Pandas json_normalize() you should know for flattening JSON"
},
{
"code": null,
"e": 6901,
"s": 6844,
"text": "Using Pandas method chaining to improve code readability"
},
{
"code": null,
"e": 6945,
"s": 6901,
"text": "How to do a Custom Sort on Pandas DataFrame"
},
{
"code": null,
"e": 7002,
"s": 6945,
"text": "All the Pandas shift() you should know for data analysis"
},
{
"code": null,
"e": 7042,
"s": 7002,
"text": "When to use Pandas transform() function"
},
{
"code": null,
"e": 7081,
"s": 7042,
"text": "Pandas concat() tricks you should know"
},
{
"code": null,
"e": 7134,
"s": 7081,
"text": "Difference between apply() and transform() in Pandas"
},
{
"code": null,
"e": 7173,
"s": 7134,
"text": "All the Pandas merge() you should know"
},
{
"code": null,
"e": 7215,
"s": 7173,
"text": "Working with datetime in Pandas DataFrame"
},
{
"code": null,
"e": 7256,
"s": 7215,
"text": "Pandas read_csv() tricks you should know"
},
{
"code": null,
"e": 7326,
"s": 7256,
"text": "4 tricks you should know to parse date columns with Pandas read_csv()"
}
] |
Tada Animation Effect with CSS | To create Tada animation effect with CSS, you can try to run the following code:
Live Demo
<html>
<head>
<style>
.animated {
background-image: url(/css/images/logo.png);
background-repeat: no-repeat;
background-position: left top;
padding-top:95px;
margin-bottom:60px;
-webkit-animation-duration: 10s;
animation-duration: 10s;
-webkit-animation-fill-mode: both;
animation-fill-mode: both;
}
@-webkit-keyframes tada {
0% {-webkit-transform: scale(1);}
10%, 20% {-webkit-transform: scale(0.9) rotate(-3deg);}
30%, 50%, 70%, 90% {-webkit-transform: scale(1.1) rotate(3deg);}
40%, 60%, 80% {-webkit-transform: scale(1.1) rotate(-3deg);}
100% {-webkit-transform: scale(1) rotate(0);}
}
@keyframes tada {
0% {transform: scale(1);}
10%, 20% {transform: scale(0.9) rotate(-3deg);}
30%, 50%, 70%, 90% {transform: scale(1.1) rotate(3deg);}
40%, 60%, 80% {transform: scale(1.1) rotate(-3deg);}
100% {transform: scale(1) rotate(0);}
}
.tada {
-webkit-animation-name: tada;
animation-name: tada;
}
</style>
</head>
<body>
<div id = "animated-example" class = "animated tada"></div>
<button onclick = "myFunction()">Reload page</button>
<script>
function myFunction() {
location.reload();
}
</script>
</body>
</html> | [
{
"code": null,
"e": 1143,
"s": 1062,
"text": "To create Tada animation effect with CSS, you can try to run the following code:"
},
{
"code": null,
"e": 1153,
"s": 1143,
"text": "Live Demo"
},
{
"code": null,
"e": 2666,
"s": 1153,
"text": "<html>\n <head>\n <style>\n .animated {\n background-image: url(/css/images/logo.png);\n background-repeat: no-repeat;\n background-position: left top;\n padding-top:95px;\n margin-bottom:60px;\n -webkit-animation-duration: 10s;\n animation-duration: 10s;\n -webkit-animation-fill-mode: both;\n animation-fill-mode: both;\n }\n @-webkit-keyframes tada {\n 0% {-webkit-transform: scale(1);}\n 10%, 20% {-webkit-transform: scale(0.9) rotate(-3deg);}\n 30%, 50%, 70%, 90% {-webkit-transform: scale(1.1) rotate(3deg);}\n 40%, 60%, 80% {-webkit-transform: scale(1.1) rotate(-3deg);}\n 100% {-webkit-transform: scale(1) rotate(0);}\n }\n @keyframes tada {\n 0% {transform: scale(1);}\n 10%, 20% {transform: scale(0.9) rotate(-3deg);}\n 30%, 50%, 70%, 90% {transform: scale(1.1) rotate(3deg);}\n 40%, 60%, 80% {transform: scale(1.1) rotate(-3deg);}\n 100% {transform: scale(1) rotate(0);}\n }\n .tada {\n -webkit-animation-name: tada;\n animation-name: tada;\n }\n </style>\n </head>\n <body>\n <div id = \"animated-example\" class = \"animated tada\"></div>\n <button onclick = \"myFunction()\">Reload page</button>\n <script>\n function myFunction() {\n location.reload();\n }\n </script>\n </body>\n</html>"
}
] |
How to position text to bottom left on an image with CSS | To position text to bottom left, use the bottom and left property. You can try to run the following code to position text to bottom left on an image
Live Demo
<!DOCTYPE html>
<html>
<head>
<style>
.box {
position: relative;
}
img {
width: 100%;
height: auto;
opacity: 0.6;
}
.direction {
position: absolute;
bottom: 10px;
left: 19px;
font-size: 13px;
}
</style>
</head>
<body>
<h1>Heading One</h1>
<p>Below image has text in the bottom left:</p>
<div class = "box">
<img src = "https://www.tutorialspoint.com/python/images/python_data_science.jpg" alt="Tutorial" width="800" height="400">
<div class = "direction">Bottom Left Corner</div>
</div>
</body>
</html> | [
{
"code": null,
"e": 1211,
"s": 1062,
"text": "To position text to bottom left, use the bottom and left property. You can try to run the following code to position text to bottom left on an image"
},
{
"code": null,
"e": 1221,
"s": 1211,
"text": "Live Demo"
},
{
"code": null,
"e": 1940,
"s": 1221,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <style>\n .box {\n position: relative;\n }\n img {\n width: 100%;\n height: auto;\n opacity: 0.6;\n }\n .direction {\n position: absolute;\n bottom: 10px;\n left: 19px;\n font-size: 13px;\n }\n </style>\n </head>\n <body>\n <h1>Heading One</h1>\n <p>Below image has text in the bottom left:</p>\n <div class = \"box\">\n <img src = \"https://www.tutorialspoint.com/python/images/python_data_science.jpg\" alt=\"Tutorial\" width=\"800\" height=\"400\">\n <div class = \"direction\">Bottom Left Corner</div>\n </div>\n </body>\n</html>"
}
] |
Pascal - Classes | You have seen that Pascal Objects exhibit some characteristics of object-oriented paradigm. They implement encapsulation, data hiding and inheritance, but they also have limitations. For example, Pascal Objects do not take part in polymorphism. So classes are widely used to implement proper object-oriented behavior in a program, especially the GUI-based software.
A Class is defined in almost the same way as an Object, but is a pointer to an Object rather than the Object itself. Technically, this means that the Class is allocated on the Heap of a program, whereas the Object is allocated on the Stack. In other words, when you declare a variable the object type, it will take up as much space on the stack as the size of the object, but when you declare a variable of the class type, it will always take the size of a pointer on the stack. The actual class data will be on the heap.
A class is declared in the same way as an object, using the type declaration. The general form of a class declaration is as follows −
type class-identifier = class
private
field1 : field-type;
field2 : field-type;
...
public
constructor create();
procedure proc1;
function f1(): function-type;
end;
var classvar : class-identifier;
Its worth to note following important points −
Class definitions should come under the type declaration part of the program only.
Class definitions should come under the type declaration part of the program only.
A class is defined using the class keyword.
A class is defined using the class keyword.
Fields are data items that exist in each instance of the class.
Fields are data items that exist in each instance of the class.
Methods are declared within the definition of a class.
Methods are declared within the definition of a class.
There is a predefined constructor called Create in the Root class. Every abstract class and every concrete class is a descendant of Root, so all classes have at least one constructor.
There is a predefined constructor called Create in the Root class. Every abstract class and every concrete class is a descendant of Root, so all classes have at least one constructor.
There is a predefined destructor called Destroy in the Root class. Every abstract class and every concrete class is a descendant of Root, so, all classes have at least one destructor.
There is a predefined destructor called Destroy in the Root class. Every abstract class and every concrete class is a descendant of Root, so, all classes have at least one destructor.
Let us define a Rectangle class that has two integer type data members - length and width and some member functions to manipulate these data members and a procedure to draw the rectangle.
type
Rectangle = class
private
length, width: integer;
public
constructor create(l, w: integer);
procedure setlength(l: integer);
function getlength(): integer;
procedure setwidth(w: integer);
function getwidth(): integer;
procedure draw;
end;
Let us write a complete program that would create an instance of a rectangle class and draw the rectangle. This is the same example we used while discussing Pascal Objects. You will find both programs are almost same, with the following exceptions −
You will need to include the {$mode objfpc} directive for using the classes.
You will need to include the {$mode objfpc} directive for using the classes.
You will need to include the {$m+} directive for using constructors.
You will need to include the {$m+} directive for using constructors.
Class instantiation is different than object instantiation. Only declaring the variable does not create space for the instance, you will use the constructor create to allocate memory.
Class instantiation is different than object instantiation. Only declaring the variable does not create space for the instance, you will use the constructor create to allocate memory.
Here is the complete example −
{$mode objfpc} // directive to be used for defining classes
{$m+} // directive to be used for using constructor
program exClass;
type
Rectangle = class
private
length, width: integer;
public
constructor create(l, w: integer);
procedure setlength(l: integer);
function getlength(): integer;
procedure setwidth(w: integer);
function getwidth(): integer;
procedure draw;
end;
var
r1: Rectangle;
constructor Rectangle.create(l, w: integer);
begin
length := l;
width := w;
end;
procedure Rectangle.setlength(l: integer);
begin
length := l;
end;
procedure Rectangle.setwidth(w: integer);
begin
width :=w;
end;
function Rectangle.getlength(): integer;
begin
getlength := length;
end;
function Rectangle.getwidth(): integer;
begin
getwidth := width;
end;
procedure Rectangle.draw;
var
i, j: integer;
begin
for i:= 1 to length do
begin
for j:= 1 to width do
write(' * ');
writeln;
end;
end;
begin
r1:= Rectangle.create(3, 7);
writeln(' Draw Rectangle: ', r1.getlength(), ' by ' , r1.getwidth());
r1.draw;
r1.setlength(4);
r1.setwidth(6);
writeln(' Draw Rectangle: ', r1.getlength(), ' by ' , r1.getwidth());
r1.draw;
end.
When the above code is compiled and executed, it produces the following result −
Draw Rectangle: 3 by 7
* * * * * * *
* * * * * * *
* * * * * * *
Draw Rectangle: 4 by 6
* * * * * *
* * * * * *
* * * * * *
* * * * * *
Visibility indicates the accessibility of the class members. Pascal class members have five types of visibility −
Public
These members are always accessible.
Private
These members can only be accessed in the module or unit that contains the class definition. They can be accessed from inside the class methods or from outside them.
Strict Private
These members can only be accessed from methods of the class itself. Other classes or descendent classes in the same unit cannot access them.
Protected
This is same as private, except, these members are accessible to descendent types, even if they are implemented in other modules.
Published
This is same as a Public, but the compiler generates type information that is needed for automatic streaming of these classes if the compiler is in the {$M+} state. Fields defined in a published section must be of class type.
Constructors are special methods, which are called automatically whenever an object is created. So we take full advantage of this behavior by initializing many things through constructor functions.
Pascal provides a special function called create() to define a constructor. You can pass as many arguments as you like into the constructor function.
Following example will create one constructor for a class named Books and it will initialize price and title for the book at the time of object creation.
program classExample;
{$MODE OBJFPC} //directive to be used for creating classes
{$M+} //directive that allows class constructors and destructors
type
Books = Class
private
title : String;
price: real;
public
constructor Create(t : String; p: real); //default constructor
procedure setTitle(t : String); //sets title for a book
function getTitle() : String; //retrieves title
procedure setPrice(p : real); //sets price for a book
function getPrice() : real; //retrieves price
procedure Display(); // display details of a book
end;
var
physics, chemistry, maths: Books;
//default constructor
constructor Books.Create(t : String; p: real);
begin
title := t;
price := p;
end;
procedure Books.setTitle(t : String); //sets title for a book
begin
title := t;
end;
function Books.getTitle() : String; //retrieves title
begin
getTitle := title;
end;
procedure Books.setPrice(p : real); //sets price for a book
begin
price := p;
end;
function Books.getPrice() : real; //retrieves price
begin
getPrice:= price;
end;
procedure Books.Display();
begin
writeln('Title: ', title);
writeln('Price: ', price:5:2);
end;
begin
physics := Books.Create('Physics for High School', 10);
chemistry := Books.Create('Advanced Chemistry', 15);
maths := Books.Create('Algebra', 7);
physics.Display;
chemistry.Display;
maths.Display;
end.
When the above code is compiled and executed, it produces the following result −
Title: Physics for High School
Price: 10
Title: Advanced Chemistry
Price: 15
Title: Algebra
Price: 7
Like the implicit constructor named create, there is also an implicit destructor method destroy using which you can release all the resources used in the class.
Pascal class definitions can optionally inherit from a parent class definition. The syntax is as follows −
type
childClas-identifier = class(baseClass-identifier)
< members >
end;
Following example provides a novels class, which inherits the Books class and adds more functionality based on the requirement.
program inheritanceExample;
{$MODE OBJFPC} //directive to be used for creating classes
{$M+} //directive that allows class constructors and destructors
type
Books = Class
protected
title : String;
price: real;
public
constructor Create(t : String; p: real); //default constructor
procedure setTitle(t : String); //sets title for a book
function getTitle() : String; //retrieves title
procedure setPrice(p : real); //sets price for a book
function getPrice() : real; //retrieves price
procedure Display(); virtual; // display details of a book
end;
(* Creating a derived class *)
type
Novels = Class(Books)
private
author: String;
public
constructor Create(t: String); overload;
constructor Create(a: String; t: String; p: real); overload;
procedure setAuthor(a: String); // sets author for a book
function getAuthor(): String; // retrieves author name
procedure Display(); override;
end;
var
n1, n2: Novels;
//default constructor
constructor Books.Create(t : String; p: real);
begin
title := t;
price := p;
end;
procedure Books.setTitle(t : String); //sets title for a book
begin
title := t;
end;
function Books.getTitle() : String; //retrieves title
begin
getTitle := title;
end;
procedure Books.setPrice(p : real); //sets price for a book
begin
price := p;
end;
function Books.getPrice() : real; //retrieves price
begin
getPrice:= price;
end;
procedure Books.Display();
begin
writeln('Title: ', title);
writeln('Price: ', price);
end;
(* Now the derived class methods *)
constructor Novels.Create(t: String);
begin
inherited Create(t, 0.0);
author:= ' ';
end;
constructor Novels.Create(a: String; t: String; p: real);
begin
inherited Create(t, p);
author:= a;
end;
procedure Novels.setAuthor(a : String); //sets author for a book
begin
author := a;
end;
function Novels.getAuthor() : String; //retrieves author
begin
getAuthor := author;
end;
procedure Novels.Display();
begin
writeln('Title: ', title);
writeln('Price: ', price:5:2);
writeln('Author: ', author);
end;
begin
n1 := Novels.Create('Gone with the Wind');
n2 := Novels.Create('Ayn Rand','Atlas Shrugged', 467.75);
n1.setAuthor('Margaret Mitchell');
n1.setPrice(375.99);
n1.Display;
n2.Display;
end.
When the above code is compiled and executed, it produces the following result −
Title: Gone with the Wind
Price: 375.99
Author: Margaret Mitchell
Title: Atlas Shrugged
Price: 467.75
Author: Ayn Rand
Its worth to note following important points −
The members of the Books class have protected visibility.
The members of the Books class have protected visibility.
The Novels class has two constructors, so the overload operator is used for function overloading.
The Novels class has two constructors, so the overload operator is used for function overloading.
The Books.Display procedure has been declared virtual, so that the same method from the Novels class can override it.
The Books.Display procedure has been declared virtual, so that the same method from the Novels class can override it.
The Novels.Create constructor calls the base class constructor using the inherited keyword.
The Novels.Create constructor calls the base class constructor using the inherited keyword.
Interfaces are defined to provide a common function name to the implementers. Different implementers can implement those interfaces according to their requirements. You can say, interfaces are skeletons, which are implemented by developers. Following is an example of interface −
type
Mail = Interface
Procedure SendMail;
Procedure GetMail;
end;
Report = Class(TInterfacedObject, Mail)
Procedure SendMail;
Procedure GetMail;
end;
Please note that, when a class implements an interface, it should implement all methods of the interface. If a method of an interface is not implemented, then the compiler will give an error.
An abstract class is one that cannot be instantiated, only inherited. An abstract class is specified by including the word symbol abstract in the class definition, like this −
type
Shape = ABSTRACT CLASS (Root)
Procedure draw; ABSTRACT;
...
end;
When inheriting from an abstract class, all methods marked abstract in the parent's class declaration must be defined by the child; additionally, these methods must be defined with the same visibility.
Declaring class members or methods as static makes them accessible without needing an instantiation of the class. A member declared as static cannot be accessed with an instantiated class object (though a static method can). The following example illustrates the concept −
program StaticExample;
{$mode objfpc}
{$static on}
type
myclass=class
num : integer;static;
end;
var
n1, n2 : myclass;
begin
n1:= myclass.create;
n2:= myclass.create;
n1.num := 12;
writeln(n2.num);
n2.num := 31;
writeln(n1.num);
writeln(myclass.num);
myclass.num := myclass.num + 20;
writeln(n1.num);
writeln(n2.num);
end.
When the above code is compiled and executed, it produces the following result −
12
31
31
51
51
You must use the directive {$static on} for using the static members.
94 Lectures
8.5 hours
Stone River ELearning
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2449,
"s": 2083,
"text": "You have seen that Pascal Objects exhibit some characteristics of object-oriented paradigm. They implement encapsulation, data hiding and inheritance, but they also have limitations. For example, Pascal Objects do not take part in polymorphism. So classes are widely used to implement proper object-oriented behavior in a program, especially the GUI-based software."
},
{
"code": null,
"e": 2971,
"s": 2449,
"text": "A Class is defined in almost the same way as an Object, but is a pointer to an Object rather than the Object itself. Technically, this means that the Class is allocated on the Heap of a program, whereas the Object is allocated on the Stack. In other words, when you declare a variable the object type, it will take up as much space on the stack as the size of the object, but when you declare a variable of the class type, it will always take the size of a pointer on the stack. The actual class data will be on the heap."
},
{
"code": null,
"e": 3105,
"s": 2971,
"text": "A class is declared in the same way as an object, using the type declaration. The general form of a class declaration is as follows −"
},
{
"code": null,
"e": 3361,
"s": 3105,
"text": "type class-identifier = class \n private\n field1 : field-type; \n field2 : field-type; \n ...\n \n public\n constructor create();\n procedure proc1; \n function f1(): function-type;\nend; \nvar classvar : class-identifier;"
},
{
"code": null,
"e": 3408,
"s": 3361,
"text": "Its worth to note following important points −"
},
{
"code": null,
"e": 3491,
"s": 3408,
"text": "Class definitions should come under the type declaration part of the program only."
},
{
"code": null,
"e": 3574,
"s": 3491,
"text": "Class definitions should come under the type declaration part of the program only."
},
{
"code": null,
"e": 3618,
"s": 3574,
"text": "A class is defined using the class keyword."
},
{
"code": null,
"e": 3662,
"s": 3618,
"text": "A class is defined using the class keyword."
},
{
"code": null,
"e": 3726,
"s": 3662,
"text": "Fields are data items that exist in each instance of the class."
},
{
"code": null,
"e": 3790,
"s": 3726,
"text": "Fields are data items that exist in each instance of the class."
},
{
"code": null,
"e": 3845,
"s": 3790,
"text": "Methods are declared within the definition of a class."
},
{
"code": null,
"e": 3900,
"s": 3845,
"text": "Methods are declared within the definition of a class."
},
{
"code": null,
"e": 4085,
"s": 3900,
"text": "There is a predefined constructor called Create in the Root class. Every abstract class and every concrete class is a descendant of Root, so all classes have at least one constructor."
},
{
"code": null,
"e": 4270,
"s": 4085,
"text": "There is a predefined constructor called Create in the Root class. Every abstract class and every concrete class is a descendant of Root, so all classes have at least one constructor."
},
{
"code": null,
"e": 4455,
"s": 4270,
"text": "There is a predefined destructor called Destroy in the Root class. Every abstract class and every concrete class is a descendant of Root, so, all classes have at least one destructor."
},
{
"code": null,
"e": 4640,
"s": 4455,
"text": "There is a predefined destructor called Destroy in the Root class. Every abstract class and every concrete class is a descendant of Root, so, all classes have at least one destructor."
},
{
"code": null,
"e": 4828,
"s": 4640,
"text": "Let us define a Rectangle class that has two integer type data members - length and width and some member functions to manipulate these data members and a procedure to draw the rectangle."
},
{
"code": null,
"e": 5127,
"s": 4828,
"text": "type\n Rectangle = class\n private\n length, width: integer;\n \n public\n constructor create(l, w: integer);\n procedure setlength(l: integer);\n function getlength(): integer;\n procedure setwidth(w: integer);\n function getwidth(): integer;\n procedure draw;\nend;"
},
{
"code": null,
"e": 5377,
"s": 5127,
"text": "Let us write a complete program that would create an instance of a rectangle class and draw the rectangle. This is the same example we used while discussing Pascal Objects. You will find both programs are almost same, with the following exceptions −"
},
{
"code": null,
"e": 5454,
"s": 5377,
"text": "You will need to include the {$mode objfpc} directive for using the classes."
},
{
"code": null,
"e": 5531,
"s": 5454,
"text": "You will need to include the {$mode objfpc} directive for using the classes."
},
{
"code": null,
"e": 5600,
"s": 5531,
"text": "You will need to include the {$m+} directive for using constructors."
},
{
"code": null,
"e": 5669,
"s": 5600,
"text": "You will need to include the {$m+} directive for using constructors."
},
{
"code": null,
"e": 5853,
"s": 5669,
"text": "Class instantiation is different than object instantiation. Only declaring the variable does not create space for the instance, you will use the constructor create to allocate memory."
},
{
"code": null,
"e": 6037,
"s": 5853,
"text": "Class instantiation is different than object instantiation. Only declaring the variable does not create space for the instance, you will use the constructor create to allocate memory."
},
{
"code": null,
"e": 6068,
"s": 6037,
"text": "Here is the complete example −"
},
{
"code": null,
"e": 7345,
"s": 6068,
"text": "{$mode objfpc} // directive to be used for defining classes\n{$m+}\t\t // directive to be used for using constructor\n\nprogram exClass;\ntype\n Rectangle = class\n private\n length, width: integer;\n \n public\n constructor create(l, w: integer);\n procedure setlength(l: integer);\n \n function getlength(): integer;\n procedure setwidth(w: integer);\n \n function getwidth(): integer;\n procedure draw;\nend;\nvar\n r1: Rectangle;\n\nconstructor Rectangle.create(l, w: integer);\nbegin\n length := l;\n width := w;\nend;\n\nprocedure Rectangle.setlength(l: integer);\nbegin\n length := l;\nend;\n\nprocedure Rectangle.setwidth(w: integer);\nbegin\n width :=w;\nend;\n\nfunction Rectangle.getlength(): integer;\nbegin\n getlength := length;\nend;\n\nfunction Rectangle.getwidth(): integer;\nbegin\n getwidth := width;\nend;\n\nprocedure Rectangle.draw;\nvar\n i, j: integer;\nbegin\n for i:= 1 to length do\n begin\n for j:= 1 to width do\n write(' * ');\n writeln;\n end;\nend;\n\nbegin\n r1:= Rectangle.create(3, 7);\n \n writeln(' Draw Rectangle: ', r1.getlength(), ' by ' , r1.getwidth());\n r1.draw;\n r1.setlength(4);\n r1.setwidth(6);\n \n writeln(' Draw Rectangle: ', r1.getlength(), ' by ' , r1.getwidth());\n r1.draw;\nend."
},
{
"code": null,
"e": 7426,
"s": 7345,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 7567,
"s": 7426,
"text": "Draw Rectangle: 3 by 7\n* * * * * * *\n* * * * * * *\n* * * * * * *\nDraw Rectangle: 4 by 6\n* * * * * * \n* * * * * * \n* * * * * * \n* * * * * * \n"
},
{
"code": null,
"e": 7681,
"s": 7567,
"text": "Visibility indicates the accessibility of the class members. Pascal class members have five types of visibility −"
},
{
"code": null,
"e": 7688,
"s": 7681,
"text": "Public"
},
{
"code": null,
"e": 7725,
"s": 7688,
"text": "These members are always accessible."
},
{
"code": null,
"e": 7733,
"s": 7725,
"text": "Private"
},
{
"code": null,
"e": 7899,
"s": 7733,
"text": "These members can only be accessed in the module or unit that contains the class definition. They can be accessed from inside the class methods or from outside them."
},
{
"code": null,
"e": 7914,
"s": 7899,
"text": "Strict Private"
},
{
"code": null,
"e": 8056,
"s": 7914,
"text": "These members can only be accessed from methods of the class itself. Other classes or descendent classes in the same unit cannot access them."
},
{
"code": null,
"e": 8066,
"s": 8056,
"text": "Protected"
},
{
"code": null,
"e": 8196,
"s": 8066,
"text": "This is same as private, except, these members are accessible to descendent types, even if they are implemented in other modules."
},
{
"code": null,
"e": 8206,
"s": 8196,
"text": "Published"
},
{
"code": null,
"e": 8432,
"s": 8206,
"text": "This is same as a Public, but the compiler generates type information that is needed for automatic streaming of these classes if the compiler is in the {$M+} state. Fields defined in a published section must be of class type."
},
{
"code": null,
"e": 8630,
"s": 8432,
"text": "Constructors are special methods, which are called automatically whenever an object is created. So we take full advantage of this behavior by initializing many things through constructor functions."
},
{
"code": null,
"e": 8780,
"s": 8630,
"text": "Pascal provides a special function called create() to define a constructor. You can pass as many arguments as you like into the constructor function."
},
{
"code": null,
"e": 8934,
"s": 8780,
"text": "Following example will create one constructor for a class named Books and it will initialize price and title for the book at the time of object creation."
},
{
"code": null,
"e": 10388,
"s": 8934,
"text": "program classExample;\n\n{$MODE OBJFPC} //directive to be used for creating classes\n{$M+} //directive that allows class constructors and destructors\ntype\n Books = Class \n private \n title : String; \n price: real;\n \n public\n constructor Create(t : String; p: real); //default constructor\n \n procedure setTitle(t : String); //sets title for a book\n function getTitle() : String; //retrieves title\n \n procedure setPrice(p : real); //sets price for a book\n function getPrice() : real; //retrieves price\n \n procedure Display(); // display details of a book\nend;\nvar\n physics, chemistry, maths: Books;\n\n//default constructor \nconstructor Books.Create(t : String; p: real);\nbegin\n title := t;\n price := p;\nend;\n\nprocedure Books.setTitle(t : String); //sets title for a book\nbegin\n title := t;\nend;\n\nfunction Books.getTitle() : String; //retrieves title\nbegin\n getTitle := title;\nend;\n\nprocedure Books.setPrice(p : real); //sets price for a book\nbegin\n price := p;\nend;\n\nfunction Books.getPrice() : real; //retrieves price\nbegin\n getPrice:= price;\nend;\n\nprocedure Books.Display();\nbegin\n writeln('Title: ', title);\n writeln('Price: ', price:5:2);\nend;\n\nbegin \n physics := Books.Create('Physics for High School', 10);\n chemistry := Books.Create('Advanced Chemistry', 15);\n maths := Books.Create('Algebra', 7);\n \n physics.Display;\n chemistry.Display;\n maths.Display;\nend."
},
{
"code": null,
"e": 10469,
"s": 10388,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 10571,
"s": 10469,
"text": "Title: Physics for High School\nPrice: 10\nTitle: Advanced Chemistry\nPrice: 15\nTitle: Algebra\nPrice: 7\n"
},
{
"code": null,
"e": 10732,
"s": 10571,
"text": "Like the implicit constructor named create, there is also an implicit destructor method destroy using which you can release all the resources used in the class."
},
{
"code": null,
"e": 10839,
"s": 10732,
"text": "Pascal class definitions can optionally inherit from a parent class definition. The syntax is as follows −"
},
{
"code": null,
"e": 10914,
"s": 10839,
"text": "type\nchildClas-identifier = class(baseClass-identifier) \n< members >\nend; "
},
{
"code": null,
"e": 11042,
"s": 10914,
"text": "Following example provides a novels class, which inherits the Books class and adds more functionality based on the requirement."
},
{
"code": null,
"e": 13443,
"s": 11042,
"text": "program inheritanceExample;\n\n{$MODE OBJFPC} //directive to be used for creating classes\n{$M+} //directive that allows class constructors and destructors\n\ntype\n Books = Class \n protected \n title : String; \n price: real;\n \n public\n constructor Create(t : String; p: real); //default constructor\n \n procedure setTitle(t : String); //sets title for a book\n function getTitle() : String; //retrieves title\n \n procedure setPrice(p : real); //sets price for a book\n function getPrice() : real; //retrieves price\n \n procedure Display(); virtual; // display details of a book\nend;\n(* Creating a derived class *)\n\ntype\n Novels = Class(Books)\n private\n author: String;\n \n public\n constructor Create(t: String); overload;\n constructor Create(a: String; t: String; p: real); overload;\n \n procedure setAuthor(a: String); // sets author for a book\n function getAuthor(): String; // retrieves author name\n \n procedure Display(); override;\nend;\nvar\n n1, n2: Novels;\n\n//default constructor \nconstructor Books.Create(t : String; p: real);\nbegin\n title := t;\n price := p;\nend;\n\nprocedure Books.setTitle(t : String); //sets title for a book\nbegin\n title := t;\nend;\n\nfunction Books.getTitle() : String; //retrieves title\nbegin\n getTitle := title;\nend;\n\nprocedure Books.setPrice(p : real); //sets price for a book\nbegin\n price := p;\nend;\n\nfunction Books.getPrice() : real; //retrieves price\nbegin\n getPrice:= price;\nend;\n\nprocedure Books.Display();\nbegin\n writeln('Title: ', title);\n writeln('Price: ', price);\nend;\n\n(* Now the derived class methods *)\nconstructor Novels.Create(t: String);\nbegin\n inherited Create(t, 0.0);\n author:= ' ';\nend;\n\nconstructor Novels.Create(a: String; t: String; p: real);\nbegin\n inherited Create(t, p);\n author:= a;\nend;\n\nprocedure Novels.setAuthor(a : String); //sets author for a book\nbegin\n author := a;\nend;\n\nfunction Novels.getAuthor() : String; //retrieves author\nbegin\n getAuthor := author;\nend;\n\nprocedure Novels.Display();\nbegin\n writeln('Title: ', title);\n writeln('Price: ', price:5:2);\n writeln('Author: ', author);\nend;\n\nbegin \n n1 := Novels.Create('Gone with the Wind');\n n2 := Novels.Create('Ayn Rand','Atlas Shrugged', 467.75);\n n1.setAuthor('Margaret Mitchell');\n n1.setPrice(375.99);\n n1.Display;\n n2.Display;\nend."
},
{
"code": null,
"e": 13524,
"s": 13443,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 13644,
"s": 13524,
"text": "Title: Gone with the Wind\nPrice: 375.99\nAuthor: Margaret Mitchell\nTitle: Atlas Shrugged\nPrice: 467.75\nAuthor: Ayn Rand\n"
},
{
"code": null,
"e": 13691,
"s": 13644,
"text": "Its worth to note following important points −"
},
{
"code": null,
"e": 13750,
"s": 13691,
"text": "The members of the Books class have protected visibility. "
},
{
"code": null,
"e": 13809,
"s": 13750,
"text": "The members of the Books class have protected visibility. "
},
{
"code": null,
"e": 13907,
"s": 13809,
"text": "The Novels class has two constructors, so the overload operator is used for function overloading."
},
{
"code": null,
"e": 14005,
"s": 13907,
"text": "The Novels class has two constructors, so the overload operator is used for function overloading."
},
{
"code": null,
"e": 14124,
"s": 14005,
"text": "The Books.Display procedure has been declared virtual, so that the same method from the Novels class can override it. "
},
{
"code": null,
"e": 14243,
"s": 14124,
"text": "The Books.Display procedure has been declared virtual, so that the same method from the Novels class can override it. "
},
{
"code": null,
"e": 14336,
"s": 14243,
"text": "The Novels.Create constructor calls the base class constructor using the inherited keyword. "
},
{
"code": null,
"e": 14429,
"s": 14336,
"text": "The Novels.Create constructor calls the base class constructor using the inherited keyword. "
},
{
"code": null,
"e": 14709,
"s": 14429,
"text": "Interfaces are defined to provide a common function name to the implementers. Different implementers can implement those interfaces according to their requirements. You can say, interfaces are skeletons, which are implemented by developers. Following is an example of interface −"
},
{
"code": null,
"e": 14918,
"s": 14709,
"text": "type \n Mail = Interface \n Procedure SendMail; \n Procedure GetMail; \n end; \n \n Report = Class(TInterfacedObject, Mail) \n Procedure SendMail; \n Procedure GetMail; \n end; "
},
{
"code": null,
"e": 15110,
"s": 14918,
"text": "Please note that, when a class implements an interface, it should implement all methods of the interface. If a method of an interface is not implemented, then the compiler will give an error."
},
{
"code": null,
"e": 15286,
"s": 15110,
"text": "An abstract class is one that cannot be instantiated, only inherited. An abstract class is specified by including the word symbol abstract in the class definition, like this −"
},
{
"code": null,
"e": 15374,
"s": 15286,
"text": "type\n Shape = ABSTRACT CLASS (Root)\n Procedure draw; ABSTRACT;\n ...\n end;"
},
{
"code": null,
"e": 15576,
"s": 15374,
"text": "When inheriting from an abstract class, all methods marked abstract in the parent's class declaration must be defined by the child; additionally, these methods must be defined with the same visibility."
},
{
"code": null,
"e": 15849,
"s": 15576,
"text": "Declaring class members or methods as static makes them accessible without needing an instantiation of the class. A member declared as static cannot be accessed with an instantiated class object (though a static method can). The following example illustrates the concept −"
},
{
"code": null,
"e": 16217,
"s": 15849,
"text": "program StaticExample;\n{$mode objfpc}\n{$static on}\ntype\n myclass=class\n num : integer;static;\n end;\nvar\n n1, n2 : myclass;\nbegin\n n1:= myclass.create;\n n2:= myclass.create;\n n1.num := 12;\n writeln(n2.num);\n n2.num := 31;\n writeln(n1.num);\n writeln(myclass.num);\n myclass.num := myclass.num + 20;\n writeln(n1.num);\n writeln(n2.num);\nend."
},
{
"code": null,
"e": 16298,
"s": 16217,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 16314,
"s": 16298,
"text": "12\n31\n31\n51\n51\n"
},
{
"code": null,
"e": 16384,
"s": 16314,
"text": "You must use the directive {$static on} for using the static members."
},
{
"code": null,
"e": 16419,
"s": 16384,
"text": "\n 94 Lectures \n 8.5 hours \n"
},
{
"code": null,
"e": 16442,
"s": 16419,
"text": " Stone River ELearning"
},
{
"code": null,
"e": 16449,
"s": 16442,
"text": " Print"
},
{
"code": null,
"e": 16460,
"s": 16449,
"text": " Add Notes"
}
] |
Inserting Date() in MongoDB through Mongo shell? | In order to insert Date() in MongoDB through Mongo shell, use the following syntax
var yourVariableName= new Date(year,month, day, hour, minute);
db.yourCollectionName({yourDateFieldName:yourVariableName});
Let us first create a date variable
> var creatingDate = new Date(2019, 03, 29, 13, 12);
Let us create a collection with documents:
>db.insertingDateUsingVariableDemo.insertOne({"UserName":"John","UserMessages":["Hi","Hello","Awesome"],"UserPostDate":creatingDate});
Following is the query to display all documents from a collection with the help of find() method
> db.insertingDateUsingVariableDemo.find().pretty();
This will produce the following output
{
"_id" : ObjectId("5c9d1b19a629b87623db1b21"),
"UserName" : "John",
"UserMessages" : [
"Hi",
"Hello",
"Awesome"
],
"UserPostDate" : ISODate("2019-04-29T07:42:00Z")
} | [
{
"code": null,
"e": 1145,
"s": 1062,
"text": "In order to insert Date() in MongoDB through Mongo shell, use the following syntax"
},
{
"code": null,
"e": 1269,
"s": 1145,
"text": "var yourVariableName= new Date(year,month, day, hour, minute);\ndb.yourCollectionName({yourDateFieldName:yourVariableName});"
},
{
"code": null,
"e": 1305,
"s": 1269,
"text": "Let us first create a date variable"
},
{
"code": null,
"e": 1358,
"s": 1305,
"text": "> var creatingDate = new Date(2019, 03, 29, 13, 12);"
},
{
"code": null,
"e": 1401,
"s": 1358,
"text": "Let us create a collection with documents:"
},
{
"code": null,
"e": 1536,
"s": 1401,
"text": ">db.insertingDateUsingVariableDemo.insertOne({\"UserName\":\"John\",\"UserMessages\":[\"Hi\",\"Hello\",\"Awesome\"],\"UserPostDate\":creatingDate});"
},
{
"code": null,
"e": 1633,
"s": 1536,
"text": "Following is the query to display all documents from a collection with the help of find() method"
},
{
"code": null,
"e": 1686,
"s": 1633,
"text": "> db.insertingDateUsingVariableDemo.find().pretty();"
},
{
"code": null,
"e": 1725,
"s": 1686,
"text": "This will produce the following output"
},
{
"code": null,
"e": 1925,
"s": 1725,
"text": "{\n \"_id\" : ObjectId(\"5c9d1b19a629b87623db1b21\"),\n \"UserName\" : \"John\",\n \"UserMessages\" : [\n \"Hi\",\n \"Hello\",\n \"Awesome\"\n ],\n \"UserPostDate\" : ISODate(\"2019-04-29T07:42:00Z\")\n}"
}
] |
Creating a popup message box with an Entry field in tkinter | Tkinter Popup are toplevel window interfaces that wrap up the widget and the element with the main window. It can be embedded in any main window using a handler like Button Widget. Popup can be created using the Toplevel(root) constructor.
In this example, we will create a simple application containing a Label widget and a button to open the popup message box which contains an Entry field. Once the popup is opened, it can have the functionality to go back to the main window.
#Import the required library
from tkinter import*
#Create an instance of tkinter frame
win= Tk()
#Define geometry of the window
win.geometry("750x250")
#Define a function to close the popup window
def close_win(top):
top.destroy()
def insert_val(e):
e.insert(0, "Hello World!")
#Define a function to open the Popup Dialogue
def popupwin():
#Create a Toplevel window
top= Toplevel(win)
top.geometry("750x250")
#Create an Entry Widget in the Toplevel window
entry= Entry(top, width= 25)
entry.pack()
#Create a Button to print something in the Entry widget
Button(top,text= "Insert", command= lambda:insert_val(entry)).pack(pady= 5,side=TOP)
#Create a Button Widget in the Toplevel Window
button= Button(top, text="Ok", command=lambda:close_win(top))
button.pack(pady=5, side= TOP)
#Create a Label
label= Label(win, text="Click the Button to Open the Popup Dialogue", font= ('Helvetica 15 bold'))
label.pack(pady=20)
#Create a Button
button= Button(win, text= "Click Me!", command= popupwin, font= ('Helvetica 14 bold'))
button.pack(pady=20)
win.mainloop()
Running the above code will display a window that contains a button to open the Popup Dialogue.
Once we click the button, it will open the popup Dialog Box. The popup Dialog has two buttons, each for a different purpose. When we click the “Ok” button, it will destroy the popup and revert back to the main window. | [
{
"code": null,
"e": 1302,
"s": 1062,
"text": "Tkinter Popup are toplevel window interfaces that wrap up the widget and the element with the main window. It can be embedded in any main window using a handler like Button Widget. Popup can be created using the Toplevel(root) constructor."
},
{
"code": null,
"e": 1542,
"s": 1302,
"text": "In this example, we will create a simple application containing a Label widget and a button to open the popup message box which contains an Entry field. Once the popup is opened, it can have the functionality to go back to the main window."
},
{
"code": null,
"e": 2642,
"s": 1542,
"text": "#Import the required library\nfrom tkinter import*\n\n#Create an instance of tkinter frame\nwin= Tk()\n\n#Define geometry of the window\nwin.geometry(\"750x250\")\n\n#Define a function to close the popup window\ndef close_win(top):\n top.destroy()\ndef insert_val(e):\n e.insert(0, \"Hello World!\")\n\n#Define a function to open the Popup Dialogue\ndef popupwin():\n #Create a Toplevel window\n top= Toplevel(win)\n top.geometry(\"750x250\")\n\n #Create an Entry Widget in the Toplevel window\n entry= Entry(top, width= 25)\n entry.pack()\n\n #Create a Button to print something in the Entry widget\n Button(top,text= \"Insert\", command= lambda:insert_val(entry)).pack(pady= 5,side=TOP)\n #Create a Button Widget in the Toplevel Window\n button= Button(top, text=\"Ok\", command=lambda:close_win(top))\n button.pack(pady=5, side= TOP)\n#Create a Label\nlabel= Label(win, text=\"Click the Button to Open the Popup Dialogue\", font= ('Helvetica 15 bold'))\nlabel.pack(pady=20)\n\n#Create a Button\nbutton= Button(win, text= \"Click Me!\", command= popupwin, font= ('Helvetica 14 bold'))\nbutton.pack(pady=20)\nwin.mainloop()"
},
{
"code": null,
"e": 2738,
"s": 2642,
"text": "Running the above code will display a window that contains a button to open the Popup Dialogue."
},
{
"code": null,
"e": 2956,
"s": 2738,
"text": "Once we click the button, it will open the popup Dialog Box. The popup Dialog has two buttons, each for a different purpose. When we click the “Ok” button, it will destroy the popup and revert back to the main window."
}
] |
LIME: Explaining predictions of machine learning models (1/2) | by Sharayu Rane | Towards Data Science | Feature Importance using Gini IndexFeature Importance using Permutation ImportancePartial Dependence Plots
Feature Importance using Gini Index
Feature Importance using Permutation Importance
Partial Dependence Plots
I would like to begin by asking the following question: “Can we trust the model predictions just because the model performance is convincingly high on the test data?” Many people might answer this question as ‘Yes’. But this is not always true. High model performance should not be considered an indicator to trust the model predictions, as the signals being picked up by the model can be random and might not make business sense.
In this blog I am going to talk about LIME , a method that helps in understanding the reasons behind predictions at an instance level. This is a model agnostic technique i.e. you can use this with any model be it Neural Networks, Tree based models, SVMs etc.
LIME stands for Local Interpretable Model-Agnostic Explanations. The word Local seems to be most confounding for many people. So let me start by explaining why the word Local.
The relation between the target and the independent variable can get very complex at a global level, for example refer to the image below. Looking at this global view does not provide much insight on the relationship between the independent feature and the target, so LIME zooms in goes at a very very local level.
Let’s assume that the instance being explained is marked by green star. The principal intuition behind LIME is that after zooming in and analyzing the relationship between target and the independent feature locally, we can approximate this relation using linear models.
Now let’s understand what are the steps of how LIME works:
Sample and SMOTE the instances in the original data close to the green star (instance being explained)
Calculate the distance between the sampled instances and the instance being explained
For these synthetically generated instances, make predictions using the original global model
Fit a simple linear model on this data set
This linear model is weighted based on the similarity index calculated in step 2. This is to ensure that the errors on the instances most closest to the original instance are valued more than others
Now let’s get into the example to explain the LIME results. I have used the data set from Kaggle. The objective is to identify the strongest predictors of default credit card payment. I have fitted a Scikit-learn implementation of Gradient Boosted Trees.
Correctly classified ‘Non Default Payment’ instance:
Orange color indicates that the contribution is towards a ‘Default Payment’, and Blue color indicates that the contribution is towards ‘Non Default Payment’ . The global model’s prediction for this instance is 0.072 where as for the local model the prediction is 0.066. So, for this particular instance the prediction of the global model and local model is very close.
The above example shows that the top contributor for this particular instance to be tagged as ‘Non Default Payment’ is the repayment status in the previous month (PAY_0 = 1). This means that in the previous month the payment was done on time. The interpretation of the other variables is also similar (PAY_2, PAY_3 etc.). This indicates that if the payments for previous months was done on time then there is high likelihood for the person to not default in the next month.
High amount of LIMIT_BAL also contributes towards ‘Non Default Payment’. This makes sense too because high LIMIT_BAL means that the supplementary credit left is low. So, there is a chance that the next month’s bill will potentially be low, and hence less chance to default. Now, for continuous features LIME’s output provides more detail as it specifies a range of feature values that are causing that feature to have its influence. E.g. ‘LIMIT_BAL > 240,000’
I personally find analyzing the incorrectly predicted instances to be very insightful. This helps in understanding what features actually drove the prediction to be incorrect.
Incorrectly classified ‘Non Default Payment’ instance (Actual: ‘Default Payment’):
The above instance is incorrectly predicted as Non Default Payment owing to the features that denote previous payment history (PAY_0, PAY_2, PAY_3). For this instance the previous payments were done duly on time. This might have swayed the prediction to be incorrect.
With the above example, the general idea of LIME sounds reasonable, but there are some potential drawbacks that LIME is susceptible to.
1. Local linear behavior
LIME currently implements linear models locally to get the explanations. While this assumption is valid for a small region around the instance being explained. But for more complex data sets, as we increase the area of the local region this assumption might not be true. Thus, the explanations from the local interpretable models might not explain the behavior of the global linear model.
2. Number of features
The number of features to be selected for the explanation needs to be optimally selected. The number of features should be selected such that the complexity of the model is maintained along with the simplicity of the explanation. In python implementation of LIME, it provides the following options: ‘forward_selection’, ‘lasso_path’, ‘none’ or ‘auto’.
3. Models without probability scores
LIME currently doesn’t support classifier models without probability scores.
4. No aggregated view
LIME implementation doesn’t provide an aggregated view of the feature explanation. The user has to in turn analyze the individual instances.
5. Similarity score
For tabular data, LIME creates new samples by perturbing each feature individually, drawing from a normal distribution with mean and standard deviation taken from the feature. For continuous features LIME provides an option to perturb samples from the normal centered on the instance being explained or from the normal centered on the mean of the feature data set. LIME weighs higher the samples close to the instance of interest. Now in high dimensional space with each feature having a different range of values, calculating similarity index is challenging and might not result in clean results.
6. Definition of the neighborhood
Defining a correct neighborhood to run the locally interpretable models is a difficult problem. So why do we need to define a local neighborhood along the instance being explained? The accuracy of the of the explanation model to the original model is enforced through the loss function over a set of samples in the simplified input space weighted by the local kernel. Currently LIME uses an exponential smoothing kernel. The width of this kernel determines the influence on the local model of the instance being explained, and again there is no best way to determine the width of the kernel.
7. Consistency of the model explanations
LIME explanations can sometimes lack consistency due to the sampling bias, calculation of the similarity score and definition of the neighborhood.
LIME provides very simple interpretable explanations and is a quick way to analyze the contribution of each feature. LIME can also be used on text and image data, and the execution time of LIME is less as compared to other explainable techniques such as SHAP. Another feature I like about LIME is that the local surrogate models can actually use independent features other than the ones used in the global model.
Additional reading resources:
https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdfhttps://github.com/marcotcr/limehttps://christophm.github.io/interpretable-ml-book/lime.html
https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf
https://github.com/marcotcr/lime
https://christophm.github.io/interpretable-ml-book/lime.html
In the next blog, I will explain another popular technique: SHAP that is used for explaining model predictions.
This field of Explainable AI is evolving rapidly, and there are lot of new developments in terms of tools and frameworks. In the comments section please write your feedback on the blog, and the latest tools you have used in this field. Also, comment if you would like me to write about any particular topic.
This content was originally published on my personal blogging website: http://datascienceninja.com/. Click here to view it and subscribe to receive updates on latest blogs instantly. | [
{
"code": null,
"e": 279,
"s": 172,
"text": "Feature Importance using Gini IndexFeature Importance using Permutation ImportancePartial Dependence Plots"
},
{
"code": null,
"e": 315,
"s": 279,
"text": "Feature Importance using Gini Index"
},
{
"code": null,
"e": 363,
"s": 315,
"text": "Feature Importance using Permutation Importance"
},
{
"code": null,
"e": 388,
"s": 363,
"text": "Partial Dependence Plots"
},
{
"code": null,
"e": 819,
"s": 388,
"text": "I would like to begin by asking the following question: “Can we trust the model predictions just because the model performance is convincingly high on the test data?” Many people might answer this question as ‘Yes’. But this is not always true. High model performance should not be considered an indicator to trust the model predictions, as the signals being picked up by the model can be random and might not make business sense."
},
{
"code": null,
"e": 1078,
"s": 819,
"text": "In this blog I am going to talk about LIME , a method that helps in understanding the reasons behind predictions at an instance level. This is a model agnostic technique i.e. you can use this with any model be it Neural Networks, Tree based models, SVMs etc."
},
{
"code": null,
"e": 1254,
"s": 1078,
"text": "LIME stands for Local Interpretable Model-Agnostic Explanations. The word Local seems to be most confounding for many people. So let me start by explaining why the word Local."
},
{
"code": null,
"e": 1569,
"s": 1254,
"text": "The relation between the target and the independent variable can get very complex at a global level, for example refer to the image below. Looking at this global view does not provide much insight on the relationship between the independent feature and the target, so LIME zooms in goes at a very very local level."
},
{
"code": null,
"e": 1839,
"s": 1569,
"text": "Let’s assume that the instance being explained is marked by green star. The principal intuition behind LIME is that after zooming in and analyzing the relationship between target and the independent feature locally, we can approximate this relation using linear models."
},
{
"code": null,
"e": 1898,
"s": 1839,
"text": "Now let’s understand what are the steps of how LIME works:"
},
{
"code": null,
"e": 2001,
"s": 1898,
"text": "Sample and SMOTE the instances in the original data close to the green star (instance being explained)"
},
{
"code": null,
"e": 2087,
"s": 2001,
"text": "Calculate the distance between the sampled instances and the instance being explained"
},
{
"code": null,
"e": 2181,
"s": 2087,
"text": "For these synthetically generated instances, make predictions using the original global model"
},
{
"code": null,
"e": 2224,
"s": 2181,
"text": "Fit a simple linear model on this data set"
},
{
"code": null,
"e": 2423,
"s": 2224,
"text": "This linear model is weighted based on the similarity index calculated in step 2. This is to ensure that the errors on the instances most closest to the original instance are valued more than others"
},
{
"code": null,
"e": 2678,
"s": 2423,
"text": "Now let’s get into the example to explain the LIME results. I have used the data set from Kaggle. The objective is to identify the strongest predictors of default credit card payment. I have fitted a Scikit-learn implementation of Gradient Boosted Trees."
},
{
"code": null,
"e": 2731,
"s": 2678,
"text": "Correctly classified ‘Non Default Payment’ instance:"
},
{
"code": null,
"e": 3100,
"s": 2731,
"text": "Orange color indicates that the contribution is towards a ‘Default Payment’, and Blue color indicates that the contribution is towards ‘Non Default Payment’ . The global model’s prediction for this instance is 0.072 where as for the local model the prediction is 0.066. So, for this particular instance the prediction of the global model and local model is very close."
},
{
"code": null,
"e": 3574,
"s": 3100,
"text": "The above example shows that the top contributor for this particular instance to be tagged as ‘Non Default Payment’ is the repayment status in the previous month (PAY_0 = 1). This means that in the previous month the payment was done on time. The interpretation of the other variables is also similar (PAY_2, PAY_3 etc.). This indicates that if the payments for previous months was done on time then there is high likelihood for the person to not default in the next month."
},
{
"code": null,
"e": 4034,
"s": 3574,
"text": "High amount of LIMIT_BAL also contributes towards ‘Non Default Payment’. This makes sense too because high LIMIT_BAL means that the supplementary credit left is low. So, there is a chance that the next month’s bill will potentially be low, and hence less chance to default. Now, for continuous features LIME’s output provides more detail as it specifies a range of feature values that are causing that feature to have its influence. E.g. ‘LIMIT_BAL > 240,000’"
},
{
"code": null,
"e": 4210,
"s": 4034,
"text": "I personally find analyzing the incorrectly predicted instances to be very insightful. This helps in understanding what features actually drove the prediction to be incorrect."
},
{
"code": null,
"e": 4293,
"s": 4210,
"text": "Incorrectly classified ‘Non Default Payment’ instance (Actual: ‘Default Payment’):"
},
{
"code": null,
"e": 4561,
"s": 4293,
"text": "The above instance is incorrectly predicted as Non Default Payment owing to the features that denote previous payment history (PAY_0, PAY_2, PAY_3). For this instance the previous payments were done duly on time. This might have swayed the prediction to be incorrect."
},
{
"code": null,
"e": 4697,
"s": 4561,
"text": "With the above example, the general idea of LIME sounds reasonable, but there are some potential drawbacks that LIME is susceptible to."
},
{
"code": null,
"e": 4722,
"s": 4697,
"text": "1. Local linear behavior"
},
{
"code": null,
"e": 5111,
"s": 4722,
"text": "LIME currently implements linear models locally to get the explanations. While this assumption is valid for a small region around the instance being explained. But for more complex data sets, as we increase the area of the local region this assumption might not be true. Thus, the explanations from the local interpretable models might not explain the behavior of the global linear model."
},
{
"code": null,
"e": 5133,
"s": 5111,
"text": "2. Number of features"
},
{
"code": null,
"e": 5485,
"s": 5133,
"text": "The number of features to be selected for the explanation needs to be optimally selected. The number of features should be selected such that the complexity of the model is maintained along with the simplicity of the explanation. In python implementation of LIME, it provides the following options: ‘forward_selection’, ‘lasso_path’, ‘none’ or ‘auto’."
},
{
"code": null,
"e": 5522,
"s": 5485,
"text": "3. Models without probability scores"
},
{
"code": null,
"e": 5599,
"s": 5522,
"text": "LIME currently doesn’t support classifier models without probability scores."
},
{
"code": null,
"e": 5621,
"s": 5599,
"text": "4. No aggregated view"
},
{
"code": null,
"e": 5762,
"s": 5621,
"text": "LIME implementation doesn’t provide an aggregated view of the feature explanation. The user has to in turn analyze the individual instances."
},
{
"code": null,
"e": 5782,
"s": 5762,
"text": "5. Similarity score"
},
{
"code": null,
"e": 6380,
"s": 5782,
"text": "For tabular data, LIME creates new samples by perturbing each feature individually, drawing from a normal distribution with mean and standard deviation taken from the feature. For continuous features LIME provides an option to perturb samples from the normal centered on the instance being explained or from the normal centered on the mean of the feature data set. LIME weighs higher the samples close to the instance of interest. Now in high dimensional space with each feature having a different range of values, calculating similarity index is challenging and might not result in clean results."
},
{
"code": null,
"e": 6414,
"s": 6380,
"text": "6. Definition of the neighborhood"
},
{
"code": null,
"e": 7006,
"s": 6414,
"text": "Defining a correct neighborhood to run the locally interpretable models is a difficult problem. So why do we need to define a local neighborhood along the instance being explained? The accuracy of the of the explanation model to the original model is enforced through the loss function over a set of samples in the simplified input space weighted by the local kernel. Currently LIME uses an exponential smoothing kernel. The width of this kernel determines the influence on the local model of the instance being explained, and again there is no best way to determine the width of the kernel."
},
{
"code": null,
"e": 7047,
"s": 7006,
"text": "7. Consistency of the model explanations"
},
{
"code": null,
"e": 7194,
"s": 7047,
"text": "LIME explanations can sometimes lack consistency due to the sampling bias, calculation of the similarity score and definition of the neighborhood."
},
{
"code": null,
"e": 7607,
"s": 7194,
"text": "LIME provides very simple interpretable explanations and is a quick way to analyze the contribution of each feature. LIME can also be used on text and image data, and the execution time of LIME is less as compared to other explainable techniques such as SHAP. Another feature I like about LIME is that the local surrogate models can actually use independent features other than the ones used in the global model."
},
{
"code": null,
"e": 7637,
"s": 7607,
"text": "Additional reading resources:"
},
{
"code": null,
"e": 7791,
"s": 7637,
"text": "https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdfhttps://github.com/marcotcr/limehttps://christophm.github.io/interpretable-ml-book/lime.html"
},
{
"code": null,
"e": 7853,
"s": 7791,
"text": "https://www.kdd.org/kdd2016/papers/files/rfp0573-ribeiroA.pdf"
},
{
"code": null,
"e": 7886,
"s": 7853,
"text": "https://github.com/marcotcr/lime"
},
{
"code": null,
"e": 7947,
"s": 7886,
"text": "https://christophm.github.io/interpretable-ml-book/lime.html"
},
{
"code": null,
"e": 8059,
"s": 7947,
"text": "In the next blog, I will explain another popular technique: SHAP that is used for explaining model predictions."
},
{
"code": null,
"e": 8367,
"s": 8059,
"text": "This field of Explainable AI is evolving rapidly, and there are lot of new developments in terms of tools and frameworks. In the comments section please write your feedback on the blog, and the latest tools you have used in this field. Also, comment if you would like me to write about any particular topic."
}
] |
How to make a specific text on TextView bold in Android? | This example demonstrates how do I make a specific text on TextView bold in android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
xmlns:android="http://schemas.android.com/apk/res/android"
android:id="@+id/coordinator_layout"
android:layout_width="match_parent"
android:layout_height="match_parent">
<TextView
android:id="@+id/textView"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:textSize="40sp"
android:layout_gravity="center"
android:layout_margin="25dp" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.graphics.Typeface;
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.text.SpannableString;
import android.text.Spanned;
import android.text.style.StyleSpan;
import android.widget.TextView;
public class MainActivity extends AppCompatActivity {
TextView textView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
textView = (TextView) findViewById(R.id.textView);
String text = "My favourite food is Biriyani";
SpannableString ss = new SpannableString(text);
StyleSpan boldSpan = new StyleSpan(Typeface.BOLD);
ss.setSpan(boldSpan, 21, 29,Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);
textView.setText(ss);
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="app.com.sample" >
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme" >
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −
Click here to download the project code. | [
{
"code": null,
"e": 1147,
"s": 1062,
"text": "This example demonstrates how do I make a specific text on TextView bold in android."
},
{
"code": null,
"e": 1276,
"s": 1147,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1341,
"s": 1276,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 1833,
"s": 1341,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout\n xmlns:android=\"http://schemas.android.com/apk/res/android\"\n android:id=\"@+id/coordinator_layout\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\">\n <TextView\n android:id=\"@+id/textView\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:textSize=\"40sp\"\n android:layout_gravity=\"center\"\n android:layout_margin=\"25dp\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 1890,
"s": 1833,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 2692,
"s": 1890,
"text": "import android.graphics.Typeface;\nimport android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.text.SpannableString;\nimport android.text.Spanned;\nimport android.text.style.StyleSpan;\nimport android.widget.TextView;\npublic class MainActivity extends AppCompatActivity {\n TextView textView;\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n textView = (TextView) findViewById(R.id.textView);\n String text = \"My favourite food is Biriyani\";\n SpannableString ss = new SpannableString(text);\n StyleSpan boldSpan = new StyleSpan(Typeface.BOLD);\n ss.setSpan(boldSpan, 21, 29,Spanned.SPAN_EXCLUSIVE_EXCLUSIVE);\n textView.setText(ss);\n }\n}"
},
{
"code": null,
"e": 2747,
"s": 2692,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 3419,
"s": 2747,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"app.com.sample\" >\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\" >\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 3766,
"s": 3419,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen −"
},
{
"code": null,
"e": 3807,
"s": 3766,
"text": "Click here to download the project code."
}
] |
Accelerating end-to-end Machine Learning workflows with NVIDIA RAPIDS | by Dhrumil Patel | Towards Data Science | From space exploration to robotics, from game engines to medical research, and from training a single model to powering the entire data center, GPUs are an integral part of the process of how we deal with data.
If one thing that we need today, more than ever, is the speed to process the enormous data we generate. With the latest GTC 2020 keynote, which was delivered from the CEO’s kitchen, NVIDIA made clear that they want to utilize GPUs in all the fields possible.
In this article, we are going to learn how we can accelerate an end-to-end machine learning workflow with NVIDIA’s RAPIDS. If you don’t already know how end-to-end workflow works or what on earth RAPIDS is, stay with me, you are in for a treat. We will be discussing the following in this article, feel free to jump forward if you know a topic or two.
What is RAPIDS?
How end-to-end workflows work
Predicting NYC Taxi fares and comparing the time taken by CPU and GPU
Also, before we start, please note that the machine I am using has the following specs:
Alright so let’s jump right into it. RAPIDS is a market/domain-specific library that runs on top of CUDA, a parallel computing platform and API created by, you guessed it right, NVIDIA.
While several libraries target different domains, RAPIDS was specifically designed for GPU accelerated Data Analytics and later was adopted by the Data Science community to train their machine learning models faster. I mean who doesn’t like training their models in the blink of an eye? Right? To know more about RAPIDS and recent development, head over to their website and introductory blog.
If you are here reading how to accelerate your machine learning end-to-end workflow, high chances are you will be familiar with the workflow that I am about to explain. If so, you can skip to the next part where we predict the NYC taxi fares with CPU and GPU. But if you are new and want to know more, stay with me.
Take a good look at the diagram. Did? Perfect. Now let’s understand this step by step:
ETL — Extract, Transform, and LoadThis is where most data scientists spend their time, cleaning data. In this step, we do all the things that are necessary to form a good dataset before we feed it to our machine learning algorithm — from creating data frames to doing feature engineering, all fall under ETL. If you want to start slow and want to know how to implement basic ETL, head over to this article where we perform ETL and predict who survived the Titanic.TrainingTo obtain the best results, we first have to train our model with data, so when the next time it sees something similar, it knows what it is looking at. This is the stage where your model’s training and tuning takes place.InferenceWe then put our model into operation to be able to respond to user queries after going through several processes. For example, ranking. Based on user queries we rank the results and we deliver it back to the user. Think of how Google presents to you a new set of results for each new query. Neat huh?
ETL — Extract, Transform, and LoadThis is where most data scientists spend their time, cleaning data. In this step, we do all the things that are necessary to form a good dataset before we feed it to our machine learning algorithm — from creating data frames to doing feature engineering, all fall under ETL. If you want to start slow and want to know how to implement basic ETL, head over to this article where we perform ETL and predict who survived the Titanic.
TrainingTo obtain the best results, we first have to train our model with data, so when the next time it sees something similar, it knows what it is looking at. This is the stage where your model’s training and tuning takes place.
InferenceWe then put our model into operation to be able to respond to user queries after going through several processes. For example, ranking. Based on user queries we rank the results and we deliver it back to the user. Think of how Google presents to you a new set of results for each new query. Neat huh?
To avoid any misconception here, know that this is just a general idea of how an end-to-end workflow works. There is a lot of work behind the scene. Just like a movie, or a play, or opera.
You might be wondering why we need to speed this process up? Let me briefly tell you why.
What happens is, all the three stages that we discussed above have their own set of challenges when it comes to computation. With an enormous amount of data generated every day and high data processing requirements (Terabytes of Dataset), let’s just say CPUs are simply not enough.
Data scientists spend most of their time cleaning and processing (aka ETL), and no one wants that process to take half a day or an entire day. I know, I wouldn’t.
And with companies quickly moving towards acquiring and processing petabytes of data, we need speed now more than ever. Let’s understand that better with an example.
First things first, you should know your data better than you know yourself. The NYC taxi dataset is popular and widely available, you can download the data set from here. It contains 9 files for 9 different months starting from January 2014, and each file is approximately 2–2.2GB large, yes each file. You probably get the idea. We will be processing all files together.
First, let’s see what’s in those files. There are 17 columns or features in the dataset. I have listed each one of them, and they are self-explanatory.
vendor idpickup_datetimedropoff_datetimepassenger_counttrip_distancepickup_latitudepickup_longituderate_codedropoff_latitudedropoff_longitudepayment_typefare_amountsurchargemta_taxtip_ammounttolls_ammounttotal_amount
Simple, right? Good. Now what about the records, you might be wondering. We have whooping 124M rows or records in all those 9 files combined. Yes, that’s right, 124 Million. 124,649,497 to be precise. Any guess how much time it will take to complete the ETL process? Stay with me to find out.
We will be using cuDF with Dask & XGBoost to scale GPU DataFrame ETL style operations and for model training. To know more about cuDF read fellow communicator George’s post on “How to Use CuPy to Make Numpy Over 10X Faster”. For the sake of simplicity, I am going to keep the code minimal in the post but you can find the entire ipynb notebook and many other E2E example notebooks on RAPIDS’ GitHub repository here.
As we already know we need to clean the data, so we will tidy up our dataset.
We have the same column names represented differently in different CSV files. For example, one file has rate_code and another has RateCodeID, while both are universally accepted ways to represent column names, we have to go with either one. I always choose the first one as the underscore separates the words and it is easy for my eyes to read them. Forever team lazy.
Outliers are there. Always. To break something, to stop something from running efficiently and we have to handle them. For example, fare less than $0 or greater than $500, who would give $500? The same with passenger count, discard entries with < 0 and > 6 entries.
After cleaning up the dataset we have discarded almost 7 million records and now have 117 million from which we can actually gain insights.
Let’s imagine you are going to make a trip to New York, don’t go just imagine, on the 25th, and want to build a model to predict what fare prices will be like in the last few days given the data of the first part of the month.
We are measuring the time to know how much time it will take for your cluster to load data from the storage bucket and ETL portion of the workflow. At this point, we have 92M data for training and the remaining 25% for testing.
The wall time represents the total time, which is 2min 3s or 123 seconds, on the CPU. Impressive, huh? But what if I tell you we can achieve even faster results?
In the same process, just after enabling 1 GPU (NVIDIA TITAN RTX 24 GB DDR6) we can finish the same job in 63 seconds. Nearly a 2x boost up. If you think that is great, wait till I show you the boost up on training time. Also, I was monitoring the CPU usage while doing so. All cores working, it was soothing to my eyes, not going to lie.
We are going to train the XGBoost Regression model on our training data and find out how much time it will take to complete the entire training.
The time is 6min 36s, so 396 seconds. Not bad. Let’s check the same after enabling the TITAN RTX GPU.
WHAT! Did we just train our entire model with 92 million records in 33.9 seconds? Damn right we did. Honestly, I didn’t believe it at first. But yes, this is reality. More than 12x speedup just by enabling 1 GPU, what if we use multiple?
As we have our trained model, we now have to test it with the 25% of records we held out.
Measuring the performance of our model is an essential part. We will do that by computing root mean squared error. If you want to learn more about the cost functions, head over to this article on Cost Functions by fellow communicator Kamil.
Not bad at all. There are further improvements that we can do but it is out of this article’s scope. Some other time perhaps.
With more than ever data available to us in multiple forms, we need tools that can process data faster to gain insights and turn that insight to drive better decisions from medical researching to space exploration. With the NVIDIA RAPIDS (software stack) and NVIDIA GPUs (hardware), we can achieve nearly 6X speedup on end-to-end machine learning pipeline. Also, here is a photo of the beast.
Isn’t it beautiful? You can find the notebooks here and here, and the data here. | [
{
"code": null,
"e": 383,
"s": 172,
"text": "From space exploration to robotics, from game engines to medical research, and from training a single model to powering the entire data center, GPUs are an integral part of the process of how we deal with data."
},
{
"code": null,
"e": 642,
"s": 383,
"text": "If one thing that we need today, more than ever, is the speed to process the enormous data we generate. With the latest GTC 2020 keynote, which was delivered from the CEO’s kitchen, NVIDIA made clear that they want to utilize GPUs in all the fields possible."
},
{
"code": null,
"e": 994,
"s": 642,
"text": "In this article, we are going to learn how we can accelerate an end-to-end machine learning workflow with NVIDIA’s RAPIDS. If you don’t already know how end-to-end workflow works or what on earth RAPIDS is, stay with me, you are in for a treat. We will be discussing the following in this article, feel free to jump forward if you know a topic or two."
},
{
"code": null,
"e": 1010,
"s": 994,
"text": "What is RAPIDS?"
},
{
"code": null,
"e": 1040,
"s": 1010,
"text": "How end-to-end workflows work"
},
{
"code": null,
"e": 1110,
"s": 1040,
"text": "Predicting NYC Taxi fares and comparing the time taken by CPU and GPU"
},
{
"code": null,
"e": 1198,
"s": 1110,
"text": "Also, before we start, please note that the machine I am using has the following specs:"
},
{
"code": null,
"e": 1384,
"s": 1198,
"text": "Alright so let’s jump right into it. RAPIDS is a market/domain-specific library that runs on top of CUDA, a parallel computing platform and API created by, you guessed it right, NVIDIA."
},
{
"code": null,
"e": 1778,
"s": 1384,
"text": "While several libraries target different domains, RAPIDS was specifically designed for GPU accelerated Data Analytics and later was adopted by the Data Science community to train their machine learning models faster. I mean who doesn’t like training their models in the blink of an eye? Right? To know more about RAPIDS and recent development, head over to their website and introductory blog."
},
{
"code": null,
"e": 2094,
"s": 1778,
"text": "If you are here reading how to accelerate your machine learning end-to-end workflow, high chances are you will be familiar with the workflow that I am about to explain. If so, you can skip to the next part where we predict the NYC taxi fares with CPU and GPU. But if you are new and want to know more, stay with me."
},
{
"code": null,
"e": 2181,
"s": 2094,
"text": "Take a good look at the diagram. Did? Perfect. Now let’s understand this step by step:"
},
{
"code": null,
"e": 3185,
"s": 2181,
"text": "ETL — Extract, Transform, and LoadThis is where most data scientists spend their time, cleaning data. In this step, we do all the things that are necessary to form a good dataset before we feed it to our machine learning algorithm — from creating data frames to doing feature engineering, all fall under ETL. If you want to start slow and want to know how to implement basic ETL, head over to this article where we perform ETL and predict who survived the Titanic.TrainingTo obtain the best results, we first have to train our model with data, so when the next time it sees something similar, it knows what it is looking at. This is the stage where your model’s training and tuning takes place.InferenceWe then put our model into operation to be able to respond to user queries after going through several processes. For example, ranking. Based on user queries we rank the results and we deliver it back to the user. Think of how Google presents to you a new set of results for each new query. Neat huh?"
},
{
"code": null,
"e": 3650,
"s": 3185,
"text": "ETL — Extract, Transform, and LoadThis is where most data scientists spend their time, cleaning data. In this step, we do all the things that are necessary to form a good dataset before we feed it to our machine learning algorithm — from creating data frames to doing feature engineering, all fall under ETL. If you want to start slow and want to know how to implement basic ETL, head over to this article where we perform ETL and predict who survived the Titanic."
},
{
"code": null,
"e": 3881,
"s": 3650,
"text": "TrainingTo obtain the best results, we first have to train our model with data, so when the next time it sees something similar, it knows what it is looking at. This is the stage where your model’s training and tuning takes place."
},
{
"code": null,
"e": 4191,
"s": 3881,
"text": "InferenceWe then put our model into operation to be able to respond to user queries after going through several processes. For example, ranking. Based on user queries we rank the results and we deliver it back to the user. Think of how Google presents to you a new set of results for each new query. Neat huh?"
},
{
"code": null,
"e": 4380,
"s": 4191,
"text": "To avoid any misconception here, know that this is just a general idea of how an end-to-end workflow works. There is a lot of work behind the scene. Just like a movie, or a play, or opera."
},
{
"code": null,
"e": 4470,
"s": 4380,
"text": "You might be wondering why we need to speed this process up? Let me briefly tell you why."
},
{
"code": null,
"e": 4752,
"s": 4470,
"text": "What happens is, all the three stages that we discussed above have their own set of challenges when it comes to computation. With an enormous amount of data generated every day and high data processing requirements (Terabytes of Dataset), let’s just say CPUs are simply not enough."
},
{
"code": null,
"e": 4915,
"s": 4752,
"text": "Data scientists spend most of their time cleaning and processing (aka ETL), and no one wants that process to take half a day or an entire day. I know, I wouldn’t."
},
{
"code": null,
"e": 5081,
"s": 4915,
"text": "And with companies quickly moving towards acquiring and processing petabytes of data, we need speed now more than ever. Let’s understand that better with an example."
},
{
"code": null,
"e": 5454,
"s": 5081,
"text": "First things first, you should know your data better than you know yourself. The NYC taxi dataset is popular and widely available, you can download the data set from here. It contains 9 files for 9 different months starting from January 2014, and each file is approximately 2–2.2GB large, yes each file. You probably get the idea. We will be processing all files together."
},
{
"code": null,
"e": 5606,
"s": 5454,
"text": "First, let’s see what’s in those files. There are 17 columns or features in the dataset. I have listed each one of them, and they are self-explanatory."
},
{
"code": null,
"e": 5823,
"s": 5606,
"text": "vendor idpickup_datetimedropoff_datetimepassenger_counttrip_distancepickup_latitudepickup_longituderate_codedropoff_latitudedropoff_longitudepayment_typefare_amountsurchargemta_taxtip_ammounttolls_ammounttotal_amount"
},
{
"code": null,
"e": 6116,
"s": 5823,
"text": "Simple, right? Good. Now what about the records, you might be wondering. We have whooping 124M rows or records in all those 9 files combined. Yes, that’s right, 124 Million. 124,649,497 to be precise. Any guess how much time it will take to complete the ETL process? Stay with me to find out."
},
{
"code": null,
"e": 6532,
"s": 6116,
"text": "We will be using cuDF with Dask & XGBoost to scale GPU DataFrame ETL style operations and for model training. To know more about cuDF read fellow communicator George’s post on “How to Use CuPy to Make Numpy Over 10X Faster”. For the sake of simplicity, I am going to keep the code minimal in the post but you can find the entire ipynb notebook and many other E2E example notebooks on RAPIDS’ GitHub repository here."
},
{
"code": null,
"e": 6610,
"s": 6532,
"text": "As we already know we need to clean the data, so we will tidy up our dataset."
},
{
"code": null,
"e": 6979,
"s": 6610,
"text": "We have the same column names represented differently in different CSV files. For example, one file has rate_code and another has RateCodeID, while both are universally accepted ways to represent column names, we have to go with either one. I always choose the first one as the underscore separates the words and it is easy for my eyes to read them. Forever team lazy."
},
{
"code": null,
"e": 7245,
"s": 6979,
"text": "Outliers are there. Always. To break something, to stop something from running efficiently and we have to handle them. For example, fare less than $0 or greater than $500, who would give $500? The same with passenger count, discard entries with < 0 and > 6 entries."
},
{
"code": null,
"e": 7385,
"s": 7245,
"text": "After cleaning up the dataset we have discarded almost 7 million records and now have 117 million from which we can actually gain insights."
},
{
"code": null,
"e": 7612,
"s": 7385,
"text": "Let’s imagine you are going to make a trip to New York, don’t go just imagine, on the 25th, and want to build a model to predict what fare prices will be like in the last few days given the data of the first part of the month."
},
{
"code": null,
"e": 7840,
"s": 7612,
"text": "We are measuring the time to know how much time it will take for your cluster to load data from the storage bucket and ETL portion of the workflow. At this point, we have 92M data for training and the remaining 25% for testing."
},
{
"code": null,
"e": 8002,
"s": 7840,
"text": "The wall time represents the total time, which is 2min 3s or 123 seconds, on the CPU. Impressive, huh? But what if I tell you we can achieve even faster results?"
},
{
"code": null,
"e": 8341,
"s": 8002,
"text": "In the same process, just after enabling 1 GPU (NVIDIA TITAN RTX 24 GB DDR6) we can finish the same job in 63 seconds. Nearly a 2x boost up. If you think that is great, wait till I show you the boost up on training time. Also, I was monitoring the CPU usage while doing so. All cores working, it was soothing to my eyes, not going to lie."
},
{
"code": null,
"e": 8486,
"s": 8341,
"text": "We are going to train the XGBoost Regression model on our training data and find out how much time it will take to complete the entire training."
},
{
"code": null,
"e": 8588,
"s": 8486,
"text": "The time is 6min 36s, so 396 seconds. Not bad. Let’s check the same after enabling the TITAN RTX GPU."
},
{
"code": null,
"e": 8826,
"s": 8588,
"text": "WHAT! Did we just train our entire model with 92 million records in 33.9 seconds? Damn right we did. Honestly, I didn’t believe it at first. But yes, this is reality. More than 12x speedup just by enabling 1 GPU, what if we use multiple?"
},
{
"code": null,
"e": 8916,
"s": 8826,
"text": "As we have our trained model, we now have to test it with the 25% of records we held out."
},
{
"code": null,
"e": 9157,
"s": 8916,
"text": "Measuring the performance of our model is an essential part. We will do that by computing root mean squared error. If you want to learn more about the cost functions, head over to this article on Cost Functions by fellow communicator Kamil."
},
{
"code": null,
"e": 9283,
"s": 9157,
"text": "Not bad at all. There are further improvements that we can do but it is out of this article’s scope. Some other time perhaps."
},
{
"code": null,
"e": 9676,
"s": 9283,
"text": "With more than ever data available to us in multiple forms, we need tools that can process data faster to gain insights and turn that insight to drive better decisions from medical researching to space exploration. With the NVIDIA RAPIDS (software stack) and NVIDIA GPUs (hardware), we can achieve nearly 6X speedup on end-to-end machine learning pipeline. Also, here is a photo of the beast."
}
] |
How to use the same JavaScript in multiple content pages? | To use the same JavaScript in more than one page, add the js code in an external JavaScript file. Let’s say the following demo.js is our external JavaScript file −
function display() {
alert("Hello World!");
}
Now add the external JavaScript file to the following HTML web page. In the same way, you can add it to multiple content page.
<html>
<body>
<form>
<input type="button" value="Result" onclick="display()"/>
</form>
<script src="demo.js">
</script>
</body>
</html> | [
{
"code": null,
"e": 1226,
"s": 1062,
"text": "To use the same JavaScript in more than one page, add the js code in an external JavaScript file. Let’s say the following demo.js is our external JavaScript file −"
},
{
"code": null,
"e": 1275,
"s": 1226,
"text": "function display() {\n alert(\"Hello World!\");\n}"
},
{
"code": null,
"e": 1402,
"s": 1275,
"text": "Now add the external JavaScript file to the following HTML web page. In the same way, you can add it to multiple content page."
},
{
"code": null,
"e": 1577,
"s": 1402,
"text": "<html>\n <body>\n <form>\n <input type=\"button\" value=\"Result\" onclick=\"display()\"/>\n </form>\n <script src=\"demo.js\">\n </script>\n </body>\n</html>"
}
] |
C# Decision Making (if, if-else, if-else-if ladder, nested if, switch, nested switch) - GeeksforGeeks | 06 Jul, 2021
Decision Making in programming is similar to decision making in real life. In programming too, a certain block of code needs to be executed when some condition is fulfilled. A programming language uses control statements to control the flow of execution of program based on certain conditions. These are used to cause the flow of execution to advance and branch based on changes to the state of a program.The conditional statements of C#:
if
if-else
if-else-if
Nested if
Switch
Nested switch
IF Statement The if statement checks the given condition. If the condition evaluates to be true then the block of code/statements will execute otherwise not. Syntax:
if(condition)
{
//code to be executed
}
Note: If the curly brackets { } are not used with if statements than the statement just next to it is only considered associated with the if statement. Example:
if (condition)
statement 1;
statement 2;
In this example, only statement 1 is considered to be associated with the if statement.Flowchart:
Example:
Csharp
// C# program to illustrate if statementusing System; public class GFG { public static void Main(string[] args) { string name = "Geek"; if (name == "Geek") { Console.WriteLine("GeeksForGeeks"); } }}
Output:
GeeksForGeeks
IF – else Statement The if statement evaluates the code if the condition is true but what if the condition is not true, here comes the else statement. It tells the code what to do when the if condition is false.Syntax:
if(condition)
{
// code if condition is true
}
else
{
// code if condition is false
}
Flowchart:
Example:
Csharp
// C# program to illustrate// if-else statementusing System; public class GFG { public static void Main(string[] args) { string name = "Geek"; if (name == "Geeks") { Console.WriteLine("GeeksForGeeksr"); } else { Console.WriteLine("Geeks"); } }}
Output:
Geeks
If – else – if ladder Statement The if-else-if ladder statement executes one condition from multiple statements. The execution starts from top and checked for each if condition. The statement of if block will be executed which evaluates to be true. If none of the if condition evaluates to be true then the last else block is evaluated. Syntax:
if(condition1)
{
// code to be executed if condition1 is true
}
else if(condition2)
{
// code to be executed if condition2 is true
}
else if(condition3)
{
// code to be executed if condition3 is true
}
...
else
{
// code to be executed if all the conditions are false
}
Flowchart:
Example:
Csharp
// C# program to illustrate// if-else-if ladderusing System; class GFG { public static void Main(String[] args) { int i = 20; if (i == 10) Console.WriteLine("i is 10"); else if (i == 15) Console.WriteLine("i is 15"); else if (i == 20) Console.WriteLine("i is 20"); else Console.WriteLine("i is not present"); }}
Output:
i is 20
Nested – If Statement if statement inside an if statement is known as nested if. if statement in this case is the target of another if or else statement. When more then one condition needs to be true and one of the condition is the sub-condition of parent condition, nested if can be used.Syntax:
if (condition1)
{
// code to be executed
// if condition2 is true
if (condition2)
{
// code to be executed
// if condition2 is true
}
}
Flowchart:
Example:
csharp
// C# program to illustrate// nested-if statementusing System; class GFG { public static void Main(String[] args) { int i = 10; if (i == 10) { // Nested - if statement // Will only be executed if statement // above it is true if (i < 12) Console.WriteLine("i is smaller than 12 too"); else Console.WriteLine("i is greater than 15"); } }}
Output:
i is smaller than 12 too
Switch Statement Switch statement is an alternative to long if-else-if ladders. The expression is checked for different cases and the one match is executed. break statement is used to move out of the switch. If the break is not used, the control will flow to all cases below it until break is found or switch comes to an end. There is default case (optional) at the end of switch, if none of the case matches then default case is executed.Syntax:
switch (expression)
{
case value1: // statement sequence
break;
case value2: // statement sequence
break;
.
.
.
case valueN: // statement sequence
break;
default: // default statement sequence
}
Flow Diagram of Switch – case :
Example:
Csharp
// C# example for switch caseusing System; public class GFG{ public static void Main(String[] args) { int number = 30; switch(number) { case 10: Console.WriteLine("case 10"); break; case 20: Console.WriteLine("case 20"); break; case 30: Console.WriteLine("case 30"); break; default: Console.WriteLine("None matches"); break; } }}
Output:
case 30
Nested switch Nested Switch case are allowed in C# . In this case, switch is present inside other switch case. Inner switch is present in one of the cases in parent switch.Example:
Csharp
// C# example for nested switch caseusing System; public class GFG{ public static void Main(String[] args) { int j = 5; switch (j) { case 5: Console.WriteLine(5); switch (j - 1) { case 4: Console.WriteLine(4); switch (j - 2) { case 3: Console.WriteLine(3); break; } break; } break; case 10: Console.WriteLine(10); break; case 15: Console.WriteLine(15); break; default: Console.WriteLine(100); break; } }}
Output:
5
4
3
surindertarika1234
CSharp-Basics
CSharp-ControlFlow
C#
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Difference between Abstract Class and Interface in C#
C# | IsNullOrEmpty() Method
C# | How to check whether a List contains a specified element
String.Split() Method in C# with Examples
C# | Arrays of Strings
C# | String.IndexOf( ) Method | Set - 1
Extension Method in C#
C# | Delegates
Difference between Ref and Out keywords in C#
C# | Replace() Method | [
{
"code": null,
"e": 25224,
"s": 25196,
"text": "\n06 Jul, 2021"
},
{
"code": null,
"e": 25665,
"s": 25224,
"text": "Decision Making in programming is similar to decision making in real life. In programming too, a certain block of code needs to be executed when some condition is fulfilled. A programming language uses control statements to control the flow of execution of program based on certain conditions. These are used to cause the flow of execution to advance and branch based on changes to the state of a program.The conditional statements of C#: "
},
{
"code": null,
"e": 25668,
"s": 25665,
"text": "if"
},
{
"code": null,
"e": 25676,
"s": 25668,
"text": "if-else"
},
{
"code": null,
"e": 25687,
"s": 25676,
"text": "if-else-if"
},
{
"code": null,
"e": 25697,
"s": 25687,
"text": "Nested if"
},
{
"code": null,
"e": 25704,
"s": 25697,
"text": "Switch"
},
{
"code": null,
"e": 25718,
"s": 25704,
"text": "Nested switch"
},
{
"code": null,
"e": 25886,
"s": 25718,
"text": "IF Statement The if statement checks the given condition. If the condition evaluates to be true then the block of code/statements will execute otherwise not. Syntax: "
},
{
"code": null,
"e": 25951,
"s": 25886,
"text": "if(condition)\n { \n //code to be executed \n } "
},
{
"code": null,
"e": 26114,
"s": 25951,
"text": "Note: If the curly brackets { } are not used with if statements than the statement just next to it is only considered associated with the if statement. Example: "
},
{
"code": null,
"e": 26158,
"s": 26114,
"text": "if (condition)\n statement 1;\nstatement 2;"
},
{
"code": null,
"e": 26258,
"s": 26158,
"text": "In this example, only statement 1 is considered to be associated with the if statement.Flowchart: "
},
{
"code": null,
"e": 26269,
"s": 26258,
"text": "Example: "
},
{
"code": null,
"e": 26276,
"s": 26269,
"text": "Csharp"
},
{
"code": "// C# program to illustrate if statementusing System; public class GFG { public static void Main(string[] args) { string name = \"Geek\"; if (name == \"Geek\") { Console.WriteLine(\"GeeksForGeeks\"); } }}",
"e": 26517,
"s": 26276,
"text": null
},
{
"code": null,
"e": 26527,
"s": 26517,
"text": "Output: "
},
{
"code": null,
"e": 26541,
"s": 26527,
"text": "GeeksForGeeks"
},
{
"code": null,
"e": 26762,
"s": 26541,
"text": "IF – else Statement The if statement evaluates the code if the condition is true but what if the condition is not true, here comes the else statement. It tells the code what to do when the if condition is false.Syntax: "
},
{
"code": null,
"e": 26898,
"s": 26762,
"text": " if(condition)\n { \n // code if condition is true \n }\n else\n { \n // code if condition is false \n } \n "
},
{
"code": null,
"e": 26911,
"s": 26898,
"text": "Flowchart: "
},
{
"code": null,
"e": 26922,
"s": 26911,
"text": "Example: "
},
{
"code": null,
"e": 26929,
"s": 26922,
"text": "Csharp"
},
{
"code": "// C# program to illustrate// if-else statementusing System; public class GFG { public static void Main(string[] args) { string name = \"Geek\"; if (name == \"Geeks\") { Console.WriteLine(\"GeeksForGeeksr\"); } else { Console.WriteLine(\"Geeks\"); } }}",
"e": 27241,
"s": 26929,
"text": null
},
{
"code": null,
"e": 27251,
"s": 27241,
"text": "Output: "
},
{
"code": null,
"e": 27257,
"s": 27251,
"text": "Geeks"
},
{
"code": null,
"e": 27604,
"s": 27257,
"text": "If – else – if ladder Statement The if-else-if ladder statement executes one condition from multiple statements. The execution starts from top and checked for each if condition. The statement of if block will be executed which evaluates to be true. If none of the if condition evaluates to be true then the last else block is evaluated. Syntax: "
},
{
"code": null,
"e": 28051,
"s": 27604,
"text": " if(condition1)\n { \n // code to be executed if condition1 is true \n }\n else if(condition2)\n { \n // code to be executed if condition2 is true \n } \n else if(condition3)\n { \n // code to be executed if condition3 is true \n } \n ... \n else\n {\n // code to be executed if all the conditions are false \n } \n "
},
{
"code": null,
"e": 28064,
"s": 28051,
"text": "Flowchart: "
},
{
"code": null,
"e": 28075,
"s": 28064,
"text": "Example: "
},
{
"code": null,
"e": 28082,
"s": 28075,
"text": "Csharp"
},
{
"code": "// C# program to illustrate// if-else-if ladderusing System; class GFG { public static void Main(String[] args) { int i = 20; if (i == 10) Console.WriteLine(\"i is 10\"); else if (i == 15) Console.WriteLine(\"i is 15\"); else if (i == 20) Console.WriteLine(\"i is 20\"); else Console.WriteLine(\"i is not present\"); }}",
"e": 28484,
"s": 28082,
"text": null
},
{
"code": null,
"e": 28494,
"s": 28484,
"text": "Output: "
},
{
"code": null,
"e": 28502,
"s": 28494,
"text": "i is 20"
},
{
"code": null,
"e": 28801,
"s": 28502,
"text": "Nested – If Statement if statement inside an if statement is known as nested if. if statement in this case is the target of another if or else statement. When more then one condition needs to be true and one of the condition is the sub-condition of parent condition, nested if can be used.Syntax: "
},
{
"code": null,
"e": 29066,
"s": 28801,
"text": " if (condition1) \n {\n // code to be executed \n // if condition2 is true \n if (condition2) \n {\n // code to be executed \n // if condition2 is true \n }\n }"
},
{
"code": null,
"e": 29079,
"s": 29066,
"text": "Flowchart: "
},
{
"code": null,
"e": 29089,
"s": 29079,
"text": "Example: "
},
{
"code": null,
"e": 29096,
"s": 29089,
"text": "csharp"
},
{
"code": "// C# program to illustrate// nested-if statementusing System; class GFG { public static void Main(String[] args) { int i = 10; if (i == 10) { // Nested - if statement // Will only be executed if statement // above it is true if (i < 12) Console.WriteLine(\"i is smaller than 12 too\"); else Console.WriteLine(\"i is greater than 15\"); } }}",
"e": 29552,
"s": 29096,
"text": null
},
{
"code": null,
"e": 29562,
"s": 29552,
"text": "Output: "
},
{
"code": null,
"e": 29587,
"s": 29562,
"text": "i is smaller than 12 too"
},
{
"code": null,
"e": 30036,
"s": 29587,
"text": "Switch Statement Switch statement is an alternative to long if-else-if ladders. The expression is checked for different cases and the one match is executed. break statement is used to move out of the switch. If the break is not used, the control will flow to all cases below it until break is found or switch comes to an end. There is default case (optional) at the end of switch, if none of the case matches then default case is executed.Syntax: "
},
{
"code": null,
"e": 30271,
"s": 30036,
"text": "switch (expression)\n {\ncase value1: // statement sequence\n break;\ncase value2: // statement sequence\n break;\n.\n.\n.\ncase valueN: // statement sequence\n break;\ndefault: // default statement sequence\n}"
},
{
"code": null,
"e": 30305,
"s": 30271,
"text": "Flow Diagram of Switch – case : "
},
{
"code": null,
"e": 30316,
"s": 30305,
"text": "Example: "
},
{
"code": null,
"e": 30323,
"s": 30316,
"text": "Csharp"
},
{
"code": "// C# example for switch caseusing System; public class GFG{ public static void Main(String[] args) { int number = 30; switch(number) { case 10: Console.WriteLine(\"case 10\"); break; case 20: Console.WriteLine(\"case 20\"); break; case 30: Console.WriteLine(\"case 30\"); break; default: Console.WriteLine(\"None matches\"); break; } }}",
"e": 30782,
"s": 30323,
"text": null
},
{
"code": null,
"e": 30792,
"s": 30782,
"text": "Output: "
},
{
"code": null,
"e": 30800,
"s": 30792,
"text": "case 30"
},
{
"code": null,
"e": 30983,
"s": 30800,
"text": "Nested switch Nested Switch case are allowed in C# . In this case, switch is present inside other switch case. Inner switch is present in one of the cases in parent switch.Example: "
},
{
"code": null,
"e": 30990,
"s": 30983,
"text": "Csharp"
},
{
"code": "// C# example for nested switch caseusing System; public class GFG{ public static void Main(String[] args) { int j = 5; switch (j) { case 5: Console.WriteLine(5); switch (j - 1) { case 4: Console.WriteLine(4); switch (j - 2) { case 3: Console.WriteLine(3); break; } break; } break; case 10: Console.WriteLine(10); break; case 15: Console.WriteLine(15); break; default: Console.WriteLine(100); break; } }}",
"e": 31781,
"s": 30990,
"text": null
},
{
"code": null,
"e": 31791,
"s": 31781,
"text": "Output: "
},
{
"code": null,
"e": 31797,
"s": 31791,
"text": "5\n4\n3"
},
{
"code": null,
"e": 31818,
"s": 31799,
"text": "surindertarika1234"
},
{
"code": null,
"e": 31832,
"s": 31818,
"text": "CSharp-Basics"
},
{
"code": null,
"e": 31851,
"s": 31832,
"text": "CSharp-ControlFlow"
},
{
"code": null,
"e": 31854,
"s": 31851,
"text": "C#"
},
{
"code": null,
"e": 31952,
"s": 31854,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31961,
"s": 31952,
"text": "Comments"
},
{
"code": null,
"e": 31974,
"s": 31961,
"text": "Old Comments"
},
{
"code": null,
"e": 32028,
"s": 31974,
"text": "Difference between Abstract Class and Interface in C#"
},
{
"code": null,
"e": 32056,
"s": 32028,
"text": "C# | IsNullOrEmpty() Method"
},
{
"code": null,
"e": 32118,
"s": 32056,
"text": "C# | How to check whether a List contains a specified element"
},
{
"code": null,
"e": 32160,
"s": 32118,
"text": "String.Split() Method in C# with Examples"
},
{
"code": null,
"e": 32183,
"s": 32160,
"text": "C# | Arrays of Strings"
},
{
"code": null,
"e": 32223,
"s": 32183,
"text": "C# | String.IndexOf( ) Method | Set - 1"
},
{
"code": null,
"e": 32246,
"s": 32223,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 32261,
"s": 32246,
"text": "C# | Delegates"
},
{
"code": null,
"e": 32307,
"s": 32261,
"text": "Difference between Ref and Out keywords in C#"
}
] |
Display numbers with thousands separator in Java | To display number with thousands separator, set a comma flag.
System.out.printf( "%,d\n",78567);
The above would result.
78, 567
Let’s check for bigger numbers.
System.out.printf( "%,d\n", 463758);
The above would result.
463,758
Live Demo
public class Demo {
public static void main( String args[] ) {
System.out.printf( "%,d\n", 95647 );
System.out.printf( "%,d\n", 687467 );
System.out.printf( "%,.2f\n", 7546.21 );
System.out.printf( "%,.2f\n", 463758.787 );
System.out.printf( "%,.2f", 123456.5 );
}
}
95,647
687,467
7,546.21
463,758.79
123,456.50 | [
{
"code": null,
"e": 1124,
"s": 1062,
"text": "To display number with thousands separator, set a comma flag."
},
{
"code": null,
"e": 1159,
"s": 1124,
"text": "System.out.printf( \"%,d\\n\",78567);"
},
{
"code": null,
"e": 1183,
"s": 1159,
"text": "The above would result."
},
{
"code": null,
"e": 1191,
"s": 1183,
"text": "78, 567"
},
{
"code": null,
"e": 1223,
"s": 1191,
"text": "Let’s check for bigger numbers."
},
{
"code": null,
"e": 1260,
"s": 1223,
"text": "System.out.printf( \"%,d\\n\", 463758);"
},
{
"code": null,
"e": 1284,
"s": 1260,
"text": "The above would result."
},
{
"code": null,
"e": 1292,
"s": 1284,
"text": "463,758"
},
{
"code": null,
"e": 1303,
"s": 1292,
"text": " Live Demo"
},
{
"code": null,
"e": 1606,
"s": 1303,
"text": "public class Demo {\n public static void main( String args[] ) {\n System.out.printf( \"%,d\\n\", 95647 );\n System.out.printf( \"%,d\\n\", 687467 );\n System.out.printf( \"%,.2f\\n\", 7546.21 );\n System.out.printf( \"%,.2f\\n\", 463758.787 );\n System.out.printf( \"%,.2f\", 123456.5 );\n }\n}"
},
{
"code": null,
"e": 1652,
"s": 1606,
"text": "95,647\n687,467\n7,546.21\n463,758.79\n123,456.50"
}
] |
Training Neural Networks with Validation using PyTorch - GeeksforGeeks | 19 Aug, 2021
Neural Networks are a biologically-inspired programming paradigm that deep learning is built around. Python provides various libraries using which you can create and train neural networks over given data. PyTorch is one such library that provides us with various utilities to build and train neural networks easily. When it comes to Neural Networks it becomes essential to set optimal architecture and hyper parameters. While training a neural network the training loss always keeps reducing provided the learning rate is optimal. But it’s important that our network performs better not only on data it’s trained on but also data that it has never seen before. One way to measure this is by introducing a validation set to keep track of the testing accuracy of the neural network. In this article we’ll how we can keep track of validation accuracy at each training step and also save the model weights with the best validation accuracy.
Installing PyTorch is pretty similar to any other python library. We can use pip or conda to install PyTorch:-
pip install torch torchvision
This command will install PyTorch along with torchvision which provides various datasets, models, and transforms for computer vision. To install using conda you can use the following command:-
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
For this tutorial, we are going to use the MNIST dataset that’s provided in the torchvision library. In Deep Learning we often train our neural networks in batches of a certain size, DataLoader is a data loading utility in PyTorch that creates an iterable over these batches of the dataset. Let’s start by loading our data:-
from torchvision import datasets, transforms
from torch.utils.data import DataLoader, random_split
transforms = transforms.Compose([
transforms.ToTensor()
])
In the above code, we declared a variable called transform which essentially helps us transform the raw data in the defined format. Here our transform is simply taking the raw data and converting it to a Tensor. A Tensor is a fancy way of saying a n-dimensional matrix.
train = datasets.MNIST('', train = True, transform = transforms, download = True)
train, valid = random_split(train,[50000,10000])
Now we are downloading our raw data and apply transform over it to convert it to Tensors, train tells if the data that’s being loaded is training data or testing data. In the end, we did a split the train tensor into 2 tensors of 50000 and 10000 data points which become our train and valid tensors.
trainloader = DataLoader(train, batch_size=32)
validloader = DataLoader(valid, batch_size=32)
Now we just created our DataLoaders of the above tensors of 32 batch size. Now that we have the data let’s start by creating our neural network.
There are 2 ways we can create neural networks in PyTorch i.e. using the Sequential() method or using the class method. We’ll use the class method to create our neural network since it gives more control over data flow. The format to create a neural network using the class method is as follows:-
from torch import nn
class model(nn.Module):
def __init__(self):
# Define Model Here
def forward(self, x):
# Define Forward Pass Here
So in the __init__() method we define our layers and other variables and in the forward() method we define our forward pass i.e. how data flows through the layers.
import torch
from torch import nn
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self):
super(Network,self).__init__()
self.fc1 = nn.Linear(28*28, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 10)
def forward(self, x):
x = x.view(1,-1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Network()
if torch.cuda.is_available():
model = model.cuda()
In the above code, we defined a neural network with the following architecture:-
Input Layer: 784 nodes, MNIST images are of dimension 28*28 which have 784 pixels so when flatted it’ll become the input to the neural network with 784 input nodes.
Hidden Layer 1: 256 nodes
Hidden Layer 2: 128 nodes
Output Layer: 10 nodes, for 10 classes i.e. numbers 0-9
nn.Linear() or Linear Layer is used to apply a linear transformation to the incoming data. If you are familiar with TensorFlow it’s pretty much like the Dense Layer.
In the forward() method we start off by flattening the image and passing it through each layer and applying the activation function for the same. After that, we create our neural network instance, and lastly, we are just checking if the machine has a GPU and if it has we’ll transfer our model there for faster computation.
Optimizers define how the weights of the neural network are to be updated, in this tutorial we’ll use SGD Optimizer or Stochastic Gradient Descent Optimizer. Optimizers take model parameters and learning rate as the input arguments. There are various optimizers you can try like Adam, Adagrad, etc.
The criterion is the loss that you want to minimize which in this case is the CrossEntropyLoss() which is the combination of log_softmax() and NLLLoss().
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr = 0.01)
The training step in PyTorch is almost identical almost every time you train it. But before implementing that let’s learn about 2 modes of the model object:-
Training Mode: Set by model.train(), it tells your model that you are training the model. So layers like dropout etc. which behave differently while training and testing can behave accordingly.
Evaluation Mode: Set by model.eval(), it tells your model that you are testing the model.
Even though you don’t need it here it’s still better to know about them. Now that we have that clear let’s understand the training steps:-
Move data to GPU (Optional)
Clear the gradients using optimizer.zero_grad()
Make a forward pass
Calculate the loss
Perform a backward pass using loss.backward() to calculate the gradients
Take optimizer step using optimizer.step() to update the weights
The validation and Testing steps are also similar but there you just make a forward pass and calculate the loss. A Simple training loop without validation is written like the following:-
epochs = 5
for e in range(epochs):
train_loss = 0.0
for data, labels in tqdm(trainloader):
# Transfer Data to GPU if available
if torch.cuda.is_available():
data, labels = data.cuda(), labels.cuda()
# Clear the gradients
optimizer.zero_grad()
# Forward Pass
target = model(data)
# Find the Loss
loss = criterion(target,labels)
# Calculate gradients
loss.backward()
# Update Weights
optimizer.step()
# Calculate Loss
train_loss += loss.item()
print(f'Epoch {e+1} \t\t Training Loss: {train_loss / len(trainloader)}')
If you add the validation loop it’ll be the same but with forward pass and loss calculation only. But it may happen that your last iteration isn’t the one that gave you the least validation loss. To tackle this we can set a max valid loss which can be np.inf and if the current valid loss is lesser than we can save the state dictionary of the model which we can load later, like a checkpoint. state_dict is an OrderedDict object that maps each layer to its parameter tensor.
import numpy as np
epochs = 5
min_valid_loss = np.inf
for e in range(epochs):
train_loss = 0.0
model.train() # Optional when not using Model Specific layer
for data, labels in trainloader:
if torch.cuda.is_available():
data, labels = data.cuda(), labels.cuda()
optimizer.zero_grad()
target = model(data)
loss = criterion(target,labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
valid_loss = 0.0
model.eval() # Optional when not using Model Specific layer
for data, labels in validloader:
if torch.cuda.is_available():
data, labels = data.cuda(), labels.cuda()
target = model(data)
loss = criterion(target,labels)
valid_loss = loss.item() * data.size(0)
print(f'Epoch {e+1} \t\t Training Loss: {train_loss / len(trainloader)} \t\t Validation Loss: {valid_loss / len(validloader)}')
if min_valid_loss > valid_loss:
print(f'Validation Loss Decreased({min_valid_loss:.6f}--->{valid_loss:.6f}) \t Saving The Model')
min_valid_loss = valid_loss
# Saving State Dict
torch.save(model.state_dict(), 'saved_model.pth')
After running the above code you should get the following output, although your loss might vary:-
Python3
import torchfrom torch import nnimport torch.nn.functional as Ffrom torchvision import datasets, transformsfrom torch.utils.data import DataLoader, random_splitimport numpy as np #Declare transform to convert raw data to tensortransforms = transforms.Compose([ transforms.ToTensor()]) # Loading Data and splitting it into train and validation datatrain = datasets.MNIST('', train = True, transform = transforms, download = True)train, valid = random_split(train,[50000,10000]) # Create Dataloader of the above tensor with batch size = 32trainloader = DataLoader(train, batch_size=32)validloader = DataLoader(valid, batch_size=32) # Building Our Modeclass Network(nn.Module): # Declaring the Architecture def __init__(self): super(Network,self).__init__() self.fc1 = nn.Linear(28*28, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 10) # Forward Pass def forward(self, x): x = x.view(x.shape[0],-1) # Flatten the images x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x model = Network()if torch.cuda.is_available(): model = model.cuda() # Declaring Criterion and Optimizercriterion = nn.CrossEntropyLoss()optimizer = torch.optim.SGD(model.parameters(), lr = 0.01) # Training with Validationepochs = 5min_valid_loss = np.inf for e in range(epochs): train_loss = 0.0 for data, labels in trainloader: # Transfer Data to GPU if available if torch.cuda.is_available(): data, labels = data.cuda(), labels.cuda() # Clear the gradients optimizer.zero_grad() # Forward Pass target = model(data) # Find the Loss loss = criterion(target,labels) # Calculate gradients loss.backward() # Update Weights optimizer.step() # Calculate Loss train_loss += loss.item() valid_loss = 0.0 model.eval() # Optional when not using Model Specific layer for data, labels in validloader: # Transfer Data to GPU if available if torch.cuda.is_available(): data, labels = data.cuda(), labels.cuda() # Forward Pass target = model(data) # Find the Loss loss = criterion(target,labels) # Calculate Loss valid_loss += loss.item() print(f'Epoch {e+1} \t\t Training Loss: {\ train_loss / len(trainloader)} \t\t Validation Loss: {\ valid_loss / len(validloader)}') if min_valid_loss > valid_loss: print(f'Validation Loss Decreased({min_valid_loss:.6f\ }--->{valid_loss:.6f}) \t Saving The Model') min_valid_loss = valid_loss # Saving State Dict torch.save(model.state_dict(), 'saved_model.pth')
herumbshandilya
Neural Network
Machine Learning
Python
Machine Learning
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Support Vector Machine Algorithm
Intuition of Adam Optimizer
Introduction to Recurrent Neural Network
CNN | Introduction to Pooling Layer
Singular Value Decomposition (SVD)
Read JSON file using Python
Adding new column to existing DataFrame in Pandas
Python map() function
How to get column names in Pandas dataframe | [
{
"code": null,
"e": 25725,
"s": 25697,
"text": "\n19 Aug, 2021"
},
{
"code": null,
"e": 26662,
"s": 25725,
"text": "Neural Networks are a biologically-inspired programming paradigm that deep learning is built around. Python provides various libraries using which you can create and train neural networks over given data. PyTorch is one such library that provides us with various utilities to build and train neural networks easily. When it comes to Neural Networks it becomes essential to set optimal architecture and hyper parameters. While training a neural network the training loss always keeps reducing provided the learning rate is optimal. But it’s important that our network performs better not only on data it’s trained on but also data that it has never seen before. One way to measure this is by introducing a validation set to keep track of the testing accuracy of the neural network. In this article we’ll how we can keep track of validation accuracy at each training step and also save the model weights with the best validation accuracy."
},
{
"code": null,
"e": 26773,
"s": 26662,
"text": "Installing PyTorch is pretty similar to any other python library. We can use pip or conda to install PyTorch:-"
},
{
"code": null,
"e": 26803,
"s": 26773,
"text": "pip install torch torchvision"
},
{
"code": null,
"e": 26996,
"s": 26803,
"text": "This command will install PyTorch along with torchvision which provides various datasets, models, and transforms for computer vision. To install using conda you can use the following command:-"
},
{
"code": null,
"e": 27069,
"s": 26996,
"text": "conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch"
},
{
"code": null,
"e": 27394,
"s": 27069,
"text": "For this tutorial, we are going to use the MNIST dataset that’s provided in the torchvision library. In Deep Learning we often train our neural networks in batches of a certain size, DataLoader is a data loading utility in PyTorch that creates an iterable over these batches of the dataset. Let’s start by loading our data:-"
},
{
"code": null,
"e": 27586,
"s": 27394,
"text": "from torchvision import datasets, transforms\nfrom torch.utils.data import DataLoader, random_split\n\ntransforms = transforms.Compose([\n transforms.ToTensor()\n])"
},
{
"code": null,
"e": 27857,
"s": 27586,
"text": "In the above code, we declared a variable called transform which essentially helps us transform the raw data in the defined format. Here our transform is simply taking the raw data and converting it to a Tensor. A Tensor is a fancy way of saying a n-dimensional matrix. "
},
{
"code": null,
"e": 27988,
"s": 27857,
"text": "train = datasets.MNIST('', train = True, transform = transforms, download = True)\ntrain, valid = random_split(train,[50000,10000])"
},
{
"code": null,
"e": 28288,
"s": 27988,
"text": "Now we are downloading our raw data and apply transform over it to convert it to Tensors, train tells if the data that’s being loaded is training data or testing data. In the end, we did a split the train tensor into 2 tensors of 50000 and 10000 data points which become our train and valid tensors."
},
{
"code": null,
"e": 28382,
"s": 28288,
"text": "trainloader = DataLoader(train, batch_size=32)\nvalidloader = DataLoader(valid, batch_size=32)"
},
{
"code": null,
"e": 28527,
"s": 28382,
"text": "Now we just created our DataLoaders of the above tensors of 32 batch size. Now that we have the data let’s start by creating our neural network."
},
{
"code": null,
"e": 28824,
"s": 28527,
"text": "There are 2 ways we can create neural networks in PyTorch i.e. using the Sequential() method or using the class method. We’ll use the class method to create our neural network since it gives more control over data flow. The format to create a neural network using the class method is as follows:-"
},
{
"code": null,
"e": 28992,
"s": 28824,
"text": "from torch import nn\n\nclass model(nn.Module):\n def __init__(self):\n # Define Model Here\n \n def forward(self, x):\n # Define Forward Pass Here"
},
{
"code": null,
"e": 29157,
"s": 28992,
"text": "So in the __init__() method we define our layers and other variables and in the forward() method we define our forward pass i.e. how data flows through the layers. "
},
{
"code": null,
"e": 29663,
"s": 29157,
"text": "import torch\nfrom torch import nn\nimport torch.nn.functional as F\n\nclass Network(nn.Module):\n def __init__(self):\n super(Network,self).__init__()\n self.fc1 = nn.Linear(28*28, 256)\n self.fc2 = nn.Linear(256, 128)\n self.fc3 = nn.Linear(128, 10)\n\n def forward(self, x):\n x = x.view(1,-1)\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.fc3(x)\n return x\n\nmodel = Network()\n\nif torch.cuda.is_available():\n model = model.cuda()"
},
{
"code": null,
"e": 29744,
"s": 29663,
"text": "In the above code, we defined a neural network with the following architecture:-"
},
{
"code": null,
"e": 29909,
"s": 29744,
"text": "Input Layer: 784 nodes, MNIST images are of dimension 28*28 which have 784 pixels so when flatted it’ll become the input to the neural network with 784 input nodes."
},
{
"code": null,
"e": 29935,
"s": 29909,
"text": "Hidden Layer 1: 256 nodes"
},
{
"code": null,
"e": 29961,
"s": 29935,
"text": "Hidden Layer 2: 128 nodes"
},
{
"code": null,
"e": 30017,
"s": 29961,
"text": "Output Layer: 10 nodes, for 10 classes i.e. numbers 0-9"
},
{
"code": null,
"e": 30184,
"s": 30017,
"text": "nn.Linear() or Linear Layer is used to apply a linear transformation to the incoming data. If you are familiar with TensorFlow it’s pretty much like the Dense Layer. "
},
{
"code": null,
"e": 30508,
"s": 30184,
"text": "In the forward() method we start off by flattening the image and passing it through each layer and applying the activation function for the same. After that, we create our neural network instance, and lastly, we are just checking if the machine has a GPU and if it has we’ll transfer our model there for faster computation."
},
{
"code": null,
"e": 30807,
"s": 30508,
"text": "Optimizers define how the weights of the neural network are to be updated, in this tutorial we’ll use SGD Optimizer or Stochastic Gradient Descent Optimizer. Optimizers take model parameters and learning rate as the input arguments. There are various optimizers you can try like Adam, Adagrad, etc."
},
{
"code": null,
"e": 30961,
"s": 30807,
"text": "The criterion is the loss that you want to minimize which in this case is the CrossEntropyLoss() which is the combination of log_softmax() and NLLLoss()."
},
{
"code": null,
"e": 31054,
"s": 30961,
"text": "criterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.SGD(model.parameters(), lr = 0.01)"
},
{
"code": null,
"e": 31212,
"s": 31054,
"text": "The training step in PyTorch is almost identical almost every time you train it. But before implementing that let’s learn about 2 modes of the model object:-"
},
{
"code": null,
"e": 31407,
"s": 31212,
"text": "Training Mode: Set by model.train(), it tells your model that you are training the model. So layers like dropout etc. which behave differently while training and testing can behave accordingly."
},
{
"code": null,
"e": 31498,
"s": 31407,
"text": "Evaluation Mode: Set by model.eval(), it tells your model that you are testing the model."
},
{
"code": null,
"e": 31637,
"s": 31498,
"text": "Even though you don’t need it here it’s still better to know about them. Now that we have that clear let’s understand the training steps:-"
},
{
"code": null,
"e": 31665,
"s": 31637,
"text": "Move data to GPU (Optional)"
},
{
"code": null,
"e": 31713,
"s": 31665,
"text": "Clear the gradients using optimizer.zero_grad()"
},
{
"code": null,
"e": 31733,
"s": 31713,
"text": "Make a forward pass"
},
{
"code": null,
"e": 31752,
"s": 31733,
"text": "Calculate the loss"
},
{
"code": null,
"e": 31825,
"s": 31752,
"text": "Perform a backward pass using loss.backward() to calculate the gradients"
},
{
"code": null,
"e": 31890,
"s": 31825,
"text": "Take optimizer step using optimizer.step() to update the weights"
},
{
"code": null,
"e": 32077,
"s": 31890,
"text": "The validation and Testing steps are also similar but there you just make a forward pass and calculate the loss. A Simple training loop without validation is written like the following:-"
},
{
"code": null,
"e": 32745,
"s": 32077,
"text": "epochs = 5\n\nfor e in range(epochs):\n train_loss = 0.0\n for data, labels in tqdm(trainloader):\n # Transfer Data to GPU if available\n if torch.cuda.is_available():\n data, labels = data.cuda(), labels.cuda()\n \n # Clear the gradients\n optimizer.zero_grad()\n # Forward Pass\n target = model(data)\n # Find the Loss\n loss = criterion(target,labels)\n # Calculate gradients \n loss.backward()\n # Update Weights\n optimizer.step()\n # Calculate Loss\n train_loss += loss.item()\n \n print(f'Epoch {e+1} \\t\\t Training Loss: {train_loss / len(trainloader)}')"
},
{
"code": null,
"e": 33221,
"s": 32745,
"text": "If you add the validation loop it’ll be the same but with forward pass and loss calculation only. But it may happen that your last iteration isn’t the one that gave you the least validation loss. To tackle this we can set a max valid loss which can be np.inf and if the current valid loss is lesser than we can save the state dictionary of the model which we can load later, like a checkpoint. state_dict is an OrderedDict object that maps each layer to its parameter tensor."
},
{
"code": null,
"e": 34456,
"s": 33221,
"text": "import numpy as np\nepochs = 5\nmin_valid_loss = np.inf\n\nfor e in range(epochs):\n train_loss = 0.0\n model.train() # Optional when not using Model Specific layer\n for data, labels in trainloader:\n if torch.cuda.is_available():\n data, labels = data.cuda(), labels.cuda()\n \n optimizer.zero_grad()\n target = model(data)\n loss = criterion(target,labels)\n loss.backward()\n optimizer.step()\n train_loss += loss.item()\n \n valid_loss = 0.0\n model.eval() # Optional when not using Model Specific layer\n for data, labels in validloader:\n if torch.cuda.is_available():\n data, labels = data.cuda(), labels.cuda()\n \n target = model(data)\n loss = criterion(target,labels)\n valid_loss = loss.item() * data.size(0)\n\n print(f'Epoch {e+1} \\t\\t Training Loss: {train_loss / len(trainloader)} \\t\\t Validation Loss: {valid_loss / len(validloader)}')\n if min_valid_loss > valid_loss:\n print(f'Validation Loss Decreased({min_valid_loss:.6f}--->{valid_loss:.6f}) \\t Saving The Model')\n min_valid_loss = valid_loss\n # Saving State Dict\n torch.save(model.state_dict(), 'saved_model.pth')"
},
{
"code": null,
"e": 34554,
"s": 34456,
"text": "After running the above code you should get the following output, although your loss might vary:-"
},
{
"code": null,
"e": 34562,
"s": 34554,
"text": "Python3"
},
{
"code": "import torchfrom torch import nnimport torch.nn.functional as Ffrom torchvision import datasets, transformsfrom torch.utils.data import DataLoader, random_splitimport numpy as np #Declare transform to convert raw data to tensortransforms = transforms.Compose([ transforms.ToTensor()]) # Loading Data and splitting it into train and validation datatrain = datasets.MNIST('', train = True, transform = transforms, download = True)train, valid = random_split(train,[50000,10000]) # Create Dataloader of the above tensor with batch size = 32trainloader = DataLoader(train, batch_size=32)validloader = DataLoader(valid, batch_size=32) # Building Our Modeclass Network(nn.Module): # Declaring the Architecture def __init__(self): super(Network,self).__init__() self.fc1 = nn.Linear(28*28, 256) self.fc2 = nn.Linear(256, 128) self.fc3 = nn.Linear(128, 10) # Forward Pass def forward(self, x): x = x.view(x.shape[0],-1) # Flatten the images x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x model = Network()if torch.cuda.is_available(): model = model.cuda() # Declaring Criterion and Optimizercriterion = nn.CrossEntropyLoss()optimizer = torch.optim.SGD(model.parameters(), lr = 0.01) # Training with Validationepochs = 5min_valid_loss = np.inf for e in range(epochs): train_loss = 0.0 for data, labels in trainloader: # Transfer Data to GPU if available if torch.cuda.is_available(): data, labels = data.cuda(), labels.cuda() # Clear the gradients optimizer.zero_grad() # Forward Pass target = model(data) # Find the Loss loss = criterion(target,labels) # Calculate gradients loss.backward() # Update Weights optimizer.step() # Calculate Loss train_loss += loss.item() valid_loss = 0.0 model.eval() # Optional when not using Model Specific layer for data, labels in validloader: # Transfer Data to GPU if available if torch.cuda.is_available(): data, labels = data.cuda(), labels.cuda() # Forward Pass target = model(data) # Find the Loss loss = criterion(target,labels) # Calculate Loss valid_loss += loss.item() print(f'Epoch {e+1} \\t\\t Training Loss: {\\ train_loss / len(trainloader)} \\t\\t Validation Loss: {\\ valid_loss / len(validloader)}') if min_valid_loss > valid_loss: print(f'Validation Loss Decreased({min_valid_loss:.6f\\ }--->{valid_loss:.6f}) \\t Saving The Model') min_valid_loss = valid_loss # Saving State Dict torch.save(model.state_dict(), 'saved_model.pth')",
"e": 37349,
"s": 34562,
"text": null
},
{
"code": null,
"e": 37365,
"s": 37349,
"text": "herumbshandilya"
},
{
"code": null,
"e": 37380,
"s": 37365,
"text": "Neural Network"
},
{
"code": null,
"e": 37397,
"s": 37380,
"text": "Machine Learning"
},
{
"code": null,
"e": 37404,
"s": 37397,
"text": "Python"
},
{
"code": null,
"e": 37421,
"s": 37404,
"text": "Machine Learning"
},
{
"code": null,
"e": 37519,
"s": 37421,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 37552,
"s": 37519,
"text": "Support Vector Machine Algorithm"
},
{
"code": null,
"e": 37580,
"s": 37552,
"text": "Intuition of Adam Optimizer"
},
{
"code": null,
"e": 37621,
"s": 37580,
"text": "Introduction to Recurrent Neural Network"
},
{
"code": null,
"e": 37657,
"s": 37621,
"text": "CNN | Introduction to Pooling Layer"
},
{
"code": null,
"e": 37692,
"s": 37657,
"text": "Singular Value Decomposition (SVD)"
},
{
"code": null,
"e": 37720,
"s": 37692,
"text": "Read JSON file using Python"
},
{
"code": null,
"e": 37770,
"s": 37720,
"text": "Adding new column to existing DataFrame in Pandas"
},
{
"code": null,
"e": 37792,
"s": 37770,
"text": "Python map() function"
}
] |
JavaFX | MenuButton - GeeksforGeeks | 01 Dec, 2021
MenuButton is a part of the JavaFX library. The menuButton when pressed shows a context menu that displays a set of items and the user may select any item. It typically contains several menu items and the user may select at most one MenuItem at a time. Constructor of the MenuButton class are:
MenuButton(): creates a new menu buttonMenuButton(String t): creates a menubutton with a specified textMenuButton(String t, Node g): creates a menubutton with a specified text and graphicMenuButton(String t, Node g, MenuItem... i) creates a menubutton with a specified text, graphic and menuitems
MenuButton(): creates a new menu button
MenuButton(String t): creates a menubutton with a specified text
MenuButton(String t, Node g): creates a menubutton with a specified text and graphic
MenuButton(String t, Node g, MenuItem... i) creates a menubutton with a specified text, graphic and menuitems
Commonly used methods:
Below programs illustrate the MenuButton class:
Program to create a MenuButton and add MenuItems to it: A MenuButton will be created by name m and 3 menuitems m1, m2, m3 will be added to the menuButton m. The menubar will be created inside a scene, which in turn will be hosted inside a stage. The function setTitle() is used to provide title to the stage. Then a tilepane is created, on which addChildren() method is called to attach the menubutton inside the scene. Finally, the show() method is called to display the final results.
Java
// Program to create a menubutton and add menuitems to itimport javafx.application.Application;import javafx.scene.Scene;import javafx.scene.control.*;import javafx.scene.layout.*;import javafx.event.ActionEvent;import javafx.event.EventHandler;import javafx.collections.*;import javafx.stage.Stage;import javafx.scene.text.Text.*;import javafx.scene.paint.*;import javafx.scene.text.*;public class MenuButton_1 extends Application { // labels Label l; // launch the application public void start(Stage s) { // set title for the stage s.setTitle("creating MenuButton "); // create a tile pane TilePane r = new TilePane(); // create a label Label l1 = new Label("This is a MenuButton example "); // create a menu button MenuButton m = new MenuButton("menuButton"); // create menuitems MenuItem m1 = new MenuItem("menu item 1"); MenuItem m2 = new MenuItem("menu item 2"); MenuItem m3 = new MenuItem("menu item 3"); // add menu items to menu m.getItems().add(m1); m.getItems().add(m2); m.getItems().add(m3); // create a tilepane TilePane vb = new TilePane(l1, m); // create a scene Scene sc = new Scene(vb, 200, 200); // set the scene s.setScene(sc); s.show(); } public static void main(String args[]) { // launch the application launch(args); }}
Output:
Program to create a menubutton and add menuitems to it and also add an event handler to handle events: A menuButton will be created by name m and 3 menuitems m1, m2, m3 will be added to the menuButton m. The menubar will be created inside a scene, which in turn will be hosted inside a stage. The function setTitle() is used to provide title to the stage. Then a tilepane is created, on which addChildren() method is called to attach the menubutton inside the scene. Finally, the show() method is called to display the final results. Event handler will be created that will handle the events of the menu items. A label l2 will be created to show which menuitem is selected.
Java
// Program to create a menubutton and add menuitems// to it and also add event handler to handle eventsimport javafx.application.Application;import javafx.scene.Scene;import javafx.scene.control.*;import javafx.scene.layout.*;import javafx.stage.WindowEvent;import javafx.event.EventHandler.*;import javafx.event.EventHandler;import javafx.event.ActionEvent;import javafx.collections.*;import javafx.stage.Stage;import javafx.scene.text.Text.*;import javafx.scene.paint.*;import javafx.scene.text.*;public class MenuButton_2 extends Application { // labels Label l; // launch the application public void start(Stage s) { // set title for the stage s.setTitle("creating MenuButton "); // create a tile pane TilePane r = new TilePane(); // create a label Label l1 = new Label("This is a MenuButton example "); // create a menu MenuButton m = new MenuButton("MenuButton"); // create menuitems MenuItem m1 = new MenuItem("menu item 1"); MenuItem m2 = new MenuItem("menu item 2"); MenuItem m3 = new MenuItem("menu item 3"); // add menu items to menu m.getItems().add(m1); m.getItems().add(m2); m.getItems().add(m3); // label to display the selected menuitem Label l2 = new Label("default menuitem selected"); // create action event EventHandler<ActionEvent> event1 = new EventHandler<ActionEvent>() { public void handle(ActionEvent e) { l2.setText(((MenuItem)e.getSource()).getText() + " selected"); } }; // add action events to the menuitems m1.setOnAction(event1); m3.setOnAction(event1); m2.setOnAction(event1); // create a tilepane TilePane vb = new TilePane(l1); vb.getChildren().add(m); vb.getChildren().add(l2); // create a scene Scene sc = new Scene(vb, 200, 200); // set the scene s.setScene(sc); s.show(); } public static void main(String args[]) { // launch the application launch(args); }}
Output:
Note: The above programs might not run in an online IDE. Please use an offline compiler.Reference: https://docs.oracle.com/javase/8/javafx/api/javafx/scene/control/MenuButton.html
ManasChhabra2
rs1686740
JavaFX
Java
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Stream In Java
HashMap in Java with Examples
Interfaces in Java
Initialize an ArrayList in Java
ArrayList in Java
Stack Class in Java
Singleton Class in Java
Multidimensional Arrays in Java
Set in Java
Multithreading in Java | [
{
"code": null,
"e": 25309,
"s": 25281,
"text": "\n01 Dec, 2021"
},
{
"code": null,
"e": 25604,
"s": 25309,
"text": "MenuButton is a part of the JavaFX library. The menuButton when pressed shows a context menu that displays a set of items and the user may select any item. It typically contains several menu items and the user may select at most one MenuItem at a time. Constructor of the MenuButton class are: "
},
{
"code": null,
"e": 25901,
"s": 25604,
"text": "MenuButton(): creates a new menu buttonMenuButton(String t): creates a menubutton with a specified textMenuButton(String t, Node g): creates a menubutton with a specified text and graphicMenuButton(String t, Node g, MenuItem... i) creates a menubutton with a specified text, graphic and menuitems"
},
{
"code": null,
"e": 25941,
"s": 25901,
"text": "MenuButton(): creates a new menu button"
},
{
"code": null,
"e": 26006,
"s": 25941,
"text": "MenuButton(String t): creates a menubutton with a specified text"
},
{
"code": null,
"e": 26091,
"s": 26006,
"text": "MenuButton(String t, Node g): creates a menubutton with a specified text and graphic"
},
{
"code": null,
"e": 26201,
"s": 26091,
"text": "MenuButton(String t, Node g, MenuItem... i) creates a menubutton with a specified text, graphic and menuitems"
},
{
"code": null,
"e": 26225,
"s": 26201,
"text": "Commonly used methods: "
},
{
"code": null,
"e": 26274,
"s": 26225,
"text": "Below programs illustrate the MenuButton class: "
},
{
"code": null,
"e": 26761,
"s": 26274,
"text": "Program to create a MenuButton and add MenuItems to it: A MenuButton will be created by name m and 3 menuitems m1, m2, m3 will be added to the menuButton m. The menubar will be created inside a scene, which in turn will be hosted inside a stage. The function setTitle() is used to provide title to the stage. Then a tilepane is created, on which addChildren() method is called to attach the menubutton inside the scene. Finally, the show() method is called to display the final results."
},
{
"code": null,
"e": 26766,
"s": 26761,
"text": "Java"
},
{
"code": "// Program to create a menubutton and add menuitems to itimport javafx.application.Application;import javafx.scene.Scene;import javafx.scene.control.*;import javafx.scene.layout.*;import javafx.event.ActionEvent;import javafx.event.EventHandler;import javafx.collections.*;import javafx.stage.Stage;import javafx.scene.text.Text.*;import javafx.scene.paint.*;import javafx.scene.text.*;public class MenuButton_1 extends Application { // labels Label l; // launch the application public void start(Stage s) { // set title for the stage s.setTitle(\"creating MenuButton \"); // create a tile pane TilePane r = new TilePane(); // create a label Label l1 = new Label(\"This is a MenuButton example \"); // create a menu button MenuButton m = new MenuButton(\"menuButton\"); // create menuitems MenuItem m1 = new MenuItem(\"menu item 1\"); MenuItem m2 = new MenuItem(\"menu item 2\"); MenuItem m3 = new MenuItem(\"menu item 3\"); // add menu items to menu m.getItems().add(m1); m.getItems().add(m2); m.getItems().add(m3); // create a tilepane TilePane vb = new TilePane(l1, m); // create a scene Scene sc = new Scene(vb, 200, 200); // set the scene s.setScene(sc); s.show(); } public static void main(String args[]) { // launch the application launch(args); }}",
"e": 28223,
"s": 26766,
"text": null
},
{
"code": null,
"e": 28233,
"s": 28223,
"text": "Output: "
},
{
"code": null,
"e": 28908,
"s": 28233,
"text": "Program to create a menubutton and add menuitems to it and also add an event handler to handle events: A menuButton will be created by name m and 3 menuitems m1, m2, m3 will be added to the menuButton m. The menubar will be created inside a scene, which in turn will be hosted inside a stage. The function setTitle() is used to provide title to the stage. Then a tilepane is created, on which addChildren() method is called to attach the menubutton inside the scene. Finally, the show() method is called to display the final results. Event handler will be created that will handle the events of the menu items. A label l2 will be created to show which menuitem is selected. "
},
{
"code": null,
"e": 28913,
"s": 28908,
"text": "Java"
},
{
"code": "// Program to create a menubutton and add menuitems// to it and also add event handler to handle eventsimport javafx.application.Application;import javafx.scene.Scene;import javafx.scene.control.*;import javafx.scene.layout.*;import javafx.stage.WindowEvent;import javafx.event.EventHandler.*;import javafx.event.EventHandler;import javafx.event.ActionEvent;import javafx.collections.*;import javafx.stage.Stage;import javafx.scene.text.Text.*;import javafx.scene.paint.*;import javafx.scene.text.*;public class MenuButton_2 extends Application { // labels Label l; // launch the application public void start(Stage s) { // set title for the stage s.setTitle(\"creating MenuButton \"); // create a tile pane TilePane r = new TilePane(); // create a label Label l1 = new Label(\"This is a MenuButton example \"); // create a menu MenuButton m = new MenuButton(\"MenuButton\"); // create menuitems MenuItem m1 = new MenuItem(\"menu item 1\"); MenuItem m2 = new MenuItem(\"menu item 2\"); MenuItem m3 = new MenuItem(\"menu item 3\"); // add menu items to menu m.getItems().add(m1); m.getItems().add(m2); m.getItems().add(m3); // label to display the selected menuitem Label l2 = new Label(\"default menuitem selected\"); // create action event EventHandler<ActionEvent> event1 = new EventHandler<ActionEvent>() { public void handle(ActionEvent e) { l2.setText(((MenuItem)e.getSource()).getText() + \" selected\"); } }; // add action events to the menuitems m1.setOnAction(event1); m3.setOnAction(event1); m2.setOnAction(event1); // create a tilepane TilePane vb = new TilePane(l1); vb.getChildren().add(m); vb.getChildren().add(l2); // create a scene Scene sc = new Scene(vb, 200, 200); // set the scene s.setScene(sc); s.show(); } public static void main(String args[]) { // launch the application launch(args); }}",
"e": 31052,
"s": 28913,
"text": null
},
{
"code": null,
"e": 31062,
"s": 31052,
"text": "Output: "
},
{
"code": null,
"e": 31242,
"s": 31062,
"text": "Note: The above programs might not run in an online IDE. Please use an offline compiler.Reference: https://docs.oracle.com/javase/8/javafx/api/javafx/scene/control/MenuButton.html"
},
{
"code": null,
"e": 31256,
"s": 31242,
"text": "ManasChhabra2"
},
{
"code": null,
"e": 31266,
"s": 31256,
"text": "rs1686740"
},
{
"code": null,
"e": 31273,
"s": 31266,
"text": "JavaFX"
},
{
"code": null,
"e": 31278,
"s": 31273,
"text": "Java"
},
{
"code": null,
"e": 31283,
"s": 31278,
"text": "Java"
},
{
"code": null,
"e": 31381,
"s": 31283,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31396,
"s": 31381,
"text": "Stream In Java"
},
{
"code": null,
"e": 31426,
"s": 31396,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 31445,
"s": 31426,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 31477,
"s": 31445,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 31495,
"s": 31477,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 31515,
"s": 31495,
"text": "Stack Class in Java"
},
{
"code": null,
"e": 31539,
"s": 31515,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 31571,
"s": 31539,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 31583,
"s": 31571,
"text": "Set in Java"
}
] |
Tryit Editor v3.7 | HTML form attributes
Tryit: HTML form action attribute | [
{
"code": null,
"e": 31,
"s": 10,
"text": "HTML form attributes"
}
] |
Double elements and append zeros in linked list - GeeksforGeeks | 22 Mar, 2022
Given a linked list with some two adjacent repeating nodes before a zero, task is to double the first and make next 0. After this, append all the zeros to tail.Prerequisite: Basics of implementation of Singly Linked List
Examples :
Input : 4 -> 4 -> 0 -> 2 -> 3 -> 4 ->
3 -> 3 -> 0 -> 4 ->
Output : 8-> 2-> 3-> 4-> 6-> 4-> 0->
0-> 0-> 0->
Explanation :
First, after doubling the first element and making
second element 0 before all zeros.
8 -> 0 -> 0 -> 2 -> 3 -> 4 -> 6 -> 0
-> 0 -> 4 ->
Next :
8 -> 6 -> 5 -> 6 -> 0 -> 0 -> 0 ->
0 -> 0 -> 0 -> 0 ->
Input : 0 -> 4 -> 4 -> 0 -> 3 -> 3 -> 0
-> 5 -> 0 -> 0 -> 6 ->
Output : 8 -> 6 -> 5 -> 6 -> 0 -> 0 -> 0
-> 0 -> 0 -> 0 -> 0 ->
Traverse through the linked list, and wherever there are two adjacent same data of nodes before a 0 (e.g. 4 -> 4 -> 0), then, double first element and make another as 0 (e.g. 8 -> 0 -> 0 ->). Finally, traverse the linked list and linearly point all the zeros to tail.
Java
C++
Python3
C#
Javascript
// Java code to modify linked listimport java.util.*; // Linked List Nodeclass Node { int data; Node next; // Constructor public Node(int data) { this.data = data; next = null; }} // Class to perform operations// on linked listclass GfG { // Recursive function to double the one of two // elements and make next one as 0, // which are equal before 0 public static void changeTwoBefore0(Node head) { // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { int temp = head.data; head.data = 2 * temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head); } // function to append zeros at tail public static Node appendZero(Node head) { if (head == null || head.next == null) return head; // Find tail node Node tail = head; while (tail.next != null) tail = tail.next; Node origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end Node prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head; } public static Node doubleAndAppend0(Node head) { // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head); } // function to display the nodes public static void display(Node head) { while (head != null) { System.out.print(head.data + " -> "); head = head.next; } } // Driver code public static void main(String[] args) { Node head = new Node(4); head.next = new Node(4); head.next.next = new Node(0); head.next.next.next = new Node(2); head.next.next.next.next = new Node(3); head.next.next.next.next.next = new Node(4); head.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next.next = new Node(0); head.next.next.next.next.next.next.next.next.next = new Node(4); System.out.println("Original linked list :"); display(head); head = doubleAndAppend0(head); System.out.println("\nModified linked list :"); display(head); }}
// C++ code to modify linked list#include <bits/stdc++.h>using namespace std; // Linked List Nodeclass Node {public: int data; Node* next; // Constructorpublic: Node(int dat) { data = dat; next = NULL; }}; // Recursive function to double the one of two// elements and make next one as 0,// which are equal before 0void changeTwoBefore0(Node* head){ // there should be atleast three elements // to perform required operation if (head == NULL || head->next == NULL || head->next->next == NULL) return; // when two continuous elements // are same if ((head->data == head->next->data) && (head->next->next->data == 0)) { int temp = head->data; head->data = 2 * temp; head->next->data = 0; if (head->next->next->next != NULL) head = head->next->next->next; else return; } else head = head->next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head);}// function to append zeros at tailNode* appendZero(Node* head){ if (head == NULL || head->next == NULL) return head; // Find tail node Node* tail = head; while (tail->next != NULL) tail = tail->next; Node* origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node* curr = head; while (curr->next != NULL && curr->data == 0) { tail->next = curr; tail = curr; curr = curr->next; } head = curr; // Now moving other 0s to end Node* prev = curr; curr = curr->next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr->data == 0) { tail->next = curr; tail = curr; prev->next = curr->next; } else prev = curr; // We always move current curr = curr->next; } // Finally making sure that linked // list is NULL terminated. tail->next = NULL; return head;} Node* doubleAndAppend0(Node* head){ // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head);} // function to display the nodesvoid display(Node* head){ while (head != NULL) { cout << head->data << " -> "; head = head->next; }} // Driver Codeint main(){ Node* head = new Node(4); head->next = new Node(4); head->next->next = new Node(0); head->next->next->next = new Node(2); head->next->next->next->next = new Node(3); head->next->next->next->next->next = new Node(4); head->next->next->next->next->next->next = new Node(3); head->next->next->next->next->next->next->next = new Node(3); head->next->next->next->next->next->next->next->next = new Node(0); head->next->next->next->next->next->next->next->next ->next = new Node(4); cout << "Original linked list :"; display(head); head = doubleAndAppend0(head); cout << "\nModified linked list :"; display(head); return 0;} // contributed by ajaykr00kj
# Python3 code to modify linked list # Linked List Nodeclass Node: # Constructor def __init__(self, data): self.data = data self.next = None # Recursive function to double the one of two# elements and make next one as 0,# which are equal before 0def changeTwoBefore0 (head): # there should be atleast three elements # to perform required operation if (head == None or head.next == None or head.next.next == None): return # when two continuous elements # are same if ((head.data == head.next.data) and (head.next.next.data == 0)): temp = head.data head.data = 2*temp head.next.data = 0 if (head.next.next.next != None): head = head.next.next.next else: return else: head = head.next # recursive call to changeTwoBefore0 # for next element changeTwoBefore0(head) # function to append zeros at taildef appendZero( head): if (head == None or head.next == None): return head # Find tail node tail = head while (tail.next != None): tail = tail.next origTail = tail # Case when starting nodes have 0 values # we need to change head in this case. curr = head while (curr.next != None and curr.data == 0): tail.next = curr tail = curr curr = curr.next head = curr # Now moving other 0s to end prev = curr curr = curr.next # We check until original tail while (curr != origTail): # If current data is 0, append # after tail and update tail. if (curr.data == 0): tail.next = curr tail = curr prev.next = curr.next else: prev = curr # We always move current curr = curr.next # Finally making sure that linked # list is None terminated. tail.next = None return head def doubleAndAppend0(head): # Change two same nodes before 0 changeTwoBefore0(head) # Move all 0s to end return appendZero(head) # function to display the nodesdef display( head): while (head != None): print(head.data ,end = " -> ") head = head.next # Driver code head = Node(4)head.next = Node(4)head.next.next = Node(0)head.next.next.next = Node(2)head.next.next.next.next = Node(3)head.next.next.next.next.next = Node(4)head.next.next.next.next.next.next = Node(3)head.next.next.next.next.next.next.next = Node(3)head.next.next.next.next.next.next.next.next = Node(0)head.next.next.next.next.next.next.next.next.next = Node(4) print("Original linked list :")display(head) head = doubleAndAppend0(head) print("\nModified linked list :")display(head) # This code is contributed by Arnab Kundu
// C# code to modify linked listusing System; // Linked List Nodepublic class Node{ public int data; public Node next; // Constructor public Node(int data) { this.data = data; next = null; }} // Class to perform operations// on linked listpublic class GfG{ // Recursive function to double the one of two // elements and make next one as 0, // which are equal before 0 public static void changeTwoBefore0(Node head) { // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { int temp = head.data; head.data = 2*temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head); } // function to append zeros at tail public static Node appendZero(Node head) { if (head == null || head.next == null) return head; // Find tail node Node tail = head; while (tail.next != null) tail = tail.next; Node origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end Node prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head; } public static Node doubleAndAppend0(Node head) { // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head); } // function to display the nodes public static void display(Node head) { while (head != null) { Console.Write(head.data + " -> "); head = head.next; } } // Driver code public static void Main() { Node head = new Node(4); head.next = new Node(4); head.next.next = new Node(0); head.next.next.next = new Node(2); head.next.next.next.next = new Node(3); head.next.next.next.next.next = new Node(4); head.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next.next = new Node(0); head.next.next.next.next.next.next.next.next.next = new Node(4); Console.Write("Original linked list :\n"); display(head); head = doubleAndAppend0(head); Console.WriteLine("\nModified linked list :"); display(head); }} /* This code contributed by PrinciRaj1992 */
<script> // JavaScript code to modify linked list // Linked List Nodeclass Node{ constructor(data) { this.data = data; this.next = null; }} // Class to perform operations// on linked list// Recursive function to double the one of two// elements and make next one as 0,// which are equal before 0function changeTwoBefore0(head){ // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { var temp = head.data; head.data = 2*temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head);}// function to append zeros at tailfunction appendZero(head){ if (head == null || head.next == null) return head; // Find tail node var tail = head; while (tail.next != null) tail = tail.next; var origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. var curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end var prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head;} function doubleAndAppend0(head){ // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head);}// function to display the nodesfunction display( head){ while (head != null) { document.write(head.data + " -> "); head = head.next; }} // Driver codevar head = new Node(4);head.next = new Node(4);head.next.next = new Node(0);head.next.next.next = new Node(2);head.next.next.next.next = new Node(3);head.next.next.next.next.next = new Node(4);head.next.next.next.next.next.next = new Node(3);head.next.next.next.next.next.next.next = new Node(3);head.next.next.next.next.next.next.next.next = new Node(0);head.next.next.next.next.next.next.next.next.next = new Node(4);document.write("Original linked list :<br>");display(head);head = doubleAndAppend0(head);document.write("<br>Modified linked list :<br>");display(head); </script>
Original linked list :
4 -> 4 -> 0 -> 2 -> 3 -> 4 -> 3 -> 3 -> 0 -> 4 ->
Modified linked list :
8 -> 2 -> 3 -> 4 -> 6 -> 4 -> 0 -> 0 -> 0 -> 0 ->
Time complexity: O(n), where n is the number of nodes of linked list.
princiraj1992
Akanksha_Rai
andrew1234
ajaykr00kj
rrrtnx
simmytarika5
Microsoft
Linked List
Microsoft
Linked List
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Circular Linked List | Set 2 (Traversal)
Swap nodes in a linked list without swapping data
Program to implement Singly Linked List in C++ using class
Circular Singly Linked List | Insertion
Given a linked list which is sorted, how will you insert in sorted way
Real-time application of Data Structures
Delete a node in a Doubly Linked List
Linked List Implementation in C#
Insert a node at a specific position in a linked list
Priority Queue using Linked List | [
{
"code": null,
"e": 26205,
"s": 26177,
"text": "\n22 Mar, 2022"
},
{
"code": null,
"e": 26426,
"s": 26205,
"text": "Given a linked list with some two adjacent repeating nodes before a zero, task is to double the first and make next 0. After this, append all the zeros to tail.Prerequisite: Basics of implementation of Singly Linked List"
},
{
"code": null,
"e": 26438,
"s": 26426,
"text": "Examples : "
},
{
"code": null,
"e": 26929,
"s": 26438,
"text": "Input : 4 -> 4 -> 0 -> 2 -> 3 -> 4 -> \n 3 -> 3 -> 0 -> 4 -> \nOutput : 8-> 2-> 3-> 4-> 6-> 4-> 0-> \n 0-> 0-> 0-> \n\nExplanation :\nFirst, after doubling the first element and making\nsecond element 0 before all zeros.\n8 -> 0 -> 0 -> 2 -> 3 -> 4 -> 6 -> 0 \n-> 0 -> 4 ->\nNext :\n8 -> 6 -> 5 -> 6 -> 0 -> 0 -> 0 -> \n0 -> 0 -> 0 -> 0 -> \n\nInput : 0 -> 4 -> 4 -> 0 -> 3 -> 3 -> 0 \n -> 5 -> 0 -> 0 -> 6 ->\nOutput : 8 -> 6 -> 5 -> 6 -> 0 -> 0 -> 0 \n -> 0 -> 0 -> 0 -> 0 ->"
},
{
"code": null,
"e": 27198,
"s": 26929,
"text": "Traverse through the linked list, and wherever there are two adjacent same data of nodes before a 0 (e.g. 4 -> 4 -> 0), then, double first element and make another as 0 (e.g. 8 -> 0 -> 0 ->). Finally, traverse the linked list and linearly point all the zeros to tail. "
},
{
"code": null,
"e": 27203,
"s": 27198,
"text": "Java"
},
{
"code": null,
"e": 27207,
"s": 27203,
"text": "C++"
},
{
"code": null,
"e": 27215,
"s": 27207,
"text": "Python3"
},
{
"code": null,
"e": 27218,
"s": 27215,
"text": "C#"
},
{
"code": null,
"e": 27229,
"s": 27218,
"text": "Javascript"
},
{
"code": "// Java code to modify linked listimport java.util.*; // Linked List Nodeclass Node { int data; Node next; // Constructor public Node(int data) { this.data = data; next = null; }} // Class to perform operations// on linked listclass GfG { // Recursive function to double the one of two // elements and make next one as 0, // which are equal before 0 public static void changeTwoBefore0(Node head) { // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { int temp = head.data; head.data = 2 * temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head); } // function to append zeros at tail public static Node appendZero(Node head) { if (head == null || head.next == null) return head; // Find tail node Node tail = head; while (tail.next != null) tail = tail.next; Node origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end Node prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head; } public static Node doubleAndAppend0(Node head) { // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head); } // function to display the nodes public static void display(Node head) { while (head != null) { System.out.print(head.data + \" -> \"); head = head.next; } } // Driver code public static void main(String[] args) { Node head = new Node(4); head.next = new Node(4); head.next.next = new Node(0); head.next.next.next = new Node(2); head.next.next.next.next = new Node(3); head.next.next.next.next.next = new Node(4); head.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next.next = new Node(0); head.next.next.next.next.next.next.next.next.next = new Node(4); System.out.println(\"Original linked list :\"); display(head); head = doubleAndAppend0(head); System.out.println(\"\\nModified linked list :\"); display(head); }}",
"e": 30822,
"s": 27229,
"text": null
},
{
"code": "// C++ code to modify linked list#include <bits/stdc++.h>using namespace std; // Linked List Nodeclass Node {public: int data; Node* next; // Constructorpublic: Node(int dat) { data = dat; next = NULL; }}; // Recursive function to double the one of two// elements and make next one as 0,// which are equal before 0void changeTwoBefore0(Node* head){ // there should be atleast three elements // to perform required operation if (head == NULL || head->next == NULL || head->next->next == NULL) return; // when two continuous elements // are same if ((head->data == head->next->data) && (head->next->next->data == 0)) { int temp = head->data; head->data = 2 * temp; head->next->data = 0; if (head->next->next->next != NULL) head = head->next->next->next; else return; } else head = head->next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head);}// function to append zeros at tailNode* appendZero(Node* head){ if (head == NULL || head->next == NULL) return head; // Find tail node Node* tail = head; while (tail->next != NULL) tail = tail->next; Node* origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node* curr = head; while (curr->next != NULL && curr->data == 0) { tail->next = curr; tail = curr; curr = curr->next; } head = curr; // Now moving other 0s to end Node* prev = curr; curr = curr->next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr->data == 0) { tail->next = curr; tail = curr; prev->next = curr->next; } else prev = curr; // We always move current curr = curr->next; } // Finally making sure that linked // list is NULL terminated. tail->next = NULL; return head;} Node* doubleAndAppend0(Node* head){ // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head);} // function to display the nodesvoid display(Node* head){ while (head != NULL) { cout << head->data << \" -> \"; head = head->next; }} // Driver Codeint main(){ Node* head = new Node(4); head->next = new Node(4); head->next->next = new Node(0); head->next->next->next = new Node(2); head->next->next->next->next = new Node(3); head->next->next->next->next->next = new Node(4); head->next->next->next->next->next->next = new Node(3); head->next->next->next->next->next->next->next = new Node(3); head->next->next->next->next->next->next->next->next = new Node(0); head->next->next->next->next->next->next->next->next ->next = new Node(4); cout << \"Original linked list :\"; display(head); head = doubleAndAppend0(head); cout << \"\\nModified linked list :\"; display(head); return 0;} // contributed by ajaykr00kj",
"e": 33997,
"s": 30822,
"text": null
},
{
"code": "# Python3 code to modify linked list # Linked List Nodeclass Node: # Constructor def __init__(self, data): self.data = data self.next = None # Recursive function to double the one of two# elements and make next one as 0,# which are equal before 0def changeTwoBefore0 (head): # there should be atleast three elements # to perform required operation if (head == None or head.next == None or head.next.next == None): return # when two continuous elements # are same if ((head.data == head.next.data) and (head.next.next.data == 0)): temp = head.data head.data = 2*temp head.next.data = 0 if (head.next.next.next != None): head = head.next.next.next else: return else: head = head.next # recursive call to changeTwoBefore0 # for next element changeTwoBefore0(head) # function to append zeros at taildef appendZero( head): if (head == None or head.next == None): return head # Find tail node tail = head while (tail.next != None): tail = tail.next origTail = tail # Case when starting nodes have 0 values # we need to change head in this case. curr = head while (curr.next != None and curr.data == 0): tail.next = curr tail = curr curr = curr.next head = curr # Now moving other 0s to end prev = curr curr = curr.next # We check until original tail while (curr != origTail): # If current data is 0, append # after tail and update tail. if (curr.data == 0): tail.next = curr tail = curr prev.next = curr.next else: prev = curr # We always move current curr = curr.next # Finally making sure that linked # list is None terminated. tail.next = None return head def doubleAndAppend0(head): # Change two same nodes before 0 changeTwoBefore0(head) # Move all 0s to end return appendZero(head) # function to display the nodesdef display( head): while (head != None): print(head.data ,end = \" -> \") head = head.next # Driver code head = Node(4)head.next = Node(4)head.next.next = Node(0)head.next.next.next = Node(2)head.next.next.next.next = Node(3)head.next.next.next.next.next = Node(4)head.next.next.next.next.next.next = Node(3)head.next.next.next.next.next.next.next = Node(3)head.next.next.next.next.next.next.next.next = Node(0)head.next.next.next.next.next.next.next.next.next = Node(4) print(\"Original linked list :\")display(head) head = doubleAndAppend0(head) print(\"\\nModified linked list :\")display(head) # This code is contributed by Arnab Kundu",
"e": 36845,
"s": 33997,
"text": null
},
{
"code": "// C# code to modify linked listusing System; // Linked List Nodepublic class Node{ public int data; public Node next; // Constructor public Node(int data) { this.data = data; next = null; }} // Class to perform operations// on linked listpublic class GfG{ // Recursive function to double the one of two // elements and make next one as 0, // which are equal before 0 public static void changeTwoBefore0(Node head) { // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { int temp = head.data; head.data = 2*temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head); } // function to append zeros at tail public static Node appendZero(Node head) { if (head == null || head.next == null) return head; // Find tail node Node tail = head; while (tail.next != null) tail = tail.next; Node origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. Node curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end Node prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head; } public static Node doubleAndAppend0(Node head) { // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head); } // function to display the nodes public static void display(Node head) { while (head != null) { Console.Write(head.data + \" -> \"); head = head.next; } } // Driver code public static void Main() { Node head = new Node(4); head.next = new Node(4); head.next.next = new Node(0); head.next.next.next = new Node(2); head.next.next.next.next = new Node(3); head.next.next.next.next.next = new Node(4); head.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next = new Node(3); head.next.next.next.next.next.next.next.next = new Node(0); head.next.next.next.next.next.next.next.next.next = new Node(4); Console.Write(\"Original linked list :\\n\"); display(head); head = doubleAndAppend0(head); Console.WriteLine(\"\\nModified linked list :\"); display(head); }} /* This code contributed by PrinciRaj1992 */",
"e": 40554,
"s": 36845,
"text": null
},
{
"code": "<script> // JavaScript code to modify linked list // Linked List Nodeclass Node{ constructor(data) { this.data = data; this.next = null; }} // Class to perform operations// on linked list// Recursive function to double the one of two// elements and make next one as 0,// which are equal before 0function changeTwoBefore0(head){ // there should be atleast three elements // to perform required operation if (head == null || head.next == null || head.next.next == null) return; // when two continuous elements // are same if ((head.data == head.next.data) && (head.next.next.data == 0)) { var temp = head.data; head.data = 2*temp; head.next.data = 0; if (head.next.next.next != null) head = head.next.next.next; else return; } else head = head.next; // recursive call to changeTwoBefore0 // for next element changeTwoBefore0(head);}// function to append zeros at tailfunction appendZero(head){ if (head == null || head.next == null) return head; // Find tail node var tail = head; while (tail.next != null) tail = tail.next; var origTail = tail; // Case when starting nodes have 0 values // we need to change head in this case. var curr = head; while (curr.next != null && curr.data == 0) { tail.next = curr; tail = curr; curr = curr.next; } head = curr; // Now moving other 0s to end var prev = curr; curr = curr.next; // We check until original tail while (curr != origTail) { // If current data is 0, append // after tail and update tail. if (curr.data == 0) { tail.next = curr; tail = curr; prev.next = curr.next; } else prev = curr; // We always move current curr = curr.next; } // Finally making sure that linked // list is null terminated. tail.next = null; return head;} function doubleAndAppend0(head){ // Change two same nodes before 0 changeTwoBefore0(head); // Move all 0s to end return appendZero(head);}// function to display the nodesfunction display( head){ while (head != null) { document.write(head.data + \" -> \"); head = head.next; }} // Driver codevar head = new Node(4);head.next = new Node(4);head.next.next = new Node(0);head.next.next.next = new Node(2);head.next.next.next.next = new Node(3);head.next.next.next.next.next = new Node(4);head.next.next.next.next.next.next = new Node(3);head.next.next.next.next.next.next.next = new Node(3);head.next.next.next.next.next.next.next.next = new Node(0);head.next.next.next.next.next.next.next.next.next = new Node(4);document.write(\"Original linked list :<br>\");display(head);head = doubleAndAppend0(head);document.write(\"<br>Modified linked list :<br>\");display(head); </script>",
"e": 43534,
"s": 40554,
"text": null
},
{
"code": null,
"e": 43682,
"s": 43534,
"text": "Original linked list :\n4 -> 4 -> 0 -> 2 -> 3 -> 4 -> 3 -> 3 -> 0 -> 4 -> \nModified linked list :\n8 -> 2 -> 3 -> 4 -> 6 -> 4 -> 0 -> 0 -> 0 -> 0 -> "
},
{
"code": null,
"e": 43752,
"s": 43682,
"text": "Time complexity: O(n), where n is the number of nodes of linked list."
},
{
"code": null,
"e": 43766,
"s": 43752,
"text": "princiraj1992"
},
{
"code": null,
"e": 43779,
"s": 43766,
"text": "Akanksha_Rai"
},
{
"code": null,
"e": 43790,
"s": 43779,
"text": "andrew1234"
},
{
"code": null,
"e": 43801,
"s": 43790,
"text": "ajaykr00kj"
},
{
"code": null,
"e": 43808,
"s": 43801,
"text": "rrrtnx"
},
{
"code": null,
"e": 43821,
"s": 43808,
"text": "simmytarika5"
},
{
"code": null,
"e": 43831,
"s": 43821,
"text": "Microsoft"
},
{
"code": null,
"e": 43843,
"s": 43831,
"text": "Linked List"
},
{
"code": null,
"e": 43853,
"s": 43843,
"text": "Microsoft"
},
{
"code": null,
"e": 43865,
"s": 43853,
"text": "Linked List"
},
{
"code": null,
"e": 43963,
"s": 43865,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 44004,
"s": 43963,
"text": "Circular Linked List | Set 2 (Traversal)"
},
{
"code": null,
"e": 44054,
"s": 44004,
"text": "Swap nodes in a linked list without swapping data"
},
{
"code": null,
"e": 44113,
"s": 44054,
"text": "Program to implement Singly Linked List in C++ using class"
},
{
"code": null,
"e": 44153,
"s": 44113,
"text": "Circular Singly Linked List | Insertion"
},
{
"code": null,
"e": 44224,
"s": 44153,
"text": "Given a linked list which is sorted, how will you insert in sorted way"
},
{
"code": null,
"e": 44265,
"s": 44224,
"text": "Real-time application of Data Structures"
},
{
"code": null,
"e": 44303,
"s": 44265,
"text": "Delete a node in a Doubly Linked List"
},
{
"code": null,
"e": 44336,
"s": 44303,
"text": "Linked List Implementation in C#"
},
{
"code": null,
"e": 44390,
"s": 44336,
"text": "Insert a node at a specific position in a linked list"
}
] |
Batch Normalization and Dropout in Neural Networks with Pytorch | by Niranjan Kumar | Towards Data Science | In this article, we will discuss why we need batch normalization and dropout in deep neural networks followed by experiments using Pytorch on a standard data set to see the effects of batch normalization and dropout. This article is based on my understanding of deep learning lectures from PadhAI.
Citation Note: The content and the structure of this article is based on the deep learning lectures from One-Fourth Labs — PadhAI.
Before we discuss batch normalization, we will learn about why normalizing the inputs speed up the training of a neural network.
Consider a scenario where we have 2D data with features x_1 and x_2 going into a neural network. One of these features x_1 has a wider spread from -200 to 200 and another feature x_2 has a narrower spread from -10 to 10.
Once we normalized the data, the spread of the data for both the features is concentrated in one region ie... from -2 to 2. Spread would look like this,
Let’s discuss why normalizing inputs help?
Before we normalized the inputs, the weights associated with these inputs would vary a lot because the input features present in different ranges varying from -200 to 200 and from -2 to 2. To accommodate this range difference between the features some weights would have to be large and then some have to be small. If we have larger weights then the updates associated with the back-propagation would also be large and vice versa. Because of this uneven distribution of weights for the inputs, the learning algorithm keeps oscillating in the plateau region before it finds the global minima.
To avoid the learning algorithm spend much time oscillating in the plateau, we normalize the input features such that all the features would be on the same scale. Since our inputs are on the same scale, the weights associated with them would also be on the same scale. Thus helping the network to train faster.
We have normalized the inputs but what about hidden representatives?
By normalizing the inputs we are able to bring all the inputs features to the same scale. In the neural network, we need to compute the pre-activation for the first neuron of the first layer a11. We know that pre-activation is nothing but the weighted sum of inputs plus bias. In other words, it is the dot product between the first row of the weight matrix W1 and the input matrix X plus bias b11.
The mathematical equation for pre-activation at each layer ‘i’ is given by,
The activation at each layer is equal to applying the activation function to the output of the pre-activation of that layer. The mathematical equation for the activation at each layer ‘i’ is given by,
Similarly, the activation values for ‘n’ number of hidden layers present in the network need to be computed. The activation values will act as an input to the next hidden layers present in the network. so it doesn’t matter what we have done to the input whether we normalized them or not, the activation values would vary a lot as we do deeper and deeper into the network based on the weight associated with the corresponding neuron.
In order to bring all the activation values to the same scale, we normalize the activation values such that the hidden representation doesn’t vary drastically and also helps us to get improvement in the training speed.
Why is it called batch normalization?
Since we are computing the mean and standard deviation from a single batch as opposed to computing it from the entire data. Batch normalization is done individually at each hidden neuron in the network.
Since we are normalizing all the activations in the network, are we enforcing some constraints that could deteriorate the performance of the network?
In order to maintain the representative power of the hidden neural network, batch normalization introduces two extra parameters — Gamma and Beta. Once we normalize the activation, we need to perform one more step to get the final activation value that can be feed as the input to another layer.
The parameters Gamma and Beta are learned along with other parameters of the network. If Gamma (γ) is equal to the mean (μ) and Beta (β) is equal to the standard deviation(σ) then the activation h_final is equal to the h_norm, thus preserving the representative power of the network.
All the code discussed in the article is present on my GitHub. You can open the code notebook with any setup by directly opening my Jupyter Notebook on Github with Colab which runs on Google’s Virtual Machine. Click here, if you just want to quickly open the notebook and follow along with this tutorial.
github.com
To see how batch normalization works we will build a neural network using Pytorch and test it on the MNIST data set.
In this section, we will build a fully connected neural network (DNN) to classify the MNIST data instead of using CNN. The main purpose of using DNN is to explain how batch normalization works in case of 1D input like an array. Before we feed the MNIST images of size 28x28 to the network, we flatten them into a one-dimensional input array of size 784.
We will create two deep neural networks with three fully connected linear layers and alternating ReLU activation in between them. In the case of network with batch normalization, we will apply batch normalization before ReLU as provided in the original paper. Since our input is a 1D array we will use BatchNorm1d class present in the Pytorch nn module.
import torch.nn as nnnn.BatchNorm1d(48) #48 corresponds to the number of input features it is getting from the previous layer.
To get a better insight into how batch normalization helps in faster converge of the network, we will look at the distribution of values across multiple hidden layers in the network during the training phase.
For consistency, we will plot the output of the second linear layer from the two networks and compare the distributions of the output from that layer across the networks. The results look like this:
From the graphs, we can conclude that the distribution of values without batch normalization has changed significantly between iterations of inputs within each epoch which means that the subsequent layers in the network without batch normalization are seeing a varying distribution of input data. But the change in the distribution of values for the model with batch normalization seems to be slightly negligible.
Also, we can see that the loss of the network with batch normalization reduces much faster than the normal network because of the covariance shift i.e...shifting of hidden values for each batch of input. This helps in faster converge of the network and reduces the training time.
In the previous section, we have seen how to write batch normalization between linear layers for feed-forward neural networks which take a 1D array as an input. In this section, we will discuss how to implement batch normalization for Convolution Neural Networks from a syntactical point of view.
We will take the same MNIST data images and write a network that implements batch normalization. The batch of RGB images has four dimensions — batch_size x channels x height x width. In the case of images, we normalize the batch over each channel. The class BatchNorm2d applies batch normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension).
The class BatchNorm2d takes the number of channels it receives from the output of a previous layer as a parameter.
In this section of the article, we discuss the concept of dropout in neural networks specifically how it helps to reduce overfitting and generalization error. After that, we will implement a neural network with and without dropout to see how dropout influences the performance of a network using Pytorch.
Dropout is a regularization technique that “drops out” or “deactivates” few neurons in the neural network randomly in order to avoid the problem of overfitting.
Training one deep neural network with large parameters on the data might lead to overfitting. Can we train multiple neural networks with different configurations on the same dataset and take the average value of these predictions?.
But creating an ensemble of neural networks with different architectures and training them wouldn’t be feasible in practice. Dropout to the rescue.
Dropout deactivates the neurons randomly at each training step instead of training the data on the original network, we train the data on the network with dropped out nodes. In the next iteration of the training step, the hidden neurons which are deactivated by dropout changes because of its probabilistic behavior. In this way, by applying dropout i.e...deactivating certain individual nodes at random during training we can simulate an ensemble of neural network with different architectures.
In each training iteration, each node in the network is associated with a probability p whether to keep in the network or to deactivate it (dropout) out of the network with probability 1-p. That means the weights associated with the nodes got updated only p fraction of times because nodes are active only p times during training.
During test time, we consider the original neural network with all activations present and scale the output of each node by a value p. Since each node is activated the only p times.
To visualize how dropout reduces the overfitting of a neural network, we will generate a simple random data points using Pytorch torch.unsqueeze. The utility of the dropout is best shown on custom data that has the potential to overfit.
Once we generate the data, we can visualize the tensors using matplotlib scatter plot as shown below.
To show the overfitting, we will train two networks — one without dropout and another with dropout. The network without dropout has 3 fully connected hidden layers with ReLU as the activation function for the hidden layers and the network with dropout also has similar architecture but with dropout applied after first & second Linear layer.
In Pytorch, we can apply a dropout using torch.nn module.
import torch.nn as nnnn.Dropout(0.5) #apply dropout in a neural network
In this example, I have used a dropout fraction of 0.5 after the first linear layer and 0.2 after the second linear layer. Once we train the two different models i.e...one without dropout and another with dropout and plot the test results, it would look like this:
From the above graphs, we can conclude that as we increase the number of epochs the model without dropout is overfitting the data. The model without dropout is learning the noise associated with the data instead of generalizing for the data. We can see that the loss associated with the model without drop increases as we increase the number of epochs unlike the loss associated with the model with dropout.
github.com
If you want to learn more about Artificial Neural Networks using Keras & Tensorflow 2.0(Python or R). Check out the Artificial Neural Networks by Abhishek and Pukhraj from Starttechacademy. They explain the fundamentals of deep learning in a simplistic manner.
In this article, we have discussed why we need batch normalization and then we went on to visualize the effect of batch normalization on the outputs of hidden layers using the MNIST data set. After that, we discussed the working of dropout and it prevents the problem of overfitting the data. Finally, we visualized the performance of two networks with and without dropout to see the effect of dropout.
Recommended Reading
www.marktechpost.com
towardsdatascience.com
Feel free to reach out to me via LinkedIn or twitter if you face any problems while implementing the code present in my GitHub repository.
Until next time Peace :)
NK.
Disclaimer — There might be some affiliate links in this post to relevant resources. You can purchase the bundle at the lowest price possible. I will receive a small commission if you purchase the course. | [
{
"code": null,
"e": 470,
"s": 172,
"text": "In this article, we will discuss why we need batch normalization and dropout in deep neural networks followed by experiments using Pytorch on a standard data set to see the effects of batch normalization and dropout. This article is based on my understanding of deep learning lectures from PadhAI."
},
{
"code": null,
"e": 601,
"s": 470,
"text": "Citation Note: The content and the structure of this article is based on the deep learning lectures from One-Fourth Labs — PadhAI."
},
{
"code": null,
"e": 730,
"s": 601,
"text": "Before we discuss batch normalization, we will learn about why normalizing the inputs speed up the training of a neural network."
},
{
"code": null,
"e": 951,
"s": 730,
"text": "Consider a scenario where we have 2D data with features x_1 and x_2 going into a neural network. One of these features x_1 has a wider spread from -200 to 200 and another feature x_2 has a narrower spread from -10 to 10."
},
{
"code": null,
"e": 1104,
"s": 951,
"text": "Once we normalized the data, the spread of the data for both the features is concentrated in one region ie... from -2 to 2. Spread would look like this,"
},
{
"code": null,
"e": 1147,
"s": 1104,
"text": "Let’s discuss why normalizing inputs help?"
},
{
"code": null,
"e": 1739,
"s": 1147,
"text": "Before we normalized the inputs, the weights associated with these inputs would vary a lot because the input features present in different ranges varying from -200 to 200 and from -2 to 2. To accommodate this range difference between the features some weights would have to be large and then some have to be small. If we have larger weights then the updates associated with the back-propagation would also be large and vice versa. Because of this uneven distribution of weights for the inputs, the learning algorithm keeps oscillating in the plateau region before it finds the global minima."
},
{
"code": null,
"e": 2050,
"s": 1739,
"text": "To avoid the learning algorithm spend much time oscillating in the plateau, we normalize the input features such that all the features would be on the same scale. Since our inputs are on the same scale, the weights associated with them would also be on the same scale. Thus helping the network to train faster."
},
{
"code": null,
"e": 2119,
"s": 2050,
"text": "We have normalized the inputs but what about hidden representatives?"
},
{
"code": null,
"e": 2518,
"s": 2119,
"text": "By normalizing the inputs we are able to bring all the inputs features to the same scale. In the neural network, we need to compute the pre-activation for the first neuron of the first layer a11. We know that pre-activation is nothing but the weighted sum of inputs plus bias. In other words, it is the dot product between the first row of the weight matrix W1 and the input matrix X plus bias b11."
},
{
"code": null,
"e": 2594,
"s": 2518,
"text": "The mathematical equation for pre-activation at each layer ‘i’ is given by,"
},
{
"code": null,
"e": 2795,
"s": 2594,
"text": "The activation at each layer is equal to applying the activation function to the output of the pre-activation of that layer. The mathematical equation for the activation at each layer ‘i’ is given by,"
},
{
"code": null,
"e": 3229,
"s": 2795,
"text": "Similarly, the activation values for ‘n’ number of hidden layers present in the network need to be computed. The activation values will act as an input to the next hidden layers present in the network. so it doesn’t matter what we have done to the input whether we normalized them or not, the activation values would vary a lot as we do deeper and deeper into the network based on the weight associated with the corresponding neuron."
},
{
"code": null,
"e": 3448,
"s": 3229,
"text": "In order to bring all the activation values to the same scale, we normalize the activation values such that the hidden representation doesn’t vary drastically and also helps us to get improvement in the training speed."
},
{
"code": null,
"e": 3486,
"s": 3448,
"text": "Why is it called batch normalization?"
},
{
"code": null,
"e": 3689,
"s": 3486,
"text": "Since we are computing the mean and standard deviation from a single batch as opposed to computing it from the entire data. Batch normalization is done individually at each hidden neuron in the network."
},
{
"code": null,
"e": 3839,
"s": 3689,
"text": "Since we are normalizing all the activations in the network, are we enforcing some constraints that could deteriorate the performance of the network?"
},
{
"code": null,
"e": 4134,
"s": 3839,
"text": "In order to maintain the representative power of the hidden neural network, batch normalization introduces two extra parameters — Gamma and Beta. Once we normalize the activation, we need to perform one more step to get the final activation value that can be feed as the input to another layer."
},
{
"code": null,
"e": 4418,
"s": 4134,
"text": "The parameters Gamma and Beta are learned along with other parameters of the network. If Gamma (γ) is equal to the mean (μ) and Beta (β) is equal to the standard deviation(σ) then the activation h_final is equal to the h_norm, thus preserving the representative power of the network."
},
{
"code": null,
"e": 4723,
"s": 4418,
"text": "All the code discussed in the article is present on my GitHub. You can open the code notebook with any setup by directly opening my Jupyter Notebook on Github with Colab which runs on Google’s Virtual Machine. Click here, if you just want to quickly open the notebook and follow along with this tutorial."
},
{
"code": null,
"e": 4734,
"s": 4723,
"text": "github.com"
},
{
"code": null,
"e": 4851,
"s": 4734,
"text": "To see how batch normalization works we will build a neural network using Pytorch and test it on the MNIST data set."
},
{
"code": null,
"e": 5205,
"s": 4851,
"text": "In this section, we will build a fully connected neural network (DNN) to classify the MNIST data instead of using CNN. The main purpose of using DNN is to explain how batch normalization works in case of 1D input like an array. Before we feed the MNIST images of size 28x28 to the network, we flatten them into a one-dimensional input array of size 784."
},
{
"code": null,
"e": 5559,
"s": 5205,
"text": "We will create two deep neural networks with three fully connected linear layers and alternating ReLU activation in between them. In the case of network with batch normalization, we will apply batch normalization before ReLU as provided in the original paper. Since our input is a 1D array we will use BatchNorm1d class present in the Pytorch nn module."
},
{
"code": null,
"e": 5686,
"s": 5559,
"text": "import torch.nn as nnnn.BatchNorm1d(48) #48 corresponds to the number of input features it is getting from the previous layer."
},
{
"code": null,
"e": 5895,
"s": 5686,
"text": "To get a better insight into how batch normalization helps in faster converge of the network, we will look at the distribution of values across multiple hidden layers in the network during the training phase."
},
{
"code": null,
"e": 6094,
"s": 5895,
"text": "For consistency, we will plot the output of the second linear layer from the two networks and compare the distributions of the output from that layer across the networks. The results look like this:"
},
{
"code": null,
"e": 6508,
"s": 6094,
"text": "From the graphs, we can conclude that the distribution of values without batch normalization has changed significantly between iterations of inputs within each epoch which means that the subsequent layers in the network without batch normalization are seeing a varying distribution of input data. But the change in the distribution of values for the model with batch normalization seems to be slightly negligible."
},
{
"code": null,
"e": 6788,
"s": 6508,
"text": "Also, we can see that the loss of the network with batch normalization reduces much faster than the normal network because of the covariance shift i.e...shifting of hidden values for each batch of input. This helps in faster converge of the network and reduces the training time."
},
{
"code": null,
"e": 7085,
"s": 6788,
"text": "In the previous section, we have seen how to write batch normalization between linear layers for feed-forward neural networks which take a 1D array as an input. In this section, we will discuss how to implement batch normalization for Convolution Neural Networks from a syntactical point of view."
},
{
"code": null,
"e": 7462,
"s": 7085,
"text": "We will take the same MNIST data images and write a network that implements batch normalization. The batch of RGB images has four dimensions — batch_size x channels x height x width. In the case of images, we normalize the batch over each channel. The class BatchNorm2d applies batch normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension)."
},
{
"code": null,
"e": 7577,
"s": 7462,
"text": "The class BatchNorm2d takes the number of channels it receives from the output of a previous layer as a parameter."
},
{
"code": null,
"e": 7882,
"s": 7577,
"text": "In this section of the article, we discuss the concept of dropout in neural networks specifically how it helps to reduce overfitting and generalization error. After that, we will implement a neural network with and without dropout to see how dropout influences the performance of a network using Pytorch."
},
{
"code": null,
"e": 8043,
"s": 7882,
"text": "Dropout is a regularization technique that “drops out” or “deactivates” few neurons in the neural network randomly in order to avoid the problem of overfitting."
},
{
"code": null,
"e": 8275,
"s": 8043,
"text": "Training one deep neural network with large parameters on the data might lead to overfitting. Can we train multiple neural networks with different configurations on the same dataset and take the average value of these predictions?."
},
{
"code": null,
"e": 8423,
"s": 8275,
"text": "But creating an ensemble of neural networks with different architectures and training them wouldn’t be feasible in practice. Dropout to the rescue."
},
{
"code": null,
"e": 8919,
"s": 8423,
"text": "Dropout deactivates the neurons randomly at each training step instead of training the data on the original network, we train the data on the network with dropped out nodes. In the next iteration of the training step, the hidden neurons which are deactivated by dropout changes because of its probabilistic behavior. In this way, by applying dropout i.e...deactivating certain individual nodes at random during training we can simulate an ensemble of neural network with different architectures."
},
{
"code": null,
"e": 9250,
"s": 8919,
"text": "In each training iteration, each node in the network is associated with a probability p whether to keep in the network or to deactivate it (dropout) out of the network with probability 1-p. That means the weights associated with the nodes got updated only p fraction of times because nodes are active only p times during training."
},
{
"code": null,
"e": 9432,
"s": 9250,
"text": "During test time, we consider the original neural network with all activations present and scale the output of each node by a value p. Since each node is activated the only p times."
},
{
"code": null,
"e": 9669,
"s": 9432,
"text": "To visualize how dropout reduces the overfitting of a neural network, we will generate a simple random data points using Pytorch torch.unsqueeze. The utility of the dropout is best shown on custom data that has the potential to overfit."
},
{
"code": null,
"e": 9771,
"s": 9669,
"text": "Once we generate the data, we can visualize the tensors using matplotlib scatter plot as shown below."
},
{
"code": null,
"e": 10113,
"s": 9771,
"text": "To show the overfitting, we will train two networks — one without dropout and another with dropout. The network without dropout has 3 fully connected hidden layers with ReLU as the activation function for the hidden layers and the network with dropout also has similar architecture but with dropout applied after first & second Linear layer."
},
{
"code": null,
"e": 10171,
"s": 10113,
"text": "In Pytorch, we can apply a dropout using torch.nn module."
},
{
"code": null,
"e": 10243,
"s": 10171,
"text": "import torch.nn as nnnn.Dropout(0.5) #apply dropout in a neural network"
},
{
"code": null,
"e": 10508,
"s": 10243,
"text": "In this example, I have used a dropout fraction of 0.5 after the first linear layer and 0.2 after the second linear layer. Once we train the two different models i.e...one without dropout and another with dropout and plot the test results, it would look like this:"
},
{
"code": null,
"e": 10916,
"s": 10508,
"text": "From the above graphs, we can conclude that as we increase the number of epochs the model without dropout is overfitting the data. The model without dropout is learning the noise associated with the data instead of generalizing for the data. We can see that the loss associated with the model without drop increases as we increase the number of epochs unlike the loss associated with the model with dropout."
},
{
"code": null,
"e": 10927,
"s": 10916,
"text": "github.com"
},
{
"code": null,
"e": 11188,
"s": 10927,
"text": "If you want to learn more about Artificial Neural Networks using Keras & Tensorflow 2.0(Python or R). Check out the Artificial Neural Networks by Abhishek and Pukhraj from Starttechacademy. They explain the fundamentals of deep learning in a simplistic manner."
},
{
"code": null,
"e": 11591,
"s": 11188,
"text": "In this article, we have discussed why we need batch normalization and then we went on to visualize the effect of batch normalization on the outputs of hidden layers using the MNIST data set. After that, we discussed the working of dropout and it prevents the problem of overfitting the data. Finally, we visualized the performance of two networks with and without dropout to see the effect of dropout."
},
{
"code": null,
"e": 11611,
"s": 11591,
"text": "Recommended Reading"
},
{
"code": null,
"e": 11632,
"s": 11611,
"text": "www.marktechpost.com"
},
{
"code": null,
"e": 11655,
"s": 11632,
"text": "towardsdatascience.com"
},
{
"code": null,
"e": 11794,
"s": 11655,
"text": "Feel free to reach out to me via LinkedIn or twitter if you face any problems while implementing the code present in my GitHub repository."
},
{
"code": null,
"e": 11819,
"s": 11794,
"text": "Until next time Peace :)"
},
{
"code": null,
"e": 11823,
"s": 11819,
"text": "NK."
}
] |
How to create an ordered list with list items numbered with numbers in HTML? | To create ordered list in HTML, use the <ol> tag. Ordered list starts with the <ol> tag. The list item starts with the <li> tag and will be marked as numbers, lowercase letters uppercase letters, roman letters, etc. The default numbers for list items.
For creating an ordered list with numbers, use the <ol> tag attribute type. This attribute will assign the value number i.e. <ol type = “1”> to create ordered list numbered with numbers.
You can try to run the following code to create an ordered list of list items numbered with numbers in HTML −
Live Demo
<!DOCTYPE html>
<html>
<head>
<title>World Cup Teams</title>
</head>
<body>
<h1>List of teams for World Cup</h1>
<ol type = "1">
<li>India</li>
<li>Australia</li>
<li>South Africa</li>
<li>New Zealand</li>
<li>Pakistan</li>
<li>Srilanka</li>
<li>West Indies</li>
<li>Bangladesh</li>
</ol>
</body>
</html> | [
{
"code": null,
"e": 1314,
"s": 1062,
"text": "To create ordered list in HTML, use the <ol> tag. Ordered list starts with the <ol> tag. The list item starts with the <li> tag and will be marked as numbers, lowercase letters uppercase letters, roman letters, etc. The default numbers for list items."
},
{
"code": null,
"e": 1501,
"s": 1314,
"text": "For creating an ordered list with numbers, use the <ol> tag attribute type. This attribute will assign the value number i.e. <ol type = “1”> to create ordered list numbered with numbers."
},
{
"code": null,
"e": 1611,
"s": 1501,
"text": "You can try to run the following code to create an ordered list of list items numbered with numbers in HTML −"
},
{
"code": null,
"e": 1621,
"s": 1611,
"text": "Live Demo"
},
{
"code": null,
"e": 2034,
"s": 1621,
"text": "<!DOCTYPE html>\n<html>\n <head>\n <title>World Cup Teams</title>\n </head>\n <body>\n <h1>List of teams for World Cup</h1>\n <ol type = \"1\">\n <li>India</li>\n <li>Australia</li>\n <li>South Africa</li>\n <li>New Zealand</li>\n <li>Pakistan</li>\n <li>Srilanka</li>\n <li>West Indies</li>\n <li>Bangladesh</li>\n </ol>\n </body>\n</html>"
}
] |
Java Program For Reversing A Linked List In Groups Of Given Size- Set 2 - GeeksforGeeks | 11 Jan, 2022
Given a linked list, write a function to reverse every k nodes (where k is an input to the function). Examples:
Input: 1->2->3->4->5->6->7->8->NULL and k = 3
Output: 3->2->1->6->5->4->8->7->NULL.
Input: 1->2->3->4->5->6->7->8->NULL and k = 5
Output: 5->4->3->2->1->8->7->6->NULL.
We have already discussed its solution in below post Reverse a Linked List in groups of given size | Set 1In this post, we have used a stack which will store the nodes of the given linked list. Firstly, push the k elements of the linked list in the stack. Now pop elements one by one and keep track of the previously popped node. Point the next pointer of prev node to top element of stack. Repeat this process, until NULL is reached.This algorithm uses O(k) extra space.
Java
// Java program to reverse a linked list // in groups of given size import java.util.*;class GfG { // Link list node static class Node { int data; Node next; } static Node head = null; /* Reverses the linked list in groups of size k and returns the pointer to the new head node. */ static Node Reverse(Node head, int k) { // Create a stack of Node* Stack<Node> mystack = new Stack<Node> (); Node current = head; Node prev = null; while (current != null) { // Terminate the loop whichever // comes first either current == NULL // or count >= k int count = 0; while (current != null && count < k) { mystack.push(current); current = current.next; count++; } // Now pop the elements of stack // one by one while (mystack.size() > 0) { // If final list has not been // started yet. if (prev == null) { prev = mystack.peek(); head = prev; mystack.pop(); } else { prev.next = mystack.peek(); prev = prev.next; mystack.pop(); } } } // Next of last element will point // to NULL. prev.next = null; return head; } // UTILITY FUNCTIONS // Function to push a node static void push( int new_data) { // Allocate node Node new_node = new Node(); // Put in the data new_node.data = new_data; // Link the old list off the // new node new_node.next = head; // Move the head to point to // the new node head = new_node; } // Function to print linked list static void printList(Node node) { while (node != null) { System.out.print(node.data + " "); node = node.next; } } // Driver code public static void main(String[] args) { // Start with the empty list //Node head = null; /* Created Linked list is 1->2->3->4->5->6->7->8->9 */ push( 9); push( 8); push( 7); push( 6); push( 5); push(4); push(3); push(2); push( 1); System.out.println( "Given linked list "); printList(head); head = Reverse(head, 3); System.out.println(); System.out.println( "Reversed Linked list "); printList(head); } } // This code is contributed by Prerna Saini
Output:
Given Linked List
1 2 3 4 5 6 7 8 9
Reversed list
3 2 1 6 5 4 9 8 7
Please refer complete article on Reverse a Linked List in groups of given size | Set 2 for more details!
Accolite
Adobe
Linked Lists
MakeMyTrip
Microsoft
Paytm
Snapdeal
VMWare
Java
Java Programs
Linked List
Paytm
VMWare
Accolite
Microsoft
Snapdeal
MakeMyTrip
Adobe
Linked List
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Stream In Java
Constructors in Java
Exceptions in Java
Functional Interfaces in Java
Different ways of Reading a text file in Java
Java Programming Examples
Convert Double to Integer in Java
Implementing a Linked List in Java using Class
How to Iterate HashMap in Java?
Iterate through List in Java | [
{
"code": null,
"e": 25315,
"s": 25287,
"text": "\n11 Jan, 2022"
},
{
"code": null,
"e": 25427,
"s": 25315,
"text": "Given a linked list, write a function to reverse every k nodes (where k is an input to the function). Examples:"
},
{
"code": null,
"e": 25598,
"s": 25427,
"text": "Input: 1->2->3->4->5->6->7->8->NULL and k = 3 \nOutput: 3->2->1->6->5->4->8->7->NULL. \n\nInput: 1->2->3->4->5->6->7->8->NULL and k = 5\nOutput: 5->4->3->2->1->8->7->6->NULL."
},
{
"code": null,
"e": 26070,
"s": 25598,
"text": "We have already discussed its solution in below post Reverse a Linked List in groups of given size | Set 1In this post, we have used a stack which will store the nodes of the given linked list. Firstly, push the k elements of the linked list in the stack. Now pop elements one by one and keep track of the previously popped node. Point the next pointer of prev node to top element of stack. Repeat this process, until NULL is reached.This algorithm uses O(k) extra space."
},
{
"code": null,
"e": 26075,
"s": 26070,
"text": "Java"
},
{
"code": "// Java program to reverse a linked list // in groups of given size import java.util.*;class GfG { // Link list node static class Node { int data; Node next; } static Node head = null; /* Reverses the linked list in groups of size k and returns the pointer to the new head node. */ static Node Reverse(Node head, int k) { // Create a stack of Node* Stack<Node> mystack = new Stack<Node> (); Node current = head; Node prev = null; while (current != null) { // Terminate the loop whichever // comes first either current == NULL // or count >= k int count = 0; while (current != null && count < k) { mystack.push(current); current = current.next; count++; } // Now pop the elements of stack // one by one while (mystack.size() > 0) { // If final list has not been // started yet. if (prev == null) { prev = mystack.peek(); head = prev; mystack.pop(); } else { prev.next = mystack.peek(); prev = prev.next; mystack.pop(); } } } // Next of last element will point // to NULL. prev.next = null; return head; } // UTILITY FUNCTIONS // Function to push a node static void push( int new_data) { // Allocate node Node new_node = new Node(); // Put in the data new_node.data = new_data; // Link the old list off the // new node new_node.next = head; // Move the head to point to // the new node head = new_node; } // Function to print linked list static void printList(Node node) { while (node != null) { System.out.print(node.data + \" \"); node = node.next; } } // Driver code public static void main(String[] args) { // Start with the empty list //Node head = null; /* Created Linked list is 1->2->3->4->5->6->7->8->9 */ push( 9); push( 8); push( 7); push( 6); push( 5); push(4); push(3); push(2); push( 1); System.out.println( \"Given linked list \"); printList(head); head = Reverse(head, 3); System.out.println(); System.out.println( \"Reversed Linked list \"); printList(head); } } // This code is contributed by Prerna Saini",
"e": 28982,
"s": 26075,
"text": null
},
{
"code": null,
"e": 28991,
"s": 28982,
"text": "Output: "
},
{
"code": null,
"e": 29060,
"s": 28991,
"text": "Given Linked List\n1 2 3 4 5 6 7 8 9 \nReversed list\n3 2 1 6 5 4 9 8 7"
},
{
"code": null,
"e": 29165,
"s": 29060,
"text": "Please refer complete article on Reverse a Linked List in groups of given size | Set 2 for more details!"
},
{
"code": null,
"e": 29174,
"s": 29165,
"text": "Accolite"
},
{
"code": null,
"e": 29180,
"s": 29174,
"text": "Adobe"
},
{
"code": null,
"e": 29193,
"s": 29180,
"text": "Linked Lists"
},
{
"code": null,
"e": 29204,
"s": 29193,
"text": "MakeMyTrip"
},
{
"code": null,
"e": 29214,
"s": 29204,
"text": "Microsoft"
},
{
"code": null,
"e": 29220,
"s": 29214,
"text": "Paytm"
},
{
"code": null,
"e": 29229,
"s": 29220,
"text": "Snapdeal"
},
{
"code": null,
"e": 29236,
"s": 29229,
"text": "VMWare"
},
{
"code": null,
"e": 29241,
"s": 29236,
"text": "Java"
},
{
"code": null,
"e": 29255,
"s": 29241,
"text": "Java Programs"
},
{
"code": null,
"e": 29267,
"s": 29255,
"text": "Linked List"
},
{
"code": null,
"e": 29273,
"s": 29267,
"text": "Paytm"
},
{
"code": null,
"e": 29280,
"s": 29273,
"text": "VMWare"
},
{
"code": null,
"e": 29289,
"s": 29280,
"text": "Accolite"
},
{
"code": null,
"e": 29299,
"s": 29289,
"text": "Microsoft"
},
{
"code": null,
"e": 29308,
"s": 29299,
"text": "Snapdeal"
},
{
"code": null,
"e": 29319,
"s": 29308,
"text": "MakeMyTrip"
},
{
"code": null,
"e": 29325,
"s": 29319,
"text": "Adobe"
},
{
"code": null,
"e": 29337,
"s": 29325,
"text": "Linked List"
},
{
"code": null,
"e": 29342,
"s": 29337,
"text": "Java"
},
{
"code": null,
"e": 29440,
"s": 29342,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29455,
"s": 29440,
"text": "Stream In Java"
},
{
"code": null,
"e": 29476,
"s": 29455,
"text": "Constructors in Java"
},
{
"code": null,
"e": 29495,
"s": 29476,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 29525,
"s": 29495,
"text": "Functional Interfaces in Java"
},
{
"code": null,
"e": 29571,
"s": 29525,
"text": "Different ways of Reading a text file in Java"
},
{
"code": null,
"e": 29597,
"s": 29571,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 29631,
"s": 29597,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 29678,
"s": 29631,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 29710,
"s": 29678,
"text": "How to Iterate HashMap in Java?"
}
] |
Bin Packing Problem (Minimize number of used Bins) in C++? | In case of given m elements of different weights and bins each of capacity C, assign each element to a bin so that number of total implemented bins is minimized. Assumption should be that all elements have weights less than bin capacity.
Placing data on multiple disks.
Placing data on multiple disks.
Loading of containers like trucks.
Loading of containers like trucks.
Packing advertisements in fixed length radio/TV station breaks.
Packing advertisements in fixed length radio/TV station breaks.
Job scheduling.
Job scheduling.
Input: weight[] = {4, 1, 8, 1, 4, 2}
Bin Capacity c = 10
Output: 2
We require at least 2 bins to accommodate all elements
First bin consists {4, 4, 2} and second bin {8, 2}
We can always calculate a lower bound on minimum number of bins required using ceil() function. The lower bound can be given below −
Bins with minimum number >= ceil ((Total Weight) / (Bin Capacity))
Bins with minimum number >= ceil ((Total Weight) / (Bin Capacity))
In the above example, lower bound for first example is “ceil(4 + 1 + 8 + 2 + 4 + 1)/10” = 2
In the above example, lower bound for first example is “ceil(4 + 1 + 8 + 2 + 4 + 1)/10” = 2
The following approximate algorithms are implemented for this problem.
The following approximate algorithms are implemented for this problem.
These algorithms are implemented for Bin Packing problems where elements arrive one at a time (in unknown order), each must be put in a bin, before considering the next element.
Next Fit −
When processing next element, verify if it fits in the same bin as the last element. Implement a new bin only if it does not.
Live Demo
// C++ program to calculate number of bins required implementing next fit algorithm.
#include <bits/stdc++.h>
using namespace std;
// We have to return number of bins needed implementing next fit online algorithm
int nextFit(int weight1[], int m, int C){
// Result (Count of bins) and remaining capacity in current bin are initialized.
int res = 0, bin_rem = C;
// We have to place elements one by one
for (int i = 0; i < m; i++) {
// If this element can't fit in current bin
if (weight1[i] > bin_rem) {
res++; // Use a new bin
bin_rem = C - weight1[i];
}
else
bin_rem -= weight1[i];
}
return res;
}
// Driver program
int main(){
int weight1[] = { 3, 6, 5, 8, 2, 4, 9 };
int C = 10;
int m = sizeof(weight1) / sizeof(weight1[0]);
cout<< "Number of bins required in Next Fit : "
<<nextFit(weight1, m, C);
return 0;
}
Number of bins required in Next Fit : 4
Next Fit is treated as a simple algorithm. It needs only O(m) time and O(1) extra space to process m elements.
First Fit−
When processing the next element, examine or scan the previous bins in order and place the element in the first bin that fits.If element does not fit in any of the existing bins at that time we start a new bin only.
Live Demo
// C++ program to find number of bins needed implementing First Fit algorithm.
#include <bits/stdc++.h>
using namespace std;
// We have to return number of bins needed implementing first fit online algorithm
int firstFit(int weight1[], int m, int C){
// We have to initialize result (Count of bins)
int res = 0;
// We have to create an array to store remaining space in bins there can be maximum n bins
int bin_rem[m];
// We have to place elements one by one
for (int i = 0; i < m; i++) {
// We have to find the first bin that can accommodate weight1[i]
int j;
for (j = 0; j < res; j++) {
if (bin_rem[j] >= weight1[i]) {
bin_rem[j] = bin_rem[j] - weight1[i];
break;
}
}
// If no bin could accommodate weight1[i]
if (j == res) {
bin_rem[res] = C - weight1[i];
res++;
}
}
return res;
}
// Driver program
int main(){
int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };
int C = 10;
int m = sizeof(weight1) / sizeof(weight1[0]);
cout<< "Number of bins required in First Fit : "
<<firstFit(weight1, m, C);
return 0;
}
Number of bins required in First Fit : 4
The above implementation of First Fit consumes O(m2) time, but First Fit can be used in O(m log m) time implementing Self-Balancing Binary Search Trees.
Best Fit −
The concept is to places the next item in the tightest position. That means put it in the bin so that at least emptyspace is left.
Live Demo
// C++ program to calculate number of bins required implementing Best fit algorithm.
#include <bits/stdc++.h>
using namespace std;
// We have to returns number of bins required implementing best fit online algorithm
int bestFit(int weight1[], int m, int C){
// We have to initialize result (Count of bins)
int res = 0;
// We have to create an array to store remaining space in bins there can be maximum n bins
int bin_rem[m];
// Place elements one by one
for (int i = 0; i < m; i++){
// Find the best bin that can accomodate weight1[i]
int j;
// We have to initialize minimum space left and index of best bin
int min = C + 1, bi = 0;
for (j = 0; j < res; j++){
if (bin_rem[j] >= weight1[i] && bin_rem[j] - weight1[i] < min) {
bi = j;
min = bin_rem[j] - weight1[i];
}
}
// We create a new bin,if no bin could accommodate weight1[i],
if (min == C + 1) {
bin_rem[res] = C - weight1[i];
res++;
}
else // Assign the element to best bin
bin_rem[bi] -= weight1[i];
}
return res;
}
// Driver program
int main(){
int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };
int C = 10;
int m = sizeof(weight1) / sizeof(weight1[0]);
cout<< "Number of bins required in Best Fit : "
<<bestFit(weight1, m, C);
return 0;
}
Number of bins required in Best Fit : 4
Best Fit can also be applied in O(m log m) time implementing Self-Balancing Binary Search Trees.
In case of the offline version, we have all elements upfront.A problem with online algorithms is that packing large elements is difficult, especially if they occur late in the sequence. We can overcome this by sorting the input sequence, and placing the large elements first.
First Fit Decreasing −
Live Demo
// C++ program to locate number of bins needed implementing First Fit Decreasing algorithm.
#include <bits/stdc++.h>
using namespace std;
/* We copy firstFit() from above */
int firstFit(int weight1[], int m, int C){
// We have to initialize result (Count of bins)
int res = 0;
// We have to create an array to store remaining space in bins there can be maximum n bins
int bin_rem[m];
// We have to place elements one by one
for (int i = 0; i < m; i++) {
// We have to find the first bin that can accommodate weight1[i]
int j;
for (j = 0; j < res; j++) {
if (bin_rem[j] >= weight1[i]) {
bin_rem[j] = bin_rem[j] - weight1[i];
break;
}
}
// If no bin could accommodate weight1[i]
if (j == res) {
bin_rem[res] = C - weight1[i];
res++;
}
}
return res;
}
// We have to returns number of bins required implementing first fit decreasing offline algorithm
int firstFitDec(int weight1[], int m, int C){
// At first we sort all weights in decreasing order
sort(weight1, weight1 + m, std::greater<int>());
// Now we call first fit for sorted items
return firstFit(weight1, m, C);
}
// Driver program
int main(){
int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };
int C = 10;
int m = sizeof(weight1) / sizeof(weight1[0]);
cout<< "Number of bins required in First Fit "
<< "Decreasing : " << firstFitDec(weight1, m, C);
return 0;
}
Number of bins required in First Fit Decreasing : 3
First Fit decreasing generates the best result for the sample input as elements are sorted first.
First Fit Decreasing can also be used in O(m log m) time implementing Self-Balancing Binary Search Trees. | [
{
"code": null,
"e": 1300,
"s": 1062,
"text": "In case of given m elements of different weights and bins each of capacity C, assign each element to a bin so that number of total implemented bins is minimized. Assumption should be that all elements have weights less than bin capacity."
},
{
"code": null,
"e": 1332,
"s": 1300,
"text": "Placing data on multiple disks."
},
{
"code": null,
"e": 1364,
"s": 1332,
"text": "Placing data on multiple disks."
},
{
"code": null,
"e": 1399,
"s": 1364,
"text": "Loading of containers like trucks."
},
{
"code": null,
"e": 1434,
"s": 1399,
"text": "Loading of containers like trucks."
},
{
"code": null,
"e": 1498,
"s": 1434,
"text": "Packing advertisements in fixed length radio/TV station breaks."
},
{
"code": null,
"e": 1562,
"s": 1498,
"text": "Packing advertisements in fixed length radio/TV station breaks."
},
{
"code": null,
"e": 1578,
"s": 1562,
"text": "Job scheduling."
},
{
"code": null,
"e": 1594,
"s": 1578,
"text": "Job scheduling."
},
{
"code": null,
"e": 1767,
"s": 1594,
"text": "Input: weight[] = {4, 1, 8, 1, 4, 2}\nBin Capacity c = 10\nOutput: 2\nWe require at least 2 bins to accommodate all elements\nFirst bin consists {4, 4, 2} and second bin {8, 2}"
},
{
"code": null,
"e": 1900,
"s": 1767,
"text": "We can always calculate a lower bound on minimum number of bins required using ceil() function. The lower bound can be given below −"
},
{
"code": null,
"e": 1967,
"s": 1900,
"text": "Bins with minimum number >= ceil ((Total Weight) / (Bin Capacity))"
},
{
"code": null,
"e": 2034,
"s": 1967,
"text": "Bins with minimum number >= ceil ((Total Weight) / (Bin Capacity))"
},
{
"code": null,
"e": 2126,
"s": 2034,
"text": "In the above example, lower bound for first example is “ceil(4 + 1 + 8 + 2 + 4 + 1)/10” = 2"
},
{
"code": null,
"e": 2218,
"s": 2126,
"text": "In the above example, lower bound for first example is “ceil(4 + 1 + 8 + 2 + 4 + 1)/10” = 2"
},
{
"code": null,
"e": 2289,
"s": 2218,
"text": "The following approximate algorithms are implemented for this problem."
},
{
"code": null,
"e": 2360,
"s": 2289,
"text": "The following approximate algorithms are implemented for this problem."
},
{
"code": null,
"e": 2538,
"s": 2360,
"text": "These algorithms are implemented for Bin Packing problems where elements arrive one at a time (in unknown order), each must be put in a bin, before considering the next element."
},
{
"code": null,
"e": 2549,
"s": 2538,
"text": "Next Fit −"
},
{
"code": null,
"e": 2675,
"s": 2549,
"text": "When processing next element, verify if it fits in the same bin as the last element. Implement a new bin only if it does not."
},
{
"code": null,
"e": 2686,
"s": 2675,
"text": " Live Demo"
},
{
"code": null,
"e": 3588,
"s": 2686,
"text": "// C++ program to calculate number of bins required implementing next fit algorithm.\n#include <bits/stdc++.h>\nusing namespace std;\n// We have to return number of bins needed implementing next fit online algorithm\nint nextFit(int weight1[], int m, int C){\n // Result (Count of bins) and remaining capacity in current bin are initialized.\n int res = 0, bin_rem = C;\n // We have to place elements one by one\n for (int i = 0; i < m; i++) {\n // If this element can't fit in current bin\n if (weight1[i] > bin_rem) {\n res++; // Use a new bin\n bin_rem = C - weight1[i];\n }\n else\n bin_rem -= weight1[i];\n }\n return res;\n}\n// Driver program\nint main(){\n int weight1[] = { 3, 6, 5, 8, 2, 4, 9 };\n int C = 10;\n int m = sizeof(weight1) / sizeof(weight1[0]);\n cout<< \"Number of bins required in Next Fit : \"\n <<nextFit(weight1, m, C);\n return 0;\n}"
},
{
"code": null,
"e": 3628,
"s": 3588,
"text": "Number of bins required in Next Fit : 4"
},
{
"code": null,
"e": 3739,
"s": 3628,
"text": "Next Fit is treated as a simple algorithm. It needs only O(m) time and O(1) extra space to process m elements."
},
{
"code": null,
"e": 3750,
"s": 3739,
"text": "First Fit−"
},
{
"code": null,
"e": 3966,
"s": 3750,
"text": "When processing the next element, examine or scan the previous bins in order and place the element in the first bin that fits.If element does not fit in any of the existing bins at that time we start a new bin only."
},
{
"code": null,
"e": 3977,
"s": 3966,
"text": " Live Demo"
},
{
"code": null,
"e": 5122,
"s": 3977,
"text": "// C++ program to find number of bins needed implementing First Fit algorithm.\n#include <bits/stdc++.h>\nusing namespace std;\n// We have to return number of bins needed implementing first fit online algorithm\nint firstFit(int weight1[], int m, int C){\n // We have to initialize result (Count of bins)\n int res = 0;\n // We have to create an array to store remaining space in bins there can be maximum n bins\n int bin_rem[m];\n // We have to place elements one by one\n for (int i = 0; i < m; i++) {\n // We have to find the first bin that can accommodate weight1[i]\n int j;\n for (j = 0; j < res; j++) {\n if (bin_rem[j] >= weight1[i]) {\n bin_rem[j] = bin_rem[j] - weight1[i];\n break;\n }\n }\n // If no bin could accommodate weight1[i]\n if (j == res) {\n bin_rem[res] = C - weight1[i];\n res++;\n }\n }\n return res;\n}\n// Driver program\nint main(){\n int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };\n int C = 10;\n int m = sizeof(weight1) / sizeof(weight1[0]);\n cout<< \"Number of bins required in First Fit : \"\n <<firstFit(weight1, m, C);\n return 0;\n}"
},
{
"code": null,
"e": 5163,
"s": 5122,
"text": "Number of bins required in First Fit : 4"
},
{
"code": null,
"e": 5316,
"s": 5163,
"text": "The above implementation of First Fit consumes O(m2) time, but First Fit can be used in O(m log m) time implementing Self-Balancing Binary Search Trees."
},
{
"code": null,
"e": 5327,
"s": 5316,
"text": "Best Fit −"
},
{
"code": null,
"e": 5458,
"s": 5327,
"text": "The concept is to places the next item in the tightest position. That means put it in the bin so that at least emptyspace is left."
},
{
"code": null,
"e": 5469,
"s": 5458,
"text": " Live Demo"
},
{
"code": null,
"e": 6802,
"s": 5469,
"text": "// C++ program to calculate number of bins required implementing Best fit algorithm.\n#include <bits/stdc++.h>\nusing namespace std;\n// We have to returns number of bins required implementing best fit online algorithm\nint bestFit(int weight1[], int m, int C){\n // We have to initialize result (Count of bins)\n int res = 0;\n // We have to create an array to store remaining space in bins there can be maximum n bins\n int bin_rem[m];\n // Place elements one by one\n for (int i = 0; i < m; i++){\n // Find the best bin that can accomodate weight1[i]\n int j;\n // We have to initialize minimum space left and index of best bin\n int min = C + 1, bi = 0;\n for (j = 0; j < res; j++){\n if (bin_rem[j] >= weight1[i] && bin_rem[j] - weight1[i] < min) {\n bi = j;\n min = bin_rem[j] - weight1[i];\n }\n }\n // We create a new bin,if no bin could accommodate weight1[i],\n if (min == C + 1) {\n bin_rem[res] = C - weight1[i];\n res++;\n }\n else // Assign the element to best bin\n bin_rem[bi] -= weight1[i];\n }\n return res;\n}\n// Driver program\nint main(){\n int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };\n int C = 10;\n int m = sizeof(weight1) / sizeof(weight1[0]);\n cout<< \"Number of bins required in Best Fit : \"\n <<bestFit(weight1, m, C);\n return 0;\n}"
},
{
"code": null,
"e": 6842,
"s": 6802,
"text": "Number of bins required in Best Fit : 4"
},
{
"code": null,
"e": 6939,
"s": 6842,
"text": "Best Fit can also be applied in O(m log m) time implementing Self-Balancing Binary Search Trees."
},
{
"code": null,
"e": 7215,
"s": 6939,
"text": "In case of the offline version, we have all elements upfront.A problem with online algorithms is that packing large elements is difficult, especially if they occur late in the sequence. We can overcome this by sorting the input sequence, and placing the large elements first."
},
{
"code": null,
"e": 7238,
"s": 7215,
"text": "First Fit Decreasing −"
},
{
"code": null,
"e": 7249,
"s": 7238,
"text": " Live Demo"
},
{
"code": null,
"e": 8696,
"s": 7249,
"text": "// C++ program to locate number of bins needed implementing First Fit Decreasing algorithm.\n#include <bits/stdc++.h>\nusing namespace std;\n/* We copy firstFit() from above */\nint firstFit(int weight1[], int m, int C){\n // We have to initialize result (Count of bins)\n int res = 0;\n // We have to create an array to store remaining space in bins there can be maximum n bins\n int bin_rem[m];\n // We have to place elements one by one\n for (int i = 0; i < m; i++) {\n // We have to find the first bin that can accommodate weight1[i]\n int j;\n for (j = 0; j < res; j++) {\n if (bin_rem[j] >= weight1[i]) {\n bin_rem[j] = bin_rem[j] - weight1[i];\n break;\n }\n }\n // If no bin could accommodate weight1[i]\n if (j == res) {\n bin_rem[res] = C - weight1[i];\n res++;\n }\n }\nreturn res;\n}\n// We have to returns number of bins required implementing first fit decreasing offline algorithm\nint firstFitDec(int weight1[], int m, int C){\n // At first we sort all weights in decreasing order\n sort(weight1, weight1 + m, std::greater<int>());\n // Now we call first fit for sorted items\n return firstFit(weight1, m, C);\n}\n// Driver program\nint main(){\n int weight1[] = { 2, 5, 4, 7, 1, 3, 8 };\n int C = 10;\n int m = sizeof(weight1) / sizeof(weight1[0]);\n cout<< \"Number of bins required in First Fit \"\n << \"Decreasing : \" << firstFitDec(weight1, m, C);\n return 0;\n}"
},
{
"code": null,
"e": 8748,
"s": 8696,
"text": "Number of bins required in First Fit Decreasing : 3"
},
{
"code": null,
"e": 8846,
"s": 8748,
"text": "First Fit decreasing generates the best result for the sample input as elements are sorted first."
},
{
"code": null,
"e": 8952,
"s": 8846,
"text": "First Fit Decreasing can also be used in O(m log m) time implementing Self-Balancing Binary Search Trees."
}
] |
How to return object from an array with highest key values along with name - JavaScript? | Suppose, we have an array of objects that contains information about marks of some students in a test −
const students = [
{ name: 'Andy', total: 40 },
{ name: 'Seric', total: 50 },
{ name: 'Stephen', total: 85 },
{ name: 'David', total: 30 },
{ name: 'Phil', total: 40 },
{ name: 'Eric', total: 82 },
{ name: 'Cameron', total: 30 },
{ name: 'Geoff', total: 30 }
];
We are required to write a JavaScript function that takes in one such array and returns a object with the name and total of the student that have highest value for total.
Therefore, for the above array, the output should be −
{ name: 'Stephen', total: 85 }
Following is the code −
const students = [
{ name: 'Andy', total: 40 },
{ name: 'Seric', total: 50 },
{ name: 'Stephen', total: 85 },
{ name: 'David', total: 30 },
{ name: 'Phil', total: 40 },
{ name: 'Eric', total: 82 },
{ name: 'Cameron', total: 30 },
{ name: 'Geoff', total: 30 }
];
const pickHighest = arr => {
const res = {
name: '',
total: -Infinity
};
arr.forEach(el => {
const { name, total } = el;
if(total > res.total){
res.name = name;
res.total = total;
};
});
return res;
};
console.log(pickHighest(students));
This will produce the following output on console −
{ name: 'Stephen', total: 85 } | [
{
"code": null,
"e": 1166,
"s": 1062,
"text": "Suppose, we have an array of objects that contains information about marks of some students in a test −"
},
{
"code": null,
"e": 1452,
"s": 1166,
"text": "const students = [\n { name: 'Andy', total: 40 },\n { name: 'Seric', total: 50 },\n { name: 'Stephen', total: 85 },\n { name: 'David', total: 30 },\n { name: 'Phil', total: 40 },\n { name: 'Eric', total: 82 },\n { name: 'Cameron', total: 30 },\n { name: 'Geoff', total: 30 }\n];"
},
{
"code": null,
"e": 1623,
"s": 1452,
"text": "We are required to write a JavaScript function that takes in one such array and returns a object with the name and total of the student that have highest value for total."
},
{
"code": null,
"e": 1678,
"s": 1623,
"text": "Therefore, for the above array, the output should be −"
},
{
"code": null,
"e": 1710,
"s": 1678,
"text": "{ name: 'Stephen', total: 85 } "
},
{
"code": null,
"e": 1734,
"s": 1710,
"text": "Following is the code −"
},
{
"code": null,
"e": 2321,
"s": 1734,
"text": "const students = [\n { name: 'Andy', total: 40 },\n { name: 'Seric', total: 50 },\n { name: 'Stephen', total: 85 },\n { name: 'David', total: 30 },\n { name: 'Phil', total: 40 },\n { name: 'Eric', total: 82 },\n { name: 'Cameron', total: 30 },\n { name: 'Geoff', total: 30 }\n];\nconst pickHighest = arr => {\n const res = {\n name: '',\n total: -Infinity\n };\n arr.forEach(el => {\n const { name, total } = el;\n if(total > res.total){\n res.name = name;\n res.total = total;\n };\n });\n return res;\n};\nconsole.log(pickHighest(students));"
},
{
"code": null,
"e": 2373,
"s": 2321,
"text": "This will produce the following output on console −"
},
{
"code": null,
"e": 2404,
"s": 2373,
"text": "{ name: 'Stephen', total: 85 }"
}
] |
What is Control Statements? | Control statements are the statements that change the flow of execution of statements. For example, If, If-else, Switch-Case, while-do statements. In programming languages, Boolean expressions are used to
Alter the flow of control− Boolean expressions are used as conditional expressions in a statement that alter the flow of control. The value of such Boolean expression is implicit in a position reached in a program. For example, if (E) S, the expression E should be true if statement S is reached.
Alter the flow of control− Boolean expressions are used as conditional expressions in a statement that alter the flow of control. The value of such Boolean expression is implicit in a position reached in a program. For example, if (E) S, the expression E should be true if statement S is reached.
Compute logical values− A Boolean expression can define true or false values. Such Boolean expressions can be computed in parallel to arithmetic expressions using three address instruction with logical operators.
Compute logical values− A Boolean expression can define true or false values. Such Boolean expressions can be computed in parallel to arithmetic expressions using three address instruction with logical operators.
The designed use of the Boolean expression is decided by its syntactic context. For example, an expression following the keyword ‘if’ is used to alter the flow of control while an expression on the right side of an assignment can indicate a logical value. Such syntactic contexts can be defined in several ways. It can use several nonterminals,
use inherited attributes, or set a flag during parsing. It can generate a syntax tree and invoke multiple procedures for the two different uses of Boolean expressions.
Example1− Generate three Address Code for
while (a < b) do
If (c < d) then
x = y + z
else
x = y – z.
Solution
(1) If a < b goto(3)
(2) goto(11)
(3) If c < d goto(5)
(4) goto(8)
(5) t1 = y + z
(6) x = t1
(7) goto(1)
(8) t2 = y − z
(9) x = t2
(10) goto(1)
Example2− Generate three Address Code for
while (A < C and B < D) do
If A=1 then C = C + 1
else while A≤D do A = A + 2.
Solution
(1) If A < C goto(3)
(2) goto(15)
(3) If B < D goto(5)
(4) goto(15)
(5) If A = 1 goto(7)
(6) goto (10)
(7) T1 = C + 1
(8) C = T1
(9) goto (1)
(10) If A ≤ D goto (12)
(11) goto (1)
(12) T2 = A + 2
(13) A = T2
(14) goto (10)
Example3− Generate three Address Code for
Switch (a)
{
Case 1−
b = c + d
break;
Case 2−
e = f + g
break;
}
Solution
(1) If a = 1 goto(3)
(2) If a = 2 goto (6)
(3) t1 = c + d
(4) b = t1
(5) goto(9)
(6) t2 = f + g
(7) e = t2
(8) goto(9)
Example4− Generate three address code for
while (i<5)
{
i=i+1;
}
Solution
(1)if i<5 goto(3)
(2) goto (5)
(3) i = i + 1
(4) goto (1)
Example5− Convert the following control statements into Three Address Code.
If (A < B OR C > D)
X = X + Y + Z
Solution
(1) If A < B goto(4)
(2) If C > D goto(4)
(3) goto(7)
(4) T1− = X + Y
(5) T2 = T1 + Z
(6) X− = T2 | [
{
"code": null,
"e": 1267,
"s": 1062,
"text": "Control statements are the statements that change the flow of execution of statements. For example, If, If-else, Switch-Case, while-do statements. In programming languages, Boolean expressions are used to"
},
{
"code": null,
"e": 1564,
"s": 1267,
"text": "Alter the flow of control− Boolean expressions are used as conditional expressions in a statement that alter the flow of control. The value of such Boolean expression is implicit in a position reached in a program. For example, if (E) S, the expression E should be true if statement S is reached."
},
{
"code": null,
"e": 1861,
"s": 1564,
"text": "Alter the flow of control− Boolean expressions are used as conditional expressions in a statement that alter the flow of control. The value of such Boolean expression is implicit in a position reached in a program. For example, if (E) S, the expression E should be true if statement S is reached."
},
{
"code": null,
"e": 2074,
"s": 1861,
"text": "Compute logical values− A Boolean expression can define true or false values. Such Boolean expressions can be computed in parallel to arithmetic expressions using three address instruction with logical operators."
},
{
"code": null,
"e": 2287,
"s": 2074,
"text": "Compute logical values− A Boolean expression can define true or false values. Such Boolean expressions can be computed in parallel to arithmetic expressions using three address instruction with logical operators."
},
{
"code": null,
"e": 2800,
"s": 2287,
"text": "The designed use of the Boolean expression is decided by its syntactic context. For example, an expression following the keyword ‘if’ is used to alter the flow of control while an expression on the right side of an assignment can indicate a logical value. Such syntactic contexts can be defined in several ways. It can use several nonterminals,\nuse inherited attributes, or set a flag during parsing. It can generate a syntax tree and invoke multiple procedures for the two different uses of Boolean expressions."
},
{
"code": null,
"e": 2842,
"s": 2800,
"text": "Example1− Generate three Address Code for"
},
{
"code": null,
"e": 2859,
"s": 2842,
"text": "while (a < b) do"
},
{
"code": null,
"e": 2875,
"s": 2859,
"text": "If (c < d) then"
},
{
"code": null,
"e": 2888,
"s": 2875,
"text": " x = y + z"
},
{
"code": null,
"e": 2893,
"s": 2888,
"text": "else"
},
{
"code": null,
"e": 2907,
"s": 2893,
"text": " x = y – z."
},
{
"code": null,
"e": 2916,
"s": 2907,
"text": "Solution"
},
{
"code": null,
"e": 2938,
"s": 2916,
"text": "(1) If a < b goto(3) "
},
{
"code": null,
"e": 2952,
"s": 2938,
"text": "(2) goto(11) "
},
{
"code": null,
"e": 2974,
"s": 2952,
"text": "(3) If c < d goto(5) "
},
{
"code": null,
"e": 2987,
"s": 2974,
"text": "(4) goto(8) "
},
{
"code": null,
"e": 3003,
"s": 2987,
"text": "(5) t1 = y + z "
},
{
"code": null,
"e": 3015,
"s": 3003,
"text": "(6) x = t1 "
},
{
"code": null,
"e": 3028,
"s": 3015,
"text": "(7) goto(1) "
},
{
"code": null,
"e": 3044,
"s": 3028,
"text": "(8) t2 = y − z "
},
{
"code": null,
"e": 3056,
"s": 3044,
"text": "(9) x = t2 "
},
{
"code": null,
"e": 3069,
"s": 3056,
"text": "(10) goto(1)"
},
{
"code": null,
"e": 3111,
"s": 3069,
"text": "Example2− Generate three Address Code for"
},
{
"code": null,
"e": 3189,
"s": 3111,
"text": "while (A < C and B < D) do\nIf A=1 then C = C + 1\nelse while A≤D do A = A + 2."
},
{
"code": null,
"e": 3198,
"s": 3189,
"text": "Solution"
},
{
"code": null,
"e": 3219,
"s": 3198,
"text": "(1) If A < C goto(3)"
},
{
"code": null,
"e": 3232,
"s": 3219,
"text": "(2) goto(15)"
},
{
"code": null,
"e": 3253,
"s": 3232,
"text": "(3) If B < D goto(5)"
},
{
"code": null,
"e": 3266,
"s": 3253,
"text": "(4) goto(15)"
},
{
"code": null,
"e": 3287,
"s": 3266,
"text": "(5) If A = 1 goto(7)"
},
{
"code": null,
"e": 3301,
"s": 3287,
"text": "(6) goto (10)"
},
{
"code": null,
"e": 3316,
"s": 3301,
"text": "(7) T1 = C + 1"
},
{
"code": null,
"e": 3327,
"s": 3316,
"text": "(8) C = T1"
},
{
"code": null,
"e": 3340,
"s": 3327,
"text": "(9) goto (1)"
},
{
"code": null,
"e": 3364,
"s": 3340,
"text": "(10) If A ≤ D goto (12)"
},
{
"code": null,
"e": 3378,
"s": 3364,
"text": "(11) goto (1)"
},
{
"code": null,
"e": 3394,
"s": 3378,
"text": "(12) T2 = A + 2"
},
{
"code": null,
"e": 3406,
"s": 3394,
"text": "(13) A = T2"
},
{
"code": null,
"e": 3421,
"s": 3406,
"text": "(14) goto (10)"
},
{
"code": null,
"e": 3463,
"s": 3421,
"text": "Example3− Generate three Address Code for"
},
{
"code": null,
"e": 3558,
"s": 3463,
"text": "Switch (a)\n{\n Case 1−\n b = c + d\n break;\n Case 2−\n e = f + g\n break;\n}"
},
{
"code": null,
"e": 3567,
"s": 3558,
"text": "Solution"
},
{
"code": null,
"e": 3588,
"s": 3567,
"text": "(1) If a = 1 goto(3)"
},
{
"code": null,
"e": 3610,
"s": 3588,
"text": "(2) If a = 2 goto (6)"
},
{
"code": null,
"e": 3625,
"s": 3610,
"text": "(3) t1 = c + d"
},
{
"code": null,
"e": 3636,
"s": 3625,
"text": "(4) b = t1"
},
{
"code": null,
"e": 3648,
"s": 3636,
"text": "(5) goto(9)"
},
{
"code": null,
"e": 3663,
"s": 3648,
"text": "(6) t2 = f + g"
},
{
"code": null,
"e": 3674,
"s": 3663,
"text": "(7) e = t2"
},
{
"code": null,
"e": 3686,
"s": 3674,
"text": "(8) goto(9)"
},
{
"code": null,
"e": 3728,
"s": 3686,
"text": "Example4− Generate three address code for"
},
{
"code": null,
"e": 3754,
"s": 3728,
"text": "while (i<5)\n{\n i=i+1;\n}"
},
{
"code": null,
"e": 3763,
"s": 3754,
"text": "Solution"
},
{
"code": null,
"e": 3781,
"s": 3763,
"text": "(1)if i<5 goto(3)"
},
{
"code": null,
"e": 3794,
"s": 3781,
"text": "(2) goto (5)"
},
{
"code": null,
"e": 3808,
"s": 3794,
"text": "(3) i = i + 1"
},
{
"code": null,
"e": 3821,
"s": 3808,
"text": "(4) goto (1)"
},
{
"code": null,
"e": 3897,
"s": 3821,
"text": "Example5− Convert the following control statements into Three Address Code."
},
{
"code": null,
"e": 3931,
"s": 3897,
"text": "If (A < B OR C > D)\nX = X + Y + Z"
},
{
"code": null,
"e": 3940,
"s": 3931,
"text": "Solution"
},
{
"code": null,
"e": 3961,
"s": 3940,
"text": "(1) If A < B goto(4)"
},
{
"code": null,
"e": 3982,
"s": 3961,
"text": "(2) If C > D goto(4)"
},
{
"code": null,
"e": 3994,
"s": 3982,
"text": "(3) goto(7)"
},
{
"code": null,
"e": 4010,
"s": 3994,
"text": "(4) T1− = X + Y"
},
{
"code": null,
"e": 4026,
"s": 4010,
"text": "(5) T2 = T1 + Z"
},
{
"code": null,
"e": 4038,
"s": 4026,
"text": "(6) X− = T2"
}
] |
H2 Database - Merge | MERGE command is used to update the existing rows and insert new rows into a table. The primary key column plays an important role while using this command; it is used to find the row.
Following is the generic syntax of the MERGE command.
MERGE INTO tableName [ ( columnName [,...] ) ]
[ KEY ( columnName [,...] ) ]
{ VALUES { ( { DEFAULT | expression } [,...] ) } [,...] | select }
In the above syntax, the KEY clause is used to specify the primary key column name. Along with VALUES clause, we can use primitive values to insert or we can retrieve and store another table values into this table using the select command.
In this example, let us try to add a new record into Customers table. Following are the details of the new record in the table.
Using the following query, let us insert the given record into the H2 database query.
MERGE INTO CUSTOMER KEY (ID) VALUES (8, 'Lokesh', 32, 'Hyderabad', 2500);
The above query produces the following output.
Update count: 1
Let us verify the records of the Customer table by executing the following query.
SELECT * FROM CUSTOMER;
The above query produces the following output.
Now let us try to update the record using the Merge command. Following are the details of the record to be updated.
Use the following query to insert the given record into the H2 database query.
MERGE INTO CUSTOMER KEY (ID) VALUES (8, 'Loki', 32, 'Hyderabad', 3000);
The above query produces the following output.
Update count: 1
Let us verify the records of the Customer table by executing the following query.
SELECT * FROM CUSTOMER;
The above query produces the following output −
14 Lectures
1 hours
Mahesh Kumar
100 Lectures
9.5 hours
Hari Om Singh
108 Lectures
8 hours
Pavan Lalwani
10 Lectures
1 hours
Deepti Trivedi
20 Lectures
2 hours
Deepti Trivedi
14 Lectures
1 hours
Deepti Trivedi
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2293,
"s": 2107,
"text": "MERGE command is used to update the existing rows and insert new rows into a table. The primary key column plays an important role while using this command; it is used to find the row."
},
{
"code": null,
"e": 2347,
"s": 2293,
"text": "Following is the generic syntax of the MERGE command."
},
{
"code": null,
"e": 2495,
"s": 2347,
"text": "MERGE INTO tableName [ ( columnName [,...] ) ] \n[ KEY ( columnName [,...] ) ] \n{ VALUES { ( { DEFAULT | expression } [,...] ) } [,...] | select } \n"
},
{
"code": null,
"e": 2735,
"s": 2495,
"text": "In the above syntax, the KEY clause is used to specify the primary key column name. Along with VALUES clause, we can use primitive values to insert or we can retrieve and store another table values into this table using the select command."
},
{
"code": null,
"e": 2863,
"s": 2735,
"text": "In this example, let us try to add a new record into Customers table. Following are the details of the new record in the table."
},
{
"code": null,
"e": 2949,
"s": 2863,
"text": "Using the following query, let us insert the given record into the H2 database query."
},
{
"code": null,
"e": 3024,
"s": 2949,
"text": "MERGE INTO CUSTOMER KEY (ID) VALUES (8, 'Lokesh', 32, 'Hyderabad', 2500);\n"
},
{
"code": null,
"e": 3071,
"s": 3024,
"text": "The above query produces the following output."
},
{
"code": null,
"e": 3089,
"s": 3071,
"text": "Update count: 1 \n"
},
{
"code": null,
"e": 3171,
"s": 3089,
"text": "Let us verify the records of the Customer table by executing the following query."
},
{
"code": null,
"e": 3195,
"s": 3171,
"text": "SELECT * FROM CUSTOMER;"
},
{
"code": null,
"e": 3242,
"s": 3195,
"text": "The above query produces the following output."
},
{
"code": null,
"e": 3358,
"s": 3242,
"text": "Now let us try to update the record using the Merge command. Following are the details of the record to be updated."
},
{
"code": null,
"e": 3437,
"s": 3358,
"text": "Use the following query to insert the given record into the H2 database query."
},
{
"code": null,
"e": 3510,
"s": 3437,
"text": "MERGE INTO CUSTOMER KEY (ID) VALUES (8, 'Loki', 32, 'Hyderabad', 3000);\n"
},
{
"code": null,
"e": 3557,
"s": 3510,
"text": "The above query produces the following output."
},
{
"code": null,
"e": 3575,
"s": 3557,
"text": "Update count: 1 \n"
},
{
"code": null,
"e": 3657,
"s": 3575,
"text": "Let us verify the records of the Customer table by executing the following query."
},
{
"code": null,
"e": 3683,
"s": 3657,
"text": "SELECT * FROM CUSTOMER; \n"
},
{
"code": null,
"e": 3731,
"s": 3683,
"text": "The above query produces the following output −"
},
{
"code": null,
"e": 3764,
"s": 3731,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3778,
"s": 3764,
"text": " Mahesh Kumar"
},
{
"code": null,
"e": 3814,
"s": 3778,
"text": "\n 100 Lectures \n 9.5 hours \n"
},
{
"code": null,
"e": 3829,
"s": 3814,
"text": " Hari Om Singh"
},
{
"code": null,
"e": 3863,
"s": 3829,
"text": "\n 108 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 3878,
"s": 3863,
"text": " Pavan Lalwani"
},
{
"code": null,
"e": 3911,
"s": 3878,
"text": "\n 10 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 3927,
"s": 3911,
"text": " Deepti Trivedi"
},
{
"code": null,
"e": 3960,
"s": 3927,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 3976,
"s": 3960,
"text": " Deepti Trivedi"
},
{
"code": null,
"e": 4009,
"s": 3976,
"text": "\n 14 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 4025,
"s": 4009,
"text": " Deepti Trivedi"
},
{
"code": null,
"e": 4032,
"s": 4025,
"text": " Print"
},
{
"code": null,
"e": 4043,
"s": 4032,
"text": " Add Notes"
}
] |
Spring MVC - Internal Resource View Resolver Example | The InternalResourceViewResolver is used to resolve the provided URI to actual URI. The following example shows how to use the InternalResourceViewResolver using the Spring Web MVC Framework. The InternalResourceViewResolver allows mapping webpages with requests.
package com.tutorialspoint;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.ui.ModelMap;
@Controller
@RequestMapping("/hello")
public class HelloController{
@RequestMapping(method = RequestMethod.GET)
public String printHello(ModelMap model) {
model.addAttribute("message", "Hello Spring MVC Framework!");
return "hello";
}
}
<bean class = "org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name = "prefix" value = "/WEB-INF/jsp/"/>
<property name = "suffix" value = ".jsp"/>
</bean>
For example, using the above configuration, if URI
/hello is requested, DispatcherServlet will forward the request to the prefix + viewname + suffix = /WEB-INF/jsp/hello.jsp.
/hello is requested, DispatcherServlet will forward the request to the prefix + viewname + suffix = /WEB-INF/jsp/hello.jsp.
To start with, let us have a working Eclipse IDE in place and then consider the following steps to develop a Dynamic Form based Web Application using the Spring Web Framework.
package com.tutorialspoint;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.ui.ModelMap;
@Controller
@RequestMapping("/hello")
public class HelloController{
@RequestMapping(method = RequestMethod.GET)
public String printHello(ModelMap model) {
model.addAttribute("message", "Hello Spring MVC Framework!");
return "hello";
}
}
<beans xmlns = "http://www.springframework.org/schema/beans"
xmlns:context = "http://www.springframework.org/schema/context"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-3.0.xsd">
<context:component-scan base-package = "com.tutorialspoint" />
<bean class = "org.springframework.web.servlet.view.InternalResourceViewResolver">
<property name = "prefix" value = "/WEB-INF/jsp/" />
<property name = "suffix" value = ".jsp" />
</bean>
</beans>
<%@ page contentType = "text/html; charset = UTF-8" %>
<html>
<head>
<title>Hello World</title>
</head>
<body>
<h2>${message}</h2>
</body>
</html>
Once you are done with creating source and configuration files, export your application. Right click on your application, use Export → WAR File option and save the TestWeb.war file in Tomcat's webapps folder.
Now, start your Tomcat server and make sure you are able to access other webpages from the webapps folder using a standard browser. Try to access the URL – http://localhost:8080/TestWeb/hello and if everything is fine with the Spring Web Application, we will see the following screen.
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 3055,
"s": 2791,
"text": "The InternalResourceViewResolver is used to resolve the provided URI to actual URI. The following example shows how to use the InternalResourceViewResolver using the Spring Web MVC Framework. The InternalResourceViewResolver allows mapping webpages with requests."
},
{
"code": null,
"e": 3561,
"s": 3055,
"text": "package com.tutorialspoint;\n\nimport org.springframework.stereotype.Controller;\nimport org.springframework.web.bind.annotation.RequestMapping;\nimport org.springframework.web.bind.annotation.RequestMethod;\nimport org.springframework.ui.ModelMap;\n\n@Controller\n@RequestMapping(\"/hello\")\npublic class HelloController{\n \n @RequestMapping(method = RequestMethod.GET)\n public String printHello(ModelMap model) {\n model.addAttribute(\"message\", \"Hello Spring MVC Framework!\");\n\n return \"hello\";\n }\n}"
},
{
"code": null,
"e": 3753,
"s": 3561,
"text": "<bean class = \"org.springframework.web.servlet.view.InternalResourceViewResolver\">\n <property name = \"prefix\" value = \"/WEB-INF/jsp/\"/>\n <property name = \"suffix\" value = \".jsp\"/>\n</bean>"
},
{
"code": null,
"e": 3804,
"s": 3753,
"text": "For example, using the above configuration, if URI"
},
{
"code": null,
"e": 3928,
"s": 3804,
"text": "/hello is requested, DispatcherServlet will forward the request to the prefix + viewname + suffix = /WEB-INF/jsp/hello.jsp."
},
{
"code": null,
"e": 4052,
"s": 3928,
"text": "/hello is requested, DispatcherServlet will forward the request to the prefix + viewname + suffix = /WEB-INF/jsp/hello.jsp."
},
{
"code": null,
"e": 4228,
"s": 4052,
"text": "To start with, let us have a working Eclipse IDE in place and then consider the following steps to develop a Dynamic Form based Web Application using the Spring Web Framework."
},
{
"code": null,
"e": 4735,
"s": 4228,
"text": "package com.tutorialspoint;\n\nimport org.springframework.stereotype.Controller;\nimport org.springframework.web.bind.annotation.RequestMapping;\nimport org.springframework.web.bind.annotation.RequestMethod;\nimport org.springframework.ui.ModelMap;\n\n@Controller\n@RequestMapping(\"/hello\")\npublic class HelloController{\n \n @RequestMapping(method = RequestMethod.GET)\n public String printHello(ModelMap model) {\n model.addAttribute(\"message\", \"Hello Spring MVC Framework!\");\n\n return \"hello\";\n }\n\n}"
},
{
"code": null,
"e": 5477,
"s": 4735,
"text": "<beans xmlns = \"http://www.springframework.org/schema/beans\"\n xmlns:context = \"http://www.springframework.org/schema/context\"\n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation = \"\n http://www.springframework.org/schema/beans \n http://www.springframework.org/schema/beans/spring-beans-3.0.xsd\n http://www.springframework.org/schema/context \n http://www.springframework.org/schema/context/spring-context-3.0.xsd\">\n\n <context:component-scan base-package = \"com.tutorialspoint\" />\n\n <bean class = \"org.springframework.web.servlet.view.InternalResourceViewResolver\">\n <property name = \"prefix\" value = \"/WEB-INF/jsp/\" />\n <property name = \"suffix\" value = \".jsp\" />\n </bean>\n \n</beans>"
},
{
"code": null,
"e": 5648,
"s": 5477,
"text": "<%@ page contentType = \"text/html; charset = UTF-8\" %>\n<html>\n <head>\n <title>Hello World</title>\n </head>\n <body>\n <h2>${message}</h2>\n </body>\n</html>"
},
{
"code": null,
"e": 5857,
"s": 5648,
"text": "Once you are done with creating source and configuration files, export your application. Right click on your application, use Export → WAR File option and save the TestWeb.war file in Tomcat's webapps folder."
},
{
"code": null,
"e": 6142,
"s": 5857,
"text": "Now, start your Tomcat server and make sure you are able to access other webpages from the webapps folder using a standard browser. Try to access the URL – http://localhost:8080/TestWeb/hello and if everything is fine with the Spring Web Application, we will see the following screen."
},
{
"code": null,
"e": 6149,
"s": 6142,
"text": " Print"
},
{
"code": null,
"e": 6160,
"s": 6149,
"text": " Add Notes"
}
] |
Part 1 — Initial Setup + Depth (OpenCV Spatial AI Competition Journey)| by Ibai Gorordo | Aug, 2020 | Towards Data Science | Update: This article is part of a series where I will be documenting my journey on the development of a social distancing feedback system for the blind as part of the OpenCV Spatial Competition. Check out the full series: Part 1, Part 2.
Recently, the people at OpenCV launched the OpenCV Spatial AI competition sponsored by Intel as part of OpenCV’s 20th anniversary celebration. The main objective of the competition is to develop applications that benefit from the features of the new OpenCV AI Kit with Depth (OAK-D). The competition consists of two phases, the winners of the first phase were able to obtain an OAK-D for free to develop their application and the second phase winners will receive a cash prize of up to $3,000.
The OAK-D contains a 12 MP RGB camera for deep neural inference and a stereo camera for depth estimation in real time using Intel’s Myriad X Vision Processing Unit (VPU).
If you want to know more about the OAK-D, make sure to check the interview by Ritesh Kanjee to Brandon Gilles, who is the Chief Architect of the OpenCV AI Kit. The kit has raised over $800,000 as part of their Kickstarter capaign with mote than 4,000 supporters. If you are interested, you can also find more about the kit in Luxoni’s community slack channel (https://luxonis-community.slack.com/)
Due to the interesting features that the OAK-D, I decided to apply for the OpenCV Spatial AI competition and was lucky enough to be selected as one of the winners of Phase 1. You can also check the projects for the rest of the Phase 1 winners here.
This publication is part of a series of post where I will be describing my journey developing with the new OAK-D as part of my competition project.
The title of my proposal is “Social distancing feedback for visually impaired people using a wearable camera”. Due to the current worldwide outbreak of COVID-19, social distancing has become a new social norm as a measure to prevent the widespread of the pandemic.
However, visually impaired people are struggling to keep independence in the new socially distanced normal1,2. For blind people, it is not possible to easily confirm if they are keeping the social distance with the people around them. As an example, a video in the Royal National Institute of Blind People (RNIB) Twitter account showed the difficulties blind people are struggling with in their daily life due to social distancing.
Moreover, common solutions for the blind such as white cane or dog cannot assist the blind to keep the social distance. Even worse, blind people cannot know if the people close to them is wearing a mask or not, thus they suffer a higher risk of infection.
For those reasons, the objective of my project is to develop a feedback system for the blind that informs about the distance to other people around and whether someone is not wearing a mask.
For that type of project, where the depth and Artificial Intelligence needs to be combined in real time, the OAK-D is the ideal system. As shown in one example of the DepthAI experiments, the OAK-D is able to detect in real time the position of the faces in an image and whether they are wearing a mask or not. By combining this information with the depth information obtained from the stereo cameras, it is possible to estimate the position of the people around the user and if someone is not wearing a mask.
Then, the system will inform the user about the distance to the people around using five haptic motors attached to the OAK-D board. The haptic motors will be related to 5 direction angles: -40, -20, 0, 20 and 40 degrees. For example, if the system detects that there is a person near at an angle of -20 degrees (as in the image above), then the second motor from the left will vibrate. Moreover, in order to inform about how close the person is, the intensity of the motor will change as the detected person gets closers. Finally, if the system detect that there is a person not wearing a mask, the system will inform the user by changing the vibration pattern.
This week I received the OpenCV AI Kit. As shown in the image below, the kit contains a OAK-D, a USB-C cable, a 5V (3A) wall charger and a 3D printed GoPro mount.
The OAK-D has a small size (46 x 100 mm) with a T shape. Actually, the lower part of the board matches with the width of the Raspberry Pi Zero, so the system combining both boards can have a compact size as shown in the image below.
In order to connect with the OAK-D, the people at Luxonis have developed the DepthAI Python API. The DepthAI API is open source and can run in different Operating Systems including Ubuntu, Raspbian and macOS. In the case of Windows 10, as of today (August 8, 2020) it is still experimental. However, following the instructions descibed in here, the process was quite easy. One important note, if you do not want to have to compile the API, make sure to use the Python 3.7 (32 bit). I tried to use the Python 3.8 (32 bit) but it did not work correctly, so make sure you are using the correct Python version.
Once I installed the depthAI API and its dependencies, I was able to run the default demo by running the following command (make sure to be in the depthai folder):
python depthai.py
This demo by default runs the MobileNet SSD object detection model that can detect 20 different types of objects (bicycle, car, cat...) inside an image. Moreover, the demo combines the bounding box of the detected object with the depth information of the stereo cameras to provide the 3D position of each detected object. As an example, below, I show an output of the demo for the detection of a water bottle, which is one of the classes that can detect the default demo model.
The demo was able to track the object and calculate the depth at 30 fps without any problem. By looking at the Python code of the depthai.py script, I saw that the demo can be configured to other modes by add arguments when running the demo. For example, running the following code it is possible to obtain the colorized depth (Note: only works for the Refactory version for Windows 10, in the original repository the configuration has changed):
python depthai.py --streams depth_color_h
Overall, the depth looks pretty good with some black regions on the left of the background. However, that region contains glass panels and probably the stereo camera system cannot extract many features from it, so that why no depth was provided for those regions.
Even though the depth estimation of the OAK-D is not its main feature, I wanted to compare the depth estimation of the OAK-D with the latest Azure Kinect DK. For that purpose, I modified wrote a small Python script (hello_depth.py) that reads the raw deoth values and displays the depth as in the Azure Kinect.
As for the Azure Kinect, I used the depth estimation example program in my Python repository for the Azure Kinect SDK. In the image below the estimated depth for both devices is compared.
As it can be observed, even though the OAK-D uses a estereo camera the results are very good. Particularly, in the case of the table, the OAK-D was able to estimate the depth correctly whereas the TOF sensor in the Azure Kinect failed.
This is all for this first part, in the next part I will test the face mask detection example using the OAK-D. I will also be uploading all the scripts for this project in my repository https://github.com/ibaiGorordo/Social-Distance-Feedback.
[1]: Gus Alexiou. (June 7 2020). Blind People’s Social Distancing Nightmare To Intensify As Lockdowns Ease, https://medium.com/@arinbasu/you-can-use-footnotes-thus-babus%C2%B9-6c485c4eff1e
[2]: BBC news. (June 24 2020). Social distancing a ‘struggle’ for those visually impaired, https://medium.com/@arinbasu/you-can-use-footnotes-thus-babus%C2%B9-6c485c4eff1e | [
{
"code": null,
"e": 409,
"s": 171,
"text": "Update: This article is part of a series where I will be documenting my journey on the development of a social distancing feedback system for the blind as part of the OpenCV Spatial Competition. Check out the full series: Part 1, Part 2."
},
{
"code": null,
"e": 903,
"s": 409,
"text": "Recently, the people at OpenCV launched the OpenCV Spatial AI competition sponsored by Intel as part of OpenCV’s 20th anniversary celebration. The main objective of the competition is to develop applications that benefit from the features of the new OpenCV AI Kit with Depth (OAK-D). The competition consists of two phases, the winners of the first phase were able to obtain an OAK-D for free to develop their application and the second phase winners will receive a cash prize of up to $3,000."
},
{
"code": null,
"e": 1074,
"s": 903,
"text": "The OAK-D contains a 12 MP RGB camera for deep neural inference and a stereo camera for depth estimation in real time using Intel’s Myriad X Vision Processing Unit (VPU)."
},
{
"code": null,
"e": 1472,
"s": 1074,
"text": "If you want to know more about the OAK-D, make sure to check the interview by Ritesh Kanjee to Brandon Gilles, who is the Chief Architect of the OpenCV AI Kit. The kit has raised over $800,000 as part of their Kickstarter capaign with mote than 4,000 supporters. If you are interested, you can also find more about the kit in Luxoni’s community slack channel (https://luxonis-community.slack.com/)"
},
{
"code": null,
"e": 1721,
"s": 1472,
"text": "Due to the interesting features that the OAK-D, I decided to apply for the OpenCV Spatial AI competition and was lucky enough to be selected as one of the winners of Phase 1. You can also check the projects for the rest of the Phase 1 winners here."
},
{
"code": null,
"e": 1869,
"s": 1721,
"text": "This publication is part of a series of post where I will be describing my journey developing with the new OAK-D as part of my competition project."
},
{
"code": null,
"e": 2134,
"s": 1869,
"text": "The title of my proposal is “Social distancing feedback for visually impaired people using a wearable camera”. Due to the current worldwide outbreak of COVID-19, social distancing has become a new social norm as a measure to prevent the widespread of the pandemic."
},
{
"code": null,
"e": 2566,
"s": 2134,
"text": "However, visually impaired people are struggling to keep independence in the new socially distanced normal1,2. For blind people, it is not possible to easily confirm if they are keeping the social distance with the people around them. As an example, a video in the Royal National Institute of Blind People (RNIB) Twitter account showed the difficulties blind people are struggling with in their daily life due to social distancing."
},
{
"code": null,
"e": 2822,
"s": 2566,
"text": "Moreover, common solutions for the blind such as white cane or dog cannot assist the blind to keep the social distance. Even worse, blind people cannot know if the people close to them is wearing a mask or not, thus they suffer a higher risk of infection."
},
{
"code": null,
"e": 3013,
"s": 2822,
"text": "For those reasons, the objective of my project is to develop a feedback system for the blind that informs about the distance to other people around and whether someone is not wearing a mask."
},
{
"code": null,
"e": 3523,
"s": 3013,
"text": "For that type of project, where the depth and Artificial Intelligence needs to be combined in real time, the OAK-D is the ideal system. As shown in one example of the DepthAI experiments, the OAK-D is able to detect in real time the position of the faces in an image and whether they are wearing a mask or not. By combining this information with the depth information obtained from the stereo cameras, it is possible to estimate the position of the people around the user and if someone is not wearing a mask."
},
{
"code": null,
"e": 4185,
"s": 3523,
"text": "Then, the system will inform the user about the distance to the people around using five haptic motors attached to the OAK-D board. The haptic motors will be related to 5 direction angles: -40, -20, 0, 20 and 40 degrees. For example, if the system detects that there is a person near at an angle of -20 degrees (as in the image above), then the second motor from the left will vibrate. Moreover, in order to inform about how close the person is, the intensity of the motor will change as the detected person gets closers. Finally, if the system detect that there is a person not wearing a mask, the system will inform the user by changing the vibration pattern."
},
{
"code": null,
"e": 4348,
"s": 4185,
"text": "This week I received the OpenCV AI Kit. As shown in the image below, the kit contains a OAK-D, a USB-C cable, a 5V (3A) wall charger and a 3D printed GoPro mount."
},
{
"code": null,
"e": 4581,
"s": 4348,
"text": "The OAK-D has a small size (46 x 100 mm) with a T shape. Actually, the lower part of the board matches with the width of the Raspberry Pi Zero, so the system combining both boards can have a compact size as shown in the image below."
},
{
"code": null,
"e": 5188,
"s": 4581,
"text": "In order to connect with the OAK-D, the people at Luxonis have developed the DepthAI Python API. The DepthAI API is open source and can run in different Operating Systems including Ubuntu, Raspbian and macOS. In the case of Windows 10, as of today (August 8, 2020) it is still experimental. However, following the instructions descibed in here, the process was quite easy. One important note, if you do not want to have to compile the API, make sure to use the Python 3.7 (32 bit). I tried to use the Python 3.8 (32 bit) but it did not work correctly, so make sure you are using the correct Python version."
},
{
"code": null,
"e": 5352,
"s": 5188,
"text": "Once I installed the depthAI API and its dependencies, I was able to run the default demo by running the following command (make sure to be in the depthai folder):"
},
{
"code": null,
"e": 5370,
"s": 5352,
"text": "python depthai.py"
},
{
"code": null,
"e": 5848,
"s": 5370,
"text": "This demo by default runs the MobileNet SSD object detection model that can detect 20 different types of objects (bicycle, car, cat...) inside an image. Moreover, the demo combines the bounding box of the detected object with the depth information of the stereo cameras to provide the 3D position of each detected object. As an example, below, I show an output of the demo for the detection of a water bottle, which is one of the classes that can detect the default demo model."
},
{
"code": null,
"e": 6294,
"s": 5848,
"text": "The demo was able to track the object and calculate the depth at 30 fps without any problem. By looking at the Python code of the depthai.py script, I saw that the demo can be configured to other modes by add arguments when running the demo. For example, running the following code it is possible to obtain the colorized depth (Note: only works for the Refactory version for Windows 10, in the original repository the configuration has changed):"
},
{
"code": null,
"e": 6336,
"s": 6294,
"text": "python depthai.py --streams depth_color_h"
},
{
"code": null,
"e": 6600,
"s": 6336,
"text": "Overall, the depth looks pretty good with some black regions on the left of the background. However, that region contains glass panels and probably the stereo camera system cannot extract many features from it, so that why no depth was provided for those regions."
},
{
"code": null,
"e": 6911,
"s": 6600,
"text": "Even though the depth estimation of the OAK-D is not its main feature, I wanted to compare the depth estimation of the OAK-D with the latest Azure Kinect DK. For that purpose, I modified wrote a small Python script (hello_depth.py) that reads the raw deoth values and displays the depth as in the Azure Kinect."
},
{
"code": null,
"e": 7099,
"s": 6911,
"text": "As for the Azure Kinect, I used the depth estimation example program in my Python repository for the Azure Kinect SDK. In the image below the estimated depth for both devices is compared."
},
{
"code": null,
"e": 7335,
"s": 7099,
"text": "As it can be observed, even though the OAK-D uses a estereo camera the results are very good. Particularly, in the case of the table, the OAK-D was able to estimate the depth correctly whereas the TOF sensor in the Azure Kinect failed."
},
{
"code": null,
"e": 7578,
"s": 7335,
"text": "This is all for this first part, in the next part I will test the face mask detection example using the OAK-D. I will also be uploading all the scripts for this project in my repository https://github.com/ibaiGorordo/Social-Distance-Feedback."
},
{
"code": null,
"e": 7767,
"s": 7578,
"text": "[1]: Gus Alexiou. (June 7 2020). Blind People’s Social Distancing Nightmare To Intensify As Lockdowns Ease, https://medium.com/@arinbasu/you-can-use-footnotes-thus-babus%C2%B9-6c485c4eff1e"
}
] |
Topic Modeling in One Line with Top2Vec | by Jiahao Weng | Towards Data Science | Topic modeling was not always so easy. Not long ago, the prevalent method for topic modeling was Latent Dirichlet Allocation (LDA). Using LDA with Gensim is simple to code, but the results... I often struggled to get any useful insights. It, therefore, impressed me when I was exploring the use of the module Top2Vec and found it to be so easy to use and the results so useful.
Released in March 2020, Top2Vec is created by Dimo Angelov, with its paper published on ArXiv in August 2020. Despite being new, the algorithms used by Top2Vec are well-established — Doc2Vec, UMAP, HDBSCAN. It also supports the use of embedding models like Universal Sentence Encoder and BERT.
In this article, we shall look at the high level workings of Top2Vec and illustrate the use of Top2Vec through topic modeling of hotel reviews. You can imagine this being useful to the hotel management by organizing the reviews for understanding key aspects of customer feedback, and needless to say, this can be applied to other kinds of reviews too.
Top2Vec automatically detects the topics present in text and does not require traditional text preprocessing like stop-words removal, stemming or lemmatization.
There are three key steps taken by Top2Vec.
Given a list of documents, Top2Vec converts each document to a numeric representation (or document vector) through Doc2Vec or a pre-trained model. Similarly, each unique word will also have a numeric representation (word vector).
The numeric representations are created such that the semantic contents of the documents are captured, and similar documents and words will be close to each other. In the example below, we see that the documents (blue points) and words (green points) relating to washroom are close to each other while those related to parking bunch together.
With the numeric representations, we could have proceeded to cluster the documents to find the topics, but document vectors in high dimensional space tend to be sparse. As such, dimensionality reduction is performed using UMAP (Uniform Manifold Approximation and Projection) to help find dense areas. In the paper, the author finds 5 dimensions to give the best resultsfor the downstream task of clustering.
After compressing the numeric representations into a lower dimensional space, we are now ready to find dense areas of documents using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise).
In the diagram below, each dot represents a document and the dense areas of documents are colored, with each area representing a topic. The gray dots are outliers not belonging to any cluster.
For each dense area, a numeric representation of the topic (topic vector) is then obtained by taking the arithmetic mean of all the document vectors in the same cluster. And finally, each document is assigned a topic number based on the nearest topic vector to its document vector.
With a better understanding of Top2Vec, we are now ready to dive into our hotel reviews example.
The dataset that we are using can be downloaded from Kaggle and contains 515,000 customer reviews of luxury hotels across Europe.
It contains various columns but we will just be using one column in our illustration — Negative_Review.
An example of Negative_Review:
Room was not cleaned correctly Wine Champagne glasses left dirty in the room the floors felt dirty and the shower drain was clogged From the day we arrived
After some cleaning up of the data, we are then ready to feed in our list of reviews to Top2Vec.
The code to run the modeling is as follows, where hotel_reviews is our data.
model = Top2Vec(documents=hotel_reviews)
And that’s it! This single line of code will pre-process the documents for training, create document and word vectors using Doc2Vec, perform dimensionality reduction on the vectors, and finally find the topics through clustering.
To use the pre-trained models instead, we just need to add the embedding_model parameter.
model = Top2Vec(documents=hotel_reviews, embedding_model='distiluse-base-multilingual-cased')
In our case, we experimented with Doc2Vec and other pre-trained models for creating the document vectors, and find that “distiluse-base-multilingual-cased” pre-trained model works best for our example.
We are now ready to interpret the topics. Top2Vec has methods that make evaluation easier by generating word clouds and retrieving the top similar documents for each topic.
Let’s look at our top 3 topics.
From the word clouds and sample reviews, we can clearly infer that topic 1 has to do with bad washrooms, topic 2 is about small rooms and topic 3 are complains on parking matters.
All these results were obtained without us telling the computer the topics to look for and with only a few lines of code. Power! 💪
If you would like to try it out on your own, our example notebook can be found here for your reference.
Through the hotel reviews example, we hope that you are now equipped with one more tool in your machine learning arsenal to perform topic modeling. Interestingly, I have read cases where similar methodology to the one by Top2Vec is extended to customer segmentation, with the main differences being each “word” is a retailer ID and each “document” shows the sequence of retailer IDs visited by the customer. An idea for the next project 😉.
Thanks for reading and I hope the article was useful :) Please also feel free to comment with any questions or suggestions that you may have. | [
{
"code": null,
"e": 550,
"s": 172,
"text": "Topic modeling was not always so easy. Not long ago, the prevalent method for topic modeling was Latent Dirichlet Allocation (LDA). Using LDA with Gensim is simple to code, but the results... I often struggled to get any useful insights. It, therefore, impressed me when I was exploring the use of the module Top2Vec and found it to be so easy to use and the results so useful."
},
{
"code": null,
"e": 844,
"s": 550,
"text": "Released in March 2020, Top2Vec is created by Dimo Angelov, with its paper published on ArXiv in August 2020. Despite being new, the algorithms used by Top2Vec are well-established — Doc2Vec, UMAP, HDBSCAN. It also supports the use of embedding models like Universal Sentence Encoder and BERT."
},
{
"code": null,
"e": 1196,
"s": 844,
"text": "In this article, we shall look at the high level workings of Top2Vec and illustrate the use of Top2Vec through topic modeling of hotel reviews. You can imagine this being useful to the hotel management by organizing the reviews for understanding key aspects of customer feedback, and needless to say, this can be applied to other kinds of reviews too."
},
{
"code": null,
"e": 1357,
"s": 1196,
"text": "Top2Vec automatically detects the topics present in text and does not require traditional text preprocessing like stop-words removal, stemming or lemmatization."
},
{
"code": null,
"e": 1401,
"s": 1357,
"text": "There are three key steps taken by Top2Vec."
},
{
"code": null,
"e": 1631,
"s": 1401,
"text": "Given a list of documents, Top2Vec converts each document to a numeric representation (or document vector) through Doc2Vec or a pre-trained model. Similarly, each unique word will also have a numeric representation (word vector)."
},
{
"code": null,
"e": 1974,
"s": 1631,
"text": "The numeric representations are created such that the semantic contents of the documents are captured, and similar documents and words will be close to each other. In the example below, we see that the documents (blue points) and words (green points) relating to washroom are close to each other while those related to parking bunch together."
},
{
"code": null,
"e": 2382,
"s": 1974,
"text": "With the numeric representations, we could have proceeded to cluster the documents to find the topics, but document vectors in high dimensional space tend to be sparse. As such, dimensionality reduction is performed using UMAP (Uniform Manifold Approximation and Projection) to help find dense areas. In the paper, the author finds 5 dimensions to give the best resultsfor the downstream task of clustering."
},
{
"code": null,
"e": 2600,
"s": 2382,
"text": "After compressing the numeric representations into a lower dimensional space, we are now ready to find dense areas of documents using HDBSCAN (Hierarchical Density-Based Spatial Clustering of Applications with Noise)."
},
{
"code": null,
"e": 2793,
"s": 2600,
"text": "In the diagram below, each dot represents a document and the dense areas of documents are colored, with each area representing a topic. The gray dots are outliers not belonging to any cluster."
},
{
"code": null,
"e": 3075,
"s": 2793,
"text": "For each dense area, a numeric representation of the topic (topic vector) is then obtained by taking the arithmetic mean of all the document vectors in the same cluster. And finally, each document is assigned a topic number based on the nearest topic vector to its document vector."
},
{
"code": null,
"e": 3172,
"s": 3075,
"text": "With a better understanding of Top2Vec, we are now ready to dive into our hotel reviews example."
},
{
"code": null,
"e": 3302,
"s": 3172,
"text": "The dataset that we are using can be downloaded from Kaggle and contains 515,000 customer reviews of luxury hotels across Europe."
},
{
"code": null,
"e": 3406,
"s": 3302,
"text": "It contains various columns but we will just be using one column in our illustration — Negative_Review."
},
{
"code": null,
"e": 3437,
"s": 3406,
"text": "An example of Negative_Review:"
},
{
"code": null,
"e": 3593,
"s": 3437,
"text": "Room was not cleaned correctly Wine Champagne glasses left dirty in the room the floors felt dirty and the shower drain was clogged From the day we arrived"
},
{
"code": null,
"e": 3690,
"s": 3593,
"text": "After some cleaning up of the data, we are then ready to feed in our list of reviews to Top2Vec."
},
{
"code": null,
"e": 3767,
"s": 3690,
"text": "The code to run the modeling is as follows, where hotel_reviews is our data."
},
{
"code": null,
"e": 3808,
"s": 3767,
"text": "model = Top2Vec(documents=hotel_reviews)"
},
{
"code": null,
"e": 4038,
"s": 3808,
"text": "And that’s it! This single line of code will pre-process the documents for training, create document and word vectors using Doc2Vec, perform dimensionality reduction on the vectors, and finally find the topics through clustering."
},
{
"code": null,
"e": 4128,
"s": 4038,
"text": "To use the pre-trained models instead, we just need to add the embedding_model parameter."
},
{
"code": null,
"e": 4222,
"s": 4128,
"text": "model = Top2Vec(documents=hotel_reviews, embedding_model='distiluse-base-multilingual-cased')"
},
{
"code": null,
"e": 4424,
"s": 4222,
"text": "In our case, we experimented with Doc2Vec and other pre-trained models for creating the document vectors, and find that “distiluse-base-multilingual-cased” pre-trained model works best for our example."
},
{
"code": null,
"e": 4597,
"s": 4424,
"text": "We are now ready to interpret the topics. Top2Vec has methods that make evaluation easier by generating word clouds and retrieving the top similar documents for each topic."
},
{
"code": null,
"e": 4629,
"s": 4597,
"text": "Let’s look at our top 3 topics."
},
{
"code": null,
"e": 4809,
"s": 4629,
"text": "From the word clouds and sample reviews, we can clearly infer that topic 1 has to do with bad washrooms, topic 2 is about small rooms and topic 3 are complains on parking matters."
},
{
"code": null,
"e": 4940,
"s": 4809,
"text": "All these results were obtained without us telling the computer the topics to look for and with only a few lines of code. Power! 💪"
},
{
"code": null,
"e": 5044,
"s": 4940,
"text": "If you would like to try it out on your own, our example notebook can be found here for your reference."
},
{
"code": null,
"e": 5484,
"s": 5044,
"text": "Through the hotel reviews example, we hope that you are now equipped with one more tool in your machine learning arsenal to perform topic modeling. Interestingly, I have read cases where similar methodology to the one by Top2Vec is extended to customer segmentation, with the main differences being each “word” is a retailer ID and each “document” shows the sequence of retailer IDs visited by the customer. An idea for the next project 😉."
}
] |
Crop Images in CSS with object-fit and object-position | CSS object-fit and object-position property helps us crop images and specify how it is displayed in an element.
The syntax of CSS object-fit property is as follows −
Selector {
object-fit: /*value*/
object-position:/*value*/
}
The following examples illustrate CSS object-fit property.
Live Demo
<!DOCTYPE html>
<html>
<head>
<style>
img {
object-fit: cover;
}
img:last-of-type {
object-fit: contain;
}
</style>
</head>
<body>
cover
<img src="https://images.unsplash.com/photo-1611423837981- 2cba00ee7631?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=120&ixlib=rb1.2.1&q=80&w=120" width="200" height="250"/>
<img src="https://images.unsplash.com/photo-1611423837981- 2cba00ee7631?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=120&ixlib=rb1.2.1&q=80&w=120" width="200" height="250"/>
contain
</body>
</html>
This will produce the following result −
Live Demo
<!DOCTYPE html>
<html>
<style>
div {
border: 1px solid blue;
width:100%;
height:300px;
}
img {
float:left;
width:50%;
height:100%;
object-fit:cover;
object-position: 20px -10px;
}
</style>
<body>
<div >
<img src="https://images.unsplash.com/photo-1611800065908-
233b597db552?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=350&ixlib=rb1.2.1&q=80&w=250" />
<img src="https://images.unsplash.com/photo-1612626256634-
991e6e977fc1?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=350&ixlib=rb1.2.1&q=80&w=250"/>
</div>
</body>
</html>
This will produce the following result −
Effect of resizing | [
{
"code": null,
"e": 1174,
"s": 1062,
"text": "CSS object-fit and object-position property helps us crop images and specify how it is displayed in an element."
},
{
"code": null,
"e": 1228,
"s": 1174,
"text": "The syntax of CSS object-fit property is as follows −"
},
{
"code": null,
"e": 1295,
"s": 1228,
"text": "Selector {\n object-fit: /*value*/\n object-position:/*value*/\n}"
},
{
"code": null,
"e": 1354,
"s": 1295,
"text": "The following examples illustrate CSS object-fit property."
},
{
"code": null,
"e": 1365,
"s": 1354,
"text": " Live Demo"
},
{
"code": null,
"e": 1876,
"s": 1365,
"text": "<!DOCTYPE html>\n<html>\n<head>\n<style>\nimg {\n object-fit: cover;\n}\nimg:last-of-type {\n object-fit: contain;\n}\n</style>\n</head>\n<body>\ncover\n<img src=\"https://images.unsplash.com/photo-1611423837981- 2cba00ee7631?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=120&ixlib=rb1.2.1&q=80&w=120\" width=\"200\" height=\"250\"/>\n<img src=\"https://images.unsplash.com/photo-1611423837981- 2cba00ee7631?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=120&ixlib=rb1.2.1&q=80&w=120\" width=\"200\" height=\"250\"/>\ncontain\n</body>\n</html>"
},
{
"code": null,
"e": 1917,
"s": 1876,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 1928,
"s": 1917,
"text": " Live Demo"
},
{
"code": null,
"e": 2473,
"s": 1928,
"text": "<!DOCTYPE html>\n<html>\n<style>\ndiv {\n border: 1px solid blue;\n width:100%;\n height:300px;\n}\nimg {\n float:left;\n width:50%;\n height:100%;\n object-fit:cover;\n object-position: 20px -10px;\n}\n</style>\n<body>\n<div >\n<img src=\"https://images.unsplash.com/photo-1611800065908-\n233b597db552?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=350&ixlib=rb1.2.1&q=80&w=250\" />\n<img src=\"https://images.unsplash.com/photo-1612626256634-\n991e6e977fc1?crop=entropy&cs=tinysrgb&fit=crop&fm=jpg&h=350&ixlib=rb1.2.1&q=80&w=250\"/>\n</div>\n</body>\n</html>"
},
{
"code": null,
"e": 2514,
"s": 2473,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 2533,
"s": 2514,
"text": "Effect of resizing"
}
] |
Node.js fs.readFileSync() Method - GeeksforGeeks | 03 Nov, 2021
The fs.readFileSync() method is an inbuilt application programming interface of fs module which is used to read the file and return its content. In fs.readFile() method, we can read a file in a non-blocking asynchronous way, but in fs.readFileSync() method, we can read files in a synchronous way, i.e. we are telling node.js to block other parallel process and do the current file reading process. That is, when the fs.readFileSync() method is called the original node program stops executing, and node waits for the fs.readFileSync() function to get executed, after getting the result of the method the remaining node program is executed.
Syntax:
fs.readFileSync( path, options )
Parameters:
path: It takes the relative path of the text file. The path can be of URL type. The file can also be a file descriptor. If both the files are in the same folder just give the filename in quotes.
options: It is an optional parameter which contains the encoding and flag, the encoding contains data specification. It’s default value is null which returns raw buffer and the flag contains indication of operations in the file. It’s default value is ‘r’.
Return Value: This method returns the content of the file.Example 1:
input.txt file contents:
This is some text data which is stored in input.txt file.
index.js file:
javascript
// Node.js program to demonstrate the // fs.readFileSync() method // Include fs moduleconst fs = require('fs'); // Calling the readFileSync() method// to read 'input.txt' fileconst data = fs.readFileSync('./input.txt', {encoding:'utf8', flag:'r'}); // Display the file dataconsole.log(data);
Output:
This is some text data which is stored in input.txt file.
Now, the question is how is this fs.readFileSync() method is different from fs.readFile() method. An example where we can find out when to use fs.readFileSync() and fs.readFile() method.Let’s say there are two input files input1.txt and input2.txt and both files are saved in the same folder.Example 2:
input1.txt file contents:
(1) This is some text data which is stored in input1.txt file.
input2.txt file contents:
(2) This is some text data which is stored in input2.txt file.
index.js file:
javascript
// Node.js program to demonstrate the // fs.readFileSync() method // Include fs moduleconst fs = require('fs'); // Calling the fs.readFile() method// for reading file 'input1.txt'fs.readFile('./input1.txt', {encoding:'utf8', flag:'r'}, function(err, data) { if(err) console.log(err); else console.log(data);}); // Calling the fs.readFileSync() method// for reading file 'input2.txt'const data = fs.readFileSync('./input2.txt', {encoding:'utf8', flag:'r'}); // Display dataconsole.log(data);
Output:
(2) This is some text data which is stored in input2.txt file.
(1) This is some text data which is stored in input1.txt file.
Observation: We can observe that when we are calling the fs.readFile() method which reads ‘input1.txt’ and after that we are calling the fs.readFileSync() method which reads the ‘input2.txt’ file, but seeing the output we find that ‘input2.txt’ file is readed first then ‘input1.txt’ file, this happens because when the compiler compiles the node.js file it first calls the fs.readFile() method which reads a file, but parallelly it continues to compile the remaining program too, after that it calls the fs.readFileSync() method which reads another file, when calling the readFileSync() method the compiler stops other parallel process (which is here reading the ‘input1.txt’ file from readFile() method) and continues the ongoing process to it’s end, whenever the ongoing process ends the compiler resumes the remaining process that’s why we observe that the contents of ‘input2.txt’ is printed first rather than ‘input1.txt’. So, we can use fs.readFileSync() method for tasks like pausing other parallel processes and read a file synchronously.Reference: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options
surinderdawra388
khushboogoyal499
Node.js-fs-module
Picked
Node.js
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
How to install the previous version of node.js and npm ?
Difference between promise and async await in Node.js
Express.js express.Router() Function
Express.js res.render() Function
How to read and write Excel file in Node.js ?
Roadmap to Become a Web Developer in 2022
How to fetch data from an API in ReactJS ?
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to insert spaces/tabs in text using HTML/CSS?
Convert a string to an integer in JavaScript | [
{
"code": null,
"e": 24574,
"s": 24546,
"text": "\n03 Nov, 2021"
},
{
"code": null,
"e": 25216,
"s": 24574,
"text": "The fs.readFileSync() method is an inbuilt application programming interface of fs module which is used to read the file and return its content. In fs.readFile() method, we can read a file in a non-blocking asynchronous way, but in fs.readFileSync() method, we can read files in a synchronous way, i.e. we are telling node.js to block other parallel process and do the current file reading process. That is, when the fs.readFileSync() method is called the original node program stops executing, and node waits for the fs.readFileSync() function to get executed, after getting the result of the method the remaining node program is executed. "
},
{
"code": null,
"e": 25226,
"s": 25216,
"text": "Syntax: "
},
{
"code": null,
"e": 25259,
"s": 25226,
"text": "fs.readFileSync( path, options )"
},
{
"code": null,
"e": 25273,
"s": 25259,
"text": "Parameters: "
},
{
"code": null,
"e": 25468,
"s": 25273,
"text": "path: It takes the relative path of the text file. The path can be of URL type. The file can also be a file descriptor. If both the files are in the same folder just give the filename in quotes."
},
{
"code": null,
"e": 25724,
"s": 25468,
"text": "options: It is an optional parameter which contains the encoding and flag, the encoding contains data specification. It’s default value is null which returns raw buffer and the flag contains indication of operations in the file. It’s default value is ‘r’."
},
{
"code": null,
"e": 25795,
"s": 25724,
"text": "Return Value: This method returns the content of the file.Example 1: "
},
{
"code": null,
"e": 25820,
"s": 25795,
"text": "input.txt file contents:"
},
{
"code": null,
"e": 25878,
"s": 25820,
"text": "This is some text data which is stored in input.txt file."
},
{
"code": null,
"e": 25893,
"s": 25878,
"text": "index.js file:"
},
{
"code": null,
"e": 25904,
"s": 25893,
"text": "javascript"
},
{
"code": "// Node.js program to demonstrate the // fs.readFileSync() method // Include fs moduleconst fs = require('fs'); // Calling the readFileSync() method// to read 'input.txt' fileconst data = fs.readFileSync('./input.txt', {encoding:'utf8', flag:'r'}); // Display the file dataconsole.log(data);",
"e": 26208,
"s": 25904,
"text": null
},
{
"code": null,
"e": 26217,
"s": 26208,
"text": "Output: "
},
{
"code": null,
"e": 26275,
"s": 26217,
"text": "This is some text data which is stored in input.txt file."
},
{
"code": null,
"e": 26580,
"s": 26275,
"text": "Now, the question is how is this fs.readFileSync() method is different from fs.readFile() method. An example where we can find out when to use fs.readFileSync() and fs.readFile() method.Let’s say there are two input files input1.txt and input2.txt and both files are saved in the same folder.Example 2: "
},
{
"code": null,
"e": 26606,
"s": 26580,
"text": "input1.txt file contents:"
},
{
"code": null,
"e": 26669,
"s": 26606,
"text": "(1) This is some text data which is stored in input1.txt file."
},
{
"code": null,
"e": 26695,
"s": 26669,
"text": "input2.txt file contents:"
},
{
"code": null,
"e": 26758,
"s": 26695,
"text": "(2) This is some text data which is stored in input2.txt file."
},
{
"code": null,
"e": 26773,
"s": 26758,
"text": "index.js file:"
},
{
"code": null,
"e": 26784,
"s": 26773,
"text": "javascript"
},
{
"code": "// Node.js program to demonstrate the // fs.readFileSync() method // Include fs moduleconst fs = require('fs'); // Calling the fs.readFile() method// for reading file 'input1.txt'fs.readFile('./input1.txt', {encoding:'utf8', flag:'r'}, function(err, data) { if(err) console.log(err); else console.log(data);}); // Calling the fs.readFileSync() method// for reading file 'input2.txt'const data = fs.readFileSync('./input2.txt', {encoding:'utf8', flag:'r'}); // Display dataconsole.log(data);",
"e": 27324,
"s": 26784,
"text": null
},
{
"code": null,
"e": 27333,
"s": 27324,
"text": "Output: "
},
{
"code": null,
"e": 27459,
"s": 27333,
"text": "(2) This is some text data which is stored in input2.txt file.\n(1) This is some text data which is stored in input1.txt file."
},
{
"code": null,
"e": 28581,
"s": 27459,
"text": "Observation: We can observe that when we are calling the fs.readFile() method which reads ‘input1.txt’ and after that we are calling the fs.readFileSync() method which reads the ‘input2.txt’ file, but seeing the output we find that ‘input2.txt’ file is readed first then ‘input1.txt’ file, this happens because when the compiler compiles the node.js file it first calls the fs.readFile() method which reads a file, but parallelly it continues to compile the remaining program too, after that it calls the fs.readFileSync() method which reads another file, when calling the readFileSync() method the compiler stops other parallel process (which is here reading the ‘input1.txt’ file from readFile() method) and continues the ongoing process to it’s end, whenever the ongoing process ends the compiler resumes the remaining process that’s why we observe that the contents of ‘input2.txt’ is printed first rather than ‘input1.txt’. So, we can use fs.readFileSync() method for tasks like pausing other parallel processes and read a file synchronously.Reference: https://nodejs.org/api/fs.html#fs_fs_readfilesync_path_options "
},
{
"code": null,
"e": 28598,
"s": 28581,
"text": "surinderdawra388"
},
{
"code": null,
"e": 28615,
"s": 28598,
"text": "khushboogoyal499"
},
{
"code": null,
"e": 28633,
"s": 28615,
"text": "Node.js-fs-module"
},
{
"code": null,
"e": 28640,
"s": 28633,
"text": "Picked"
},
{
"code": null,
"e": 28648,
"s": 28640,
"text": "Node.js"
},
{
"code": null,
"e": 28665,
"s": 28648,
"text": "Web Technologies"
},
{
"code": null,
"e": 28763,
"s": 28665,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28772,
"s": 28763,
"text": "Comments"
},
{
"code": null,
"e": 28785,
"s": 28772,
"text": "Old Comments"
},
{
"code": null,
"e": 28842,
"s": 28785,
"text": "How to install the previous version of node.js and npm ?"
},
{
"code": null,
"e": 28896,
"s": 28842,
"text": "Difference between promise and async await in Node.js"
},
{
"code": null,
"e": 28933,
"s": 28896,
"text": "Express.js express.Router() Function"
},
{
"code": null,
"e": 28966,
"s": 28933,
"text": "Express.js res.render() Function"
},
{
"code": null,
"e": 29012,
"s": 28966,
"text": "How to read and write Excel file in Node.js ?"
},
{
"code": null,
"e": 29054,
"s": 29012,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 29097,
"s": 29054,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 29159,
"s": 29097,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 29209,
"s": 29159,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
Dictionary.ContainsKey() Method in C# | The Dictionary.ContainsKey() method in C# checks whether the Dictionary<TKey, TValue< has the specified key or not.
public bool ContainsKey (TKey key);
Above, the parameter key is the key to be found.
Let us now see an example to implement the Dictionary.ContainsKey() method −
using System;
using System.Collections.Generic;
public class Demo {
public static void Main(){
Dictionary<string, string> dict =
new Dictionary<string, string>();
dict.Add("One", "John");
dict.Add("Two", "Tom");
dict.Add("Three", "Jacob");
dict.Add("Four", "Kevin");
dict.Add("Five", "Nathan");
Console.WriteLine("Count of elements = "+dict.Count);
Console.WriteLine("\nKey/value pairs...");
foreach(KeyValuePair<string, string> res in dict){
Console.WriteLine("Key = {0}, Value = {1}", res.Key, res.Value);
}
if (dict.ContainsKey("Three"))
Console.WriteLine("Key found!");
else
Console.WriteLine("Key isn't in the dictionary!");
dict.Clear();
Console.WriteLine("Cleared Key/value pairs...");
foreach(KeyValuePair<string, string> res in dict){
Console.WriteLine("Key = {0}, Value = {1}", res.Key, res.Value);
}
Console.WriteLine("Count of elements now = "+dict.Count);
}
}
This will produce the following output −
Count of elements = 5
Key/value pairs...
Key = One, Value = John
Key = Two, Value = Tom
Key = Three, Value = Jacob
Key = Four, Value = Kevin
Key = Five, Value = Nathan
Key found!
Cleared Key/value pairs...
Count of elements now = 0
Let us now see another example to implement the Dictionary. ContainsKey property −
using System;
using System.Collections.Generic;
public class Demo {
public static void Main(){
Dictionary<string, string> dict =
new Dictionary<string, string>();
dict.Add("One", "John");
dict.Add("Two", "Tom");
dict.Add("Three", "Jacob");
dict.Add("Four", "Kevin");
dict.Add("Five", "Nathan");
Console.WriteLine("Count of elements = "+dict.Count);
Console.WriteLine("\nKey/value pairs...");
foreach(KeyValuePair<string, string> res in dict){
Console.WriteLine("Key = {0}, Value = {1}", res.Key, res.Value);
}
if (dict.ContainsKey("mykey"))
Console.WriteLine("Key found!");
else
Console.WriteLine("Key isn't in the dictionary!");
dict.Clear();
Console.WriteLine("Cleared Key/value pairs...");
foreach(KeyValuePair<string, string> res in dict){
Console.WriteLine("Key = {0}, Value = {1}", res.Key, res.Value);
}
Console.WriteLine("Count of elements now = "+dict.Count);
}
}
This will produce the following output −
Count of elements = 5
Key/value pairs...
Key = One, Value = John
Key = Two, Value = Tom
Key = Three, Value = Jacob
Key = Four, Value = Kevin
Key = Five, Value = Nathan
Key isn't in the dictionary!
Cleared Key/value pairs...
Count of elements now = 0 | [
{
"code": null,
"e": 1178,
"s": 1062,
"text": "The Dictionary.ContainsKey() method in C# checks whether the Dictionary<TKey, TValue< has the specified key or not."
},
{
"code": null,
"e": 1214,
"s": 1178,
"text": "public bool ContainsKey (TKey key);"
},
{
"code": null,
"e": 1263,
"s": 1214,
"text": "Above, the parameter key is the key to be found."
},
{
"code": null,
"e": 1340,
"s": 1263,
"text": "Let us now see an example to implement the Dictionary.ContainsKey() method −"
},
{
"code": null,
"e": 2363,
"s": 1340,
"text": "using System;\nusing System.Collections.Generic;\npublic class Demo {\n public static void Main(){\n Dictionary<string, string> dict =\n new Dictionary<string, string>();\n dict.Add(\"One\", \"John\");\n dict.Add(\"Two\", \"Tom\");\n dict.Add(\"Three\", \"Jacob\");\n dict.Add(\"Four\", \"Kevin\");\n dict.Add(\"Five\", \"Nathan\");\n Console.WriteLine(\"Count of elements = \"+dict.Count);\n Console.WriteLine(\"\\nKey/value pairs...\");\n foreach(KeyValuePair<string, string> res in dict){\n Console.WriteLine(\"Key = {0}, Value = {1}\", res.Key, res.Value);\n }\n if (dict.ContainsKey(\"Three\"))\n Console.WriteLine(\"Key found!\");\n else\n Console.WriteLine(\"Key isn't in the dictionary!\");\n dict.Clear();\n Console.WriteLine(\"Cleared Key/value pairs...\");\n foreach(KeyValuePair<string, string> res in dict){\n Console.WriteLine(\"Key = {0}, Value = {1}\", res.Key, res.Value);\n }\n Console.WriteLine(\"Count of elements now = \"+dict.Count);\n }\n}"
},
{
"code": null,
"e": 2404,
"s": 2363,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 2636,
"s": 2404,
"text": "Count of elements = 5\nKey/value pairs...\nKey = One, Value = John\nKey = Two, Value = Tom\nKey = Three, Value = Jacob\nKey = Four, Value = Kevin\nKey = Five, Value = Nathan\nKey found!\nCleared Key/value pairs...\nCount of elements now = 0"
},
{
"code": null,
"e": 2719,
"s": 2636,
"text": "Let us now see another example to implement the Dictionary. ContainsKey property −"
},
{
"code": null,
"e": 3742,
"s": 2719,
"text": "using System;\nusing System.Collections.Generic;\npublic class Demo {\n public static void Main(){\n Dictionary<string, string> dict =\n new Dictionary<string, string>();\n dict.Add(\"One\", \"John\");\n dict.Add(\"Two\", \"Tom\");\n dict.Add(\"Three\", \"Jacob\");\n dict.Add(\"Four\", \"Kevin\");\n dict.Add(\"Five\", \"Nathan\");\n Console.WriteLine(\"Count of elements = \"+dict.Count);\n Console.WriteLine(\"\\nKey/value pairs...\");\n foreach(KeyValuePair<string, string> res in dict){\n Console.WriteLine(\"Key = {0}, Value = {1}\", res.Key, res.Value);\n }\n if (dict.ContainsKey(\"mykey\"))\n Console.WriteLine(\"Key found!\");\n else\n Console.WriteLine(\"Key isn't in the dictionary!\");\n dict.Clear();\n Console.WriteLine(\"Cleared Key/value pairs...\");\n foreach(KeyValuePair<string, string> res in dict){\n Console.WriteLine(\"Key = {0}, Value = {1}\", res.Key, res.Value);\n }\n Console.WriteLine(\"Count of elements now = \"+dict.Count);\n }\n}"
},
{
"code": null,
"e": 3783,
"s": 3742,
"text": "This will produce the following output −"
},
{
"code": null,
"e": 4033,
"s": 3783,
"text": "Count of elements = 5\nKey/value pairs...\nKey = One, Value = John\nKey = Two, Value = Tom\nKey = Three, Value = Jacob\nKey = Four, Value = Kevin\nKey = Five, Value = Nathan\nKey isn't in the dictionary!\nCleared Key/value pairs...\nCount of elements now = 0"
}
] |
CausalImpact and R: Analysing Time Series Interventions | by Michael Grogan | Towards Data Science | Disclaimer: This article is written on an “as is” basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as investment advice, or any other sort of professional advice.
Interventions (or external factors independent of a time series) can often influence said series.
For instance, a marketing campaign can influence the number of sales a company makes going forward. A change in government policy can significantly influence economic activity.
The main problem in analysing interventions is that it is not possible to analyse what would have happened had the intervention not occurred — at least not directly.
The CausalImpact library in R allows for analysis of intervention effects through using a separate series (one which is not affected by the intervention) as a covariate.
In November 2017, the Bank of England decided to raise interest rates. When the central bank of a country decides to manipulate interest rates in this manner — the value of a currency can be affected — given that such a decision affects the rate that investors can yield through holding that particular currency.
In this regard, we would like to examine if a rise in interest rates affected the value of the GBP/USD.
To do this, the EUR/USD currency is chosen as a covariate — this currency is hypothesised as not being affected by the intervention — as the ECB did not raise interest rates during this period.
The relevant data is sourced from the FRED database using Quandl.
Here is a plot of the two currencies:
set.seed(1)x1 <- eurusd$Valuey <- gbpusd$Valuedata <- cbind(y, x1)dim(data)head(data)matplot(data, type = "l")
When working with financial time series, it is common to convert data into logarithmic format. While this makes sense when it comes to analysing stock prices that have significantly different price scales, the data is analysed “as is” in this example.
The reason is that currency data has a much smaller scale than most other asset prices, and using a logarithmic scale would bring the values so close together that any interventions that could be detected across the original series may not be detected when using a logarithmic scale.
Using this data, a pre-period and post-period is defined, with the data then plotted.
pre.period <- c(1, 230)post.period <- c(231, 249)impact <- CausalImpact(data, pre.period, post.period)plot(impact)
A summary of the impact is generated:
> summary(impact)Posterior inference {CausalImpact}Average Cumulative Actual 1.3 25.5 Prediction (s.d.) 1.3 (0.0099) 25.4 (0.1878) 95% CI [1.3, 1.4] [25.1, 25.8] Absolute effect (s.d.) 0.0014 (0.0099) 0.0275 (0.1878)95% CI [-0.018, 0.02] [-0.349, 0.38] Relative effect (s.d.) 0.11% (0.74%) 0.11% (0.74%) 95% CI [-1.4%, 1.5%] [-1.4%, 1.5%]Posterior tail-area probability p: 0.44186Posterior prob. of a causal effect: 56%For more details, type: summary(impact, "report")
We see that the predicted vs. actual values (average) are the same at 1.3, with only a slight difference across the cumulative values (25.5 vs. 25.4).
The posterior probability of a causal effect is 56%.
Let’s take a closer look at the generated summary report. Part of it states:
Although the intervention appears to have caused a positive effect, this effect is not statistically significant when considering the entire post-intervention period as a whole. Individual days or shorter stretches within the intervention period may of course still have had a significant effect, as indicated whenever the lower limit of the impact time series (lower plot) was above zero. The apparent effect could be the result of random fluctuations that are unrelated to the intervention. This is often the case when the intervention period is very long and includes much of the time when the effect has already worn off. It can also be the case when the intervention period is too short to distinguish the signal from the noise. Finally, failing to find a significant effect can happen when there are not enough control variables or when these variables do not correlate well with the response variable during the learning period.The probability of obtaining this effect by chance is p = 0.442. This means the effect may be spurious and would generally not be considered statistically significant.
When the same example was analysed with pycausalimpact, the results came in slightly differently.
The model was defined as follows:
>>> ci = CausalImpact(data, pre_period, post_period)>>> print(ci.summary())>>> print(ci.summary(output='report'))>>> ci.plot()
Here is the generated output:
Posterior Inference {Causal Impact} Average CumulativeActual 1.34 25.46Prediction (s.d.) 1.32 (0.0) 25.17 (0.08)95% CI [1.32, 1.33] [25.01, 25.33]Absolute effect (s.d.) 0.02 (0.0) 0.29 (0.08)95% CI [0.01, 0.02] [0.13, 0.45]Relative effect (s.d.) 1.15% (0.32%) 1.15% (0.32%)95% CI [0.52%, 1.77%] [0.52%, 1.77%]Posterior tail-area probability p: 0.0Posterior prob. of a causal effect: 100.0%
Unlike the model formulated in R, this model predicted a 100% posterior probability of a causal effect.
The R and Python versions of this library differ in the way they analyse interventions. Specifically, R relies more on prior probabilities whereby previous knowledge is factored into the model assumptions, whereby Python relies more on analysis of the structural time series components in order to maximise the likelihood function.
That said, suppose the prior is now set to None, i.e. prior_level_sd=None.
ci = CausalImpact(data, pre_period, post_period, prior_level_sd=None)print(ci.summary())print(ci.summary(output='report'))ci.plot()
Here are the results:
Posterior Inference {Causal Impact} Average CumulativeActual 1.34 25.46Prediction (s.d.) 1.35 (0.02) 25.57 (0.3)95% CI [1.31, 1.38] [24.98, 26.15]Absolute effect (s.d.) -0.01 (0.02) -0.11 (0.3)95% CI [-0.04, 0.02] [-0.7, 0.47]Relative effect (s.d.) -0.45% (1.17%) -0.45% (1.17%)95% CI [-2.72%, 1.86%] [-2.72%, 1.86%]Posterior tail-area probability p: 0.37Posterior prob. of a causal effect: 63.34%
This model does predict a change in the rate from 1.34 to 1.35, but the model notes:
The probability of obtaining this effect by chance is p = 36.66%.This means the effect may be spurious and would generally not beconsidered statistically significant.
In this regard, the choice of prior can significantly affect the assumptions of the model and hence whether an intervention will ultimately be detected or not.
In this article, you have seen:
The use of CausalImpact in R to predict interventions
How the R and Python versions differ from each other
Role of prior assumptions in determining the detection of interventions
Intervention analysis and the generation of summary reports
Many thanks for reading, and any questions or feedback are greatly welcomed.
GitHub: dafiti/causalimpact
Inferring the effect of an event using CausalImpact by Kay Brodersen
investing.com: U.K. Interest Rate Decision | [
{
"code": null,
"e": 428,
"s": 171,
"text": "Disclaimer: This article is written on an “as is” basis and without warranty. It was written with the intention of providing an overview of data science concepts, and should not be interpreted as investment advice, or any other sort of professional advice."
},
{
"code": null,
"e": 526,
"s": 428,
"text": "Interventions (or external factors independent of a time series) can often influence said series."
},
{
"code": null,
"e": 703,
"s": 526,
"text": "For instance, a marketing campaign can influence the number of sales a company makes going forward. A change in government policy can significantly influence economic activity."
},
{
"code": null,
"e": 869,
"s": 703,
"text": "The main problem in analysing interventions is that it is not possible to analyse what would have happened had the intervention not occurred — at least not directly."
},
{
"code": null,
"e": 1039,
"s": 869,
"text": "The CausalImpact library in R allows for analysis of intervention effects through using a separate series (one which is not affected by the intervention) as a covariate."
},
{
"code": null,
"e": 1352,
"s": 1039,
"text": "In November 2017, the Bank of England decided to raise interest rates. When the central bank of a country decides to manipulate interest rates in this manner — the value of a currency can be affected — given that such a decision affects the rate that investors can yield through holding that particular currency."
},
{
"code": null,
"e": 1456,
"s": 1352,
"text": "In this regard, we would like to examine if a rise in interest rates affected the value of the GBP/USD."
},
{
"code": null,
"e": 1650,
"s": 1456,
"text": "To do this, the EUR/USD currency is chosen as a covariate — this currency is hypothesised as not being affected by the intervention — as the ECB did not raise interest rates during this period."
},
{
"code": null,
"e": 1716,
"s": 1650,
"text": "The relevant data is sourced from the FRED database using Quandl."
},
{
"code": null,
"e": 1754,
"s": 1716,
"text": "Here is a plot of the two currencies:"
},
{
"code": null,
"e": 1865,
"s": 1754,
"text": "set.seed(1)x1 <- eurusd$Valuey <- gbpusd$Valuedata <- cbind(y, x1)dim(data)head(data)matplot(data, type = \"l\")"
},
{
"code": null,
"e": 2117,
"s": 1865,
"text": "When working with financial time series, it is common to convert data into logarithmic format. While this makes sense when it comes to analysing stock prices that have significantly different price scales, the data is analysed “as is” in this example."
},
{
"code": null,
"e": 2401,
"s": 2117,
"text": "The reason is that currency data has a much smaller scale than most other asset prices, and using a logarithmic scale would bring the values so close together that any interventions that could be detected across the original series may not be detected when using a logarithmic scale."
},
{
"code": null,
"e": 2487,
"s": 2401,
"text": "Using this data, a pre-period and post-period is defined, with the data then plotted."
},
{
"code": null,
"e": 2602,
"s": 2487,
"text": "pre.period <- c(1, 230)post.period <- c(231, 249)impact <- CausalImpact(data, pre.period, post.period)plot(impact)"
},
{
"code": null,
"e": 2640,
"s": 2602,
"text": "A summary of the impact is generated:"
},
{
"code": null,
"e": 3378,
"s": 2640,
"text": "> summary(impact)Posterior inference {CausalImpact}Average Cumulative Actual 1.3 25.5 Prediction (s.d.) 1.3 (0.0099) 25.4 (0.1878) 95% CI [1.3, 1.4] [25.1, 25.8] Absolute effect (s.d.) 0.0014 (0.0099) 0.0275 (0.1878)95% CI [-0.018, 0.02] [-0.349, 0.38] Relative effect (s.d.) 0.11% (0.74%) 0.11% (0.74%) 95% CI [-1.4%, 1.5%] [-1.4%, 1.5%]Posterior tail-area probability p: 0.44186Posterior prob. of a causal effect: 56%For more details, type: summary(impact, \"report\")"
},
{
"code": null,
"e": 3529,
"s": 3378,
"text": "We see that the predicted vs. actual values (average) are the same at 1.3, with only a slight difference across the cumulative values (25.5 vs. 25.4)."
},
{
"code": null,
"e": 3582,
"s": 3529,
"text": "The posterior probability of a causal effect is 56%."
},
{
"code": null,
"e": 3659,
"s": 3582,
"text": "Let’s take a closer look at the generated summary report. Part of it states:"
},
{
"code": null,
"e": 4762,
"s": 3659,
"text": "Although the intervention appears to have caused a positive effect, this effect is not statistically significant when considering the entire post-intervention period as a whole. Individual days or shorter stretches within the intervention period may of course still have had a significant effect, as indicated whenever the lower limit of the impact time series (lower plot) was above zero. The apparent effect could be the result of random fluctuations that are unrelated to the intervention. This is often the case when the intervention period is very long and includes much of the time when the effect has already worn off. It can also be the case when the intervention period is too short to distinguish the signal from the noise. Finally, failing to find a significant effect can happen when there are not enough control variables or when these variables do not correlate well with the response variable during the learning period.The probability of obtaining this effect by chance is p = 0.442. This means the effect may be spurious and would generally not be considered statistically significant."
},
{
"code": null,
"e": 4860,
"s": 4762,
"text": "When the same example was analysed with pycausalimpact, the results came in slightly differently."
},
{
"code": null,
"e": 4894,
"s": 4860,
"text": "The model was defined as follows:"
},
{
"code": null,
"e": 5021,
"s": 4894,
"text": ">>> ci = CausalImpact(data, pre_period, post_period)>>> print(ci.summary())>>> print(ci.summary(output='report'))>>> ci.plot()"
},
{
"code": null,
"e": 5051,
"s": 5021,
"text": "Here is the generated output:"
},
{
"code": null,
"e": 5618,
"s": 5051,
"text": "Posterior Inference {Causal Impact} Average CumulativeActual 1.34 25.46Prediction (s.d.) 1.32 (0.0) 25.17 (0.08)95% CI [1.32, 1.33] [25.01, 25.33]Absolute effect (s.d.) 0.02 (0.0) 0.29 (0.08)95% CI [0.01, 0.02] [0.13, 0.45]Relative effect (s.d.) 1.15% (0.32%) 1.15% (0.32%)95% CI [0.52%, 1.77%] [0.52%, 1.77%]Posterior tail-area probability p: 0.0Posterior prob. of a causal effect: 100.0%"
},
{
"code": null,
"e": 5722,
"s": 5618,
"text": "Unlike the model formulated in R, this model predicted a 100% posterior probability of a causal effect."
},
{
"code": null,
"e": 6054,
"s": 5722,
"text": "The R and Python versions of this library differ in the way they analyse interventions. Specifically, R relies more on prior probabilities whereby previous knowledge is factored into the model assumptions, whereby Python relies more on analysis of the structural time series components in order to maximise the likelihood function."
},
{
"code": null,
"e": 6129,
"s": 6054,
"text": "That said, suppose the prior is now set to None, i.e. prior_level_sd=None."
},
{
"code": null,
"e": 6261,
"s": 6129,
"text": "ci = CausalImpact(data, pre_period, post_period, prior_level_sd=None)print(ci.summary())print(ci.summary(output='report'))ci.plot()"
},
{
"code": null,
"e": 6283,
"s": 6261,
"text": "Here are the results:"
},
{
"code": null,
"e": 6852,
"s": 6283,
"text": "Posterior Inference {Causal Impact} Average CumulativeActual 1.34 25.46Prediction (s.d.) 1.35 (0.02) 25.57 (0.3)95% CI [1.31, 1.38] [24.98, 26.15]Absolute effect (s.d.) -0.01 (0.02) -0.11 (0.3)95% CI [-0.04, 0.02] [-0.7, 0.47]Relative effect (s.d.) -0.45% (1.17%) -0.45% (1.17%)95% CI [-2.72%, 1.86%] [-2.72%, 1.86%]Posterior tail-area probability p: 0.37Posterior prob. of a causal effect: 63.34%"
},
{
"code": null,
"e": 6937,
"s": 6852,
"text": "This model does predict a change in the rate from 1.34 to 1.35, but the model notes:"
},
{
"code": null,
"e": 7104,
"s": 6937,
"text": "The probability of obtaining this effect by chance is p = 36.66%.This means the effect may be spurious and would generally not beconsidered statistically significant."
},
{
"code": null,
"e": 7264,
"s": 7104,
"text": "In this regard, the choice of prior can significantly affect the assumptions of the model and hence whether an intervention will ultimately be detected or not."
},
{
"code": null,
"e": 7296,
"s": 7264,
"text": "In this article, you have seen:"
},
{
"code": null,
"e": 7350,
"s": 7296,
"text": "The use of CausalImpact in R to predict interventions"
},
{
"code": null,
"e": 7403,
"s": 7350,
"text": "How the R and Python versions differ from each other"
},
{
"code": null,
"e": 7475,
"s": 7403,
"text": "Role of prior assumptions in determining the detection of interventions"
},
{
"code": null,
"e": 7535,
"s": 7475,
"text": "Intervention analysis and the generation of summary reports"
},
{
"code": null,
"e": 7612,
"s": 7535,
"text": "Many thanks for reading, and any questions or feedback are greatly welcomed."
},
{
"code": null,
"e": 7640,
"s": 7612,
"text": "GitHub: dafiti/causalimpact"
},
{
"code": null,
"e": 7709,
"s": 7640,
"text": "Inferring the effect of an event using CausalImpact by Kay Brodersen"
}
] |
How to implement AlarmManager in Android using Kotlin? | This example demonstrates how to to implement AlarmManager in Android using Kotlin.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:padding="2dp"
tools:context=".MainActivity">
<TimePicker
android:id="@+id/timePicker"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_centerHorizontal="true" />
<Button
android:id="@+id/buttonAlarm"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/timePicker"
android:layout_centerInParent="true"
android:layout_marginTop="10dp"
android:text="Set Alarm" />
</RelativeLayout>
Step 3 − Add the following code to src/MainActivity.kt
import android.app.AlarmManager
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import android.os.Build
import android.os.Bundle
import android.util.Log
import android.widget.Button
import android.widget.TimePicker
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import java.util.*
class MainActivity : AppCompatActivity() {
lateinit var btnSetAlarm: Button
lateinit var timePicker: TimePicker
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContentView(R.layout.activity_main)
title = "KotlinApp"
timePicker = findViewById(R.id.timePicker)
btnSetAlarm = findViewById(R.id.buttonAlarm)
btnSetAlarm.setOnClickListener {
val calendar: Calendar = Calendar.getInstance()
if (Build.VERSION.SDK_INT >= 23) {
calendar.set(
calendar.get(Calendar.YEAR),
calendar.get(Calendar.MONTH),
calendar.get(Calendar.DAY_OF_MONTH),
timePicker.hour,
timePicker.minute,
0
)
} else {
calendar.set(
calendar.get(Calendar.YEAR),
calendar.get(Calendar.MONTH),
calendar.get(Calendar.DAY_OF_MONTH),
timePicker.currentHour,
timePicker.currentMinute, 0
)
}
setAlarm(calendar.timeInMillis)
}
}
private fun setAlarm(timeInMillis: Long) {
val alarmManager = getSystemService(Context.ALARM_SERVICE) as AlarmManager
val intent = Intent(this, MyAlarm::class.java)
val pendingIntent = PendingIntent.getBroadcast(this, 0, intent, 0)
alarmManager.setRepeating(
AlarmManager.RTC,
timeInMillis,
AlarmManager.INTERVAL_DAY,
pendingIntent
)
Toast.makeText(this, "Alarm is set", Toast.LENGTH_SHORT).show()
}
private class MyAlarm : BroadcastReceiver() {
override fun onReceive(
context: Context,
intent: Intent
) {
Log.d("Alarm Bell", "Alarm just fired")
}
}
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.example.q11">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen | [
{
"code": null,
"e": 1146,
"s": 1062,
"text": "This example demonstrates how to to implement AlarmManager in Android using Kotlin."
},
{
"code": null,
"e": 1275,
"s": 1146,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1340,
"s": 1275,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2134,
"s": 1340,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<RelativeLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:padding=\"2dp\"\n tools:context=\".MainActivity\">\n <TimePicker\n android:id=\"@+id/timePicker\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"wrap_content\"\n android:layout_centerHorizontal=\"true\" />\n <Button\n android:id=\"@+id/buttonAlarm\"\n android:layout_width=\"wrap_content\"\n android:layout_height=\"wrap_content\"\n android:layout_below=\"@id/timePicker\"\n android:layout_centerInParent=\"true\"\n android:layout_marginTop=\"10dp\"\n android:text=\"Set Alarm\" />\n</RelativeLayout>"
},
{
"code": null,
"e": 2189,
"s": 2134,
"text": "Step 3 − Add the following code to src/MainActivity.kt"
},
{
"code": null,
"e": 4389,
"s": 2189,
"text": "import android.app.AlarmManager\nimport android.app.PendingIntent\nimport android.content.BroadcastReceiver\nimport android.content.Context\nimport android.content.Intent\nimport android.os.Build\nimport android.os.Bundle\nimport android.util.Log\nimport android.widget.Button\nimport android.widget.TimePicker\nimport android.widget.Toast\nimport androidx.appcompat.app.AppCompatActivity\nimport java.util.*\nclass MainActivity : AppCompatActivity() {\n lateinit var btnSetAlarm: Button\n lateinit var timePicker: TimePicker\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContentView(R.layout.activity_main)\n title = \"KotlinApp\"\n timePicker = findViewById(R.id.timePicker)\n btnSetAlarm = findViewById(R.id.buttonAlarm)\n btnSetAlarm.setOnClickListener {\n val calendar: Calendar = Calendar.getInstance()\n if (Build.VERSION.SDK_INT >= 23) {\n calendar.set(\n calendar.get(Calendar.YEAR),\n calendar.get(Calendar.MONTH),\n calendar.get(Calendar.DAY_OF_MONTH),\n timePicker.hour,\n timePicker.minute,\n 0\n )\n } else {\n calendar.set(\n calendar.get(Calendar.YEAR),\n calendar.get(Calendar.MONTH),\n calendar.get(Calendar.DAY_OF_MONTH),\n timePicker.currentHour,\n timePicker.currentMinute, 0\n )\n }\n setAlarm(calendar.timeInMillis)\n }\n }\n private fun setAlarm(timeInMillis: Long) {\n val alarmManager = getSystemService(Context.ALARM_SERVICE) as AlarmManager\n val intent = Intent(this, MyAlarm::class.java)\n val pendingIntent = PendingIntent.getBroadcast(this, 0, intent, 0)\n alarmManager.setRepeating(\n AlarmManager.RTC,\n timeInMillis,\n AlarmManager.INTERVAL_DAY,\n pendingIntent\n )\n Toast.makeText(this, \"Alarm is set\", Toast.LENGTH_SHORT).show()\n }\n private class MyAlarm : BroadcastReceiver() {\n override fun onReceive(\n context: Context,\n intent: Intent\n ) {\n Log.d(\"Alarm Bell\", \"Alarm just fired\")\n }\n }\n}"
},
{
"code": null,
"e": 4444,
"s": 4389,
"text": "Step 4 − Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 5115,
"s": 4444,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\" package=\"com.example.q11\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 5463,
"s": 5115,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click the Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen"
}
] |
numpy.char.add() | This function performs elementwise string concatenation.
import numpy as np
print 'Concatenate two strings:'
print np.char.add(['hello'],[' xyz'])
print '\n'
print 'Concatenation example:'
print np.char.add(['hello', 'hi'],[' abc', ' xyz'])
Its output would be as follows −
Concatenate two strings:
['hello xyz']
Concatenation example:
['hello abc' 'hi xyz']
63 Lectures
6 hours
Abhilash Nelson
19 Lectures
8 hours
DATAhill Solutions Srinivas Reddy
12 Lectures
3 hours
DATAhill Solutions Srinivas Reddy
10 Lectures
2.5 hours
Akbar Khan
20 Lectures
2 hours
Pruthviraja L
63 Lectures
6 hours
Anmol
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2300,
"s": 2243,
"text": "This function performs elementwise string concatenation."
},
{
"code": null,
"e": 2489,
"s": 2300,
"text": "import numpy as np \nprint 'Concatenate two strings:' \nprint np.char.add(['hello'],[' xyz']) \nprint '\\n'\n\nprint 'Concatenation example:' \nprint np.char.add(['hello', 'hi'],[' abc', ' xyz'])"
},
{
"code": null,
"e": 2522,
"s": 2489,
"text": "Its output would be as follows −"
},
{
"code": null,
"e": 2609,
"s": 2522,
"text": "Concatenate two strings:\n['hello xyz']\n\nConcatenation example:\n['hello abc' 'hi xyz']\n"
},
{
"code": null,
"e": 2642,
"s": 2609,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 2659,
"s": 2642,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 2692,
"s": 2659,
"text": "\n 19 Lectures \n 8 hours \n"
},
{
"code": null,
"e": 2727,
"s": 2692,
"text": " DATAhill Solutions Srinivas Reddy"
},
{
"code": null,
"e": 2760,
"s": 2727,
"text": "\n 12 Lectures \n 3 hours \n"
},
{
"code": null,
"e": 2795,
"s": 2760,
"text": " DATAhill Solutions Srinivas Reddy"
},
{
"code": null,
"e": 2830,
"s": 2795,
"text": "\n 10 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 2842,
"s": 2830,
"text": " Akbar Khan"
},
{
"code": null,
"e": 2875,
"s": 2842,
"text": "\n 20 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 2890,
"s": 2875,
"text": " Pruthviraja L"
},
{
"code": null,
"e": 2923,
"s": 2890,
"text": "\n 63 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 2930,
"s": 2923,
"text": " Anmol"
},
{
"code": null,
"e": 2937,
"s": 2930,
"text": " Print"
},
{
"code": null,
"e": 2948,
"s": 2937,
"text": " Add Notes"
}
] |
Stream.concat() in Java - GeeksforGeeks | 06 Dec, 2018
Stream.concat() method creates a concatenated stream in which the elements are all the elements of the first stream followed by all the elements of the second stream. The resulting stream is ordered if both of the input streams are ordered, and parallel if either of the input streams is parallel.
Syntax :
static <T> Stream<T> concat(Stream<? extends T> stream1,
Stream<? extends T> stream2)
Where, T is the type of stream elements,
stream1 represents the first stream,
stream2 represents the second stream and
the function returns the concatenation of
the two input streams
The calls to Stream.concat(stream1, stream2) can be think of as forming a binary tree. The concatenation of all the input streams is at the root. The individual input streams are at the leaves. Below given are some examples of trees upto four input streams a, b, c and d.
For two streams a and b, the tree looks like :For three streams a, b and c, the tree looks like :For four streams a, b, c and d, the tree looks like :Each additional input stream adds one layer of depth to the tree and one layer of indirection to reach all the other streams.
Note : The elements returned by Stream.concat() method is ordered. For example, the following two lines returns the same result:
Stream.concat(Stream.concat(stream1, stream2), stream3);
Stream.concat(stream1, Stream.concat(stream2, stream3));
But the result for the following two are different.
Stream.concat(Stream.concat(stream1, stream2), stream3);
Stream.concat(Stream.concat(stream2, stream1), stream3);
Below are some examples to understand the implementation of the function in a better way.Example 1 :
// Implementation of Stream.concat()// method in Java 8 with 2 Streamsimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams Stream<String> stream1 = Stream.of("Geeks", "for"); Stream<String> stream2 = Stream.of("GeeksQuiz", "GeeksforGeeks"); // concatenating both the Streams // with Stream.concat() function // and displaying the result Stream.concat(stream1, stream2) .forEach(element -> System.out.println(element)); }}
Geeks
for
GeeksQuiz
GeeksforGeeks
Example 2 :
// Implementation of Stream.concat()// method in Java 8 with more than// two Streamsimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating more than two Streams Stream<String> stream1 = Stream.of("Geeks"); Stream<String> stream2 = Stream.of("GeeksQuiz"); Stream<String> stream3 = Stream.of("GeeksforGeeks"); Stream<String> stream4 = Stream.of("GFG"); // concatenating all the Streams // with Stream.concat() function // and displaying the result Stream.concat(Stream.concat(Stream.concat(stream1, stream2), stream3), stream4) .forEach(element -> System.out.println(element)); }}
Geeks
GeeksQuiz
GeeksforGeeks
GFG
Example 3 :
// Implementation of Stream.concat()// method in Java 8 with DoubleStreamimport java.util.*;import java.util.stream.Stream;import java.util.stream.DoubleStream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams DoubleStream Stream1 = DoubleStream.of(1520, 1620); DoubleStream Stream2 = DoubleStream.of(1720, 1820); // concatenating both the Streams and // displaying the result DoubleStream.concat(Stream1, Stream2) .forEach(element -> System.out.println(element)); }}
1520.0
1620.0
1720.0
1820.0
Example 4 :
// Implementation of Stream.concat()// method in Java 8 and removing// the duplicatesimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams Stream<String> stream1 = Stream.of("Geeks", "for", "GeeksforGeeks"); Stream<String> stream2 = Stream.of("GeeksQuiz", "GeeksforGeeks", "for"); // concatenating both the Streams // with Stream.concat() function // and displaying the result after // removing the duplicates Stream.concat(stream1, stream2).distinct().forEach(element -> System.out.println(element)); }}
Geeks
for
GeeksforGeeks
GeeksQuiz
Example 5 :
// Implementation of Stream.concat()// method in Java 8 with LongStreamimport java.util.*;import java.util.stream.Stream;import java.util.stream.LongStream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams LongStream Stream1 = LongStream.of(1520, 1620); LongStream Stream2 = LongStream.of(1720, 1820); // concatenating both the Streams and // displaying the result LongStream.concat(Stream1, Stream2) .forEach(element -> System.out.println(element)); }}
1520
1620
1720
1820
Java - util package
Java-Functions
java-stream
Java-Stream interface
Java
Java
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
HashMap in Java with Examples
Initialize an ArrayList in Java
Object Oriented Programming (OOPs) Concept in Java
Interfaces in Java
ArrayList in Java
How to iterate any Map in Java
Multidimensional Arrays in Java
Singleton Class in Java
Stack Class in Java
Set in Java | [
{
"code": null,
"e": 24212,
"s": 24184,
"text": "\n06 Dec, 2018"
},
{
"code": null,
"e": 24510,
"s": 24212,
"text": "Stream.concat() method creates a concatenated stream in which the elements are all the elements of the first stream followed by all the elements of the second stream. The resulting stream is ordered if both of the input streams are ordered, and parallel if either of the input streams is parallel."
},
{
"code": null,
"e": 24519,
"s": 24510,
"text": "Syntax :"
},
{
"code": null,
"e": 24819,
"s": 24519,
"text": "static <T> Stream<T> concat(Stream<? extends T> stream1, \n Stream<? extends T> stream2)\n\nWhere, T is the type of stream elements,\nstream1 represents the first stream,\nstream2 represents the second stream and\nthe function returns the concatenation of\nthe two input streams\n"
},
{
"code": null,
"e": 25091,
"s": 24819,
"text": "The calls to Stream.concat(stream1, stream2) can be think of as forming a binary tree. The concatenation of all the input streams is at the root. The individual input streams are at the leaves. Below given are some examples of trees upto four input streams a, b, c and d."
},
{
"code": null,
"e": 25367,
"s": 25091,
"text": "For two streams a and b, the tree looks like :For three streams a, b and c, the tree looks like :For four streams a, b, c and d, the tree looks like :Each additional input stream adds one layer of depth to the tree and one layer of indirection to reach all the other streams."
},
{
"code": null,
"e": 25496,
"s": 25367,
"text": "Note : The elements returned by Stream.concat() method is ordered. For example, the following two lines returns the same result:"
},
{
"code": null,
"e": 25611,
"s": 25496,
"text": "Stream.concat(Stream.concat(stream1, stream2), stream3);\nStream.concat(stream1, Stream.concat(stream2, stream3));\n"
},
{
"code": null,
"e": 25663,
"s": 25611,
"text": "But the result for the following two are different."
},
{
"code": null,
"e": 25779,
"s": 25663,
"text": "Stream.concat(Stream.concat(stream1, stream2), stream3); \nStream.concat(Stream.concat(stream2, stream1), stream3);\n"
},
{
"code": null,
"e": 25880,
"s": 25779,
"text": "Below are some examples to understand the implementation of the function in a better way.Example 1 :"
},
{
"code": "// Implementation of Stream.concat()// method in Java 8 with 2 Streamsimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams Stream<String> stream1 = Stream.of(\"Geeks\", \"for\"); Stream<String> stream2 = Stream.of(\"GeeksQuiz\", \"GeeksforGeeks\"); // concatenating both the Streams // with Stream.concat() function // and displaying the result Stream.concat(stream1, stream2) .forEach(element -> System.out.println(element)); }}",
"e": 26503,
"s": 25880,
"text": null
},
{
"code": null,
"e": 26538,
"s": 26503,
"text": "Geeks\nfor\nGeeksQuiz\nGeeksforGeeks\n"
},
{
"code": null,
"e": 26550,
"s": 26538,
"text": "Example 2 :"
},
{
"code": "// Implementation of Stream.concat()// method in Java 8 with more than// two Streamsimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating more than two Streams Stream<String> stream1 = Stream.of(\"Geeks\"); Stream<String> stream2 = Stream.of(\"GeeksQuiz\"); Stream<String> stream3 = Stream.of(\"GeeksforGeeks\"); Stream<String> stream4 = Stream.of(\"GFG\"); // concatenating all the Streams // with Stream.concat() function // and displaying the result Stream.concat(Stream.concat(Stream.concat(stream1, stream2), stream3), stream4) .forEach(element -> System.out.println(element)); }}",
"e": 27360,
"s": 26550,
"text": null
},
{
"code": null,
"e": 27395,
"s": 27360,
"text": "Geeks\nGeeksQuiz\nGeeksforGeeks\nGFG\n"
},
{
"code": null,
"e": 27407,
"s": 27395,
"text": "Example 3 :"
},
{
"code": "// Implementation of Stream.concat()// method in Java 8 with DoubleStreamimport java.util.*;import java.util.stream.Stream;import java.util.stream.DoubleStream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams DoubleStream Stream1 = DoubleStream.of(1520, 1620); DoubleStream Stream2 = DoubleStream.of(1720, 1820); // concatenating both the Streams and // displaying the result DoubleStream.concat(Stream1, Stream2) .forEach(element -> System.out.println(element)); }}",
"e": 27990,
"s": 27407,
"text": null
},
{
"code": null,
"e": 28019,
"s": 27990,
"text": "1520.0\n1620.0\n1720.0\n1820.0\n"
},
{
"code": null,
"e": 28031,
"s": 28019,
"text": "Example 4 :"
},
{
"code": "// Implementation of Stream.concat()// method in Java 8 and removing// the duplicatesimport java.util.*;import java.util.stream.IntStream;import java.util.stream.Stream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams Stream<String> stream1 = Stream.of(\"Geeks\", \"for\", \"GeeksforGeeks\"); Stream<String> stream2 = Stream.of(\"GeeksQuiz\", \"GeeksforGeeks\", \"for\"); // concatenating both the Streams // with Stream.concat() function // and displaying the result after // removing the duplicates Stream.concat(stream1, stream2).distinct().forEach(element -> System.out.println(element)); }}",
"e": 28734,
"s": 28031,
"text": null
},
{
"code": null,
"e": 28769,
"s": 28734,
"text": "Geeks\nfor\nGeeksforGeeks\nGeeksQuiz\n"
},
{
"code": null,
"e": 28781,
"s": 28769,
"text": "Example 5 :"
},
{
"code": "// Implementation of Stream.concat()// method in Java 8 with LongStreamimport java.util.*;import java.util.stream.Stream;import java.util.stream.LongStream; class GFG { // Driver code public static void main(String[] args) { // Creating two Streams LongStream Stream1 = LongStream.of(1520, 1620); LongStream Stream2 = LongStream.of(1720, 1820); // concatenating both the Streams and // displaying the result LongStream.concat(Stream1, Stream2) .forEach(element -> System.out.println(element)); }}",
"e": 29350,
"s": 28781,
"text": null
},
{
"code": null,
"e": 29371,
"s": 29350,
"text": "1520\n1620\n1720\n1820\n"
},
{
"code": null,
"e": 29391,
"s": 29371,
"text": "Java - util package"
},
{
"code": null,
"e": 29406,
"s": 29391,
"text": "Java-Functions"
},
{
"code": null,
"e": 29418,
"s": 29406,
"text": "java-stream"
},
{
"code": null,
"e": 29440,
"s": 29418,
"text": "Java-Stream interface"
},
{
"code": null,
"e": 29445,
"s": 29440,
"text": "Java"
},
{
"code": null,
"e": 29450,
"s": 29445,
"text": "Java"
},
{
"code": null,
"e": 29548,
"s": 29450,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29557,
"s": 29548,
"text": "Comments"
},
{
"code": null,
"e": 29570,
"s": 29557,
"text": "Old Comments"
},
{
"code": null,
"e": 29600,
"s": 29570,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 29632,
"s": 29600,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 29683,
"s": 29632,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 29702,
"s": 29683,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 29720,
"s": 29702,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 29751,
"s": 29720,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 29783,
"s": 29751,
"text": "Multidimensional Arrays in Java"
},
{
"code": null,
"e": 29807,
"s": 29783,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 29827,
"s": 29807,
"text": "Stack Class in Java"
}
] |
How to convert Java Array/Collection to JSON array? | Google provides a library named org.json.JSONArray and, following is the maven dependency to add library to your project.
<dependency>
<groupId>com.googlecode.json-simple</groupId>
<artifactId>json-simple</artifactId>
<version>1.1</version>
</dependency>
The JSONArray class of the org.json package provides put() method. Using this method, you can populate the JSONArray object with the contents of the elements.
import org.json.JSONArray;
public class ArrayToJson {
public static void main(String args[]) {
String [] myArray = {"JavaFX", "HBase", "JOGL", "WebGL"};
JSONArray jsArray = new JSONArray();
for (int i = 0; i < myArray.length; i++) {
jsArray.put(myArray[i]);
}
System.out.println(jsArray);
}
}
["JavaFX","HBase","JOGL","WebGL"]
In the same way you can pass a collection object to the constructor of the JSONArray class.
import java.util.ArrayList;
import org.json.JSONArray;
public class ArrayToJson {
public static void main(String args[]) {
ArrayList <String> arrayList = new ArrayList<String>();
arrayList.add("JavaFX");
arrayList.add("HBase");
arrayList.add("JOGL");
arrayList.add("WebGL");
JSONArray jsArray2 = new JSONArray(arrayList);
System.out.println(jsArray2);
}
}
["JavaFX","HBase","JOGL","WebGL"] | [
{
"code": null,
"e": 1184,
"s": 1062,
"text": "Google provides a library named org.json.JSONArray and, following is the maven dependency to add library to your project."
},
{
"code": null,
"e": 1326,
"s": 1184,
"text": "<dependency>\n <groupId>com.googlecode.json-simple</groupId>\n <artifactId>json-simple</artifactId>\n <version>1.1</version>\n</dependency>"
},
{
"code": null,
"e": 1485,
"s": 1326,
"text": "The JSONArray class of the org.json package provides put() method. Using this method, you can populate the JSONArray object with the contents of the elements."
},
{
"code": null,
"e": 1823,
"s": 1485,
"text": "import org.json.JSONArray;\npublic class ArrayToJson {\n public static void main(String args[]) {\n String [] myArray = {\"JavaFX\", \"HBase\", \"JOGL\", \"WebGL\"};\n JSONArray jsArray = new JSONArray();\n for (int i = 0; i < myArray.length; i++) {\n jsArray.put(myArray[i]);\n }\n System.out.println(jsArray);\n }\n}"
},
{
"code": null,
"e": 1858,
"s": 1823,
"text": "[\"JavaFX\",\"HBase\",\"JOGL\",\"WebGL\"]\n"
},
{
"code": null,
"e": 1950,
"s": 1858,
"text": "In the same way you can pass a collection object to the constructor of the JSONArray class."
},
{
"code": null,
"e": 2354,
"s": 1950,
"text": "import java.util.ArrayList;\nimport org.json.JSONArray;\npublic class ArrayToJson {\n public static void main(String args[]) {\n ArrayList <String> arrayList = new ArrayList<String>();\n arrayList.add(\"JavaFX\");\n arrayList.add(\"HBase\");\n arrayList.add(\"JOGL\");\n arrayList.add(\"WebGL\");\n JSONArray jsArray2 = new JSONArray(arrayList);\n System.out.println(jsArray2);\n }\n}"
},
{
"code": null,
"e": 2389,
"s": 2354,
"text": "[\"JavaFX\",\"HBase\",\"JOGL\",\"WebGL\"]\n"
}
] |
What does the KEY keyword mean in MySQL? | Key is synonymous to an index. If you want to create an index for a column, then use ‘Key’.
As stated in the official docs:
KEY is normally a synonym for INDEX. The key attribute PRIMARY KEY can also be specified as just KEY when given in a column definition. This was implemented for compatibility with other database systems.
The key can be used with Primary Key:
Let us first create a table. Here is the query to set primary key for a column “id”.
mysql> create table KeyDemo
-> (
-> id int,
-> primary key(id)
-> );
Query OK, 0 rows affected (0.55 sec)
Inserting two records.
mysql> insert into KeyDemo values(1);
Query OK, 1 row affected (0.12 sec)
mysql> insert into KeyDemo values(2);
Query OK, 1 row affected (0.14 sec)
To display all the records.
mysql> select *from KeyDemo;
Here is the output.
+----+
| id |
+----+
| 1 |
| 2 |
+----+
2 rows in set (0.00 sec) | [
{
"code": null,
"e": 1154,
"s": 1062,
"text": "Key is synonymous to an index. If you want to create an index for a column, then use ‘Key’."
},
{
"code": null,
"e": 1186,
"s": 1154,
"text": "As stated in the official docs:"
},
{
"code": null,
"e": 1390,
"s": 1186,
"text": "KEY is normally a synonym for INDEX. The key attribute PRIMARY KEY can also be specified as just KEY when given in a column definition. This was implemented for compatibility with other database systems."
},
{
"code": null,
"e": 1429,
"s": 1390,
"text": "The key can be used with Primary Key:\n"
},
{
"code": null,
"e": 1514,
"s": 1429,
"text": "Let us first create a table. Here is the query to set primary key for a column “id”."
},
{
"code": null,
"e": 1636,
"s": 1514,
"text": "mysql> create table KeyDemo\n -> (\n -> id int,\n -> primary key(id)\n -> );\nQuery OK, 0 rows affected (0.55 sec)"
},
{
"code": null,
"e": 1659,
"s": 1636,
"text": "Inserting two records."
},
{
"code": null,
"e": 1808,
"s": 1659,
"text": "mysql> insert into KeyDemo values(1);\nQuery OK, 1 row affected (0.12 sec)\n\nmysql> insert into KeyDemo values(2);\nQuery OK, 1 row affected (0.14 sec)"
},
{
"code": null,
"e": 1836,
"s": 1808,
"text": "To display all the records."
},
{
"code": null,
"e": 1865,
"s": 1836,
"text": "mysql> select *from KeyDemo;"
},
{
"code": null,
"e": 1885,
"s": 1865,
"text": "Here is the output."
},
{
"code": null,
"e": 1953,
"s": 1885,
"text": "+----+\n| id |\n+----+\n| 1 |\n| 2 |\n+----+\n2 rows in set (0.00 sec)\n"
}
] |
Material Design Lite - Buttons | MDL provides a range of CSS classes to apply various predefined visual and behavioral
enhancements to the buttons. The following table lists down the available classes and their
effects.
mdl-button
Sets button as an MDL component, required.
mdl-js-button
Adds basic MDL behavior to button, required.
(none)
Sets flat display effect to button, default.
mdl-button--raised
Sets raised display effect; this is mutually exclusive with fab, mini-fab, and icon.
mdl-button--fab
Sets fab (circular) display effect; this is mutually exclusive with raised, mini-fab, and icon.
mdl-button--mini-fab
Sets mini-fab (small fab circular) display effect; this is mutually exclusive with raised, fab, and icon.
mdl-button--icon
Sets icon (small plain circular) display effect; this is mutually exclusive with raised, fab, and mini-fab.
mdl-button--colored
Sets colored display effect where the colors are defined in material.min.css.
mdl-button--primary
Sets primary color display effect where the colors are defined in material.min.css.
mdl-button--accent
Sets accent color display effect where the colors are defined in material.min.css.
mdl-js-ripple-effect
Sets ripple click effect, can be used in combination with any other class.
The following example will help you understand the use of the mdl-button classes to show the different types of buttons.
<html>
<head>
<script
src = "https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js">
</script>
<link rel = "stylesheet"
href = "https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css">
<link rel = "stylesheet"
href = "https://fonts.googleapis.com/icon?family=Material+Icons">
</head>
<body>
<table border = "0">
<tbody>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--fab mdl-button--colored">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--fab mdl-js-ripple-effect mdl-button--colored">
<i class = "material-icons">add</i></button></td>
<td> </td>
</tr>
<tr>
<td>Colored fab (circular) display effect</td>
<td>Colored fab (circular) with ripple display effect</td>
<td> </td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--fab">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--fab mdl-js-ripple-effect">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--fab" disabled>
<i class = "material-icons">add</i></button></td>
</tr>
<tr>
<td>Plain fab (circular) display effect</td>
<td>Plain fab (circular) with ripple display effect</td>
<td>Plain fab (circular) and disabled</td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--raised">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--raised" disabled>
<i class = "material-icons">add</i></button></td>
</tr>
<tr>
<td>Raised button</td>
<td>Raised button with ripple display effect</td>
<td>Raised button and disabled</td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--raised mdl-button--colored">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--raised mdl-button--accent">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent">
<i class = "material-icons">add</i></button></td>
</tr>
<tr>
<td>Colored Raised button</td>
<td>Accent-colored Raised button</td>
<td>Accent-colored raised button with ripple effect</td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-js-ripple-effect">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button" disabled>
<i class = "material-icons">add</i></button></td>
</tr>
<tr>
<td>Flat button</td>
<td>Flat button with ripple effect</td>
<td>Disabled flat button</td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--primary">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--accent">
<i class = "material-icons">add</i></button></td>
<td> </td>
</tr>
<tr>
<td>Primary Flat button</td>
<td>Accent-colored Flat button</td>
<td> </td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--icon">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--icon mdl-button--colored">
<i class = "material-icons">add</i></button></td>
<td> </td>
</tr>
<tr>
<td>Icon button</td>
<td>Colored Icon button</td>
<td> </td>
</tr>
<tr>
<td><button class = "mdl-button mdl-js-button mdl-button--fab mdl-button--mini-fab">
<i class = "material-icons">add</i></button></td>
<td><button class = "mdl-button mdl-js-button mdl-button--fab mdl-button--mini-fab mdl-button--colored">
<i class = "material-icons">add</i></button></td>
<td> </td>
</tr>
<tr>
<td>Mini Colored fab (circular) display effect</td>
<td>Mini Colored fab (circular) with ripple display effect</td>
<td> </td>
</tr>
</tbody>
</table>
</body>
</html>
Verify the result.
Print
Add Notes
Bookmark this page | [
{
"code": null,
"e": 2073,
"s": 1886,
"text": "MDL provides a range of CSS classes to apply various predefined visual and behavioral\nenhancements to the buttons. The following table lists down the available classes and their\neffects."
},
{
"code": null,
"e": 2084,
"s": 2073,
"text": "mdl-button"
},
{
"code": null,
"e": 2127,
"s": 2084,
"text": "Sets button as an MDL component, required."
},
{
"code": null,
"e": 2141,
"s": 2127,
"text": "mdl-js-button"
},
{
"code": null,
"e": 2186,
"s": 2141,
"text": "Adds basic MDL behavior to button, required."
},
{
"code": null,
"e": 2193,
"s": 2186,
"text": "(none)"
},
{
"code": null,
"e": 2238,
"s": 2193,
"text": "Sets flat display effect to button, default."
},
{
"code": null,
"e": 2257,
"s": 2238,
"text": "mdl-button--raised"
},
{
"code": null,
"e": 2342,
"s": 2257,
"text": "Sets raised display effect; this is mutually exclusive with fab, mini-fab, and icon."
},
{
"code": null,
"e": 2358,
"s": 2342,
"text": "mdl-button--fab"
},
{
"code": null,
"e": 2454,
"s": 2358,
"text": "Sets fab (circular) display effect; this is mutually exclusive with raised, mini-fab, and icon."
},
{
"code": null,
"e": 2475,
"s": 2454,
"text": "mdl-button--mini-fab"
},
{
"code": null,
"e": 2581,
"s": 2475,
"text": "Sets mini-fab (small fab circular) display effect; this is mutually exclusive with raised, fab, and icon."
},
{
"code": null,
"e": 2598,
"s": 2581,
"text": "mdl-button--icon"
},
{
"code": null,
"e": 2706,
"s": 2598,
"text": "Sets icon (small plain circular) display effect; this is mutually exclusive with raised, fab, and mini-fab."
},
{
"code": null,
"e": 2726,
"s": 2706,
"text": "mdl-button--colored"
},
{
"code": null,
"e": 2804,
"s": 2726,
"text": "Sets colored display effect where the colors are defined in material.min.css."
},
{
"code": null,
"e": 2824,
"s": 2804,
"text": "mdl-button--primary"
},
{
"code": null,
"e": 2908,
"s": 2824,
"text": "Sets primary color display effect where the colors are defined in material.min.css."
},
{
"code": null,
"e": 2927,
"s": 2908,
"text": "mdl-button--accent"
},
{
"code": null,
"e": 3010,
"s": 2927,
"text": "Sets accent color display effect where the colors are defined in material.min.css."
},
{
"code": null,
"e": 3031,
"s": 3010,
"text": "mdl-js-ripple-effect"
},
{
"code": null,
"e": 3106,
"s": 3031,
"text": "Sets ripple click effect, can be used in combination with any other class."
},
{
"code": null,
"e": 3227,
"s": 3106,
"text": "The following example will help you understand the use of the mdl-button classes to show the different types of buttons."
},
{
"code": null,
"e": 9042,
"s": 3227,
"text": "<html>\n <head>\n <script \n src = \"https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js\">\n </script>\n <link rel = \"stylesheet\" \n href = \"https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css\">\n <link rel = \"stylesheet\" \n href = \"https://fonts.googleapis.com/icon?family=Material+Icons\">\t \n </head>\n \n <body>\n <table border = \"0\">\n <tbody>\n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab mdl-button--colored\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab mdl-js-ripple-effect mdl-button--colored\">\n <i class = \"material-icons\">add</i></button></td>\n <td> </td>\n </tr>\n \n <tr>\n <td>Colored fab (circular) display effect</td>\n <td>Colored fab (circular) with ripple display effect</td>\n <td> </td>\n </tr>\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab mdl-js-ripple-effect\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab\" disabled>\n <i class = \"material-icons\">add</i></button></td>\n </tr>\n \n <tr>\n <td>Plain fab (circular) display effect</td>\n <td>Plain fab (circular) with ripple display effect</td>\n <td>Plain fab (circular) and disabled</td>\n </tr>\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised\" disabled>\n <i class = \"material-icons\">add</i></button></td>\n </tr>\n \n <tr>\n <td>Raised button</td>\n <td>Raised button with ripple display effect</td>\n <td>Raised button and disabled</td>\n </tr>\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised mdl-button--colored\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised mdl-button--accent\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--raised mdl-js-ripple-effect mdl-button--accent\">\n <i class = \"material-icons\">add</i></button></td>\n </tr>\n \n <tr>\n <td>Colored Raised button</td>\n <td>Accent-colored Raised button</td>\n <td>Accent-colored raised button with ripple effect</td>\n </tr>\t\t\t\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-js-ripple-effect\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button\" disabled>\n <i class = \"material-icons\">add</i></button></td>\n </tr>\n \n <tr>\n <td>Flat button</td>\n <td>Flat button with ripple effect</td>\n <td>Disabled flat button</td>\n </tr>\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--primary\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--accent\">\n <i class = \"material-icons\">add</i></button></td>\n <td> </td>\n </tr>\n \n <tr>\n <td>Primary Flat button</td>\n <td>Accent-colored Flat button</td>\n <td> </td>\n </tr>\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--icon\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--icon mdl-button--colored\">\n <i class = \"material-icons\">add</i></button></td>\n <td> </td>\n </tr>\n \n <tr>\n <td>Icon button</td>\n <td>Colored Icon button</td>\n <td> </td>\n </tr>\t\n \n <tr>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab mdl-button--mini-fab\">\n <i class = \"material-icons\">add</i></button></td>\n <td><button class = \"mdl-button mdl-js-button mdl-button--fab mdl-button--mini-fab mdl-button--colored\">\n <i class = \"material-icons\">add</i></button></td>\n <td> </td>\n </tr>\n \n <tr>\n <td>Mini Colored fab (circular) display effect</td>\n <td>Mini Colored fab (circular) with ripple display effect</td>\n <td> </td>\n </tr>\t\t\t\n </tbody>\n </table>\n \n </body>\n</html>"
},
{
"code": null,
"e": 9061,
"s": 9042,
"text": "Verify the result."
},
{
"code": null,
"e": 9068,
"s": 9061,
"text": " Print"
},
{
"code": null,
"e": 9079,
"s": 9068,
"text": " Add Notes"
}
] |
Count occurrences of strings formed using words in another string - GeeksforGeeks | 24 Jan, 2022
Given a string A and a vector of strings B, the task is to count the number of strings in vector B that only contains the words from A.
Examples:
Input: A=”blue green red yellow”B[]={“blue red”, “green pink”, “yellow green”}Output: 2
Input: A=”apple banana pear”B[]={“apple”, “banana apple”, “pear banana”}Output: 3
Approach: Follow the below steps to solve this problem:
Extract all the words of string A and store them in a set, say st.Now traverse on each string in vector B, and get all the words in that string.Now check that if all words are present in st, then increment ans by 1.Return ans as the solution to the problem.
Extract all the words of string A and store them in a set, say st.
Now traverse on each string in vector B, and get all the words in that string.
Now check that if all words are present in st, then increment ans by 1.
Return ans as the solution to the problem.
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ code for the above approach #include <bits/stdc++.h>using namespace std; // Function to extract all words from a stringvector<string> getWords(string A){ vector<string> words; string t; for (int i = 0; i < A.size(); i++) { // If the character is a space if (A[i] == ' ') { if (t.size() > 0) { words.push_back(t); } t = ""; } // Else else { t += A[i]; } } // Last word if (t.size() > 0) { words.push_back(t); } return words;} // Function to count the number of strings in B// that only contains the words from Aint countStrings(string A, vector<string>& B){ unordered_set<string> st; vector<string> words; words = getWords(A); for (auto x : words) { st.insert(x); } // Variable to store the final answer int ans = 0; for (auto x : B) { words = getWords(x); bool flag = 0; for (auto y : words) { if (st.find(y) == st.end()) { flag = 1; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans;} // Driver Codeint main(){ string A = "blue green red yellow"; vector<string> B = { "blue red", "green pink", "yellow green" }; cout << countStrings(A, B);}
// Java code for the above approachimport java.util.*; class GFG{ // Function to extract all words from a String static Vector<String> getWords(String A) { Vector<String> words = new Vector<String>(); String t=""; for (int i = 0; i < A.length(); i++) { // If the character is a space if (A.charAt(i) == ' ') { if (t.length() > 0) { words.add(t); } t = ""; } // Else else { t += A.charAt(i); } } // Last word if (t.length() > 0) { words.add(t); } return words; } // Function to count the number of Strings in B // that only contains the words from A static int countStrings(String A, String[] B) { HashSet<String> st = new HashSet<>(); Vector<String> words = new Vector<String>(); words = getWords(A); for (String x : words) { st.add(x); } // Variable to store the final answer int ans = 0; for (String x : B) { words = getWords(x); boolean flag = false; for (String y : words) { if (!st.contains(y)) { flag = true; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans; } // Driver Code public static void main(String[] args) { String A = "blue green red yellow"; String []B = { "blue red", "green pink", "yellow green" }; System.out.print(countStrings(A, B)); }} // This code is contributed by 29AjayKumar
# Python 3 code for the above approach # Function to extract all words from a stringdef getWords(A): words = [] t = "" for i in range(len(A)): # If the character is a space if (A[i] == ' '): if (len(t) > 0): words.append(t) t = "" # Else else: t += A[i] # Last word if (len(t) > 0): words.append(t) return words # Function to count the number of strings in B# that only contains the words from Adef countStrings(A, B): st = set([]) words = [] words = getWords(A) for x in words: st.add(x) # Variable to store the final answer ans = 0 for x in B: words = getWords(x) flag = 0 for y in words: if (y not in st): flag = 1 break # If all the words are in set st if (not flag): ans += 1 return ans # Driver Codeif __name__ == "__main__": A = "blue green red yellow" B = ["blue red", "green pink", "yellow green"] print(countStrings(A, B)) # This code is contributed by ukasp.
// C# code for the above approachusing System;using System.Collections.Generic; public class GFG{ // Function to extract all words from a String static List<String> getWords(String A) { List<String> words = new List<String>(); String t=""; for (int i = 0; i < A.Length; i++) { // If the character is a space if (A[i] == ' ') { if (t.Length > 0) { words.Add(t); } t = ""; } // Else else { t += A[i]; } } // Last word if (t.Length > 0) { words.Add(t); } return words; } // Function to count the number of Strings in B // that only contains the words from A static int countStrings(String A, String[] B) { HashSet<String> st = new HashSet<String>(); List<String> words = new List<String>(); words = getWords(A); foreach (String x in words) { st.Add(x); } // Variable to store the readonly answer int ans = 0; foreach (String x in B) { words = getWords(x); bool flag = false; foreach (String y in words) { if (!st.Contains(y)) { flag = true; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans; } // Driver Code public static void Main(String[] args) { String A = "blue green red yellow"; String []B = { "blue red", "green pink", "yellow green" }; Console.Write(countStrings(A, B)); }} // This code is contributed by 29AjayKumar
<script> // JavaScript code for the above approach // Function to extract all words from a string function getWords(A) { let words = []; let t; for (let i = 0; i < A.length; i++) { // If the character is a space if (A[i] == ' ') { if (t.length > 0) { words.push(t); } t = ""; } // Else else { t += A[i]; } } // Last word if (t.length > 0) { words.push(t); } return words; } // Function to count the number of strings in B // that only contains the words from A function countStrings(A, B) { let st = new Set(); let words; words = getWords(A); for (let x of words) { st.add(x); } // Variable to store the final answer let ans = 0; for (let x of B) { words = getWords(x); let flag = 0; for (let y = 0; y < words.length; y++) { if (st.has(words[y])) { flag = 1; break; } } // If all the words are in set st if (flag == 0) { ans++; } } return ans + 1; } // Driver Code let A = "blue green red yellow"; let B = ["blue red", "green pink", "yellow green"]; document.write(countStrings(A, B)); // This code is contributed by Potta Lokesh </script>
2
Time Complexity: O(N*M), where N is the size of vector B and M is the maximum length of in B.Auxiliary Space: O(N)
lokeshpotta20
29AjayKumar
ukasp
Algo-Geek 2021
Algo Geek
Strings
Strings
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
Count of operation required to water all the plants
Lexicographically smallest string formed by concatenating any prefix and its mirrored form
Divide given number into two even parts
Check if an edge is a part of any Minimum Spanning Tree
Check if the given string is valid English word or not
Reverse a string in Java
Write a program to reverse an array or string
Longest Common Subsequence | DP-4
Write a program to print all permutations of a given string
C++ Data Types | [
{
"code": null,
"e": 26250,
"s": 26222,
"text": "\n24 Jan, 2022"
},
{
"code": null,
"e": 26386,
"s": 26250,
"text": "Given a string A and a vector of strings B, the task is to count the number of strings in vector B that only contains the words from A."
},
{
"code": null,
"e": 26396,
"s": 26386,
"text": "Examples:"
},
{
"code": null,
"e": 26484,
"s": 26396,
"text": "Input: A=”blue green red yellow”B[]={“blue red”, “green pink”, “yellow green”}Output: 2"
},
{
"code": null,
"e": 26566,
"s": 26484,
"text": "Input: A=”apple banana pear”B[]={“apple”, “banana apple”, “pear banana”}Output: 3"
},
{
"code": null,
"e": 26622,
"s": 26566,
"text": "Approach: Follow the below steps to solve this problem:"
},
{
"code": null,
"e": 26880,
"s": 26622,
"text": "Extract all the words of string A and store them in a set, say st.Now traverse on each string in vector B, and get all the words in that string.Now check that if all words are present in st, then increment ans by 1.Return ans as the solution to the problem."
},
{
"code": null,
"e": 26947,
"s": 26880,
"text": "Extract all the words of string A and store them in a set, say st."
},
{
"code": null,
"e": 27026,
"s": 26947,
"text": "Now traverse on each string in vector B, and get all the words in that string."
},
{
"code": null,
"e": 27098,
"s": 27026,
"text": "Now check that if all words are present in st, then increment ans by 1."
},
{
"code": null,
"e": 27141,
"s": 27098,
"text": "Return ans as the solution to the problem."
},
{
"code": null,
"e": 27192,
"s": 27141,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 27196,
"s": 27192,
"text": "C++"
},
{
"code": null,
"e": 27201,
"s": 27196,
"text": "Java"
},
{
"code": null,
"e": 27209,
"s": 27201,
"text": "Python3"
},
{
"code": null,
"e": 27212,
"s": 27209,
"text": "C#"
},
{
"code": null,
"e": 27223,
"s": 27212,
"text": "Javascript"
},
{
"code": "// C++ code for the above approach #include <bits/stdc++.h>using namespace std; // Function to extract all words from a stringvector<string> getWords(string A){ vector<string> words; string t; for (int i = 0; i < A.size(); i++) { // If the character is a space if (A[i] == ' ') { if (t.size() > 0) { words.push_back(t); } t = \"\"; } // Else else { t += A[i]; } } // Last word if (t.size() > 0) { words.push_back(t); } return words;} // Function to count the number of strings in B// that only contains the words from Aint countStrings(string A, vector<string>& B){ unordered_set<string> st; vector<string> words; words = getWords(A); for (auto x : words) { st.insert(x); } // Variable to store the final answer int ans = 0; for (auto x : B) { words = getWords(x); bool flag = 0; for (auto y : words) { if (st.find(y) == st.end()) { flag = 1; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans;} // Driver Codeint main(){ string A = \"blue green red yellow\"; vector<string> B = { \"blue red\", \"green pink\", \"yellow green\" }; cout << countStrings(A, B);}",
"e": 28603,
"s": 27223,
"text": null
},
{
"code": "// Java code for the above approachimport java.util.*; class GFG{ // Function to extract all words from a String static Vector<String> getWords(String A) { Vector<String> words = new Vector<String>(); String t=\"\"; for (int i = 0; i < A.length(); i++) { // If the character is a space if (A.charAt(i) == ' ') { if (t.length() > 0) { words.add(t); } t = \"\"; } // Else else { t += A.charAt(i); } } // Last word if (t.length() > 0) { words.add(t); } return words; } // Function to count the number of Strings in B // that only contains the words from A static int countStrings(String A, String[] B) { HashSet<String> st = new HashSet<>(); Vector<String> words = new Vector<String>(); words = getWords(A); for (String x : words) { st.add(x); } // Variable to store the final answer int ans = 0; for (String x : B) { words = getWords(x); boolean flag = false; for (String y : words) { if (!st.contains(y)) { flag = true; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans; } // Driver Code public static void main(String[] args) { String A = \"blue green red yellow\"; String []B = { \"blue red\", \"green pink\", \"yellow green\" }; System.out.print(countStrings(A, B)); }} // This code is contributed by 29AjayKumar",
"e": 30077,
"s": 28603,
"text": null
},
{
"code": "# Python 3 code for the above approach # Function to extract all words from a stringdef getWords(A): words = [] t = \"\" for i in range(len(A)): # If the character is a space if (A[i] == ' '): if (len(t) > 0): words.append(t) t = \"\" # Else else: t += A[i] # Last word if (len(t) > 0): words.append(t) return words # Function to count the number of strings in B# that only contains the words from Adef countStrings(A, B): st = set([]) words = [] words = getWords(A) for x in words: st.add(x) # Variable to store the final answer ans = 0 for x in B: words = getWords(x) flag = 0 for y in words: if (y not in st): flag = 1 break # If all the words are in set st if (not flag): ans += 1 return ans # Driver Codeif __name__ == \"__main__\": A = \"blue green red yellow\" B = [\"blue red\", \"green pink\", \"yellow green\"] print(countStrings(A, B)) # This code is contributed by ukasp.",
"e": 31192,
"s": 30077,
"text": null
},
{
"code": "// C# code for the above approachusing System;using System.Collections.Generic; public class GFG{ // Function to extract all words from a String static List<String> getWords(String A) { List<String> words = new List<String>(); String t=\"\"; for (int i = 0; i < A.Length; i++) { // If the character is a space if (A[i] == ' ') { if (t.Length > 0) { words.Add(t); } t = \"\"; } // Else else { t += A[i]; } } // Last word if (t.Length > 0) { words.Add(t); } return words; } // Function to count the number of Strings in B // that only contains the words from A static int countStrings(String A, String[] B) { HashSet<String> st = new HashSet<String>(); List<String> words = new List<String>(); words = getWords(A); foreach (String x in words) { st.Add(x); } // Variable to store the readonly answer int ans = 0; foreach (String x in B) { words = getWords(x); bool flag = false; foreach (String y in words) { if (!st.Contains(y)) { flag = true; break; } } // If all the words are in set st if (!flag) { ans++; } } return ans; } // Driver Code public static void Main(String[] args) { String A = \"blue green red yellow\"; String []B = { \"blue red\", \"green pink\", \"yellow green\" }; Console.Write(countStrings(A, B)); }} // This code is contributed by 29AjayKumar",
"e": 32686,
"s": 31192,
"text": null
},
{
"code": " <script> // JavaScript code for the above approach // Function to extract all words from a string function getWords(A) { let words = []; let t; for (let i = 0; i < A.length; i++) { // If the character is a space if (A[i] == ' ') { if (t.length > 0) { words.push(t); } t = \"\"; } // Else else { t += A[i]; } } // Last word if (t.length > 0) { words.push(t); } return words; } // Function to count the number of strings in B // that only contains the words from A function countStrings(A, B) { let st = new Set(); let words; words = getWords(A); for (let x of words) { st.add(x); } // Variable to store the final answer let ans = 0; for (let x of B) { words = getWords(x); let flag = 0; for (let y = 0; y < words.length; y++) { if (st.has(words[y])) { flag = 1; break; } } // If all the words are in set st if (flag == 0) { ans++; } } return ans + 1; } // Driver Code let A = \"blue green red yellow\"; let B = [\"blue red\", \"green pink\", \"yellow green\"]; document.write(countStrings(A, B)); // This code is contributed by Potta Lokesh </script>",
"e": 34343,
"s": 32686,
"text": null
},
{
"code": null,
"e": 34348,
"s": 34346,
"text": "2"
},
{
"code": null,
"e": 34465,
"s": 34350,
"text": "Time Complexity: O(N*M), where N is the size of vector B and M is the maximum length of in B.Auxiliary Space: O(N)"
},
{
"code": null,
"e": 34481,
"s": 34467,
"text": "lokeshpotta20"
},
{
"code": null,
"e": 34493,
"s": 34481,
"text": "29AjayKumar"
},
{
"code": null,
"e": 34499,
"s": 34493,
"text": "ukasp"
},
{
"code": null,
"e": 34514,
"s": 34499,
"text": "Algo-Geek 2021"
},
{
"code": null,
"e": 34524,
"s": 34514,
"text": "Algo Geek"
},
{
"code": null,
"e": 34532,
"s": 34524,
"text": "Strings"
},
{
"code": null,
"e": 34540,
"s": 34532,
"text": "Strings"
},
{
"code": null,
"e": 34638,
"s": 34540,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 34647,
"s": 34638,
"text": "Comments"
},
{
"code": null,
"e": 34660,
"s": 34647,
"text": "Old Comments"
},
{
"code": null,
"e": 34712,
"s": 34660,
"text": "Count of operation required to water all the plants"
},
{
"code": null,
"e": 34803,
"s": 34712,
"text": "Lexicographically smallest string formed by concatenating any prefix and its mirrored form"
},
{
"code": null,
"e": 34843,
"s": 34803,
"text": "Divide given number into two even parts"
},
{
"code": null,
"e": 34899,
"s": 34843,
"text": "Check if an edge is a part of any Minimum Spanning Tree"
},
{
"code": null,
"e": 34954,
"s": 34899,
"text": "Check if the given string is valid English word or not"
},
{
"code": null,
"e": 34979,
"s": 34954,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 35025,
"s": 34979,
"text": "Write a program to reverse an array or string"
},
{
"code": null,
"e": 35059,
"s": 35025,
"text": "Longest Common Subsequence | DP-4"
},
{
"code": null,
"e": 35119,
"s": 35059,
"text": "Write a program to print all permutations of a given string"
}
] |
Duplicate Elements | Practice | GeeksforGeeks | You are given an array A of size N. The array contains almost distinct elements with some duplicated. You have to print the elements in sorted order which appears more than once.
Input Format:
The first line of input contains T, denoting the number of testcases. T testcases follow. Each testcase contains two lines of input. The first line of input contains size of array N. The second line contains N integers separated by spaces.
Output Format:
For each test case, in a new line, print the required answer. If there are no duplicates print -1.
Your Task:
Your task is to complete the function SortedDuplicates(arr, n) which accepts array and its size as an argument.
Constraints:
1 <= T <= 100
1 <= N <= 100
1 <= Ai <= 106
Examples:
Input:
1
9
3 4 5 7 8 1 2 1 3
Output:
1 3
+1
mashhadihossain1 month ago
SIMPLE JAVA SOLUTION (0.8/2.4 SEC)
public static void SortedDuplicates(int arr[], int n) { HashMap<Integer,Integer>map =new HashMap<>(); for(int i=0;i<n;i++) { map.put(arr[i],map.getOrDefault(arr[i],0)+1); } HashSet<Integer> set=new HashSet<>(); for(int key : map.keySet()) { if(map.get(key)>1) { set.add(key); } } ArrayList<Integer> list=new ArrayList<>(set); Collections.sort(list); if(list.isEmpty()) { System.out.print(-1); } else { for(int x : list) { System.out.print(x+" "); } } }
0
ankitparashxr3 months ago
java
int count = 0; ArrayList<Integer> a = new ArrayList<>(); Arrays.sort(arr); for(int i =1;i<arr.length;i++) { if(arr[i]==arr[i-1]) { if(!a.contains(arr[i])) { count++; System.out.print(arr[i]+" "); a.add(arr[i]); } } } if(count==0) { System.out.print("-1"); }
0
mainakdeykol4 months ago
public static void SortedDuplicates(int arr[], int n) { //Your code here TreeSet<Integer> tree=new TreeSet<>(); for(int i=0;i<n;i++){ for(int j=i+1;j<n;j++){ if(arr[i]==arr[j]){ tree.add(arr[i]); } } } if(tree.isEmpty()){ System.out.print(-1); } else{ for(int it:tree) System.out.print(it+" "); } }
+2
badgujarsachin834 months ago
public static void SortedDuplicates(int arr[], int n)
{
//Your code here
HashSet<Integer> hs = new HashSet<>();
TreeSet<Integer> tr = new TreeSet<>();
for(int i=0; i<arr.length; i++){
int a=arr[i];
if(hs.contains(a)==false){
hs.add(a);
}
else{
tr.add(a);
}
}
if(tr.size()==0){
System.out.print("-1");
}
else{
for(int it: tr){
System.out.print(it+ " ");
}
}
}
-1
rr71951424 months ago
Correct AnswerExecution Time:0.6
//----CODE IN JAVA----
HashSet<Integer> hs = new HashSet<>(); TreeSet<Integer> tr = new TreeSet<>(); for(int i=0; i<arr.length; i++){ int a=arr[i]; if(hs.contains(a)==false){ hs.add(a); } else{ tr.add(a); } } if(tr.size()==0){ System.out.print("-1"); } else{ for(int it: tr){ System.out.print(it+ " "); } }
-1
Aaditya Burujwale1 year ago
Aaditya Burujwale
Java Easy Solution - TC - O(n)
-1
Amir Ansari1 year ago
Amir Ansari
Correct AnswerExecution Time:0.88
public static void SortedDuplicates(int arr[], int n) { //Your code here Arrays.sort(arr); Map<integer, integer=""> map = new TreeMap<integer,integer>();
for(int i = 0; i < n ; i++){ if(map.containsKey(arr[i])){ map.put(arr[i],map.get(arr[i])+1); } else{ map.put(arr[i],1); }
} String strSorted=""; boolean isSorted = false; Set<integer> set = new TreeSet<integer>();
for(int i = 0; i < n; i++){ if(map.get(arr[i]) > 1){ isSorted = true; set.add(arr[i]); }
}
if(!isSorted) System.out.print("-1"); else{ Iterator it = set.iterator(); while(it.hasNext()){ System.out.print(it.next() +" "); } }
}
-1
Sanjit Paul1 year ago
Sanjit Paul
Easy Java solutionhttps://ide.geeksforgeeks.o...
-1
KASHIF AHMAD2 years ago
KASHIF AHMAD
Dont use new line or println anywhereIn java usually we use println at the end of the function , Since its a functional problem dont use new Line anywhere.
here's the simple java solution:- https://ide.geeksforgeeks.o...
-1
Shobhit Yadav2 years ago
Shobhit Yadav
EASY JAVA SOLUTION
int t=0; TreeMap<integer,integer> tm=new TreeMap<>(); for(int i=0;i<n;i++) {="" if(tm.containskey(arr[i]))="" tm.put(arr[i],arr[i]="arr[i]+1);" else="" tm.put(arr[i],0);="" }="" for(map.entry="" it="" :="" tm.entryset())="" {="" int="" value="(int)it.getValue();" int="" key="(int)it.getKey();" if(value-key="">0) System.out.print(key+" "); else t++; } if(t==n) System.out.print("-1");
We strongly recommend solving this problem on your own before viewing its editorial. Do you still
want to view the editorial?
Login to access your submissions.
Problem
Contest
Reset the IDE using the second button on the top right corner.
Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values.
Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints.
You can access the hints to get an idea about what is expected of you as well as the final solution code.
You can view the solutions submitted by other users from the submission tab. | [
{
"code": null,
"e": 406,
"s": 226,
"text": "You are given an array A of size N. The array contains almost distinct elements with some duplicated. You have to print the elements in sorted order which appears more than once. "
},
{
"code": null,
"e": 661,
"s": 406,
"text": "Input Format:\nThe first line of input contains T, denoting the number of testcases. T testcases follow. Each testcase contains two lines of input. The first line of input contains size of array N. The second line contains N integers separated by spaces."
},
{
"code": null,
"e": 775,
"s": 661,
"text": "Output Format:\nFor each test case, in a new line, print the required answer. If there are no duplicates print -1."
},
{
"code": null,
"e": 899,
"s": 775,
"text": "Your Task:\nYour task is to complete the function SortedDuplicates(arr, n) which accepts array and its size as an argument. "
},
{
"code": null,
"e": 955,
"s": 899,
"text": "Constraints:\n1 <= T <= 100\n1 <= N <= 100\n1 <= Ai <= 106"
},
{
"code": null,
"e": 1006,
"s": 955,
"text": "Examples:\nInput:\n1\n9\n3 4 5 7 8 1 2 1 3\nOutput:\n1 3"
},
{
"code": null,
"e": 1009,
"s": 1006,
"text": "+1"
},
{
"code": null,
"e": 1036,
"s": 1009,
"text": "mashhadihossain1 month ago"
},
{
"code": null,
"e": 1071,
"s": 1036,
"text": "SIMPLE JAVA SOLUTION (0.8/2.4 SEC)"
},
{
"code": null,
"e": 1720,
"s": 1071,
"text": "public static void SortedDuplicates(int arr[], int n) { HashMap<Integer,Integer>map =new HashMap<>(); for(int i=0;i<n;i++) { map.put(arr[i],map.getOrDefault(arr[i],0)+1); } HashSet<Integer> set=new HashSet<>(); for(int key : map.keySet()) { if(map.get(key)>1) { set.add(key); } } ArrayList<Integer> list=new ArrayList<>(set); Collections.sort(list); if(list.isEmpty()) { System.out.print(-1); } else { for(int x : list) { System.out.print(x+\" \"); } } } "
},
{
"code": null,
"e": 1722,
"s": 1720,
"text": "0"
},
{
"code": null,
"e": 1748,
"s": 1722,
"text": "ankitparashxr3 months ago"
},
{
"code": null,
"e": 1753,
"s": 1748,
"text": "java"
},
{
"code": null,
"e": 2188,
"s": 1753,
"text": "int count = 0; ArrayList<Integer> a = new ArrayList<>(); Arrays.sort(arr); for(int i =1;i<arr.length;i++) { if(arr[i]==arr[i-1]) { if(!a.contains(arr[i])) { count++; System.out.print(arr[i]+\" \"); a.add(arr[i]); } } } if(count==0) { System.out.print(\"-1\"); }"
},
{
"code": null,
"e": 2190,
"s": 2188,
"text": "0"
},
{
"code": null,
"e": 2215,
"s": 2190,
"text": "mainakdeykol4 months ago"
},
{
"code": null,
"e": 2659,
"s": 2215,
"text": "public static void SortedDuplicates(int arr[], int n) { //Your code here TreeSet<Integer> tree=new TreeSet<>(); for(int i=0;i<n;i++){ for(int j=i+1;j<n;j++){ if(arr[i]==arr[j]){ tree.add(arr[i]); } } } if(tree.isEmpty()){ System.out.print(-1); } else{ for(int it:tree) System.out.print(it+\" \"); } }"
},
{
"code": null,
"e": 2662,
"s": 2659,
"text": "+2"
},
{
"code": null,
"e": 2691,
"s": 2662,
"text": "badgujarsachin834 months ago"
},
{
"code": null,
"e": 3233,
"s": 2691,
"text": " public static void SortedDuplicates(int arr[], int n)\n {\n //Your code here\n HashSet<Integer> hs = new HashSet<>();\n TreeSet<Integer> tr = new TreeSet<>();\n for(int i=0; i<arr.length; i++){\n int a=arr[i];\n if(hs.contains(a)==false){\n hs.add(a);\n }\n else{\n tr.add(a);\n }\n }\n \n if(tr.size()==0){\n System.out.print(\"-1\");\n }\n else{\n for(int it: tr){\n System.out.print(it+ \" \");\n }\n }\n }"
},
{
"code": null,
"e": 3236,
"s": 3233,
"text": "-1"
},
{
"code": null,
"e": 3258,
"s": 3236,
"text": "rr71951424 months ago"
},
{
"code": null,
"e": 3291,
"s": 3258,
"text": "Correct AnswerExecution Time:0.6"
},
{
"code": null,
"e": 3316,
"s": 3293,
"text": "//----CODE IN JAVA----"
},
{
"code": null,
"e": 3734,
"s": 3316,
"text": "HashSet<Integer> hs = new HashSet<>(); TreeSet<Integer> tr = new TreeSet<>(); for(int i=0; i<arr.length; i++){ int a=arr[i]; if(hs.contains(a)==false){ hs.add(a); } else{ tr.add(a); } } if(tr.size()==0){ System.out.print(\"-1\"); } else{ for(int it: tr){ System.out.print(it+ \" \"); } }"
},
{
"code": null,
"e": 3737,
"s": 3734,
"text": "-1"
},
{
"code": null,
"e": 3765,
"s": 3737,
"text": "Aaditya Burujwale1 year ago"
},
{
"code": null,
"e": 3783,
"s": 3765,
"text": "Aaditya Burujwale"
},
{
"code": null,
"e": 3814,
"s": 3783,
"text": "Java Easy Solution - TC - O(n)"
},
{
"code": null,
"e": 3817,
"s": 3814,
"text": "-1"
},
{
"code": null,
"e": 3839,
"s": 3817,
"text": "Amir Ansari1 year ago"
},
{
"code": null,
"e": 3851,
"s": 3839,
"text": "Amir Ansari"
},
{
"code": null,
"e": 3885,
"s": 3851,
"text": "Correct AnswerExecution Time:0.88"
},
{
"code": null,
"e": 4064,
"s": 3885,
"text": "public static void SortedDuplicates(int arr[], int n) { //Your code here Arrays.sort(arr); Map<integer, integer=\"\"> map = new TreeMap<integer,integer>();"
},
{
"code": null,
"e": 4282,
"s": 4064,
"text": " for(int i = 0; i < n ; i++){ if(map.containsKey(arr[i])){ map.put(arr[i],map.get(arr[i])+1); } else{ map.put(arr[i],1); }"
},
{
"code": null,
"e": 4420,
"s": 4282,
"text": " } String strSorted=\"\"; boolean isSorted = false; Set<integer> set = new TreeSet<integer>();"
},
{
"code": null,
"e": 4604,
"s": 4420,
"text": " for(int i = 0; i < n; i++){ if(map.get(arr[i]) > 1){ isSorted = true; set.add(arr[i]); }"
},
{
"code": null,
"e": 4619,
"s": 4604,
"text": " }"
},
{
"code": null,
"e": 4872,
"s": 4619,
"text": " if(!isSorted) System.out.print(\"-1\"); else{ Iterator it = set.iterator(); while(it.hasNext()){ System.out.print(it.next() +\" \"); } }"
},
{
"code": null,
"e": 4879,
"s": 4872,
"text": " }"
},
{
"code": null,
"e": 4882,
"s": 4879,
"text": "-1"
},
{
"code": null,
"e": 4904,
"s": 4882,
"text": "Sanjit Paul1 year ago"
},
{
"code": null,
"e": 4916,
"s": 4904,
"text": "Sanjit Paul"
},
{
"code": null,
"e": 4965,
"s": 4916,
"text": "Easy Java solutionhttps://ide.geeksforgeeks.o..."
},
{
"code": null,
"e": 4968,
"s": 4965,
"text": "-1"
},
{
"code": null,
"e": 4992,
"s": 4968,
"text": "KASHIF AHMAD2 years ago"
},
{
"code": null,
"e": 5005,
"s": 4992,
"text": "KASHIF AHMAD"
},
{
"code": null,
"e": 5161,
"s": 5005,
"text": "Dont use new line or println anywhereIn java usually we use println at the end of the function , Since its a functional problem dont use new Line anywhere."
},
{
"code": null,
"e": 5226,
"s": 5161,
"text": "here's the simple java solution:- https://ide.geeksforgeeks.o..."
},
{
"code": null,
"e": 5229,
"s": 5226,
"text": "-1"
},
{
"code": null,
"e": 5254,
"s": 5229,
"text": "Shobhit Yadav2 years ago"
},
{
"code": null,
"e": 5268,
"s": 5254,
"text": "Shobhit Yadav"
},
{
"code": null,
"e": 5287,
"s": 5268,
"text": "EASY JAVA SOLUTION"
},
{
"code": null,
"e": 5735,
"s": 5287,
"text": " int t=0; TreeMap<integer,integer> tm=new TreeMap<>(); for(int i=0;i<n;i++) {=\"\" if(tm.containskey(arr[i]))=\"\" tm.put(arr[i],arr[i]=\"arr[i]+1);\" else=\"\" tm.put(arr[i],0);=\"\" }=\"\" for(map.entry=\"\" it=\"\" :=\"\" tm.entryset())=\"\" {=\"\" int=\"\" value=\"(int)it.getValue();\" int=\"\" key=\"(int)it.getKey();\" if(value-key=\"\">0) System.out.print(key+\" \"); else t++; } if(t==n) System.out.print(\"-1\");"
},
{
"code": null,
"e": 5881,
"s": 5735,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 5917,
"s": 5881,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 5927,
"s": 5917,
"text": "\nProblem\n"
},
{
"code": null,
"e": 5937,
"s": 5927,
"text": "\nContest\n"
},
{
"code": null,
"e": 6000,
"s": 5937,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 6148,
"s": 6000,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 6356,
"s": 6148,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 6462,
"s": 6356,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
Guide to fine-tuning Text Generation models: GPT-2, GPT-Neo and T5 | by Mohit Mayank | Towards Data Science | The code used in this article can be found here — GPT and T5. To read more about text generation models, see this. For more such articles visit my website or have a look at my latest short book on Data science. You can also connect with me on LinkedIn.
Recent researches in NLP led to the release of multiple massive-sized pre-trained text generation models like GPT-{1,2,3}, GPT-{Neo, J} and T5. If the audiences (including you and me) were not impressed with their tunable parameter’s size going into billions, we were enthralled by the ease with which they can be used for a completely new unseen task, and without training for a single epoch! While this is okay for quick experiments, for any real production deployment, it is still recommended to further train the models for the specific task. This is called fine-tuning, and in this article, we will practically learn the ways to fine-tune some of the best (read state-of-the-art) language models currently available. We will also compare their performance by fine-tuning on Twitter Sentiment detection dataset. Let's get started!
Text generation is an interesting task in NLP, where the intention is to generate text when provided with some prompt as input. Usually, we apply some form of the Sequence-to-Sequence model for this task. They are called language models, as they can be used to predict the next word based on the previous sentences. The recent surge in interest in this field is due to two main reasons, (1) the availability of several high performance pre-trained models, and (2) it's very easy to transform a large variety of NLP based tasks into the text-in text-out type of problem. This is very intuitively shown by T5 authors, where the same model can be used to do language translation, text regression, summarization, etc.
In this article, we will be concerned about the following models,
GPT-2: It is the second iteration of the original series of language models released by OpenAI. In fact, this series of GPT models made the language model famous! GPT stands for “Generative Pre-trained Transformer”, and currently we have 3 versions of the model (v1, v2 and v3). Out of these only GPT-1 and GPT-2 are open-sourced, and hence we will pick the latest version for our experiment. On the technical side, the architecture of GPT-2 is made up of the decoder part of the Transformer architecture.
GPT-Neo: This model was released by EleutherAI to counter the GPT-3 model which was not open-sourced. The architecture is quite similar to GPT-3, but training was done on The Pile, an 825 GB sized text dataset.
T5: stands for “Text-to-Text Transfer Transformer” and was Google’s answer to the world for open source language models. T5 paper showcase that using the complete encoder-decoder architecture (of the transformer) is better than only using the decoder (as done by the GPT series), hence they stay true to the original transformer architecture.
A brief comparison of the different models is shown below. One point to note, each model further releases several versions based on the tunable parameter size. For this article, we will pick 117M sized GPT-2, 125M sized GPT-Neo and 220M sized T5.
To test the performance of different language models, we will compare the accuracy of the models after finetuning on a simple task — sentiment detection. Here, we will use the Twitter Sentiment dataset, that can be download from here. In total, it contains over 1.6M tweets and their sentiment could be either positive or negative. For computation efficiency, we will sample 10k tweets with a nearly equal distribution of the sentiment classes. Then, we will train the model on 95% of the data, using the remaining 5% for the test purposes. For a fair comparison, we will use the same test and train split for all of the three models. Finally, we will perform 3 trials of splitting and training each model — this is a way to replicate a 3-fold validation test. We will report the individual and aggregated (mean) f1 macro score, which can be used for model’s performance comparison.
Now the next obvious question should be, how can we transform the sentiment detection task as a text generation one? The answer is quite simple, all we have to do is create an intuitive prompt (template with data) that could reflect how a similar representation could occur on the web. Let’s understand it this way, we want to provide the tweet as the input and want the sentiment as output. So, in our prompt, we should pass a single tweet after Tweet: prefix and expect the model to predict the sentiment on the next line after Sentiment: prefix. This process of creating an effective prompt is called prompt engineering, and it has been shown that by just changing the prompt, language models performs better! For our use case, we can start with a very simple prompt format. We will have two different prompts, one for training and one for the test. Examples are shown below.
Training prompt (as we want the model to learn this “pattern” to solve the “task”)
Tweet: I am not feeling well.Sentiment: Negative
Test prompt (as now we hope the model has learned the “task” and hence could complete the “pattern”)
Tweet: I am feeling well.Sentiment:
So during the testing, we will extract the word predicted by the model after the prefix Sentiment: and consider that word as the predicted sentiment label. Now let’s dive into the implementation!
One point to note — GPT-2 and GPT-Neo share nearly the same architecture, so the majority of the fine-tuning code remains the same. Hence for brevity’s sake, I will only share the code for GPT-2, but I will point out changes required to make it work for the GPT-Neo model as well. Ok, let’s get started by handling the dataset, for which we will begin with creating a Pytorch Dataset class, which defines how we prepare the data for the training.
This includes 3 modules:
__init__: where we basically tokenize and store the data.
__len__ : where we return the length of the total dataset. This is required for step size calculation within each epoch.
__getitem__ : where we fetch one data and then return it.
Some addition points — (1) on line 8, we define the mapping used to transform original numeric sentiment label to textual labels, (2) on line 12, we transform the data into the training prompt we decided on, and (3) on line 14 we perform the tokenization (splitting the tweet into tokens + replace them with their unique ids).
Next, let’s connect the data with the Dataset class. The code breakup is as follows,
Line 4-8: We begin with loading the dataset. You can download it from here and modify the local path at line 4. Next, we just subset the relevant columns and rename them. On line 8 we sample 10k tweets for this experiment.
Line 10–13: We split the data into train and test, with 95% and 5% split, respectively. We use stratifyflag such that the split is even in sentiment class distribution.
Line 16: We pass the train data to the SentimentDataset class. Note, we could have done the same for test data, but I just returned the test data in its original form.
Now we will prepare for the training of the model. Code breakdown is as follows,
Line 10–13: We load the tokenizer, add some special tokens we will use to denote separate parts of tweets and finally load the model. Note, the model_name is defined on line 5. Also note, we add the special tokens so that the model learns the start and end of the prompt. This will be helpful later on during the testing phase, as we don’t want the model to keep on writing the next word, but it should know when to stop the process. This can be done by setting the eos_token and training the model to predict it after the label, as done here.
Line 16: Load and prepare the dataset using the functions we defined before.
Line 21–24: We set configurations for the training process. In short, we define where and when to save the model, how long to train and where to save the logs and also the training strategy with batch_size, warmup_steps and weight_decay.
Line 27–31: We start the training by connecting the model with the training dataset. We also define how to process the training data inside data_collator. The first two elements within the collator are input_ids — the tokenized prompt and attention_mask — a simple 1/0 vector which denote which part of the tokenized vector is prompt and which part is the padding. The last part is quite interesting, where we pass the input data as the label instead of just the sentiment labels. This is because we are training a language model, hence we want the model to learn the pattern of the prompt and not just sentiment class. In a sense, the model learns to predict the words of the input tweet + sentiment structured in the prompt, and in the process learn the sentiment detection task.
This will begin the training. It could take some time based on your computer’s specifications.
Finally, we define the test block, where we take the trained model and apply it to the held-out test data. The code breakdown is as follows,
Line 5: We turn on the evaluation mode on the model.
Line 8–15: For each test data, we first prepare the prompt but with one big difference — we don’t include the sentiment label, as this is what we want the model to predict. Also, remember eos_token— we are hoping the model will predict the sentiment label and then break the operation by printing eos_token. Finally, we tokenize the test prompt.
Line 17: We take the test prompt and predict the next set of words. There are a lot of parameters in this function that defines how the next word is predicted. For details about what each one of them does refer to this or to better understand the different strategies of next word prediction refer to this.
Line 20–30: We start with decoding the predicted text i.e. we re-transform the predicted tokens ids into text. Then we extract the predicted sentiment label and store all relevant information into lists.
Line 33–37: We first combine all extracted info into a pandas dataframe for better readability and then use f1_score function from sklearn package to compute the performance of the complete model.
On running the code for GPT-2 and performing this operation three times with different random_state in the dataset split code, we observed that the model is in fact able to predict perfectly as expected. It is able to predict the label and then break its execution using eos_token. The average f1 macro performance score is 81.7%! This is well comparative with what we would expect a dedicated sentiment detection model to perform, and this goes on to highlight how easy it is to do transfer learning using text generating models in NLP.
GPT-Neo compliant code
To make the GPT-2 code work for GPT-Neo, we have to do the following modifications,
import GPTNeoForCausalLM
set model_name as "EleutherAI/gpt-neo-2.7B" (choose from any of the available sized models)
use GPTNeoForCausalLM in place of GPT2LMHeadModelwhen loading the model.
And that's it! On running the modified code for GPT-Neo, and following the same training strategy, the average f1 macro performance score was 80.7%!
The architecture of T5 is different from GPT models, as it stays true to the original transformer’s architecture, while the GPT models only keep the decoder part. For training T5 we will use an excellent wrapper package called SimpleT5, which removes most of the boilerplate from the training phase. Now please remember, while the syntax of training will change, the overall flow and intuition remains the same. Let’s start with the data part.
Here, the majority of the code remains the same as what we did before for the GPT models. One major change is that we don’t need the Dataset class, as SimpleT5 works directly on pandas dataframe. Hence we load the data, do some initial pre-processing, split the data and return the pandas dataframe. (no need to tokenize, create Dataset class, isn't this great!?)
One more point to note is that we do not need to create prompt formats for this package. This way we can separate out the input tweet and sentiment label into different columns, here source_text and target_text, respectively.
Loading and training the model is also super easy and can be done with 3 lines of code (if you can ignore my pretty line separations).
Next, we test the fine-tuned T5 model on the test dataset. As you can see the inference part is also super easy, on line 11, we use the predict function and just pass the source_text to get the predicted sentiment label. We later compare this with the original_label to generate the performance score at line no 18.
On running the T5 code, and following the same training strategy as before, the average f1 macro performance score was 80.7%!
Consolidating all of the results into a single table, we get,
One point I wanted to discuss is that I haven’t played at all with the hyperparameters. Add to that the prompt engineering methodology, and I think just by playing around with these two, we can further improve the performance metric for all of the models. I will leave that as an execise for the readers (do let me know if you get better scores!)
While GPT-2 may have won this round, the result table does show the prowess of text generation models on whole. All of them performed very well on the sentiment detection task, and all it took was a few epochs of training. Even if this experiment was done for a single task, I hope this helps to show how easy it is to use the TG models for completely new tasks. In a way, if we can transform the NLP problem into that of text generation, rest assured the pre-trained model will not fail, well at least not drastically :) This makes them the perfect baseline if not the state-of-the-art for many tasks.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, and Sharan Narang. Exploring the limits of transfer learning with a unified text-to-text transformer. 2020. arXiv:1910.10683
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and others. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
GPT-Neo
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. 2017. arXiv:1706.03762. | [
{
"code": null,
"e": 425,
"s": 172,
"text": "The code used in this article can be found here — GPT and T5. To read more about text generation models, see this. For more such articles visit my website or have a look at my latest short book on Data science. You can also connect with me on LinkedIn."
},
{
"code": null,
"e": 1260,
"s": 425,
"text": "Recent researches in NLP led to the release of multiple massive-sized pre-trained text generation models like GPT-{1,2,3}, GPT-{Neo, J} and T5. If the audiences (including you and me) were not impressed with their tunable parameter’s size going into billions, we were enthralled by the ease with which they can be used for a completely new unseen task, and without training for a single epoch! While this is okay for quick experiments, for any real production deployment, it is still recommended to further train the models for the specific task. This is called fine-tuning, and in this article, we will practically learn the ways to fine-tune some of the best (read state-of-the-art) language models currently available. We will also compare their performance by fine-tuning on Twitter Sentiment detection dataset. Let's get started!"
},
{
"code": null,
"e": 1974,
"s": 1260,
"text": "Text generation is an interesting task in NLP, where the intention is to generate text when provided with some prompt as input. Usually, we apply some form of the Sequence-to-Sequence model for this task. They are called language models, as they can be used to predict the next word based on the previous sentences. The recent surge in interest in this field is due to two main reasons, (1) the availability of several high performance pre-trained models, and (2) it's very easy to transform a large variety of NLP based tasks into the text-in text-out type of problem. This is very intuitively shown by T5 authors, where the same model can be used to do language translation, text regression, summarization, etc."
},
{
"code": null,
"e": 2040,
"s": 1974,
"text": "In this article, we will be concerned about the following models,"
},
{
"code": null,
"e": 2546,
"s": 2040,
"text": "GPT-2: It is the second iteration of the original series of language models released by OpenAI. In fact, this series of GPT models made the language model famous! GPT stands for “Generative Pre-trained Transformer”, and currently we have 3 versions of the model (v1, v2 and v3). Out of these only GPT-1 and GPT-2 are open-sourced, and hence we will pick the latest version for our experiment. On the technical side, the architecture of GPT-2 is made up of the decoder part of the Transformer architecture."
},
{
"code": null,
"e": 2757,
"s": 2546,
"text": "GPT-Neo: This model was released by EleutherAI to counter the GPT-3 model which was not open-sourced. The architecture is quite similar to GPT-3, but training was done on The Pile, an 825 GB sized text dataset."
},
{
"code": null,
"e": 3100,
"s": 2757,
"text": "T5: stands for “Text-to-Text Transfer Transformer” and was Google’s answer to the world for open source language models. T5 paper showcase that using the complete encoder-decoder architecture (of the transformer) is better than only using the decoder (as done by the GPT series), hence they stay true to the original transformer architecture."
},
{
"code": null,
"e": 3347,
"s": 3100,
"text": "A brief comparison of the different models is shown below. One point to note, each model further releases several versions based on the tunable parameter size. For this article, we will pick 117M sized GPT-2, 125M sized GPT-Neo and 220M sized T5."
},
{
"code": null,
"e": 4230,
"s": 3347,
"text": "To test the performance of different language models, we will compare the accuracy of the models after finetuning on a simple task — sentiment detection. Here, we will use the Twitter Sentiment dataset, that can be download from here. In total, it contains over 1.6M tweets and their sentiment could be either positive or negative. For computation efficiency, we will sample 10k tweets with a nearly equal distribution of the sentiment classes. Then, we will train the model on 95% of the data, using the remaining 5% for the test purposes. For a fair comparison, we will use the same test and train split for all of the three models. Finally, we will perform 3 trials of splitting and training each model — this is a way to replicate a 3-fold validation test. We will report the individual and aggregated (mean) f1 macro score, which can be used for model’s performance comparison."
},
{
"code": null,
"e": 5109,
"s": 4230,
"text": "Now the next obvious question should be, how can we transform the sentiment detection task as a text generation one? The answer is quite simple, all we have to do is create an intuitive prompt (template with data) that could reflect how a similar representation could occur on the web. Let’s understand it this way, we want to provide the tweet as the input and want the sentiment as output. So, in our prompt, we should pass a single tweet after Tweet: prefix and expect the model to predict the sentiment on the next line after Sentiment: prefix. This process of creating an effective prompt is called prompt engineering, and it has been shown that by just changing the prompt, language models performs better! For our use case, we can start with a very simple prompt format. We will have two different prompts, one for training and one for the test. Examples are shown below."
},
{
"code": null,
"e": 5192,
"s": 5109,
"text": "Training prompt (as we want the model to learn this “pattern” to solve the “task”)"
},
{
"code": null,
"e": 5241,
"s": 5192,
"text": "Tweet: I am not feeling well.Sentiment: Negative"
},
{
"code": null,
"e": 5342,
"s": 5241,
"text": "Test prompt (as now we hope the model has learned the “task” and hence could complete the “pattern”)"
},
{
"code": null,
"e": 5379,
"s": 5342,
"text": "Tweet: I am feeling well.Sentiment: "
},
{
"code": null,
"e": 5575,
"s": 5379,
"text": "So during the testing, we will extract the word predicted by the model after the prefix Sentiment: and consider that word as the predicted sentiment label. Now let’s dive into the implementation!"
},
{
"code": null,
"e": 6022,
"s": 5575,
"text": "One point to note — GPT-2 and GPT-Neo share nearly the same architecture, so the majority of the fine-tuning code remains the same. Hence for brevity’s sake, I will only share the code for GPT-2, but I will point out changes required to make it work for the GPT-Neo model as well. Ok, let’s get started by handling the dataset, for which we will begin with creating a Pytorch Dataset class, which defines how we prepare the data for the training."
},
{
"code": null,
"e": 6047,
"s": 6022,
"text": "This includes 3 modules:"
},
{
"code": null,
"e": 6105,
"s": 6047,
"text": "__init__: where we basically tokenize and store the data."
},
{
"code": null,
"e": 6226,
"s": 6105,
"text": "__len__ : where we return the length of the total dataset. This is required for step size calculation within each epoch."
},
{
"code": null,
"e": 6284,
"s": 6226,
"text": "__getitem__ : where we fetch one data and then return it."
},
{
"code": null,
"e": 6611,
"s": 6284,
"text": "Some addition points — (1) on line 8, we define the mapping used to transform original numeric sentiment label to textual labels, (2) on line 12, we transform the data into the training prompt we decided on, and (3) on line 14 we perform the tokenization (splitting the tweet into tokens + replace them with their unique ids)."
},
{
"code": null,
"e": 6696,
"s": 6611,
"text": "Next, let’s connect the data with the Dataset class. The code breakup is as follows,"
},
{
"code": null,
"e": 6919,
"s": 6696,
"text": "Line 4-8: We begin with loading the dataset. You can download it from here and modify the local path at line 4. Next, we just subset the relevant columns and rename them. On line 8 we sample 10k tweets for this experiment."
},
{
"code": null,
"e": 7088,
"s": 6919,
"text": "Line 10–13: We split the data into train and test, with 95% and 5% split, respectively. We use stratifyflag such that the split is even in sentiment class distribution."
},
{
"code": null,
"e": 7256,
"s": 7088,
"text": "Line 16: We pass the train data to the SentimentDataset class. Note, we could have done the same for test data, but I just returned the test data in its original form."
},
{
"code": null,
"e": 7337,
"s": 7256,
"text": "Now we will prepare for the training of the model. Code breakdown is as follows,"
},
{
"code": null,
"e": 7881,
"s": 7337,
"text": "Line 10–13: We load the tokenizer, add some special tokens we will use to denote separate parts of tweets and finally load the model. Note, the model_name is defined on line 5. Also note, we add the special tokens so that the model learns the start and end of the prompt. This will be helpful later on during the testing phase, as we don’t want the model to keep on writing the next word, but it should know when to stop the process. This can be done by setting the eos_token and training the model to predict it after the label, as done here."
},
{
"code": null,
"e": 7958,
"s": 7881,
"text": "Line 16: Load and prepare the dataset using the functions we defined before."
},
{
"code": null,
"e": 8196,
"s": 7958,
"text": "Line 21–24: We set configurations for the training process. In short, we define where and when to save the model, how long to train and where to save the logs and also the training strategy with batch_size, warmup_steps and weight_decay."
},
{
"code": null,
"e": 8978,
"s": 8196,
"text": "Line 27–31: We start the training by connecting the model with the training dataset. We also define how to process the training data inside data_collator. The first two elements within the collator are input_ids — the tokenized prompt and attention_mask — a simple 1/0 vector which denote which part of the tokenized vector is prompt and which part is the padding. The last part is quite interesting, where we pass the input data as the label instead of just the sentiment labels. This is because we are training a language model, hence we want the model to learn the pattern of the prompt and not just sentiment class. In a sense, the model learns to predict the words of the input tweet + sentiment structured in the prompt, and in the process learn the sentiment detection task."
},
{
"code": null,
"e": 9073,
"s": 8978,
"text": "This will begin the training. It could take some time based on your computer’s specifications."
},
{
"code": null,
"e": 9214,
"s": 9073,
"text": "Finally, we define the test block, where we take the trained model and apply it to the held-out test data. The code breakdown is as follows,"
},
{
"code": null,
"e": 9267,
"s": 9214,
"text": "Line 5: We turn on the evaluation mode on the model."
},
{
"code": null,
"e": 9613,
"s": 9267,
"text": "Line 8–15: For each test data, we first prepare the prompt but with one big difference — we don’t include the sentiment label, as this is what we want the model to predict. Also, remember eos_token— we are hoping the model will predict the sentiment label and then break the operation by printing eos_token. Finally, we tokenize the test prompt."
},
{
"code": null,
"e": 9920,
"s": 9613,
"text": "Line 17: We take the test prompt and predict the next set of words. There are a lot of parameters in this function that defines how the next word is predicted. For details about what each one of them does refer to this or to better understand the different strategies of next word prediction refer to this."
},
{
"code": null,
"e": 10124,
"s": 9920,
"text": "Line 20–30: We start with decoding the predicted text i.e. we re-transform the predicted tokens ids into text. Then we extract the predicted sentiment label and store all relevant information into lists."
},
{
"code": null,
"e": 10321,
"s": 10124,
"text": "Line 33–37: We first combine all extracted info into a pandas dataframe for better readability and then use f1_score function from sklearn package to compute the performance of the complete model."
},
{
"code": null,
"e": 10859,
"s": 10321,
"text": "On running the code for GPT-2 and performing this operation three times with different random_state in the dataset split code, we observed that the model is in fact able to predict perfectly as expected. It is able to predict the label and then break its execution using eos_token. The average f1 macro performance score is 81.7%! This is well comparative with what we would expect a dedicated sentiment detection model to perform, and this goes on to highlight how easy it is to do transfer learning using text generating models in NLP."
},
{
"code": null,
"e": 10882,
"s": 10859,
"text": "GPT-Neo compliant code"
},
{
"code": null,
"e": 10966,
"s": 10882,
"text": "To make the GPT-2 code work for GPT-Neo, we have to do the following modifications,"
},
{
"code": null,
"e": 10991,
"s": 10966,
"text": "import GPTNeoForCausalLM"
},
{
"code": null,
"e": 11083,
"s": 10991,
"text": "set model_name as \"EleutherAI/gpt-neo-2.7B\" (choose from any of the available sized models)"
},
{
"code": null,
"e": 11156,
"s": 11083,
"text": "use GPTNeoForCausalLM in place of GPT2LMHeadModelwhen loading the model."
},
{
"code": null,
"e": 11305,
"s": 11156,
"text": "And that's it! On running the modified code for GPT-Neo, and following the same training strategy, the average f1 macro performance score was 80.7%!"
},
{
"code": null,
"e": 11749,
"s": 11305,
"text": "The architecture of T5 is different from GPT models, as it stays true to the original transformer’s architecture, while the GPT models only keep the decoder part. For training T5 we will use an excellent wrapper package called SimpleT5, which removes most of the boilerplate from the training phase. Now please remember, while the syntax of training will change, the overall flow and intuition remains the same. Let’s start with the data part."
},
{
"code": null,
"e": 12113,
"s": 11749,
"text": "Here, the majority of the code remains the same as what we did before for the GPT models. One major change is that we don’t need the Dataset class, as SimpleT5 works directly on pandas dataframe. Hence we load the data, do some initial pre-processing, split the data and return the pandas dataframe. (no need to tokenize, create Dataset class, isn't this great!?)"
},
{
"code": null,
"e": 12339,
"s": 12113,
"text": "One more point to note is that we do not need to create prompt formats for this package. This way we can separate out the input tweet and sentiment label into different columns, here source_text and target_text, respectively."
},
{
"code": null,
"e": 12474,
"s": 12339,
"text": "Loading and training the model is also super easy and can be done with 3 lines of code (if you can ignore my pretty line separations)."
},
{
"code": null,
"e": 12790,
"s": 12474,
"text": "Next, we test the fine-tuned T5 model on the test dataset. As you can see the inference part is also super easy, on line 11, we use the predict function and just pass the source_text to get the predicted sentiment label. We later compare this with the original_label to generate the performance score at line no 18."
},
{
"code": null,
"e": 12916,
"s": 12790,
"text": "On running the T5 code, and following the same training strategy as before, the average f1 macro performance score was 80.7%!"
},
{
"code": null,
"e": 12978,
"s": 12916,
"text": "Consolidating all of the results into a single table, we get,"
},
{
"code": null,
"e": 13325,
"s": 12978,
"text": "One point I wanted to discuss is that I haven’t played at all with the hyperparameters. Add to that the prompt engineering methodology, and I think just by playing around with these two, we can further improve the performance metric for all of the models. I will leave that as an execise for the readers (do let me know if you get better scores!)"
},
{
"code": null,
"e": 13928,
"s": 13325,
"text": "While GPT-2 may have won this round, the result table does show the prowess of text generation models on whole. All of them performed very well on the sentiment detection task, and all it took was a few epochs of training. Even if this experiment was done for a single task, I hope this helps to show how easy it is to use the TG models for completely new tasks. In a way, if we can transform the NLP problem into that of text generation, rest assured the pre-trained model will not fail, well at least not drastically :) This makes them the perfect baseline if not the state-of-the-art for many tasks."
},
{
"code": null,
"e": 14110,
"s": 13928,
"text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, and Sharan Narang. Exploring the limits of transfer learning with a unified text-to-text transformer. 2020. arXiv:1910.10683"
},
{
"code": null,
"e": 14283,
"s": 14110,
"text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, and others. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019."
},
{
"code": null,
"e": 14291,
"s": 14283,
"text": "GPT-Neo"
}
] |
Segregate even and odd nodes in a Linked List | 22 Jun, 2022
Given a Linked List of integers, write a function to modify the linked list such that all even numbers appear before all the odd numbers in the modified linked list. Also, keep the order of even and odd numbers same.Examples:
Input: 17->15->8->12->10->5->4->1->7->6->NULL
Output: 8->12->10->4->6->17->15->5->1->7->NULL
Input: 8->12->10->5->4->1->6->NULL
Output: 8->12->10->4->6->5->1->NULL
// If all numbers are even then do not change the list
Input: 8->12->10->NULL
Output: 8->12->10->NULL
// If all numbers are odd then do not change the list
Input: 1->3->5->7->NULL
Output: 1->3->5->7->NULL
Method 1 The idea is to get pointer to the last node of list. And then traverse the list starting from the head node and move the odd valued nodes from their current position to end of the list.Thanks to blunderboy for suggesting this method.Algorithm: ...1) Get pointer to the last node. ...2) Move all the odd nodes to the end. ........a) Consider all odd nodes before the first even node and move them to end. ........b) Change the head pointer to point to the first even node. ........b) Consider all odd nodes after the first even node and move them to the end.
C++
C
Java
Python
C#
Javascript
// C++ program to segregate even and// odd nodes in a Linked List#include <bits/stdc++.h>using namespace std; /* a node of the singly linked list */class Node{ public: int data; Node *next;}; void segregateEvenOdd(Node **head_ref){ Node *end = *head_ref; Node *prev = NULL; Node *curr = *head_ref; /* Get pointer to the last node */ while (end->next != NULL) end = end->next; Node *new_end = end; /* Consider all odd nodes before the first even node and move then after end */ while (curr->data % 2 != 0 && curr != end) { new_end->next = curr; curr = curr->next; new_end->next->next = NULL; new_end = new_end->next; } // 10->8->17->17->15 /* Do following steps only if there is any even node */ if (curr->data%2 == 0) { /* Change the head pointer to point to first even node */ *head_ref = curr; /* now current points to the first even node */ while (curr != end) { if ( (curr->data) % 2 == 0 ) { prev = curr; curr = curr->next; } else { /* break the link between prev and current */ prev->next = curr->next; /* Make next of curr as NULL */ curr->next = NULL; /* Move curr to end */ new_end->next = curr; /* make curr as new end of list */ new_end = curr; /* Update current pointer to next of the moved node */ curr = prev->next; } } } /* We must have prev set before executing lines following this statement */ else prev = curr; /* If there are more than 1 odd nodes and end of original list is odd then move this node to end to maintain same order of odd numbers in modified list */ if (new_end != end && (end->data) % 2 != 0) { prev->next = end->next; end->next = NULL; new_end->next = end; } return;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node *node){ while (node != NULL) { cout << node->data <<" "; node = node->next; }} /* Driver code*/int main(){ /* Start with the empty list */ Node* head = NULL; /* Let us create a sample linked list as following 0->2->4->6->8->10->11 */ push(&head, 11); push(&head, 10); push(&head, 8); push(&head, 6); push(&head, 4); push(&head, 2); push(&head, 0); cout << "Original Linked list "; printList(head); segregateEvenOdd(&head); cout << "\nModified Linked list "; printList(head); return 0;} // This code is contributed by rathbhupendra
// C program to segregate even and odd nodes in a// Linked List#include <stdio.h>#include <stdlib.h> /* a node of the singly linked list */struct Node{ int data; struct Node *next;}; void segregateEvenOdd(struct Node **head_ref){ struct Node *end = *head_ref; struct Node *prev = NULL; struct Node *curr = *head_ref; /* Get pointer to the last node */ while (end->next != NULL) end = end->next; struct Node *new_end = end; /* Consider all odd nodes before the first even node and move then after end */ while (curr->data %2 != 0 && curr != end) { new_end->next = curr; curr = curr->next; new_end->next->next = NULL; new_end = new_end->next; } // 10->8->17->17->15 /* Do following steps only if there is any even node */ if (curr->data%2 == 0) { /* Change the head pointer to point to first even node */ *head_ref = curr; /* now current points to the first even node */ while (curr != end) { if ( (curr->data)%2 == 0 ) { prev = curr; curr = curr->next; } else { /* break the link between prev and current */ prev->next = curr->next; /* Make next of curr as NULL */ curr->next = NULL; /* Move curr to end */ new_end->next = curr; /* make curr as new end of list */ new_end = curr; /* Update current pointer to next of the moved node */ curr = prev->next; } } } /* We must have prev set before executing lines following this statement */ else prev = curr; /* If there are more than 1 odd nodes and end of original list is odd then move this node to end to maintain same order of odd numbers in modified list */ if (new_end!=end && (end->data)%2 != 0) { prev->next = end->next; end->next = NULL; new_end->next = end; } return;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(struct Node** head_ref, int new_data){ /* allocate node */ struct Node* new_node = (struct Node*) malloc(sizeof(struct Node)); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(struct Node *node){ while (node!=NULL) { printf("%d ", node->data); node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ struct Node* head = NULL; /* Let us create a sample linked list as following 0->2->4->6->8->10->11 */ push(&head, 11); push(&head, 10); push(&head, 8); push(&head, 6); push(&head, 4); push(&head, 2); push(&head, 0); printf("\nOriginal Linked list \n"); printList(head); segregateEvenOdd(&head); printf("\nModified Linked list \n"); printList(head); return 0;}
// Java program to segregate even and odd nodes in a// Linked Listclass LinkedList{ Node head; // head of list /* Linked list Node*/ class Node { int data; Node next; Node(int d) { data = d; next = null; } } void segregateEvenOdd() { Node end = head; Node prev = null; Node curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; Node new_end = end; // Consider all odd nodes before getting first even node while (curr.data %2 !=0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only if there is an even node if (curr.data %2 ==0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr*/ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /*Move curr to end */ new_end.next = curr; /*Make curr as new end of list */ new_end = curr; /*Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data %2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { System.out.print(temp.data+" "); temp = temp.next; } System.out.println(); } /* Driver program to test above functions */ public static void main(String args[]) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(8); llist.push(6); llist.push(4); llist.push(2); llist.push(0); System.out.println("Original Linked List"); llist.printList(); llist.segregateEvenOdd(); System.out.println("Modified Linked List"); llist.printList(); }} /* This code is contributed by Rajat Mishra */
# Python program to segregate even and odd nodes in a# Linked Listhead = None # head of list # Node classclass Node: # Function to initialise the node object def __init__(self, data): self.data = data # Assign data self.next =None def segregateEvenOdd(): global head end = head prev = None curr = head # Get pointer to last Node while (end.next != None): end = end.next new_end = end # Consider all odd nodes before getting first even node while (curr.data % 2 !=0 and curr != end): new_end.next = curr curr = curr.next new_end.next.next = None new_end = new_end.next # do following steps only if there is an even node if (curr.data % 2 == 0): head = curr # now curr points to first even node while (curr != end): if (curr.data % 2 == 0): prev = curr curr = curr.next else: # Break the link between prev and curr prev.next = curr.next # Make next of curr as None curr.next = None # Move curr to end new_end.next = curr # Make curr as new end of list new_end = curr # Update curr pointer curr = prev.next # We have to set prev before executing rest of this code else: prev = curr if (new_end != end and end.data % 2 != 0): prev.next = end.next end.next = None new_end.next = end # Given a reference (pointer to pointer) to the head# of a list and an int, push a new node on the front# of the list.def push(new_data): global head # 1 & 2: Allocate the Node & # Put in the data new_node = Node(new_data) # 3. Make next of new Node as head new_node.next = head # 4. Move the head to point to new Node head = new_node # Utility function to print a linked listdef printList(): global head temp = head while(temp != None): print(temp.data, end = " ") temp = temp.next print(" ") # Driver program to test above functions push(11)push(10)push(8)push(6)push(4)push(2)push(0)print("Original Linked List")printList() segregateEvenOdd() print("Modified Linked List")printList() # This code is contributed by Arnab Kundu
// C# program to segregate even and odd nodes in a// Linked Listusing System; public class LinkedList{ Node head; // head of list /* Linked list Node*/ public class Node { public int data; public Node next; public Node(int d) { data = d; next = null; } } void segregateEvenOdd() { Node end = head; Node prev = null; Node curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; Node new_end = end; // Consider all odd nodes before // getting first even node while (curr.data % 2 != 0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only // if there is an even node if (curr.data % 2 == 0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr*/ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /*Move curr to end */ new_end.next = curr; /*Make curr as new end of list */ new_end = curr; /*Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data % 2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { Console.Write(temp.data + " "); temp = temp.next; } Console.WriteLine(); } /* Driver code */ public static void Main(String []args) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(8); llist.push(6); llist.push(4); llist.push(2); llist.push(0); Console.WriteLine("Original Linked List"); llist.printList(); llist.segregateEvenOdd(); Console.WriteLine("Modified Linked List"); llist.printList(); }} // This code has been contributed by 29AjayKumar
<script>// javascript program to segregate even and odd nodes in a// Linked Listvar head; // head of list /* Linked list Node */ class Node { constructor(val) { this.data = val; this.next = null; } } function segregateEvenOdd() { var end = head; var prev = null; var curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; var new_end = end; // Consider all odd nodes before getting first even node while (curr.data % 2 != 0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only if there is an even node if (curr.data % 2 == 0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr */ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /* Move curr to end */ new_end.next = curr; /* Make curr as new end of list */ new_end = curr; /* Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data % 2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* * Given a reference (pointer to pointer) to the head of a list and an int, push * a new node on the front of the list. */ function push(new_data) { /* * 1 & 2: Allocate the Node & Put in the data */ var new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list function printList() { var temp = head; while (temp != null) { document.write(temp.data + " "); temp = temp.next; } document.write(); } /* Driver program to test above functions */ push(11); push(10); push(8); push(6); push(4); push(2); push(0); document.write("Original Linked List "); printList(); document.write("<br>"); segregateEvenOdd(); document.write("Modified Linked List "); // This code contributed by umadevi9616</script>
Output:
Original Linked list 0 2 4 6 8 10 11
Modified Linked list 0 2 4 6 8 10 11
Time complexity: O(n)Method 2 The idea is to split the linked list into two: one containing all even nodes and other containing all odd nodes. And finally, attach the odd node linked list after the even node linked list. To split the Linked List, traverse the original Linked List and move all odd nodes to a separate Linked List of all odd nodes. At the end of loop, the original list will have all the even nodes and the odd node list will have all the odd nodes. To keep the ordering of all nodes same, we must insert all the odd nodes at the end of the odd node list. And to do that in constant time, we must keep track of last pointer in the odd node list.
C++
C
Java
C#
Python3
Javascript
// CPP program to segregate even and odd nodes in a// Linked List#include <bits/stdc++.h>using namespace std; /* a node of the singly linked list */struct Node { int data; struct Node* next;}; // Function to segregate even and odd nodes.void segregateEvenOdd(struct Node** head_ref){ // Starting node of list having even values. Node* evenStart = NULL; // Ending node of even values list. Node* evenEnd = NULL; // Starting node of odd values list. Node* oddStart = NULL; // Ending node of odd values list. Node* oddEnd = NULL; // Node to traverse the list. Node* currNode = *head_ref; while (currNode != NULL) { int val = currNode->data; // If current value is even, add it to even values // list. if (val % 2 == 0) { if (evenStart == NULL) { evenStart = currNode; evenEnd = evenStart; } else { evenEnd->next = currNode; evenEnd = evenEnd->next; } } // If current value is odd, add it to odd values // list. else { if (oddStart == NULL) { oddStart = currNode; oddEnd = oddStart; } else { oddEnd->next = currNode; oddEnd = oddEnd->next; } } // Move head pointer one step in forward direction currNode = currNode->next; } // If either odd list or even list is empty, no change // is required as all elements are either even or odd. if (oddStart == NULL || evenStart == NULL) return; // Add odd list after even list. evenEnd->next = oddStart; oddEnd->next = NULL; // Modify head pointer to starting of even list. *head_ref = evenStart;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node* node){ while (node != NULL) { cout << node->data << " "; node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ Node* head = NULL; /* Let us create a sample linked list as following 0->1->4->6->9->10->11 */ push(&head, 11); push(&head, 10); push(&head, 9); push(&head, 6); push(&head, 4); push(&head, 1); push(&head, 0); cout << "Original Linked list" << endl; printList(head); segregateEvenOdd(&head); cout << endl << "Modified Linked list " << endl; printList(head); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)
// C program to segregate even and odd nodes in a// Linked List#include <stdio.h>#include <stdlib.h> /* a node of the singly linked list */typedef struct Node { int data; struct Node* next;} Node; // Function to segregate even and odd nodes.void segregateEvenOdd(Node** head_ref){ // Starting node of list having // even values. Node* evenStart = NULL; // Ending node of even values list. Node* evenEnd = NULL; // Starting node of odd values list. Node* oddStart = NULL; // Ending node of odd values list. Node* oddEnd = NULL; // Node to traverse the list. Node* currNode = *head_ref; while (currNode != NULL) { int val = currNode->data; // If current value is even, add it to even values // list. if (val % 2 == 0) { if (evenStart == NULL) { evenStart = currNode; evenEnd = evenStart; } else { evenEnd->next = currNode, evenEnd = evenEnd->next; } } // If current value is odd, add it to odd values // list. else { if (oddStart == NULL) { oddStart = currNode; oddEnd = oddStart; } else { oddEnd->next = currNode; oddEnd = oddEnd->next; } } // Move head pointer one step in forward direction currNode = currNode->next; } // If either odd list or even list is empty, no change // is required as all elements are either even or odd. if (oddStart == NULL || evenStart == NULL) return; // Add odd list after even list. evenEnd->next = oddStart; oddEnd->next = NULL; // Modify head pointer to // starting of even list. *head_ref = evenStart;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = (Node*)malloc(sizeof(Node)); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node* node){ while (node != NULL) { printf("%d ", node->data); node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ struct Node* head = NULL; /* Let us create a sample linked list as following 0->1->4->6->9->10->11 */ push(&head, 11); push(&head, 10); push(&head, 9); push(&head, 6); push(&head, 4); push(&head, 1); push(&head, 0); printf("\nOriginal Linked list \n"); printList(head); segregateEvenOdd(&head); printf("\nModified Linked list \n"); printList(head); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)
// Java program to segregate even and odd nodes in a// Linked Listimport java.io.*; class LinkedList { Node head; // head of list /* Linked list Node*/ class Node { int data; Node next; Node(int d) { data = d; next = null; } } public void segregateEvenOdd() { Node evenStart = null; Node evenEnd = null; Node oddStart = null; Node oddEnd = null; Node currentNode = head; while(currentNode != null) { int element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { System.out.print(temp.data+" "); temp = temp.next; } System.out.println(); } /* Driver program to test above functions */ public static void main(String args[]) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(9); llist.push(6); llist.push(4); llist.push(1); llist.push(0); System.out.println("Original Linked List"); llist.printList(); llist.segregateEvenOdd(); System.out.println("Modified Linked List"); llist.printList(); }}
// C# program to segregate even and odd nodes in a// Linked Listusing System; public class LinkedList{ Node head; // head of list /* Linked list Node*/ public class Node { public int data; public Node next; public Node(int d) { data = d; next = null; } } public void segregateEvenOdd() { Node evenStart = null; Node evenEnd = null; Node oddStart = null; Node oddEnd = null; Node currentNode = head; while(currentNode != null) { int element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { Console.Write(temp.data+" "); temp = temp.next; } Console.WriteLine(); } /* Driver code */ public static void Main() { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(9); llist.push(6); llist.push(4); llist.push(1); llist.push(0); Console.WriteLine("Original Linked List"); llist.printList(); llist.segregateEvenOdd(); Console.WriteLine("Modified Linked List"); llist.printList(); }} /* This code contributed by PrinciRaj1992 */
# Python3 program to segregate even and odd nodes in a# Linked Listhead = None # head of list # Node class class Node: # Function to initialise the node object def __init__(self, data): self.data = data # Assign data self.next =None # Function to segregate even and odd nodes.def segregateEvenOdd(): global head # Starting node of list having # even values. evenStart = None # Ending node of even values list. evenEnd = None # Starting node of odd values list. oddStart = None # Ending node of odd values list. oddEnd = None # Node to traverse the list. currNode = head while(currNode != None): val = currNode.data # If current value is even, add # it to even values list. if(val % 2 == 0): if(evenStart == None): evenStart = currNode evenEnd = evenStart else: evenEnd . next = currNode evenEnd = evenEnd . next # If current value is odd, add # it to odd values list. else: if(oddStart == None): oddStart = currNode oddEnd = oddStart else: oddEnd . next = currNode oddEnd = oddEnd . next # Move head pointer one step in # forward direction currNode = currNode . next # If either odd list or even list is empty, # no change is required as all elements # are either even or odd. if(oddStart == None or evenStart == None): return # Add odd list after even list. evenEnd . next = oddStart oddEnd . next = None # Modify head pointer to # starting of even list. head = evenStart ''' UTILITY FUNCTIONS '''''' Function to insert a node at the beginning '''def push(new_data): global head # 1 & 2: Allocate the Node & # Put in the data new_node = Node(new_data) # 3. Make next of new Node as head new_node.next = head # 4. Move the head to point to new Node head = new_node ''' Function to print nodes in a given linked list '''def printList(): global head node = head while (node != None): print(node.data, end = " ") node = node.next print() ''' Driver program to test above functions''' ''' Let us create a sample linked list as following0.1.4.6.9.10.11 ''' push(11)push(10)push(9)push(6)push(4)push(1)push(0) print("Original Linked list")printList() segregateEvenOdd() print("Modified Linked list")printList() # This code is contributed by shubhamsingh10.
<script> // JavaScript program to segregate// even and odd nodes in a// Linked List var head; // head of list /* Linked list Node*/ class Node { constructor(val) { this.data = val; this.next = null; } } function segregateEvenOdd() { var evenStart = null; var evenEnd = null; var oddStart = null; var oddEnd = null; var currentNode = head; while(currentNode != null) { var element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one // step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ function push(new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ var new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list function printList() { var temp = head; while(temp != null) { document.write(temp.data+" "); temp = temp.next; } document.write("<br/>"); } /* Driver program to test above functions */ push(11); push(10); push(9); push(6); push(4); push(1); push(0); document.write("Original Linked List<br/>"); printList(); segregateEvenOdd(); document.write("Modified Linked List<br/>"); printList(); // This code is contributed by todaysgaurav </script>
Output:
Original Linked List
0 1 4 6 9 10 11
Modified Linked List
0 4 6 10 1 9 11
Time complexity: O(n)Please write comments if you find the above code/algorithm incorrect, or find other ways to solve the same problem.
nik1996
rathbhupendra
29AjayKumar
princiraj1992
nidhi_biet
andrew1234
SHUBHAMSINGH10
umadevi9616
todaysgaurav
simranarora5sos
surinderdawra388
adityakumar129
Amazon
Linked List
Amazon
Linked List
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n22 Jun, 2022"
},
{
"code": null,
"e": 282,
"s": 54,
"text": "Given a Linked List of integers, write a function to modify the linked list such that all even numbers appear before all the odd numbers in the modified linked list. Also, keep the order of even and odd numbers same.Examples: "
},
{
"code": null,
"e": 654,
"s": 282,
"text": "Input: 17->15->8->12->10->5->4->1->7->6->NULL\nOutput: 8->12->10->4->6->17->15->5->1->7->NULL\n\nInput: 8->12->10->5->4->1->6->NULL\nOutput: 8->12->10->4->6->5->1->NULL\n\n// If all numbers are even then do not change the list\nInput: 8->12->10->NULL\nOutput: 8->12->10->NULL\n\n// If all numbers are odd then do not change the list\nInput: 1->3->5->7->NULL\nOutput: 1->3->5->7->NULL"
},
{
"code": null,
"e": 1225,
"s": 656,
"text": "Method 1 The idea is to get pointer to the last node of list. And then traverse the list starting from the head node and move the odd valued nodes from their current position to end of the list.Thanks to blunderboy for suggesting this method.Algorithm: ...1) Get pointer to the last node. ...2) Move all the odd nodes to the end. ........a) Consider all odd nodes before the first even node and move them to end. ........b) Change the head pointer to point to the first even node. ........b) Consider all odd nodes after the first even node and move them to the end. "
},
{
"code": null,
"e": 1229,
"s": 1225,
"text": "C++"
},
{
"code": null,
"e": 1231,
"s": 1229,
"text": "C"
},
{
"code": null,
"e": 1236,
"s": 1231,
"text": "Java"
},
{
"code": null,
"e": 1243,
"s": 1236,
"text": "Python"
},
{
"code": null,
"e": 1246,
"s": 1243,
"text": "C#"
},
{
"code": null,
"e": 1257,
"s": 1246,
"text": "Javascript"
},
{
"code": "// C++ program to segregate even and// odd nodes in a Linked List#include <bits/stdc++.h>using namespace std; /* a node of the singly linked list */class Node{ public: int data; Node *next;}; void segregateEvenOdd(Node **head_ref){ Node *end = *head_ref; Node *prev = NULL; Node *curr = *head_ref; /* Get pointer to the last node */ while (end->next != NULL) end = end->next; Node *new_end = end; /* Consider all odd nodes before the first even node and move then after end */ while (curr->data % 2 != 0 && curr != end) { new_end->next = curr; curr = curr->next; new_end->next->next = NULL; new_end = new_end->next; } // 10->8->17->17->15 /* Do following steps only if there is any even node */ if (curr->data%2 == 0) { /* Change the head pointer to point to first even node */ *head_ref = curr; /* now current points to the first even node */ while (curr != end) { if ( (curr->data) % 2 == 0 ) { prev = curr; curr = curr->next; } else { /* break the link between prev and current */ prev->next = curr->next; /* Make next of curr as NULL */ curr->next = NULL; /* Move curr to end */ new_end->next = curr; /* make curr as new end of list */ new_end = curr; /* Update current pointer to next of the moved node */ curr = prev->next; } } } /* We must have prev set before executing lines following this statement */ else prev = curr; /* If there are more than 1 odd nodes and end of original list is odd then move this node to end to maintain same order of odd numbers in modified list */ if (new_end != end && (end->data) % 2 != 0) { prev->next = end->next; end->next = NULL; new_end->next = end; } return;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node *node){ while (node != NULL) { cout << node->data <<\" \"; node = node->next; }} /* Driver code*/int main(){ /* Start with the empty list */ Node* head = NULL; /* Let us create a sample linked list as following 0->2->4->6->8->10->11 */ push(&head, 11); push(&head, 10); push(&head, 8); push(&head, 6); push(&head, 4); push(&head, 2); push(&head, 0); cout << \"Original Linked list \"; printList(head); segregateEvenOdd(&head); cout << \"\\nModified Linked list \"; printList(head); return 0;} // This code is contributed by rathbhupendra",
"e": 4418,
"s": 1257,
"text": null
},
{
"code": "// C program to segregate even and odd nodes in a// Linked List#include <stdio.h>#include <stdlib.h> /* a node of the singly linked list */struct Node{ int data; struct Node *next;}; void segregateEvenOdd(struct Node **head_ref){ struct Node *end = *head_ref; struct Node *prev = NULL; struct Node *curr = *head_ref; /* Get pointer to the last node */ while (end->next != NULL) end = end->next; struct Node *new_end = end; /* Consider all odd nodes before the first even node and move then after end */ while (curr->data %2 != 0 && curr != end) { new_end->next = curr; curr = curr->next; new_end->next->next = NULL; new_end = new_end->next; } // 10->8->17->17->15 /* Do following steps only if there is any even node */ if (curr->data%2 == 0) { /* Change the head pointer to point to first even node */ *head_ref = curr; /* now current points to the first even node */ while (curr != end) { if ( (curr->data)%2 == 0 ) { prev = curr; curr = curr->next; } else { /* break the link between prev and current */ prev->next = curr->next; /* Make next of curr as NULL */ curr->next = NULL; /* Move curr to end */ new_end->next = curr; /* make curr as new end of list */ new_end = curr; /* Update current pointer to next of the moved node */ curr = prev->next; } } } /* We must have prev set before executing lines following this statement */ else prev = curr; /* If there are more than 1 odd nodes and end of original list is odd then move this node to end to maintain same order of odd numbers in modified list */ if (new_end!=end && (end->data)%2 != 0) { prev->next = end->next; end->next = NULL; new_end->next = end; } return;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(struct Node** head_ref, int new_data){ /* allocate node */ struct Node* new_node = (struct Node*) malloc(sizeof(struct Node)); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(struct Node *node){ while (node!=NULL) { printf(\"%d \", node->data); node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ struct Node* head = NULL; /* Let us create a sample linked list as following 0->2->4->6->8->10->11 */ push(&head, 11); push(&head, 10); push(&head, 8); push(&head, 6); push(&head, 4); push(&head, 2); push(&head, 0); printf(\"\\nOriginal Linked list \\n\"); printList(head); segregateEvenOdd(&head); printf(\"\\nModified Linked list \\n\"); printList(head); return 0;}",
"e": 7616,
"s": 4418,
"text": null
},
{
"code": "// Java program to segregate even and odd nodes in a// Linked Listclass LinkedList{ Node head; // head of list /* Linked list Node*/ class Node { int data; Node next; Node(int d) { data = d; next = null; } } void segregateEvenOdd() { Node end = head; Node prev = null; Node curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; Node new_end = end; // Consider all odd nodes before getting first even node while (curr.data %2 !=0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only if there is an even node if (curr.data %2 ==0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr*/ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /*Move curr to end */ new_end.next = curr; /*Make curr as new end of list */ new_end = curr; /*Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data %2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { System.out.print(temp.data+\" \"); temp = temp.next; } System.out.println(); } /* Driver program to test above functions */ public static void main(String args[]) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(8); llist.push(6); llist.push(4); llist.push(2); llist.push(0); System.out.println(\"Original Linked List\"); llist.printList(); llist.segregateEvenOdd(); System.out.println(\"Modified Linked List\"); llist.printList(); }} /* This code is contributed by Rajat Mishra */",
"e": 10762,
"s": 7616,
"text": null
},
{
"code": "# Python program to segregate even and odd nodes in a# Linked Listhead = None # head of list # Node classclass Node: # Function to initialise the node object def __init__(self, data): self.data = data # Assign data self.next =None def segregateEvenOdd(): global head end = head prev = None curr = head # Get pointer to last Node while (end.next != None): end = end.next new_end = end # Consider all odd nodes before getting first even node while (curr.data % 2 !=0 and curr != end): new_end.next = curr curr = curr.next new_end.next.next = None new_end = new_end.next # do following steps only if there is an even node if (curr.data % 2 == 0): head = curr # now curr points to first even node while (curr != end): if (curr.data % 2 == 0): prev = curr curr = curr.next else: # Break the link between prev and curr prev.next = curr.next # Make next of curr as None curr.next = None # Move curr to end new_end.next = curr # Make curr as new end of list new_end = curr # Update curr pointer curr = prev.next # We have to set prev before executing rest of this code else: prev = curr if (new_end != end and end.data % 2 != 0): prev.next = end.next end.next = None new_end.next = end # Given a reference (pointer to pointer) to the head# of a list and an int, push a new node on the front# of the list.def push(new_data): global head # 1 & 2: Allocate the Node & # Put in the data new_node = Node(new_data) # 3. Make next of new Node as head new_node.next = head # 4. Move the head to point to new Node head = new_node # Utility function to print a linked listdef printList(): global head temp = head while(temp != None): print(temp.data, end = \" \") temp = temp.next print(\" \") # Driver program to test above functions push(11)push(10)push(8)push(6)push(4)push(2)push(0)print(\"Original Linked List\")printList() segregateEvenOdd() print(\"Modified Linked List\")printList() # This code is contributed by Arnab Kundu",
"e": 13232,
"s": 10762,
"text": null
},
{
"code": "// C# program to segregate even and odd nodes in a// Linked Listusing System; public class LinkedList{ Node head; // head of list /* Linked list Node*/ public class Node { public int data; public Node next; public Node(int d) { data = d; next = null; } } void segregateEvenOdd() { Node end = head; Node prev = null; Node curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; Node new_end = end; // Consider all odd nodes before // getting first even node while (curr.data % 2 != 0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only // if there is an even node if (curr.data % 2 == 0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr*/ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /*Move curr to end */ new_end.next = curr; /*Make curr as new end of list */ new_end = curr; /*Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data % 2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { Console.Write(temp.data + \" \"); temp = temp.next; } Console.WriteLine(); } /* Driver code */ public static void Main(String []args) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(8); llist.push(6); llist.push(4); llist.push(2); llist.push(0); Console.WriteLine(\"Original Linked List\"); llist.printList(); llist.segregateEvenOdd(); Console.WriteLine(\"Modified Linked List\"); llist.printList(); }} // This code has been contributed by 29AjayKumar",
"e": 16424,
"s": 13232,
"text": null
},
{
"code": "<script>// javascript program to segregate even and odd nodes in a// Linked Listvar head; // head of list /* Linked list Node */ class Node { constructor(val) { this.data = val; this.next = null; } } function segregateEvenOdd() { var end = head; var prev = null; var curr = head; /* Get pointer to last Node */ while (end.next != null) end = end.next; var new_end = end; // Consider all odd nodes before getting first even node while (curr.data % 2 != 0 && curr != end) { new_end.next = curr; curr = curr.next; new_end.next.next = null; new_end = new_end.next; } // do following steps only if there is an even node if (curr.data % 2 == 0) { head = curr; // now curr points to first even node while (curr != end) { if (curr.data % 2 == 0) { prev = curr; curr = curr.next; } else { /* Break the link between prev and curr */ prev.next = curr.next; /* Make next of curr as null */ curr.next = null; /* Move curr to end */ new_end.next = curr; /* Make curr as new end of list */ new_end = curr; /* Update curr pointer */ curr = prev.next; } } } /* We have to set prev before executing rest of this code */ else prev = curr; if (new_end != end && end.data % 2 != 0) { prev.next = end.next; end.next = null; new_end.next = end; } } /* * Given a reference (pointer to pointer) to the head of a list and an int, push * a new node on the front of the list. */ function push(new_data) { /* * 1 & 2: Allocate the Node & Put in the data */ var new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list function printList() { var temp = head; while (temp != null) { document.write(temp.data + \" \"); temp = temp.next; } document.write(); } /* Driver program to test above functions */ push(11); push(10); push(8); push(6); push(4); push(2); push(0); document.write(\"Original Linked List \"); printList(); document.write(\"<br>\"); segregateEvenOdd(); document.write(\"Modified Linked List \"); // This code contributed by umadevi9616</script>",
"e": 19339,
"s": 16424,
"text": null
},
{
"code": null,
"e": 19349,
"s": 19339,
"text": "Output: "
},
{
"code": null,
"e": 19425,
"s": 19349,
"text": " Original Linked list 0 2 4 6 8 10 11\n Modified Linked list 0 2 4 6 8 10 11"
},
{
"code": null,
"e": 20088,
"s": 19425,
"text": "Time complexity: O(n)Method 2 The idea is to split the linked list into two: one containing all even nodes and other containing all odd nodes. And finally, attach the odd node linked list after the even node linked list. To split the Linked List, traverse the original Linked List and move all odd nodes to a separate Linked List of all odd nodes. At the end of loop, the original list will have all the even nodes and the odd node list will have all the odd nodes. To keep the ordering of all nodes same, we must insert all the odd nodes at the end of the odd node list. And to do that in constant time, we must keep track of last pointer in the odd node list. "
},
{
"code": null,
"e": 20092,
"s": 20088,
"text": "C++"
},
{
"code": null,
"e": 20094,
"s": 20092,
"text": "C"
},
{
"code": null,
"e": 20099,
"s": 20094,
"text": "Java"
},
{
"code": null,
"e": 20102,
"s": 20099,
"text": "C#"
},
{
"code": null,
"e": 20110,
"s": 20102,
"text": "Python3"
},
{
"code": null,
"e": 20121,
"s": 20110,
"text": "Javascript"
},
{
"code": "// CPP program to segregate even and odd nodes in a// Linked List#include <bits/stdc++.h>using namespace std; /* a node of the singly linked list */struct Node { int data; struct Node* next;}; // Function to segregate even and odd nodes.void segregateEvenOdd(struct Node** head_ref){ // Starting node of list having even values. Node* evenStart = NULL; // Ending node of even values list. Node* evenEnd = NULL; // Starting node of odd values list. Node* oddStart = NULL; // Ending node of odd values list. Node* oddEnd = NULL; // Node to traverse the list. Node* currNode = *head_ref; while (currNode != NULL) { int val = currNode->data; // If current value is even, add it to even values // list. if (val % 2 == 0) { if (evenStart == NULL) { evenStart = currNode; evenEnd = evenStart; } else { evenEnd->next = currNode; evenEnd = evenEnd->next; } } // If current value is odd, add it to odd values // list. else { if (oddStart == NULL) { oddStart = currNode; oddEnd = oddStart; } else { oddEnd->next = currNode; oddEnd = oddEnd->next; } } // Move head pointer one step in forward direction currNode = currNode->next; } // If either odd list or even list is empty, no change // is required as all elements are either even or odd. if (oddStart == NULL || evenStart == NULL) return; // Add odd list after even list. evenEnd->next = oddStart; oddEnd->next = NULL; // Modify head pointer to starting of even list. *head_ref = evenStart;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = new Node(); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node* node){ while (node != NULL) { cout << node->data << \" \"; node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ Node* head = NULL; /* Let us create a sample linked list as following 0->1->4->6->9->10->11 */ push(&head, 11); push(&head, 10); push(&head, 9); push(&head, 6); push(&head, 4); push(&head, 1); push(&head, 0); cout << \"Original Linked list\" << endl; printList(head); segregateEvenOdd(&head); cout << endl << \"Modified Linked list \" << endl; printList(head); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)",
"e": 23041,
"s": 20121,
"text": null
},
{
"code": "// C program to segregate even and odd nodes in a// Linked List#include <stdio.h>#include <stdlib.h> /* a node of the singly linked list */typedef struct Node { int data; struct Node* next;} Node; // Function to segregate even and odd nodes.void segregateEvenOdd(Node** head_ref){ // Starting node of list having // even values. Node* evenStart = NULL; // Ending node of even values list. Node* evenEnd = NULL; // Starting node of odd values list. Node* oddStart = NULL; // Ending node of odd values list. Node* oddEnd = NULL; // Node to traverse the list. Node* currNode = *head_ref; while (currNode != NULL) { int val = currNode->data; // If current value is even, add it to even values // list. if (val % 2 == 0) { if (evenStart == NULL) { evenStart = currNode; evenEnd = evenStart; } else { evenEnd->next = currNode, evenEnd = evenEnd->next; } } // If current value is odd, add it to odd values // list. else { if (oddStart == NULL) { oddStart = currNode; oddEnd = oddStart; } else { oddEnd->next = currNode; oddEnd = oddEnd->next; } } // Move head pointer one step in forward direction currNode = currNode->next; } // If either odd list or even list is empty, no change // is required as all elements are either even or odd. if (oddStart == NULL || evenStart == NULL) return; // Add odd list after even list. evenEnd->next = oddStart; oddEnd->next = NULL; // Modify head pointer to // starting of even list. *head_ref = evenStart;} /* UTILITY FUNCTIONS *//* Function to insert a node at the beginning */void push(Node** head_ref, int new_data){ /* allocate node */ Node* new_node = (Node*)malloc(sizeof(Node)); /* put in the data */ new_node->data = new_data; /* link the old list off the new node */ new_node->next = (*head_ref); /* move the head to point to the new node */ (*head_ref) = new_node;} /* Function to print nodes in a given linked list */void printList(Node* node){ while (node != NULL) { printf(\"%d \", node->data); node = node->next; }} /* Driver program to test above functions*/int main(){ /* Start with the empty list */ struct Node* head = NULL; /* Let us create a sample linked list as following 0->1->4->6->9->10->11 */ push(&head, 11); push(&head, 10); push(&head, 9); push(&head, 6); push(&head, 4); push(&head, 1); push(&head, 0); printf(\"\\nOriginal Linked list \\n\"); printList(head); segregateEvenOdd(&head); printf(\"\\nModified Linked list \\n\"); printList(head); return 0;} // This code is contributed by Aditya Kumar (adityakumar129)",
"e": 25979,
"s": 23041,
"text": null
},
{
"code": "// Java program to segregate even and odd nodes in a// Linked Listimport java.io.*; class LinkedList { Node head; // head of list /* Linked list Node*/ class Node { int data; Node next; Node(int d) { data = d; next = null; } } public void segregateEvenOdd() { Node evenStart = null; Node evenEnd = null; Node oddStart = null; Node oddEnd = null; Node currentNode = head; while(currentNode != null) { int element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { System.out.print(temp.data+\" \"); temp = temp.next; } System.out.println(); } /* Driver program to test above functions */ public static void main(String args[]) { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(9); llist.push(6); llist.push(4); llist.push(1); llist.push(0); System.out.println(\"Original Linked List\"); llist.printList(); llist.segregateEvenOdd(); System.out.println(\"Modified Linked List\"); llist.printList(); }}",
"e": 28721,
"s": 25979,
"text": null
},
{
"code": "// C# program to segregate even and odd nodes in a// Linked Listusing System; public class LinkedList{ Node head; // head of list /* Linked list Node*/ public class Node { public int data; public Node next; public Node(int d) { data = d; next = null; } } public void segregateEvenOdd() { Node evenStart = null; Node evenEnd = null; Node oddStart = null; Node oddEnd = null; Node currentNode = head; while(currentNode != null) { int element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ void push(int new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ Node new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list void printList() { Node temp = head; while(temp != null) { Console.Write(temp.data+\" \"); temp = temp.next; } Console.WriteLine(); } /* Driver code */ public static void Main() { LinkedList llist = new LinkedList(); llist.push(11); llist.push(10); llist.push(9); llist.push(6); llist.push(4); llist.push(1); llist.push(0); Console.WriteLine(\"Original Linked List\"); llist.printList(); llist.segregateEvenOdd(); Console.WriteLine(\"Modified Linked List\"); llist.printList(); }} /* This code contributed by PrinciRaj1992 */",
"e": 31614,
"s": 28721,
"text": null
},
{
"code": "# Python3 program to segregate even and odd nodes in a# Linked Listhead = None # head of list # Node class class Node: # Function to initialise the node object def __init__(self, data): self.data = data # Assign data self.next =None # Function to segregate even and odd nodes.def segregateEvenOdd(): global head # Starting node of list having # even values. evenStart = None # Ending node of even values list. evenEnd = None # Starting node of odd values list. oddStart = None # Ending node of odd values list. oddEnd = None # Node to traverse the list. currNode = head while(currNode != None): val = currNode.data # If current value is even, add # it to even values list. if(val % 2 == 0): if(evenStart == None): evenStart = currNode evenEnd = evenStart else: evenEnd . next = currNode evenEnd = evenEnd . next # If current value is odd, add # it to odd values list. else: if(oddStart == None): oddStart = currNode oddEnd = oddStart else: oddEnd . next = currNode oddEnd = oddEnd . next # Move head pointer one step in # forward direction currNode = currNode . next # If either odd list or even list is empty, # no change is required as all elements # are either even or odd. if(oddStart == None or evenStart == None): return # Add odd list after even list. evenEnd . next = oddStart oddEnd . next = None # Modify head pointer to # starting of even list. head = evenStart ''' UTILITY FUNCTIONS '''''' Function to insert a node at the beginning '''def push(new_data): global head # 1 & 2: Allocate the Node & # Put in the data new_node = Node(new_data) # 3. Make next of new Node as head new_node.next = head # 4. Move the head to point to new Node head = new_node ''' Function to print nodes in a given linked list '''def printList(): global head node = head while (node != None): print(node.data, end = \" \") node = node.next print() ''' Driver program to test above functions''' ''' Let us create a sample linked list as following0.1.4.6.9.10.11 ''' push(11)push(10)push(9)push(6)push(4)push(1)push(0) print(\"Original Linked list\")printList() segregateEvenOdd() print(\"Modified Linked list\")printList() # This code is contributed by shubhamsingh10.",
"e": 34269,
"s": 31614,
"text": null
},
{
"code": "<script> // JavaScript program to segregate// even and odd nodes in a// Linked List var head; // head of list /* Linked list Node*/ class Node { constructor(val) { this.data = val; this.next = null; } } function segregateEvenOdd() { var evenStart = null; var evenEnd = null; var oddStart = null; var oddEnd = null; var currentNode = head; while(currentNode != null) { var element = currentNode.data; if(element % 2 == 0) { if(evenStart == null) { evenStart = currentNode; evenEnd = evenStart; } else { evenEnd.next = currentNode; evenEnd = evenEnd.next; } } else { if(oddStart == null) { oddStart = currentNode; oddEnd = oddStart; } else { oddEnd.next = currentNode; oddEnd = oddEnd.next; } } // Move head pointer one // step in forward direction currentNode = currentNode.next; } if(oddStart == null || evenStart == null) { return; } evenEnd.next = oddStart; oddEnd.next = null; head=evenStart; } /* Given a reference (pointer to pointer) to the head of a list and an int, push a new node on the front of the list. */ function push(new_data) { /* 1 & 2: Allocate the Node & Put in the data*/ var new_node = new Node(new_data); /* 3. Make next of new Node as head */ new_node.next = head; /* 4. Move the head to point to new Node */ head = new_node; } // Utility function to print a linked list function printList() { var temp = head; while(temp != null) { document.write(temp.data+\" \"); temp = temp.next; } document.write(\"<br/>\"); } /* Driver program to test above functions */ push(11); push(10); push(9); push(6); push(4); push(1); push(0); document.write(\"Original Linked List<br/>\"); printList(); segregateEvenOdd(); document.write(\"Modified Linked List<br/>\"); printList(); // This code is contributed by todaysgaurav </script>",
"e": 36889,
"s": 34269,
"text": null
},
{
"code": null,
"e": 36899,
"s": 36889,
"text": "Output: "
},
{
"code": null,
"e": 36975,
"s": 36899,
"text": "Original Linked List\n0 1 4 6 9 10 11 \nModified Linked List\n0 4 6 10 1 9 11 "
},
{
"code": null,
"e": 37113,
"s": 36975,
"text": "Time complexity: O(n)Please write comments if you find the above code/algorithm incorrect, or find other ways to solve the same problem. "
},
{
"code": null,
"e": 37121,
"s": 37113,
"text": "nik1996"
},
{
"code": null,
"e": 37135,
"s": 37121,
"text": "rathbhupendra"
},
{
"code": null,
"e": 37147,
"s": 37135,
"text": "29AjayKumar"
},
{
"code": null,
"e": 37161,
"s": 37147,
"text": "princiraj1992"
},
{
"code": null,
"e": 37172,
"s": 37161,
"text": "nidhi_biet"
},
{
"code": null,
"e": 37183,
"s": 37172,
"text": "andrew1234"
},
{
"code": null,
"e": 37198,
"s": 37183,
"text": "SHUBHAMSINGH10"
},
{
"code": null,
"e": 37210,
"s": 37198,
"text": "umadevi9616"
},
{
"code": null,
"e": 37223,
"s": 37210,
"text": "todaysgaurav"
},
{
"code": null,
"e": 37239,
"s": 37223,
"text": "simranarora5sos"
},
{
"code": null,
"e": 37256,
"s": 37239,
"text": "surinderdawra388"
},
{
"code": null,
"e": 37271,
"s": 37256,
"text": "adityakumar129"
},
{
"code": null,
"e": 37278,
"s": 37271,
"text": "Amazon"
},
{
"code": null,
"e": 37290,
"s": 37278,
"text": "Linked List"
},
{
"code": null,
"e": 37297,
"s": 37290,
"text": "Amazon"
},
{
"code": null,
"e": 37309,
"s": 37297,
"text": "Linked List"
}
] |
Kotlin Hashmap | 19 Aug, 2019
Kotlin HashMap is a collection which contains pairs of object. Kotlin Hash Table based implementation of the MutableMap interface. It stores the data in the form of key and value pair. Map keys are unique and the map holds only one value for each key. It is represented as HashMap<key, value> or HashMap<K, V>.
The hash table based implementation of HashMap does not guarantees about the order of specified data of key, value and entries of collections.
Kotlin HashMap provides 4 constructors and access modifier of each is public:
HashMap() : It is the default constructor which constructs an empty HashMap instance.
HashMap(initialCapacity: Int, loadFactor: Float = 0f) : It is used to construct a HashMap of specified capacity. If initialCapacity and loadFactor isn’t being used then both will get ignored.
HashMap(initialCapacity: Int) : It is used to construct a HashMap of specified capacity. If initialCapacity isn’t being used then it will get ignored.
HashMap(original: Map <out K, V> ) : It creates instance of HashMap with same mappings as specified map.
Kotlin program of using functions HashMap(), HashMap(original: Map), Traversing hashmap, HashMap.get() –
fun main(args: Array<String>) { //A simple example of HashMap class define // with empty "HashMap of <String, Int>" var hashMap : HashMap<String, Int> = HashMap<String, Int> () //printing the Empty hashMap printHashMap(hashMap) //adding elements to the hashMap using // put() function hashMap.put("IronMan" , 3000) hashMap.put("Thor" , 100) hashMap.put("SpiderMan" , 1100) hashMap.put("NickFury" , 1200) hashMap.put("HawkEye" , 1300) //printing the non-Empty hashMap printHashMap(hashMap) //using the overloaded print function of //Kotlin language to get the same results println("hashMap : " + hashMap + "\n") //hashMap traversal using a for loop for(key in hashMap.keys){ println("Element at key $key : ${hashMap[key]}") } //creating another hashMap object with // the previous version of hashMap object var secondHashMap : HashMap<String, Int> = HashMap<String, Int> (hashMap) println("\n" + "Second HashMap : ") for(key in secondHashMap.keys){ //using hashMap.get() function to fetch the values println("Element at key $key : ${hashMap.get(key)}") } //this will clear the whole map and make it empty println("hashMap.clear()") hashMap.clear() println("After Clearing : " + hashMap) } //function to print the hashMapfun printHashMap(hashMap : HashMap<String, Int>){ // isEmpty() function to check whether // the hashMap is empty or not if(hashMap.isEmpty()){ println("hashMap is empty") }else{ println("hashMap : " + hashMap) }}
Output :
hashMap is empty : {}
hashMap : {Thor=100, HawkEye=1300, NickFury=1200, IronMan=3000, SpiderMan=1100}
hashMap : {Thor=100, HawkEye=1300, NickFury=1200, IronMan=3000, SpiderMan=1100}
Element at key Thor : 100
Element at key HawkEye : 1300
Element at key NickFury : 1200
Element at key IronMan : 3000
Element at key SpiderMan : 1100
secondHashMap :
Element at key Thor : 100
Element at key HawkEye : 1300
Element at key IronMan : 3000
Element at key NickFury : 1200
Element at key SpiderMan : 1100
hashMap.clear()
After Clearing : {}
Kotlin program of using HashMap initial capacity, HashMap.size –
fun main(args: Array<String>) { //HashMap can also be initialize // with its initial capacity. //The capacity can be changed by // adding and replacing its element. var hashMap : HashMap<String, Int> = HashMap<String, Int> (4) //adding elements to the hashMap using put() function hashMap.put("IronMan" , 3000) hashMap.put("Thor" , 100) hashMap.put("SpiderMan" , 1100) hashMap.put("NickFury" , 1200) for(key in hashMap.keys) { println("Element at key $key : ${hashMap[key]}") } //returns the size of hashMap println("\n" + "hashMap.size : " + hashMap.size ) //adding a new element in the hashMap hashMap["BlackWidow"] = 1000; println("hashMap.size : " + hashMap.size + "\n") for(key in hashMap.keys) { println("Element at key $key : ${hashMap[key]}") }}
Output:
Element at key Thor : 100
Element at key IronMan : 3000
Element at key NickFury : 1200
Element at key SpiderMan : 1100
hashMap.size : 4
hashMap.size : 5
Element at key Thor : 100
Element at key BlackWidow : 1000
Element at key IronMan : 3000
Element at key NickFury : 1200
Element at key SpiderMan : 1100
Kotlin program of using functions HashMap.get(key), HashMap.replace(), HashMap.put() –
fun main(args: Array<String>) { var hashMap : HashMap<String, Int> = HashMap<String, Int> () //adding elements to the hashMap // using put() function hashMap.put("IronMan" , 3000) hashMap.put("Thor" , 100) hashMap.put("SpiderMan" , 1100) hashMap.put("Cap" , 1200) for(key in hashMap.keys) { println("Element at key $key : ${hashMap[key]}") } //the hashMap's elements can be accessed like this println("\nhashMap[\"IronMan\"] : " + hashMap["IronMan"]) hashMap["Thor"] = 2000 println("hashMap.get(\"Thor\") : " + hashMap.get("Thor") + "\n") //replacing some values hashMap.replace("Cap" , 999); hashMap.put("Thor" , 2000); println("hashMap.replace(\"Cap\" , 999)" + " hashMap.replace(\"Thor\" , 2000)) :") for(key in hashMap.keys) { println("Element at key $key : ${hashMap[key]}") }}
Output:
Element at key Thor : 100
Element at key Cap : 1200
Element at key IronMan : 3000
Element at key SpiderMan : 1100
hashMap["IronMan"] : 3000
hashMap.get("Thor") : 2000
hashMap.replace("Cap", 999) hashMap.replace("Thor", 2000)) :
Element at key Thor : 2000
Element at key Cap : 999
Element at key IronMan : 3000
Element at key SpiderMan : 1100
Kotlin HashMap provides constant time or O(1) complexity for basic operations like get and put, if hash function is properly written and it disperses the elements properly. In case of searching in the HashMap containsKey() is just a get() that throws away the retrieved value, it’s O(1) (assuming the hash function works properly).
Boolean containsKey(key: K): It returns true if map contains specifies key.
Boolean containsValue(value: V): It returns true if map maps one of more keys to specified value.
void clear(): It removes all elements from map.
remove(key: K): It removes the specified key and its corresponding value from map
Hash
Kotlin Collections
Picked
Kotlin
Hash
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 53,
"s": 25,
"text": "\n19 Aug, 2019"
},
{
"code": null,
"e": 364,
"s": 53,
"text": "Kotlin HashMap is a collection which contains pairs of object. Kotlin Hash Table based implementation of the MutableMap interface. It stores the data in the form of key and value pair. Map keys are unique and the map holds only one value for each key. It is represented as HashMap<key, value> or HashMap<K, V>."
},
{
"code": null,
"e": 507,
"s": 364,
"text": "The hash table based implementation of HashMap does not guarantees about the order of specified data of key, value and entries of collections."
},
{
"code": null,
"e": 585,
"s": 507,
"text": "Kotlin HashMap provides 4 constructors and access modifier of each is public:"
},
{
"code": null,
"e": 671,
"s": 585,
"text": "HashMap() : It is the default constructor which constructs an empty HashMap instance."
},
{
"code": null,
"e": 863,
"s": 671,
"text": "HashMap(initialCapacity: Int, loadFactor: Float = 0f) : It is used to construct a HashMap of specified capacity. If initialCapacity and loadFactor isn’t being used then both will get ignored."
},
{
"code": null,
"e": 1014,
"s": 863,
"text": "HashMap(initialCapacity: Int) : It is used to construct a HashMap of specified capacity. If initialCapacity isn’t being used then it will get ignored."
},
{
"code": null,
"e": 1119,
"s": 1014,
"text": "HashMap(original: Map <out K, V> ) : It creates instance of HashMap with same mappings as specified map."
},
{
"code": null,
"e": 1224,
"s": 1119,
"text": "Kotlin program of using functions HashMap(), HashMap(original: Map), Traversing hashmap, HashMap.get() –"
},
{
"code": "fun main(args: Array<String>) { //A simple example of HashMap class define // with empty \"HashMap of <String, Int>\" var hashMap : HashMap<String, Int> = HashMap<String, Int> () //printing the Empty hashMap printHashMap(hashMap) //adding elements to the hashMap using // put() function hashMap.put(\"IronMan\" , 3000) hashMap.put(\"Thor\" , 100) hashMap.put(\"SpiderMan\" , 1100) hashMap.put(\"NickFury\" , 1200) hashMap.put(\"HawkEye\" , 1300) //printing the non-Empty hashMap printHashMap(hashMap) //using the overloaded print function of //Kotlin language to get the same results println(\"hashMap : \" + hashMap + \"\\n\") //hashMap traversal using a for loop for(key in hashMap.keys){ println(\"Element at key $key : ${hashMap[key]}\") } //creating another hashMap object with // the previous version of hashMap object var secondHashMap : HashMap<String, Int> = HashMap<String, Int> (hashMap) println(\"\\n\" + \"Second HashMap : \") for(key in secondHashMap.keys){ //using hashMap.get() function to fetch the values println(\"Element at key $key : ${hashMap.get(key)}\") } //this will clear the whole map and make it empty println(\"hashMap.clear()\") hashMap.clear() println(\"After Clearing : \" + hashMap) } //function to print the hashMapfun printHashMap(hashMap : HashMap<String, Int>){ // isEmpty() function to check whether // the hashMap is empty or not if(hashMap.isEmpty()){ println(\"hashMap is empty\") }else{ println(\"hashMap : \" + hashMap) }}",
"e": 2838,
"s": 1224,
"text": null
},
{
"code": null,
"e": 2847,
"s": 2838,
"text": "Output :"
},
{
"code": null,
"e": 3385,
"s": 2847,
"text": "hashMap is empty : {}\n\nhashMap : {Thor=100, HawkEye=1300, NickFury=1200, IronMan=3000, SpiderMan=1100}\nhashMap : {Thor=100, HawkEye=1300, NickFury=1200, IronMan=3000, SpiderMan=1100}\n\nElement at key Thor : 100\nElement at key HawkEye : 1300\nElement at key NickFury : 1200\nElement at key IronMan : 3000\nElement at key SpiderMan : 1100\n\nsecondHashMap : \nElement at key Thor : 100\nElement at key HawkEye : 1300\nElement at key IronMan : 3000\nElement at key NickFury : 1200\nElement at key SpiderMan : 1100\n\nhashMap.clear()\nAfter Clearing : {}\n"
},
{
"code": null,
"e": 3452,
"s": 3387,
"text": "Kotlin program of using HashMap initial capacity, HashMap.size –"
},
{
"code": "fun main(args: Array<String>) { //HashMap can also be initialize // with its initial capacity. //The capacity can be changed by // adding and replacing its element. var hashMap : HashMap<String, Int> = HashMap<String, Int> (4) //adding elements to the hashMap using put() function hashMap.put(\"IronMan\" , 3000) hashMap.put(\"Thor\" , 100) hashMap.put(\"SpiderMan\" , 1100) hashMap.put(\"NickFury\" , 1200) for(key in hashMap.keys) { println(\"Element at key $key : ${hashMap[key]}\") } //returns the size of hashMap println(\"\\n\" + \"hashMap.size : \" + hashMap.size ) //adding a new element in the hashMap hashMap[\"BlackWidow\"] = 1000; println(\"hashMap.size : \" + hashMap.size + \"\\n\") for(key in hashMap.keys) { println(\"Element at key $key : ${hashMap[key]}\") }}",
"e": 4296,
"s": 3452,
"text": null
},
{
"code": null,
"e": 4304,
"s": 4296,
"text": "Output:"
},
{
"code": null,
"e": 4612,
"s": 4304,
"text": "Element at key Thor : 100\nElement at key IronMan : 3000\nElement at key NickFury : 1200\nElement at key SpiderMan : 1100\n\nhashMap.size : 4\nhashMap.size : 5\n\nElement at key Thor : 100\nElement at key BlackWidow : 1000\nElement at key IronMan : 3000\nElement at key NickFury : 1200\nElement at key SpiderMan : 1100\n"
},
{
"code": null,
"e": 4701,
"s": 4614,
"text": "Kotlin program of using functions HashMap.get(key), HashMap.replace(), HashMap.put() –"
},
{
"code": "fun main(args: Array<String>) { var hashMap : HashMap<String, Int> = HashMap<String, Int> () //adding elements to the hashMap // using put() function hashMap.put(\"IronMan\" , 3000) hashMap.put(\"Thor\" , 100) hashMap.put(\"SpiderMan\" , 1100) hashMap.put(\"Cap\" , 1200) for(key in hashMap.keys) { println(\"Element at key $key : ${hashMap[key]}\") } //the hashMap's elements can be accessed like this println(\"\\nhashMap[\\\"IronMan\\\"] : \" + hashMap[\"IronMan\"]) hashMap[\"Thor\"] = 2000 println(\"hashMap.get(\\\"Thor\\\") : \" + hashMap.get(\"Thor\") + \"\\n\") //replacing some values hashMap.replace(\"Cap\" , 999); hashMap.put(\"Thor\" , 2000); println(\"hashMap.replace(\\\"Cap\\\" , 999)\" + \" hashMap.replace(\\\"Thor\\\" , 2000)) :\") for(key in hashMap.keys) { println(\"Element at key $key : ${hashMap[key]}\") }}",
"e": 5614,
"s": 4701,
"text": null
},
{
"code": null,
"e": 5622,
"s": 5614,
"text": "Output:"
},
{
"code": null,
"e": 5967,
"s": 5622,
"text": "Element at key Thor : 100\nElement at key Cap : 1200\nElement at key IronMan : 3000\nElement at key SpiderMan : 1100\n\nhashMap[\"IronMan\"] : 3000\nhashMap.get(\"Thor\") : 2000\n\nhashMap.replace(\"Cap\", 999) hashMap.replace(\"Thor\", 2000)) :\nElement at key Thor : 2000\nElement at key Cap : 999\nElement at key IronMan : 3000\nElement at key SpiderMan : 1100\n"
},
{
"code": null,
"e": 6299,
"s": 5967,
"text": "Kotlin HashMap provides constant time or O(1) complexity for basic operations like get and put, if hash function is properly written and it disperses the elements properly. In case of searching in the HashMap containsKey() is just a get() that throws away the retrieved value, it’s O(1) (assuming the hash function works properly)."
},
{
"code": null,
"e": 6375,
"s": 6299,
"text": "Boolean containsKey(key: K): It returns true if map contains specifies key."
},
{
"code": null,
"e": 6473,
"s": 6375,
"text": "Boolean containsValue(value: V): It returns true if map maps one of more keys to specified value."
},
{
"code": null,
"e": 6521,
"s": 6473,
"text": "void clear(): It removes all elements from map."
},
{
"code": null,
"e": 6603,
"s": 6521,
"text": "remove(key: K): It removes the specified key and its corresponding value from map"
},
{
"code": null,
"e": 6608,
"s": 6603,
"text": "Hash"
},
{
"code": null,
"e": 6627,
"s": 6608,
"text": "Kotlin Collections"
},
{
"code": null,
"e": 6634,
"s": 6627,
"text": "Picked"
},
{
"code": null,
"e": 6641,
"s": 6634,
"text": "Kotlin"
},
{
"code": null,
"e": 6646,
"s": 6641,
"text": "Hash"
}
] |
How to solve Relational Algebra problems for GATE | 23 Jun, 2021
In this article, Let’s discuss common types of questions in relational algebra which are asked in GATE. Before reading this article, you should have an idea about Basic Operators and Extended Operators in relational algebra.
Type-1: Given a relational algebra expression, find the result. Suppose you have a relation Order(Prod_Id, Agent_Id, Order_Month) and you have to find out what will the following algebra expression return.
∏Order1.Prod_Id (ρ(Order1,Order) Order1.Prod_Id=Order2.Prod_Id
and Order1.Agent_Id≠Order2.Agent_Id
and Order1.Order_Month=Order2.Order_Month ρ(Order2,Order))
Process the expression starting from innermost brackets.
In this example, we have renamed order to Order1 and Order2 (Both represent the same relation order). Then we have applied the conditional join between Order1 and Order2. It will return those rows where Product_Id and Order_Month of Order1 and Order2 are the same but Agent_Id of Order1 and Order2 is different. It implies the rows where the same product is ordered by two different agents in the same month. Then we are projecting the Prod_Id.
So the final output will return the Prod_Id of products that are ordered by different agents in the same month. We can do this by taking sample data. Let Order relation consists of the following data.
Table – Order
When we apply the following expression, the rows which are highlighted in blue will be selected.
(ρ(Order1,Order)Order1.Prod_Id=Order2.Prod_Id
and Order1.Agent_Id≠Order2.Agent_Id
and Order1.Order_Month=Order2.Order_Month ρ(Order2,Order))
After projecting Order1.Prod_Id, the output will be P002 which is Prod_Id of products that are ordered by at least two different agents in the same month.
Note – If we want to find Prod_Id which are ordered by at least three different agents in same month, it can be done as:
∏Order1.Prod_Id (σOrder1.Prod_Id=Order2.Prod_Id
and Order1.Prod_Id=Order3.Prod_Id
and Order1.Agent_Id≠Order2.Agent_Id
and Order1.Agent_Id≠Order3.Agent_Id
and Order2.Agent_Id≠Order3.Agent_Id
and Order1.Order_Month=Order2.Order_Month
and Order1.Order_Month=Order3.Order_Month(ρ(Order1,Order)X ρ(Order2,Order)X ρ(Order3,Order)))
Type-2: Given two relations, what will be the maximum and minimum number of tuples after natural join?
Consider the following relation R(A,B,C) and S(B,D,E) with underlined primary key. The relation R contains 200 tuples and the relation S contains 100 tuples. What is the maximum number of tuples possible in the natural Join R and S?
To solve this type of question, first, we will see on which attribute natural join will take place. Natural join selects those rows which have equal values for common attribute. In this case, the expression would be like:
σR.B=S.B (RX S)
In relation R, attribute B is the primary key. So Relation R will have 200 distinct values of B. On the other hand, Relation S has BD as the primary key. So attribute B can have 100 distinct values or 1 value for all rows.
Case-1: S.B has 100 distinct values and each of these values match to R.B
In this case, every value of B in S will match to a value of B in R. So natural join will have 100 tuples.
Case-2: S.B has 1 values and this values match to R.B
In this case, every value of B in S will match to a value of B in R. So natural join will have 100 tuples.
Case-3: S.B has 100 distinct values and none of these values matches to R.B
In this case, no value of B in S will match to a value of B in R. So natural join will have 0 tuple.
Case-4: S.B has 1 value and it does not match with R.B
In this case, no value of B in S will match to a value of B in R. So natural join will have 0 tuples.
So the maximum number of tuples will be 100 and min will be 0.
Note – If it is explicitly mentioned that S.B is a foreign key to R.B, then Case-3 and Case-4 discussed above are not possible because the value of S.B will be from the values of R.B. So, the minimum and maximum number of tuples in natural join will be 100.
Reference – Fundamentals of Database Systems bu Navathe
Article contributed by Sonal Tuteja. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
krishna_97
DBMS-ER model
DBMS-Relational Algebra
DBMS
GATE CS
DBMS
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here. | [
{
"code": null,
"e": 52,
"s": 24,
"text": "\n23 Jun, 2021"
},
{
"code": null,
"e": 278,
"s": 52,
"text": "In this article, Let’s discuss common types of questions in relational algebra which are asked in GATE. Before reading this article, you should have an idea about Basic Operators and Extended Operators in relational algebra. "
},
{
"code": null,
"e": 485,
"s": 278,
"text": "Type-1: Given a relational algebra expression, find the result. Suppose you have a relation Order(Prod_Id, Agent_Id, Order_Month) and you have to find out what will the following algebra expression return. "
},
{
"code": null,
"e": 733,
"s": 487,
"text": "∏Order1.Prod_Id (ρ(Order1,Order) Order1.Prod_Id=Order2.Prod_Id \n and Order1.Agent_Id≠Order2.Agent_Id \n and Order1.Order_Month=Order2.Order_Month ρ(Order2,Order)) "
},
{
"code": null,
"e": 791,
"s": 733,
"text": "Process the expression starting from innermost brackets. "
},
{
"code": null,
"e": 1237,
"s": 791,
"text": "In this example, we have renamed order to Order1 and Order2 (Both represent the same relation order). Then we have applied the conditional join between Order1 and Order2. It will return those rows where Product_Id and Order_Month of Order1 and Order2 are the same but Agent_Id of Order1 and Order2 is different. It implies the rows where the same product is ordered by two different agents in the same month. Then we are projecting the Prod_Id. "
},
{
"code": null,
"e": 1439,
"s": 1237,
"text": "So the final output will return the Prod_Id of products that are ordered by different agents in the same month. We can do this by taking sample data. Let Order relation consists of the following data. "
},
{
"code": null,
"e": 1454,
"s": 1439,
"text": "Table – Order "
},
{
"code": null,
"e": 1552,
"s": 1454,
"text": "When we apply the following expression, the rows which are highlighted in blue will be selected. "
},
{
"code": null,
"e": 1734,
"s": 1552,
"text": "(ρ(Order1,Order)Order1.Prod_Id=Order2.Prod_Id \n and Order1.Agent_Id≠Order2.Agent_Id \n and Order1.Order_Month=Order2.Order_Month ρ(Order2,Order))"
},
{
"code": null,
"e": 1892,
"s": 1736,
"text": "After projecting Order1.Prod_Id, the output will be P002 which is Prod_Id of products that are ordered by at least two different agents in the same month. "
},
{
"code": null,
"e": 2015,
"s": 1892,
"text": "Note – If we want to find Prod_Id which are ordered by at least three different agents in same month, it can be done as: "
},
{
"code": null,
"e": 2550,
"s": 2015,
"text": "∏Order1.Prod_Id (σOrder1.Prod_Id=Order2.Prod_Id \n and Order1.Prod_Id=Order3.Prod_Id \n and Order1.Agent_Id≠Order2.Agent_Id \n and Order1.Agent_Id≠Order3.Agent_Id \n and Order2.Agent_Id≠Order3.Agent_Id \n and Order1.Order_Month=Order2.Order_Month \n and Order1.Order_Month=Order3.Order_Month(ρ(Order1,Order)X ρ(Order2,Order)X ρ(Order3,Order))) "
},
{
"code": null,
"e": 2654,
"s": 2550,
"text": "Type-2: Given two relations, what will be the maximum and minimum number of tuples after natural join? "
},
{
"code": null,
"e": 2888,
"s": 2654,
"text": "Consider the following relation R(A,B,C) and S(B,D,E) with underlined primary key. The relation R contains 200 tuples and the relation S contains 100 tuples. What is the maximum number of tuples possible in the natural Join R and S? "
},
{
"code": null,
"e": 3111,
"s": 2888,
"text": "To solve this type of question, first, we will see on which attribute natural join will take place. Natural join selects those rows which have equal values for common attribute. In this case, the expression would be like: "
},
{
"code": null,
"e": 3128,
"s": 3111,
"text": "σR.B=S.B (RX S) "
},
{
"code": null,
"e": 3352,
"s": 3128,
"text": "In relation R, attribute B is the primary key. So Relation R will have 200 distinct values of B. On the other hand, Relation S has BD as the primary key. So attribute B can have 100 distinct values or 1 value for all rows. "
},
{
"code": null,
"e": 3427,
"s": 3352,
"text": "Case-1: S.B has 100 distinct values and each of these values match to R.B "
},
{
"code": null,
"e": 3537,
"s": 3429,
"text": "In this case, every value of B in S will match to a value of B in R. So natural join will have 100 tuples. "
},
{
"code": null,
"e": 3592,
"s": 3537,
"text": "Case-2: S.B has 1 values and this values match to R.B "
},
{
"code": null,
"e": 3702,
"s": 3594,
"text": "In this case, every value of B in S will match to a value of B in R. So natural join will have 100 tuples. "
},
{
"code": null,
"e": 3779,
"s": 3702,
"text": "Case-3: S.B has 100 distinct values and none of these values matches to R.B "
},
{
"code": null,
"e": 3883,
"s": 3781,
"text": "In this case, no value of B in S will match to a value of B in R. So natural join will have 0 tuple. "
},
{
"code": null,
"e": 3939,
"s": 3883,
"text": "Case-4: S.B has 1 value and it does not match with R.B "
},
{
"code": null,
"e": 4044,
"s": 3941,
"text": "In this case, no value of B in S will match to a value of B in R. So natural join will have 0 tuples. "
},
{
"code": null,
"e": 4108,
"s": 4044,
"text": "So the maximum number of tuples will be 100 and min will be 0. "
},
{
"code": null,
"e": 4367,
"s": 4108,
"text": "Note – If it is explicitly mentioned that S.B is a foreign key to R.B, then Case-3 and Case-4 discussed above are not possible because the value of S.B will be from the values of R.B. So, the minimum and maximum number of tuples in natural join will be 100. "
},
{
"code": null,
"e": 4423,
"s": 4367,
"text": "Reference – Fundamentals of Database Systems bu Navathe"
},
{
"code": null,
"e": 4586,
"s": 4423,
"text": "Article contributed by Sonal Tuteja. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above. "
},
{
"code": null,
"e": 4597,
"s": 4586,
"text": "krishna_97"
},
{
"code": null,
"e": 4611,
"s": 4597,
"text": "DBMS-ER model"
},
{
"code": null,
"e": 4635,
"s": 4611,
"text": "DBMS-Relational Algebra"
},
{
"code": null,
"e": 4640,
"s": 4635,
"text": "DBMS"
},
{
"code": null,
"e": 4648,
"s": 4640,
"text": "GATE CS"
},
{
"code": null,
"e": 4653,
"s": 4648,
"text": "DBMS"
}
] |
Python | Pandas Series.fillna() | 13 Feb, 2019
Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index.
Pandas Series.fillna() function is used to fill NA/NaN values using the specified method.
Syntax: Series.fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs)
Parameter :value : Value to use to fill holesmethod : Method to use for filling holes in reindexed Series pad / ffillaxis : {0 or ‘index’}inplace : If True, fill in place.limit : If method is specified, this is the maximum number of consecutive NaN values to forward/backward filldowncast : dict, default is None
Returns : filled : Series
Example #1: Use Series.fillna() function to fill out the missing values in the given series object. Use a dictionary to pass the values to be filled corresponding to the different index labels in the series object.
# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['New York', 'Chicago', 'Toronto', None, 'Rio']) # Create the Indexsr.index = ['City 1', 'City 2', 'City 3', 'City 4', 'City 5'] # set the indexsr.index = index_ # Print the seriesprint(sr)
Output :
Now we will use Series.fillna() function to fill out the missing values in the given series object.
# fill the values using dictionaryresult = sr.fillna(value = {'City 4' : 'Lisbon', 'City 1' : 'Dublin'}) # Print the resultprint(result)
Output :
As we can see in the output, the Series.fillna() function has successfully filled out the missing values in the given series object. Example #2 : Use Series.fillna() function to fill out the missing values in the given series object using forward fill (ffill) method.
# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([100, None, None, 18, 65, None, 32, 10, 5, 24, None]) # Create the Indexindex_ = pd.date_range('2010-10-09', periods = 11, freq ='M') # set the indexsr.index = index_ # Print the seriesprint(sr)
Output :Now we will use Series.fillna() function to fill out the missing values in the given series object. We will use forward fill method to fill out the missing values.
# fill the values using forward fill methodresult = sr.fillna(method = 'ffill') # Print the resultprint(result)
Output :
As we can see in the output, the Series.fillna() function has successfully filled out the missing values in the given series object.
Python pandas-series
Python pandas-series-methods
Python-pandas
Python
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Python Dictionary
Different ways to create Pandas Dataframe
Enumerate() in Python
Read a file line by line in Python
How to Install PIP on Windows ?
Python String | replace()
*args and **kwargs in Python
Python Classes and Objects
Python OOPs Concepts
Iterate over a list in Python | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n13 Feb, 2019"
},
{
"code": null,
"e": 285,
"s": 28,
"text": "Pandas series is a One-dimensional ndarray with axis labels. The labels need not be unique but must be a hashable type. The object supports both integer- and label-based indexing and provides a host of methods for performing operations involving the index."
},
{
"code": null,
"e": 375,
"s": 285,
"text": "Pandas Series.fillna() function is used to fill NA/NaN values using the specified method."
},
{
"code": null,
"e": 485,
"s": 375,
"text": "Syntax: Series.fillna(value=None, method=None, axis=None, inplace=False, limit=None, downcast=None, **kwargs)"
},
{
"code": null,
"e": 798,
"s": 485,
"text": "Parameter :value : Value to use to fill holesmethod : Method to use for filling holes in reindexed Series pad / ffillaxis : {0 or ‘index’}inplace : If True, fill in place.limit : If method is specified, this is the maximum number of consecutive NaN values to forward/backward filldowncast : dict, default is None"
},
{
"code": null,
"e": 824,
"s": 798,
"text": "Returns : filled : Series"
},
{
"code": null,
"e": 1039,
"s": 824,
"text": "Example #1: Use Series.fillna() function to fill out the missing values in the given series object. Use a dictionary to pass the values to be filled corresponding to the different index labels in the series object."
},
{
"code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series(['New York', 'Chicago', 'Toronto', None, 'Rio']) # Create the Indexsr.index = ['City 1', 'City 2', 'City 3', 'City 4', 'City 5'] # set the indexsr.index = index_ # Print the seriesprint(sr)",
"e": 1314,
"s": 1039,
"text": null
},
{
"code": null,
"e": 1323,
"s": 1314,
"text": "Output :"
},
{
"code": null,
"e": 1423,
"s": 1323,
"text": "Now we will use Series.fillna() function to fill out the missing values in the given series object."
},
{
"code": "# fill the values using dictionaryresult = sr.fillna(value = {'City 4' : 'Lisbon', 'City 1' : 'Dublin'}) # Print the resultprint(result)",
"e": 1561,
"s": 1423,
"text": null
},
{
"code": null,
"e": 1570,
"s": 1561,
"text": "Output :"
},
{
"code": null,
"e": 1838,
"s": 1570,
"text": "As we can see in the output, the Series.fillna() function has successfully filled out the missing values in the given series object. Example #2 : Use Series.fillna() function to fill out the missing values in the given series object using forward fill (ffill) method."
},
{
"code": "# importing pandas as pdimport pandas as pd # Creating the Seriessr = pd.Series([100, None, None, 18, 65, None, 32, 10, 5, 24, None]) # Create the Indexindex_ = pd.date_range('2010-10-09', periods = 11, freq ='M') # set the indexsr.index = index_ # Print the seriesprint(sr)",
"e": 2117,
"s": 1838,
"text": null
},
{
"code": null,
"e": 2289,
"s": 2117,
"text": "Output :Now we will use Series.fillna() function to fill out the missing values in the given series object. We will use forward fill method to fill out the missing values."
},
{
"code": "# fill the values using forward fill methodresult = sr.fillna(method = 'ffill') # Print the resultprint(result)",
"e": 2402,
"s": 2289,
"text": null
},
{
"code": null,
"e": 2411,
"s": 2402,
"text": "Output :"
},
{
"code": null,
"e": 2544,
"s": 2411,
"text": "As we can see in the output, the Series.fillna() function has successfully filled out the missing values in the given series object."
},
{
"code": null,
"e": 2565,
"s": 2544,
"text": "Python pandas-series"
},
{
"code": null,
"e": 2594,
"s": 2565,
"text": "Python pandas-series-methods"
},
{
"code": null,
"e": 2608,
"s": 2594,
"text": "Python-pandas"
},
{
"code": null,
"e": 2615,
"s": 2608,
"text": "Python"
},
{
"code": null,
"e": 2713,
"s": 2615,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2731,
"s": 2713,
"text": "Python Dictionary"
},
{
"code": null,
"e": 2773,
"s": 2731,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 2795,
"s": 2773,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 2830,
"s": 2795,
"text": "Read a file line by line in Python"
},
{
"code": null,
"e": 2862,
"s": 2830,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 2888,
"s": 2862,
"text": "Python String | replace()"
},
{
"code": null,
"e": 2917,
"s": 2888,
"text": "*args and **kwargs in Python"
},
{
"code": null,
"e": 2944,
"s": 2917,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 2965,
"s": 2944,
"text": "Python OOPs Concepts"
}
] |
Subsets and Splits