Design a stack which can give maximum frequency element - GeeksforGeeks | 06 Mar, 2022
Given N elements, the task is to implement a stack that removes and returns the maximum frequency element on every pop operation. If there is a tie in frequency, the topmost of the highest-frequency elements is returned.
Examples:
Input: push(4), push(6), push(7), push(6), push(8)
Output: pop() -> returns 6, as 6 is the most frequent (frequency of 6 = 2).
pop() -> returns 8 (6 also has the highest frequency, but it is not the topmost)
Approach: Maintain two HashMaps: a frequency HashMap that maps each element to its frequency, and a setMap that groups all elements with the same frequency into one stack. The FrequencyStack has 2 functions:
push(int x): update the frequency of the element (x) in the frequency HashMap and update the maxfreq variable (which holds the maximum frequency seen so far). setMap maintains, for each frequency, a stack containing all elements pushed at that frequency.
pop(): get the topmost element of the maxfreq stack in setMap, then decrement the frequency of the popped element. After popping, if that stack becomes empty, decrement maxfreq.
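To make the bookkeeping concrete, here is a short trace of the two maps and maxfreq for the example input (a walkthrough added for illustration; push/pop refer to the functions implemented below):

push(4)  # freqMap = {4:1}                 setMap = {1:[4]}               maxfreq = 1
push(6)  # freqMap = {4:1, 6:1}            setMap = {1:[4,6]}             maxfreq = 1
push(7)  # freqMap = {4:1, 6:1, 7:1}       setMap = {1:[4,6,7]}           maxfreq = 1
push(6)  # freqMap = {4:1, 6:2, 7:1}       setMap = {1:[4,6,7], 2:[6]}    maxfreq = 2
push(8)  # freqMap = {4:1, 6:2, 7:1, 8:1}  setMap = {1:[4,6,7,8], 2:[6]}  maxfreq = 2
pop()    # returns 6: top of setMap[2]; setMap[2] becomes empty, so maxfreq drops to 1
pop()    # returns 8: top of setMap[1] is 8, the last element pushed with frequency 1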
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// freqMap is to map element to its frequency
map<int, int> freqMap;

// setMap is to map frequency to the
// element list with this frequency
map<int, stack<int> > setMap;

// Variable which stores maximum frequency
// of the stack element
int maxfreq = 0;

// Function to insert x in the stack
void push(int x)
{
    // Frequency of x
    int freq = freqMap[x] + 1;

    // Mapping of x with its frequency
    freqMap[x] = freq;

    // Update maxfreq variable
    if (freq > maxfreq)
        maxfreq = freq;

    // Map element to its frequency list
    // If given frequency list doesn't exist
    // make a new list of the required frequency
    setMap[freq].push(x);
}

// Function to remove maximum frequency element
int pop()
{
    // Remove element from setMap
    // from maximum frequency list
    int top = setMap[maxfreq].top();
    setMap[maxfreq].pop();

    // Decrement the frequency of the popped element
    freqMap[top]--;

    // If whole list is popped
    // then decrement the maxfreq
    if (setMap[maxfreq].size() == 0)
        maxfreq--;

    return top;
}

// Driver code
int main()
{
    // Push elements to the stack
    push(4);
    push(6);
    push(7);
    push(6);
    push(8);

    // Pop elements
    cout << (pop()) << "\n";
    cout << (pop());

    return 0;
}

// This code is contributed by Arnab Kundu
// Java implementation of the approach
import java.util.*;

public class freqStack {

    // freqMap is to map element to its frequency
    static Map<Integer, Integer> freqMap = new HashMap<>();

    // setMap is to map frequency to the
    // element list with this frequency
    static Map<Integer, Stack<Integer> > setMap = new HashMap<>();

    // Variable which stores maximum frequency
    // of the stack element
    static int maxfreq = 0;

    // Function to insert x in the stack
    public static void push(int x)
    {
        // Frequency of x
        int freq = freqMap.getOrDefault(x, 0) + 1;

        // Mapping of x with its frequency
        freqMap.put(x, freq);

        // Update maxfreq variable
        if (freq > maxfreq)
            maxfreq = freq;

        // Map element to its frequency list
        // If given frequency list doesn't exist
        // make a new list of the required frequency
        setMap.computeIfAbsent(freq, z -> new Stack()).push(x);
    }

    // Function to remove maximum frequency element
    public static int pop()
    {
        // Remove element from setMap
        // from maximum frequency list
        int top = setMap.get(maxfreq).pop();

        // Decrement the frequency of the popped element
        freqMap.put(top, freqMap.get(top) - 1);

        // If whole list is popped
        // then decrement the maxfreq
        if (setMap.get(maxfreq).size() == 0)
            maxfreq--;

        return top;
    }

    // Driver code
    public static void main(String[] args)
    {
        // Push elements to the stack
        push(4);
        push(6);
        push(7);
        push(6);
        push(8);

        // Pop elements
        System.out.println(pop());
        System.out.println(pop());
    }
}
# Python3 implementation of the approach

# freqMap is to map element to its frequency
freqMap = {}

# setMap is to map frequency to the
# element list with this frequency
setMap = {}

# Variable which stores maximum frequency
# of the stack element
maxfreq = 0

# Function to insert x in the stack
def push(x):
    global maxfreq

    if x not in freqMap:
        freqMap[x] = 0

    # Frequency of x
    freq = freqMap[x] + 1

    # Mapping of x with its frequency
    freqMap[x] = freq

    # Update maxfreq variable
    if (freq > maxfreq):
        maxfreq = freq

    # Map element to its frequency list
    # If given frequency list doesn't exist
    # make a new list of the required frequency
    if freq not in setMap:
        setMap[freq] = []
    setMap[freq].append(x)

# Function to remove maximum frequency element
def pop():
    global maxfreq

    # Remove element from setMap
    # from maximum frequency list
    top = setMap[maxfreq][-1]
    setMap[maxfreq].pop()

    # Decrement the frequency
    # of the popped element
    freqMap[top] -= 1

    # If whole list is popped
    # then decrement the maxfreq
    if (len(setMap[maxfreq]) == 0):
        maxfreq -= 1

    return top

# Driver code
if __name__ == "__main__":

    # Push elements to the stack
    push(4)
    push(6)
    push(7)
    push(6)
    push(8)

    # Pop elements
    print(pop())
    print(pop())

# This code is contributed by AnkitRai01
// C# implementation of the approach
using System;
using System.Collections.Generic;

class GFG {

    // freqMap is to map element to its frequency
    static Dictionary<int, int> freqMap = new Dictionary<int, int>();

    // setMap is to map frequency to the
    // element list with this frequency
    static Dictionary<int, Stack<int>> setMap = new Dictionary<int, Stack<int>>();

    // Variable which stores maximum frequency
    // of the stack element
    static int maxfreq = 0;

    // Function to insert x in the stack
    static void push(int x)
    {
        // Frequency of x
        int freq = 1;
        if (freqMap.ContainsKey(x))
        {
            freq = freq + freqMap[x];
        }

        // Mapping of x with its frequency
        freqMap[x] = freq;

        // Update maxfreq variable
        if (freq > maxfreq)
            maxfreq = freq;

        // Map element to its frequency list
        // If given frequency list doesn't exist
        // make a new list of the required frequency
        if (!setMap.ContainsKey(freq))
        {
            setMap[freq] = new Stack<int>();
        }
        setMap[freq].Push(x);
    }

    // Function to remove maximum frequency element
    static int pop()
    {
        // Remove element from setMap
        // from maximum frequency list
        int top = setMap[maxfreq].Peek();
        setMap[maxfreq].Pop();

        // Decrement the frequency of the popped element
        freqMap[top] = freqMap[top] - 1;

        // If whole list is popped
        // then decrement the maxfreq
        if (setMap[maxfreq].Count == 0)
            maxfreq--;

        return top;
    }

    // Driver code
    static void Main()
    {
        // Push elements to the stack
        push(4);
        push(6);
        push(7);
        push(6);
        push(8);

        // Pop elements
        Console.WriteLine(pop());
        Console.WriteLine(pop());
    }
}

// This code is contributed by rameshtravel07.
<script>

// Javascript implementation of the approach

// freqMap is to map element to its frequency
var freqMap = new Map();

// setMap is to map frequency to the
// element list with this frequency
var setMap = new Map();

// Variable which stores maximum frequency
// of the stack element
var maxfreq = 0;

// Function to insert x in the stack
function push(x)
{
    // Frequency of x
    if (!freqMap.has(x))
        freqMap.set(x, 1)
    else
        freqMap.set(x, freqMap.get(x) + 1)

    var freq = freqMap.get(x)

    // Mapping of x with its frequency
    freqMap.set(x, freq);

    // Update maxfreq variable
    if (freq > maxfreq)
        maxfreq = freq;

    // Map element to its frequency list
    // If given frequency list doesn't exist
    // make a new list of the required frequency
    if (!setMap.has(freq))
    {
        setMap.set(freq, [x])
    }
    else
    {
        var tmp = setMap.get(freq);
        tmp.push(x);
        setMap.set(freq, tmp);
    }
}

// Function to remove maximum frequency element
function pop()
{
    // Remove element from setMap
    // from maximum frequency list
    var tmp = setMap.get(maxfreq);
    var top = tmp[tmp.length - 1];
    tmp.pop();
    setMap.set(maxfreq, tmp);

    // Decrement the frequency of the popped element
    if (freqMap.has(top))
        freqMap.set(top, freqMap.get(top) - 1)

    // If whole list is popped
    // then decrement the maxfreq
    if (setMap.get(maxfreq).length == 0)
        maxfreq--;

    return top;
}

// Driver code
// Push elements to the stack
push(4);
push(6);
push(7);
push(6);
push(8);

// Pop elements
document.write((pop()) + "<br>");
document.write((pop()));

// This code is contributed by itsok.

</script>
6
8
Top 5 Beautiful Soup Functions That Will Make Your Life Easier | by Lazar Gugleta | Towards Data Science | Once you get into Web Scraping and data processing, you will find so many tools that can do that job for you. One of them is Beautiful Soup, which is a python library for pulling data out of HTML and XML files. It creates data parse trees in order to get data easily.
The basic process goes something like this:
Get the data and then process it any way you want.
That is why today I want to show you some of the top functions that Beautiful Soup has to offer.
If you are also interested in other libraries like Selenium, here are other examples you should look into: I have written articles about Selenium and Web Scraping before, so before you begin with these, I would recommend you read this article “Everything About Web Scraping”, because of the setup process. And if you are already more advanced with Web Scraping, try my advanced scripts like “How to Save Money with Python” and “How to Make an Analysis Tool with Python”.
Also, a good example of setting up the environment for BeautifulSoup is in the article “How to Save Money with Python”.
Let’s just jump right into it!
Before we get into Top 5 Functions, we have to set up our environment and libraries that we are going to use in order to get data.
In your terminal, you should install these libraries:
pip3 install requests
Requests can be used so you can add content like headers, form data, multipart files, and parameters via simple Python libraries. It also allows you to access the response data of Python in the same way.
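As a quick, generic illustration of what that looks like (this snippet is mine, not from the article, and httpbin.org is just a placeholder endpoint):

import requests

# Pass query parameters and headers as keyword arguments,
# then read the status code and body from the returned response object
response = requests.get('https://httpbin.org/get',
                        params={'q': 'web scraping'},
                        headers={'User-agent': 'my-test-agent'})
print(response.status_code)
print(response.text[:200])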
sudo pip3 install beautifulsoup4
This is our main library Beautiful Soup that we already mentioned above.
Also, at the beginning of your Python script, you should include the libraries we just installed:
import requests
from bs4 import BeautifulSoup
Now let’s move on to the functions!
This function is absolutely essential, since with it you will get to the web page you desire. Let me show you.
First, we have to find a URL we want to scrape (get data) from:
URL = 'https://www.amazon.de/gp/product/B0756CYWWD/ref=as_li_tl?ie=UTF8&tag=idk01e-21&camp=1638&creative=6742&linkCode=as2&creativeASIN=B0756CYWWD&linkId=868d0edc56c291dbff697d1692708240'
headers = {"User-agent": 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'}
I took a random Amazon product and with the get function, we are going to get access to data from the web page. Headers are just a definition for your browser. You can check yours here.
Using the requests library we get to the desired URL with the defined headers. After that, we create an object instance ‘soup’ that we can use to find anything we want on the page.
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')
BeautifulSoup(markup, parser) creates a data structure representing a parsed HTML or XML document. Most of the methods you’ll call on a BeautifulSoup object are inherited from PageElement or Tag. Internally, this class defines the basic interface called by the tree builders when converting an HTML/XML document into a data structure. The interface abstracts away the differences between parsers.
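As a small self-contained illustration of the constructor and parser choice (my example, not from the article):

from bs4 import BeautifulSoup

# Parse a tiny HTML snippet with the built-in 'html.parser'
html = "<div><p id='productTitle'>  Example Product  </p></div>"
soup = BeautifulSoup(html, 'html.parser')
print(soup.find(id='productTitle').get_text())  # '  Example Product  ' (whitespace kept)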
We can now move on to the next function, which actually searches the object we just created.
With the find() function, we are able to search for anything in our web page. Let’s say we want to get a title and the price of the product based on their ids.
title = soup.find(id="productTitle").get_text()
price = soup.find(id="priceblock_ourprice").get_text()
You can find the id of these Web elements by pressing F12 on your keyboard or with right-click -> ‘Inspect’.
Let’s look closely at what just happened there!
As you can see, in the previous function we used get_text() to extract the text part of the newly found elements, title and price.
But before we get to the final results there are a few more things that we have to perform on our product in order to get perfect output.
The strip() method returns a copy of the string with both leading and trailing characters removed (based on the string argument passed).
We use this function in order to remove the empty spaces we have in our title:
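(The snippet itself is not included in this extract; assuming the title variable obtained above, it would be something as simple as:)

# Remove leading and trailing whitespace from the extracted title
title = title.strip()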
This function can also be used anywhere else in Python, not just with Beautiful Soup, but in my personal experience it has come in handy so many times when operating on text elements, and that is why I am putting it on this list.
This function also has a general purpose in Python, but I found it very useful as well. It splits the string into different parts, and we can use the parts that we desire. It works with a combination of a separator and a string.
We use sep as the separator in our string for the price and convert it to an integer (whole number).
replace() just replaces ‘.’ with an empty string.
sep = ','
con_price = price.split(sep, 1)[0]
converted_price = int(con_price.replace('.', ''))
Here are the final results:
I put the complete code for you in this Gist:
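(The embedded Gist is not reproduced in this extract; piecing together the snippets shown above, the complete script looks roughly like this. The URL and headers are the ones used earlier and should be replaced with your own; the final print statements are only illustrative.)

import requests
from bs4 import BeautifulSoup

URL = 'https://www.amazon.de/gp/product/B0756CYWWD/ref=as_li_tl?ie=UTF8&tag=idk01e-21&camp=1638&creative=6742&linkCode=as2&creativeASIN=B0756CYWWD&linkId=868d0edc56c291dbff697d1692708240'
headers = {"User-agent": 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36'}

# Download the page and parse it
page = requests.get(URL, headers=headers)
soup = BeautifulSoup(page.content, 'html.parser')

# Extract the raw title and price texts by their element ids
title = soup.find(id="productTitle").get_text()
price = soup.find(id="priceblock_ourprice").get_text()

# Clean the title and convert the price to a whole number
title = title.strip()
sep = ','
con_price = price.split(sep, 1)[0]
converted_price = int(con_price.replace('.', ''))

print(title)
print(converted_price)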
Just check your headers before you execute it.
If you want to run it, here is the terminal command:
python3 bs_tutorial.py
We are done!
As mentioned before, this is not my first time writing about Beautiful Soup, Selenium and Web Scraping in general. There are many more functions I would love to cover and many more to come. I hope you liked this tutorial and in order to keep up, follow me for more!
Thanks for reading!
Retrieve and Process hyperspectral field measurements | Towards Data Science | As technology evolves rapidly, remote sensing is finding many uses across different subjects, from forest fire mapping to water quality assessment, 3D surface modeling and many others. The advantages of obtaining physical characteristics of an area by measuring its reflected and emitted radiation at a distance are many. However, field measurements are as important as the remotely sensed data, in order to conduct research that will make it possible to correlate what is “seen” at a distance with the ground-truth reality. In the case of multispectral remotely sensed images, it is important to conduct field measurements using hyperspectral sensors that make it possible to calibrate the final results.
It was the beginning of 2021 when my thesis supervisor told me that we needed to recover some hyperspectral field measurements. These measurements had been taken during various past field campaigns across the world. The objective seemed quite simple: I should retrieve the measurements from 3 field sensors (irradiance and radiances — upwelling and sky’s) to calculate the final reflectance (eq. 1). All field measurements were done using well-known sensors, and the results were stored in Microsoft Access files (.mdb). As there was specific software from the manufacturer of the sensors to manage the .mdb files, called MSDA XE, I thought it would be a “piece of cake”. And I couldn’t be more wrong.
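(Equation 1 itself is not reproduced in this extract. As a reference, and as my own summary rather than the article’s exact formula, the above-water remote sensing reflectance is typically computed from the three sensors as

Rrs = (Lu - rho * Lsky) / Ed

where Lu is the upwelling radiance, Lsky the sky radiance, Ed the downwelling irradiance, and rho a sky-glint factor of about 0.028, which matches the ‘factor 0.028’ comment that appears later in the MDB summary.)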
Germans are very well known for their industry and the quality of their equipment. That’s without a doubt. However, when it comes to systems, user interface or usability, it seems that there’s always something missing. And, in the case of MSDA, a lot was missing. Even a simple task such as opening a file is a weird operation that involves restarting the application. I couldn’t rely on that for my task.
For that, I’ve developed an “unofficial” framework called RadiometryTrios, that is available for Python and that will make things a lot easier, not only to access this kind of data, but also to create an entire database from them.
As explained, this framework is responsible for manipulating hyperspectral radiometry measurements from field campaigns, using the TRIOS equipment.
The manipulation involves:
Extracting radiometry measurements from .MDB files in the MSDA/TRIOS format.
Analysis of the radiometry, data cleaning, and graph plotting
Wavelengths Interpolation
Reflectance generation (from radiance and irradiance measurements)
Input/Output from/to different formats
The schema of the RadiometryTrios framework is displayed in Figure 1.
The classes that are declared in the package are:
BaseRadiometry: this class holds some common functionalities that are used across the other classes such as plotting and interpolating functions. The methods for this class are declared as “static” and can be used as simple functions.
TriosMDB: this class is responsible for connecting to the access .mdb file and extracting the radiometries from it. It is possible to filter values so that the extraction corresponds to the interested measurement.
Radiometry: this class represents the radiometry from a specific measurement (single sensor, but multiple times). It allows data cleaning, filtering and plotting.
RadiometryGroup: as aforementioned, to obtain the reflectance it is necessary to use values from 3 different sensors. In this regard, the RadiometryGroup class stores many radiometries and makes it easier to apply filters to all of them at the same time. It also supports plotting functions.
RadiometryDB: finally, the last class is responsible for creating and maintaining a radiometry database, from the many measurements that have been treated.
The package is available in the following GitHub repository : https://github.com/cordmaur/RadiometryTrios
The repository contains detailed information about installation and dependencies, but, in summary, it is necessary to just clone the project and install it.
git clone https://github.com/cordmaur/RadiometryTrios.git
cd RadiometryTrios
pip install -e .
Note: The -e option will install the package in developer mode, which makes it easier to make changes to the source code (in the project folder); they will be applied automatically, without the need to reinstall the package.
To check if it is installed correctly, run the following commands on a python shell (or Jupyter notebook):
import RadiometryTrios
RadiometryTrios.__version__

'0.0.1'
Once we have the package successfully installed, it is time to learn how to use it’s many functionalities.
The first class that will be covered is the TriosMDB class. The notebook 00_TriosMDB.ipynb, available in the nbs/ folder, describes in detail how to use it.
It is important to note that the TriosMDB class takes care of the ODBC connection under the hood. That means you don't have to handle it manually; the only requirement is the pyodbc package, which is mandatory. Let’s check how to open a connection and display a summary:
{'FileName': WindowsPath('../mdbs/trios_test.mdb'),
 'Records': 937,
 'IDDevice': "['IP_c090', 'SAM_83AE', 'SAM_83ba', 'SAMIP_5078', 'SAM_83B0', 'SAM_83BA']",
 'IDDataType': "['Inclination', 'Pressure', 'SPECTRUM', 'SAMIP']",
 'MethodName': "['SAMIP_5078', 'SAM_83AE', 'SAM_83B0', 'DoubleSpecCalc_2', 'SAM_Calibration_Station', None]",
 'IDMethodType': "['SAMIP Control', 'SAM Control', 'DoubleSpecCalc', 'SAM Calibration Station', None]",
 'Comment': "['Ed deck', 'Lu deck', 'Ld deck', 'reflectance', None]",
 'CommentSub1': "[None, 'factor 0.028']",
 'CommentSub2': '[None]'}
Once a TriosMDB object is created, it fills up a pandas DataFrame with all the content from the tblData table. This can be accessed through the .df attribute. The tblData is the default table of the Trios/MSDA format. To access the contents:
mdb.df.head(3)
Probably the most basic action to analyse the MDB file is to perform queries. The function exec_query returns the results of a query as a list or a pandas dataframe. The exec_query can be used to inspect other tables from the MDB file as well. Let's check the contents of the tblData (the main table). The results output can be a dataframe or a list where each item is a tuple that represents a table row.
mdb.exec_query("select top 1 * from tblData", output_format='pandas')
Or the results can be displayed as a list:
results = mdb.exec_query("select top 1 * from tblData", output_format='list')
results[0][:6]

('2572_2015-11-14_14-29-21_188_038', 1, 'IP_c090', 'Inclination', 'Calibrated', None)
To retrieve the measurements of an MDB file, it is necessary to specify which conditions should be met for the desired radiometry type. These conditions may be a combination of different Columns/Values and must be specified in a dictionary. Each condition’s combination will be related to one radiometry type, also called r-type. For example, in this MDB, we can see that there are reflectances already calculated. Inspecting the MDB in ACCESS, we can see that these reflectances are associated with some IDDevices:
To access the desired radiometry, we first need to create a dictionary with the conditions for each radiometry type, as demonstrated below. Once select_radiometries is successfully called, it will retrieve the times that coincide with all the types if logic='and', or all times found on the MDB if logic='or'. Each radiometry DataFrame will be loaded in the .measurements attribute. That will be explained in the Accessing Measurements section.
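(The code cell is embedded as a Gist in the original and is not reproduced in this extract. The sketch below shows what such a call could look like; only the select_radiometries name and the logic= keyword come from the text, while the dictionary layout and the Comment values are my assumptions, inferred from the description and from the MDB summary shown above.)

# Hypothetical sketch: one condition set per radiometry type (column -> value)
conditions = {
    'Ed': {'Comment': 'Ed deck'},
    'Lu': {'Comment': 'Lu deck'},
    'Ld': {'Comment': 'Ld deck'},
    'reflectance': {'Comment': 'reflectance'},
}

# Select only the times where all radiometry types coincide
times = mdb.select_radiometries(conditions, logic='and')
times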
DatetimeIndex(['2015-11-14 14:39:45', '2015-11-14 14:39:55', '2015-11-14 14:40:05', '2015-11-14 14:40:15', '2015-11-14 14:40:25', '2015-11-14 14:40:35', '2015-11-14 14:40:45', '2015-11-14 14:40:55', '2015-11-14 14:41:05', '2015-11-14 14:41:15', '2015-11-14 14:41:25', '2015-11-14 14:41:35', '2015-11-14 14:41:45', '2015-11-14 14:41:55', '2015-11-14 14:42:05', '2015-11-14 14:42:15', '2015-11-14 14:42:25', '2015-11-14 14:42:35', '2015-11-14 14:42:45', '2015-11-14 14:42:55', '2015-11-14 14:43:05', '2015-11-14 14:43:15', '2015-11-14 14:43:25', '2015-11-14 14:43:35', '2015-11-14 14:43:45', '2015-11-14 14:43:55', '2015-11-14 14:44:05', '2015-11-14 14:44:15', '2015-11-14 14:44:25', '2015-11-14 14:44:35', '2015-11-14 14:44:45', '2015-11-14 14:44:55', '2015-11-14 14:45:05', '2015-11-14 14:45:15', '2015-11-14 14:45:25', '2015-11-14 14:45:35', '2015-11-14 14:45:45', '2015-11-14 14:45:55'], dtype='datetime64[ns]', freq='10S')
The objective of the TriosMDB class is basically to search and export the measurements from the Trios/MSDA mdb. For more advanced manipulation, such as interpolation, reflectance calculation or data cleaning, the Radiometry class is more appropriate. Even so, the TriosMDB comes with some basic plotting functions, to make sure that what is being exported makes sense.
mdb.plot_radiometry('reflectance', min_wl=380, max_wl=950)
mdb.plot_radiometries(cols=2, min_wl=380, max_wl=950)
The last step is to export the radiometries to a text file. The output is a .txt file in the Trios/MSDA format (similar to the .mlb files). The create_measurement_dir flag indicates whether a subdirectory must be created. The subdirectory name follows the format YYYYMMDD-mmss and is the format used by the RadiometryDatabase class. Keep in mind that this format is not easy to manipulate; it is intended to maintain compatibility with MSDA output files. For a more user-friendly format, it is suggested to use the interpolated output provided by the Radiometry class.
out_dir = mdb.export_txt(create_measurement_dir=True)

Saving output file to ..\mdbs\20151114-1439
The output directory is returned. It is a good practice to store it in a variable, because it will be used to open the Radiometry class. Let's now check the files that have been saved:
[file for file in out_dir.iterdir()]

[WindowsPath('../mdbs/20151114-1439/Ed_interpolated.bak'),
 WindowsPath('../mdbs/20151114-1439/Ed_interpolated.csv'),
 WindowsPath('../mdbs/20151114-1439/Ed_spectrum_LO.bak'),
 WindowsPath('../mdbs/20151114-1439/Ed_spectrum_LO.txt'),
 WindowsPath('../mdbs/20151114-1439/Fig_20151114-1439.png'),
 WindowsPath('../mdbs/20151114-1439/Ld_interpolated.bak'),
 WindowsPath('../mdbs/20151114-1439/Ld_interpolated.csv'),
 WindowsPath('../mdbs/20151114-1439/Ld_spectrum_LO.bak'),
 WindowsPath('../mdbs/20151114-1439/Ld_spectrum_LO.txt'),
 WindowsPath('../mdbs/20151114-1439/Lu_interpolated.bak'),
 WindowsPath('../mdbs/20151114-1439/Lu_interpolated.csv'),
 WindowsPath('../mdbs/20151114-1439/Lu_spectrum_LO.bak'),
 WindowsPath('../mdbs/20151114-1439/Lu_spectrum_LO.txt'),
 WindowsPath('../mdbs/20151114-1439/reflectance_spectrum_LO.txt'),
 WindowsPath('../mdbs/20151114-1439/Rrs_interpolated.bak'),
 WindowsPath('../mdbs/20151114-1439/Rrs_interpolated.csv')]
To check the contents of the reflectance file, we can open it in Notepad or any other text editor:
As already mentioned, we can see that this format is not user-friendly for opening in Excel or in a Pandas DataFrame, for example. The interpolated .csv output provided by the Radiometry class is more appropriate for these purposes.
All the code, filled with more examples, is available at the /nbs/00_TriosMDB.ipynb notebook:
In this first part, we’ve seen how to use the TriosMDB class to export radiometry measurements from the .mdb files at the MSDA/Trios format. Basic filtering functionalities are provided to locate the exact measurement inside the .mdb file and export it to a text file. More advanced manipulation functions, such as data cleaning, filtering, interpolation, etc. are provided by the Radiometry and the RadiometryGroup classes, but they will be covered in the next part.
Thanks, and see you there.
If you liked this article and want to continue reading/learning without limits, consider becoming a Medium member. I’ll receive a portion of your membership fee if you use the following link, with no extra cost to you.
How to use drag and drop in Android? | This example demonstrates how to use drag and drop in Android.
Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project.
Step 2 − Add the following code to res/layout/activity_main.xml.
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="horizontal"
tools:context="MainActivity" >
<LinearLayout
android:id="@+id/leftView"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_margin="10dp"
android:layout_weight="1"
android:background="@android:color/darker_gray"
android:gravity="center_vertical"
android:orientation="vertical" >
<ImageView
android:id="@+id/boxView"
android:layout_width="75dp"
android:layout_height="75dp"
android:layout_gravity="center_vertical|center_horizontal"
android:layout_margin="10dp"
android:background="@drawable/one" />
</LinearLayout>
<LinearLayout
android:id="@+id/rightView"
android:layout_width="0dp"
android:layout_height="match_parent"
android:layout_margin="10dp"
android:layout_weight="1"
android:background="@android:color/darker_gray"
android:gravity="center_vertical"
android:orientation="vertical" >
</LinearLayout>
</LinearLayout>
Step 3 − Add the following code to src/MainActivity.java
import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.DragEvent;
import android.view.MotionEvent;
import android.view.View;
import android.view.ViewGroup;
import android.widget.LinearLayout;
public class MainActivity extends AppCompatActivity implements View.OnTouchListener, View.OnDragListener {
   @Override
   protected void onCreate(Bundle savedInstanceState) {
      super.onCreate(savedInstanceState);
      setContentView(R.layout.activity_main);
      findViewById(R.id.boxView).setOnTouchListener(this);
      findViewById(R.id.leftView).setOnDragListener(this);
      findViewById(R.id.rightView).setOnDragListener(this);
   }
   @Override
   public boolean onDrag(View view, DragEvent event) {
      if (event.getAction() == DragEvent.ACTION_DROP) {
         // The view being dragged was passed as local state from onTouch()
         View draggedView = (View) event.getLocalState();
         // "view" is the drop target (left or right layout) that received the event
         if (view.getId() == R.id.leftView || view.getId() == R.id.rightView) {
            // Detach the dragged view from its current parent
            ViewGroup source = (ViewGroup) draggedView.getParent();
            source.removeView(draggedView);
            // Attach it to the layout it was dropped on
            LinearLayout target = (LinearLayout) view;
            target.addView(draggedView);
         }
         // Make the dragged view visible again after the drop
         draggedView.setVisibility(View.VISIBLE);
      }
      return true;
   }
   @Override
   public boolean onTouch(View view, MotionEvent event) {
      if (event.getAction() == MotionEvent.ACTION_DOWN) {
         View.DragShadowBuilder shadowBuilder = new View.DragShadowBuilder(view);
         // Pass the touched view as local state so onDrag() can retrieve it
         view.startDrag(null, shadowBuilder, view, 0);
         view.setVisibility(View.INVISIBLE);
         return true;
      }
      return false;
   }
}
Step 4 − Add the following code to androidManifest.xml
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="app.com.sample">
<application
android:allowBackup="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
android:roundIcon="@mipmap/ic_launcher_round"
android:supportsRtl="true"
android:theme="@style/AppTheme">
<activity android:name=".MainActivity">
<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>
</activity>
</application>
</manifest>
Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –
Click here to download the project code. | [
{
"code": null,
"e": 1127,
"s": 1062,
"text": "This example demonstrates how do I use drag and drop in android."
},
{
"code": null,
"e": 1256,
"s": 1127,
"text": "Step 1 − Create a new project in Android Studio, go to File ⇒ New Project and fill all required details to create a new project."
},
{
"code": null,
"e": 1321,
"s": 1256,
"text": "Step 2 − Add the following code to res/layout/activity_main.xml."
},
{
"code": null,
"e": 2559,
"s": 1321,
"text": "<LinearLayout xmlns:android=\"http://schemas.android.com/apk/res/android\"\n xmlns:tools=\"http://schemas.android.com/tools\"\n android:layout_width=\"match_parent\"\n android:layout_height=\"match_parent\"\n android:orientation=\"horizontal\"\n tools:context=\"MainActivity\" >\n <LinearLayout\n android:id=\"@+id/leftView\"\n android:layout_width=\"0dp\"\n android:layout_height=\"match_parent\"\n android:layout_margin=\"10dp\"\n android:layout_weight=\"1\"\n android:background=\"@android:color/darker_gray\"\n android:gravity=\"center_vertical\"\n android:orientation=\"vertical\" >\n <ImageView\n android:id=\"@+id/boxView\"\n android:layout_width=\"75dp\"\n android:layout_height=\"75dp\"\n android:layout_gravity=\"center_vertical|center_horizontal\"\n android:layout_margin=\"10dp\"\n android:background=\"@drawable/one\" />\n </LinearLayout>\n <LinearLayout\n android:id=\"@+id/rightView\"\n android:layout_width=\"0dp\"\n android:layout_height=\"match_parent\"\n android:layout_margin=\"10dp\"\n android:layout_weight=\"1\"\n android:background=\"@android:color/darker_gray\"\n android:gravity=\"center_vertical\"\n android:orientation=\"vertical\" >\n </LinearLayout>\n</LinearLayout>"
},
{
"code": null,
"e": 2616,
"s": 2559,
"text": "Step 3 − Add the following code to src/MainActivity.java"
},
{
"code": null,
"e": 4186,
"s": 2616,
"text": "import android.support.v7.app.AppCompatActivity;\nimport android.os.Bundle;\nimport android.view.DragEvent;\nimport android.view.MotionEvent;\nimport android.view.View;\nimport android.view.ViewGroup;\nimport android.widget.LinearLayout;\npublic class MainActivity extends AppCompatActivity implements View.OnTouchListener, View.OnDragListener {\n @Override\n protected void onCreate(Bundle savedInstanceState) {\n super.onCreate(savedInstanceState);\n setContentView(R.layout.activity_main);\n findViewById(R.id.boxView).setOnTouchListener(this);\n findViewById(R.id.leftView).setOnDragListener(this);\n findViewById(R.id.rightView).setOnDragListener(this);\n }\n @Override\n public boolean onDrag(View view, DragEvent event) {\n if (event.getAction() == DragEvent.ACTION_DROP) {\n view = (View) event.getLocalState();\n if (view.getId() == R.id.leftView || view.getId() == R.id.rightView) {\n ViewGroup source = (ViewGroup) view.getParent();\n source.removeView(view);\n LinearLayout target = (LinearLayout) view;\n target.addView(view);\n }\n view.setVisibility(View.VISIBLE);\n }\n return true;\n }\n @Override\n public boolean onTouch(View view, MotionEvent event) {\n if (event.getAction() == MotionEvent.ACTION_DOWN) {\n View.DragShadowBuilder shadowBuilder = new View.DragShadowBuilder(view);\n view.startDrag(null, shadowBuilder, view, 0);\n view.setVisibility(View.INVISIBLE);\n return true;\n }\n return false;\n }\n}"
},
{
"code": null,
"e": 4241,
"s": 4186,
"text": "Step 4 - Add the following code to androidManifest.xml"
},
{
"code": null,
"e": 4914,
"s": 4241,
"text": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<manifest xmlns:android=\"http://schemas.android.com/apk/res/android\"\n package=\"app.com.sample\">\n <application\n android:allowBackup=\"true\"\n android:icon=\"@mipmap/ic_launcher\"\n android:label=\"@string/app_name\"\n android:roundIcon=\"@mipmap/ic_launcher_round\"\n android:supportsRtl=\"true\"\n android:theme=\"@style/AppTheme\">\n <activity android:name=\".MainActivity\">\n <intent-filter>\n <action android:name=\"android.intent.action.MAIN\" />\n <category android:name=\"android.intent.category.LAUNCHER\" />\n </intent-filter>\n </activity>\n </application>\n</manifest>"
},
{
"code": null,
"e": 5261,
"s": 4914,
"text": "Let's try to run your application. I assume you have connected your actual Android Mobile device with your computer. To run the app from android studio, open one of your project's activity files and click Run icon from the toolbar. Select your mobile device as an option and then check your mobile device which will display your default screen –"
},
{
"code": null,
"e": 5302,
"s": 5261,
"text": "Click here to download the project code."
}
] |
Fortran - Stop Statement | If you wish execution of your program to cease, you can insert a stop statement.
program stop_example
implicit none

   integer :: i
   do i = 1, 20

      if (i == 5) then
         stop
      end if

      print*, i
   end do

end program stop_example
When the above code is compiled and executed, it produces the following result −
1
2
3
4 | [
{
"code": null,
"e": 2227,
"s": 2146,
"text": "If you wish execution of your program to cease, you can insert a stop statement."
},
{
"code": null,
"e": 2459,
"s": 2227,
"text": "program stop_example \nimplicit none\n\n integer :: i \n do i = 1, 20 \n \n if (i == 5) then \n stop \n end if \n \n print*, i \n end do \n \nend program stop_example"
},
{
"code": null,
"e": 2540,
"s": 2459,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 2549,
"s": 2540,
"text": "1\n2\n3\n4\n"
},
{
"code": null,
"e": 2556,
"s": 2549,
"text": " Print"
},
{
"code": null,
"e": 2567,
"s": 2556,
"text": " Add Notes"
}
] |
Find minimum and maximum element in an array | Practice | GeeksforGeeks | Given an array A of size N of integers. Your task is to find the minimum and maximum elements in the array.
Example 1:
Input:
N = 6
A[] = {3, 2, 1, 56, 10000, 167}
Output:
min = 1, max = 10000
Example 2:
Input:
N = 5
A[] = {1, 345, 234, 21, 56789}
Output:
min = 1, max = 56789
Your Task:
You don't need to read input or print anything. Your task is to complete the function getMinMax() which takes the array A[] and its size N as inputs and returns the minimum and maximum element of the array.
Expected Time Complexity: O(N)
Expected Auxiliary Space: O(1)
Constraints:
1 <= N <= 10^5
1 <= Ai <= 10^12
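For reference, a minimal single-pass sketch that meets the expected O(N) time and O(1) auxiliary space bounds is given below. It assumes the pair<long long, long long> signature used in the C++ submissions on this page; the driver in main() is only illustrative and is not part of the required function.
#include <bits/stdc++.h>
using namespace std;

// Scan the array once, keeping track of the smallest
// and largest values seen so far.
pair<long long, long long> getMinMax(long long a[], int n)
{
    long long mn = a[0], mx = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] < mn)
            mn = a[i];
        if (a[i] > mx)
            mx = a[i];
    }
    return make_pair(mn, mx);
}

// Illustrative driver (not part of the required function)
int main()
{
    long long a[] = { 3, 2, 1, 56, 10000, 167 };
    pair<long long, long long> ans = getMinMax(a, 6);
    cout << "min = " << ans.first
         << ", max = " << ans.second << endl;
    return 0;
}
For the first sample input this prints min = 1, max = 10000, matching Example 1.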
0
ishankvarshney13in 11 hours
struct pair getMinMax(long long int arr[], long long int n) {
    long long min = INT_MAX;
    long long max = INT_MIN;
    for (int i = 0; i < n; i++) {
        if (min > arr[i]) min = arr[i];
        if (max < arr[i]) max = arr[i];
    }
    return {min, max};
}
please tell me what is the error?
+1
harshscodein 8 hours
long long max=INT_MIN;
long long min=INT_MAX;
for(int i=0;i<n;i++)
{
max=max<a[i]?a[i]:max;
min=min>a[i]?a[i]:min;
}
return {min,max};
0
anu149264in 1 minute
JAVA SOLUTION.........
class Compute {
    static pair getMinMax(long arr[], long n) {
        // Write your code here
        long largest = Integer.MIN_VALUE;
        long smallest = Integer.MAX_VALUE;
        for (int i = 0; i < n; i++) {
            if (arr[i] > largest) {
                largest = arr[i];
            }
            if (arr[i] < smallest) {
                smallest = arr[i];
            }
        }
        pair npair = new pair(smallest, largest);
        return npair;
    }
}
0
govindgautam33310 hours ago
pair<long long, long long> getMinMax(long long a[], int n) {
    pair<long long, long long> ans(INT_MAX, INT_MIN);
    for (long long i = 0; i < n; i++) {
        if (a[i] < ans.first) ans.first = a[i];
        if (a[i] > ans.second) ans.second = a[i];
    }
    return ans;
}
0
5f2wpb3plfsg1ukzdkg59penrcckudo1r3su1sbr17 hours ago
pair<int, int> num;
num.first = a[0];
num.second = a[0];
for (int i = 0; i < n; i++) {
    if (a[i] < num.first) {
        num.first = a[i];
    }
    else if (a[i] > num.second) {
        num.second = a[i];
    }
}
return num;
0
madhuranjansharma48
This comment was deleted.
+1
hjadoun92 days ago
pair<long long, long long> getMinMax(long long a[], int n) {
    pair<long long, long long> minmax;
    int i;
    if (n % 2 == 0) {
        if (a[1] < a[0]) {
            minmax.first = a[1];
            minmax.second = a[0];
        }
        else {
            minmax.first = a[0];
            minmax.second = a[1];
        }
        i = 2;
    }
    else {
        minmax.first = a[0];
        minmax.second = a[0];
        i = 1;
    }
    while (i < n - 1) {
        if (a[i] < a[i + 1]) {
            if (a[i] < minmax.first) minmax.first = a[i];
            if (a[i + 1] > minmax.second) minmax.second = a[i + 1];
        }
        else {
            if (a[i] > minmax.second) minmax.second = a[i];
            if (a[i + 1] < minmax.first) minmax.first = a[i + 1];
        }
        i += 2;
    }
    return minmax;
}
0
visionsameer392 days ago
python
def getMinMax( a, n): return min(a),max(a)
0
anmolcs192 days ago
//JAVA SOLUTION
static pair getMinMax(long a[], long n) {
    long max = a[0];
    long min = a[0];
    for (int i = 0; i < n; i++) {
        if (a[i] > max) {
            max = a[i];
        }
        else if (a[i] < min) {
            min = a[i];
        }
    }
    return new pair(min, max);
}
}
0
mayurcosmos0073 days ago
class Compute
{
static pair getMinMax(long a[], long n)
{
long min = a[0], max = a[0];
for(int i=0; i<n; i++){
if(a[i] < min) min = a[i];
if( a[i] > max) max = a[i];
}
pair p = new pair(min, max);
return p;
}
}
 | [
{
"code": null,
"e": 346,
"s": 238,
"text": "Given an array A of size N of integers. Your task is to find the minimum and maximum elements in the array."
},
{
"code": null,
"e": 359,
"s": 348,
"text": "Example 1:"
},
{
"code": null,
"e": 434,
"s": 359,
"text": "Input:\nN = 6\nA[] = {3, 2, 1, 56, 10000, 167}\nOutput:\nmin = 1, max = 10000"
},
{
"code": null,
"e": 447,
"s": 436,
"text": "Example 2:"
},
{
"code": null,
"e": 521,
"s": 447,
"text": "Input:\nN = 5\nA[] = {1, 345, 234, 21, 56789}\nOutput:\nmin = 1, max = 56789"
},
{
"code": null,
"e": 743,
"s": 523,
"text": "Your Task: \nYou don't need to read input or print anything. Your task is to complete the function getMinMax() which takes the array A[] and its size N as inputs and returns the minimum and maximum element of the array."
},
{
"code": null,
"e": 807,
"s": 745,
"text": "Expected Time Complexity: O(N)\nExpected Auxiliary Space: O(1)"
},
{
"code": null,
"e": 851,
"s": 809,
"text": "Constraints:\n1 <= N <= 105\n1 <= Ai <=1012"
},
{
"code": null,
"e": 853,
"s": 851,
"text": "0"
},
{
"code": null,
"e": 881,
"s": 853,
"text": "ishankvarshney13in 11 hours"
},
{
"code": null,
"e": 1126,
"s": 881,
"text": "struct pair getMinMax(long long int arr[], long long int n) { long long min=INT_MAX; long long max=INT_MIN; for(int i=0;i<n;i++) { if(min>arr[i]) min=arr[i]; if(max<arr[i]) max=arr[i]; } return {min,max};}"
},
{
"code": null,
"e": 1160,
"s": 1126,
"text": "please tell me what is the error?"
},
{
"code": null,
"e": 1163,
"s": 1160,
"text": "+1"
},
{
"code": null,
"e": 1184,
"s": 1163,
"text": "harshscodein 8 hours"
},
{
"code": null,
"e": 1355,
"s": 1184,
"text": "long long max=INT_MIN;\n long long min=INT_MAX;\n for(int i=0;i<n;i++)\n {\n max=max<a[i]?a[i]:max;\n min=min>a[i]?a[i]:min;\n }\n return {min,max};"
},
{
"code": null,
"e": 1357,
"s": 1355,
"text": "0"
},
{
"code": null,
"e": 1378,
"s": 1357,
"text": "anu149264in 1 minute"
},
{
"code": null,
"e": 1401,
"s": 1378,
"text": "JAVA SOLUTION........."
},
{
"code": null,
"e": 1841,
"s": 1401,
"text": "class Compute { static pair getMinMax(long arr[], long n) { //Write your code here long largest=Integer.MIN_VALUE; long smallest=Integer.MAX_VALUE; for(int i=0;i<n;i++){ if(arr[i] >largest){ largest=arr[i]; } if(arr[i] <smallest){ smallest=arr[i]; } } pair npair=new pair(smallest,largest); return npair; }}"
},
{
"code": null,
"e": 1843,
"s": 1841,
"text": "0"
},
{
"code": null,
"e": 1871,
"s": 1843,
"text": "govindgautam33310 hours ago"
},
{
"code": null,
"e": 1984,
"s": 1871,
"text": "pair<long long, long long> getMinMax(long long a[], int n) { pair<long long,long long>ans(INT_MAX,INT_MIN);"
},
{
"code": null,
"e": 2134,
"s": 1984,
"text": " for(long long i=0;i<n;i++) { if(a[i]<ans.first) ans.first=a[i]; if(a[i]>ans.second) ans.second=a[i]; } return ans;}"
},
{
"code": null,
"e": 2136,
"s": 2134,
"text": "0"
},
{
"code": null,
"e": 2189,
"s": 2136,
"text": "5f2wpb3plfsg1ukzdkg59penrcckudo1r3su1sbr17 hours ago"
},
{
"code": null,
"e": 2239,
"s": 2189,
"text": "pair<int,int> num;num.first=a[0];num.second=a[0];"
},
{
"code": null,
"e": 2444,
"s": 2239,
"text": " for(int i=0;i<n;i++) { if(a[i]<num.first) { num.first=a[i]; } else if(a[i]>num.second) { num.second=a[i]; } } return num;"
},
{
"code": null,
"e": 2446,
"s": 2444,
"text": "0"
},
{
"code": null,
"e": 2466,
"s": 2446,
"text": "madhuranjansharma48"
},
{
"code": null,
"e": 2492,
"s": 2466,
"text": "This comment was deleted."
},
{
"code": null,
"e": 2495,
"s": 2492,
"text": "+1"
},
{
"code": null,
"e": 2514,
"s": 2495,
"text": "hjadoun92 days ago"
},
{
"code": null,
"e": 3286,
"s": 2514,
"text": "pair<long long, long long> getMinMax(long long a[], int n) { pair<long long, long long> minmax; int i; if(n%2==0) { if(a[1]<a[0]) { minmax.first=a[1]; minmax.second=a[0]; } else { minmax.first=a[0]; minmax.second=a[1]; } i=2; } else { minmax.first=a[0]; minmax.second=a[0]; i=1; } while(i<n-1) { if(a[i]<a[i+1]) { if(a[i]<minmax.first) minmax.first=a[i]; if(a[i+1]>minmax.second) minmax.second=a[i+1]; } else { if(a[i]>minmax.second) minmax.second=a[i]; if(a[i+1]<minmax.first) minmax.first=a[i+1]; } i+=2; } return minmax;}"
},
{
"code": null,
"e": 3288,
"s": 3286,
"text": "0"
},
{
"code": null,
"e": 3313,
"s": 3288,
"text": "visionsameer392 days ago"
},
{
"code": null,
"e": 3320,
"s": 3313,
"text": "python"
},
{
"code": null,
"e": 3367,
"s": 3320,
"text": " def getMinMax( a, n): return min(a),max(a)"
},
{
"code": null,
"e": 3369,
"s": 3367,
"text": "0"
},
{
"code": null,
"e": 3389,
"s": 3369,
"text": "anmolcs192 days ago"
},
{
"code": null,
"e": 3405,
"s": 3389,
"text": "//JAVA SOLUTION"
},
{
"code": null,
"e": 3701,
"s": 3407,
"text": " static pair getMinMax(long a[], long n) { long max = a[0]; long min = a[0]; for(int i=0;i<n;i++){ if(a[i]> max){ max = a[i]; } else if(a[i]<min){ min = a[i]; } } return new pair(min,max); }} "
},
{
"code": null,
"e": 3703,
"s": 3701,
"text": "0"
},
{
"code": null,
"e": 3728,
"s": 3703,
"text": "mayurcosmos0073 days ago"
},
{
"code": null,
"e": 4044,
"s": 3728,
"text": "class Compute \n{\n static pair getMinMax(long a[], long n) \n {\n long min = a[0], max = a[0];\n \n for(int i=0; i<n; i++){\n \n if(a[i] < min) min = a[i];\n if( a[i] > max) max = a[i];\n }\n \n pair p = new pair(min, max);\n \n return p;\n }\n}"
},
{
"code": null,
"e": 4190,
"s": 4044,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 4226,
"s": 4190,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 4236,
"s": 4226,
"text": "\nProblem\n"
},
{
"code": null,
"e": 4246,
"s": 4236,
"text": "\nContest\n"
},
{
"code": null,
"e": 4309,
"s": 4246,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 4457,
"s": 4309,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 4665,
"s": 4457,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 4771,
"s": 4665,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
How to Copy Rows from One Table to Another in SQL? - GeeksforGeeks | 28 Feb, 2022
In this article, we learn how to copy rows from one table to another table in SQL. For better understanding, we will implement this query with the help of an example. This query is very helpful whenever we need the same columns in two or more different tables. In that case, we do not need to insert the records into the second table manually; we can directly copy them from another table. First of all, we will create a database named Sample, and after that we will create two tables inside the database Sample. The first table name is EMPLOYEET and the second one is the ATTENDANCE table.
We will use the following two statements here:
INSERT STATEMENT
SELECT STATEMENT
We will follow the below steps to implement copying rows from one table to another table in SQL:
Step 1: Create A Database
For database creation, the following is the query we will use in the SQL platform:
Syntax:
Create database database_name;
Query:
CREATE DATABASE Sample; -- this query will create a database in the SQL platform
Step 2: Use Database
For using the database, we will use another query in SQL platforms like MySQL, Oracle, etc.:
Query:
use Sample;
Step 3: Creation of table
For the creation of a data table, we will use the query below:
Query:
create table table_name(
column1 type(size),
column2 type(size),
.
columnN type(size)
);
This will create a new table in the existing Database.
Query:
CREATE TABLE EMPLOYEET
(
EMPNAME VARCHAR(25),
GENDER VARCHAR(6),
DEPT VARCHAR(20),
CONTACTNO BIGINT NOT NULL,
CITY VARCHAR(15)
);
And now we will create another table named ATTENDANCE.
Query:
CREATE TABLE ATTENDANCE
( EMPNAME VARCHAR(25),
GENDER VARCHAR(6),
DEPT VARCHAR(20),
ATTENDATE DATE DEFAULT GETDATE()
);
Step 4: Insert Data Into Table EMPLOYEET
Query:
INSERT INTO EMPLOYEET
VALUES ('VISHAL','MALE','SALES',9193458625,'GAZIABAD'),
('DIVYA','FEMALE','MANAGER',7352158944,'BARIELLY'),
('REKHA','FEMALE','IT',7830246946,'KOLKATA'),
('RAHUL','MALE','MARKETING',7906334516,'MEERUT'),
('SANJAY','MALE','SALES',9149335694,'MORADABAD'),
('RAJKUMAR','MALE','MANAGER',9675274391,'BENGALURU'),
('RAJSHREE','FEMALE','SALES',9193458625,'VODODARA'),
('HAIM','MALE','IT',7088573213,'SAMBHAL'),
('RAKESH','MALE','MARKETING',9645956441,'BOKARO'),
('MOHINI','FEMALE','SALES',9147844694,'Dehli');
Step 5: VERIFY OR VIEW INSERTED DATA IN EMPLOYEET TABLE
After inserting data into the table, we can verify or confirm whether the data has been inserted correctly or not with the help of the below query.
Query:
SELECT * FROM EMPLOYEET;
Output:
Step 6: Insert Data Into Table ATTENDANCE
In this table, we will not insert the records manually because the same data already exists in the EMPLOYEET table. Therefore, we will copy the data from the EMPLOYEET table to the ATTENDANCE table with the help of the below query.
Query:
INSERT INTO ATTENDANCE (EMPNAME,GENDER,DEPT)
SELECT EMPNAME,GENDER,DEPT FROM EMPLOYEET;
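The same INSERT INTO ... SELECT pattern can also copy only a subset of the rows by adding a WHERE clause to the SELECT. Below is a minimal sketch using the tables created above; the 'SALES' filter value is just an example and is not part of the original steps.
INSERT INTO ATTENDANCE (EMPNAME, GENDER, DEPT)
SELECT EMPNAME, GENDER, DEPT
FROM EMPLOYEET
WHERE DEPT = 'SALES';
In that case, only the employees from the SALES department would be copied into the ATTENDANCE table.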
After inserting the data into the table, we can verify or confirm whether the records copied from EMPLOYEET have been inserted correctly or not with the help of the below query.
Query:
SELECT * FROM ATTENDANCE;
Now we can see in the below snapshot that the records have been copied correctly.
Output:
anikaseth98
rkbhola5
SQL-Server
SQL
SQL
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Update Multiple Columns in Single Update Statement in SQL?
What is Temporary Table in SQL?
SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter
SQL using Python
SQL | Subquery
SQL Query to Convert VARCHAR to INT
How to Write a SQL Query For a Specific Date Range and Date Time?
How to Select Data Between Two Dates and Times in SQL Server?
SQL - SELECT from Multiple Tables with MS SQL Server
SQL Query to Delete Duplicate Rows | [
{
"code": null,
"e": 24268,
"s": 24240,
"text": "\n28 Feb, 2022"
},
{
"code": null,
"e": 24872,
"s": 24268,
"text": "In this article, we learn How to copy rows from one table to another table in SQL. For better understanding, we will Implement this Query with the Help of an Example. This query is Very Help Whenever we need two or more same Column in a Different Table. That time we do need to insert the record manually in the table with the help of another table We can directly Copy the record from another table. First of all, We will Create a Database name as a Sample after that we will create two tables inside of the Database Sample. The first Table Name is EMPOLYEET and the Second one is the ATTENDANCE Table."
},
{
"code": null,
"e": 24905,
"s": 24872,
"text": "We will use here two Statements "
},
{
"code": null,
"e": 24922,
"s": 24905,
"text": "INSERT STATEMENT"
},
{
"code": null,
"e": 24939,
"s": 24922,
"text": "SELECT STATEMENT"
},
{
"code": null,
"e": 25040,
"s": 24939,
"text": "We will follow the below steps to Implement How to Copy rows from one table to another table in SQL:"
},
{
"code": null,
"e": 25067,
"s": 25040,
"text": "Step 1: Create A Database "
},
{
"code": null,
"e": 25161,
"s": 25067,
"text": "For database creation, there is the query we will use in the SQL Platform. this is the query."
},
{
"code": null,
"e": 25169,
"s": 25161,
"text": "Syntax:"
},
{
"code": null,
"e": 25201,
"s": 25169,
"text": "Create database database_name; "
},
{
"code": null,
"e": 25208,
"s": 25201,
"text": "Query:"
},
{
"code": null,
"e": 25281,
"s": 25208,
"text": "CREATE DATABASE Sample; // query will create a database in SQL Platform "
},
{
"code": null,
"e": 25304,
"s": 25281,
"text": "Step 2: Use Database "
},
{
"code": null,
"e": 25395,
"s": 25304,
"text": "For using the database we will use another query in SQL Platform like Mysql, oracle, etc. "
},
{
"code": null,
"e": 25402,
"s": 25395,
"text": "Query:"
},
{
"code": null,
"e": 25416,
"s": 25402,
"text": "use Sample; "
},
{
"code": null,
"e": 25442,
"s": 25416,
"text": "Step 3: Creation of table"
},
{
"code": null,
"e": 25495,
"s": 25442,
"text": "For the creation Data table, we will use this below "
},
{
"code": null,
"e": 25502,
"s": 25495,
"text": "Query:"
},
{
"code": null,
"e": 25592,
"s": 25502,
"text": "create table table_name(\ncolumn1 type(size),\ncolumn2 type(size),\n.\ncolumnN type(size)\n); "
},
{
"code": null,
"e": 25647,
"s": 25592,
"text": "This will create a new table in the existing Database."
},
{
"code": null,
"e": 25654,
"s": 25647,
"text": "Query:"
},
{
"code": null,
"e": 25784,
"s": 25654,
"text": "CREATE TABLE EMPLOYEE\n(\nEMPNAME VARCHAR(25),\nGENDER VARCHAR(6),\nDEPT VARCHAR(20),\nCONTACTNO BIGINT NOT NULL,\nCITY VARCHAR(15)\n); "
},
{
"code": null,
"e": 25837,
"s": 25784,
"text": "and now we will Create another Table name ATTENDANCE"
},
{
"code": null,
"e": 25844,
"s": 25837,
"text": "Query:"
},
{
"code": null,
"e": 25965,
"s": 25844,
"text": "CREATE TABLE ATTENDANCE\n( EMPNAME VARCHAR(25),\nGENDER VARCHAR(6),\nDEPT VARCHAR(20),\nATTENDATE DATE DEFAULT GETDATE()\n); "
},
{
"code": null,
"e": 26007,
"s": 25965,
"text": "Step 4: Insert Data Into Table EMPOLYEET "
},
{
"code": null,
"e": 26014,
"s": 26007,
"text": "Query:"
},
{
"code": null,
"e": 26540,
"s": 26014,
"text": "INSERT INTO EMPLOYEET\nVALUES ('VISHAL','MALE','SALES',9193458625,'GAZIABAD'),\n('DIVYA','FEMALE','MANAGER',7352158944,'BARIELLY'),\n('REKHA','FEMALE','IT',7830246946,'KOLKATA'),\n('RAHUL','MALE','MARKETING',7906334516,'MEERUT'),\n('SANJAY','MALE','SALES',9149335694,'MORADABAD'),\n('RAJKUMAR','MALE','MANAGER',9675274391,'BENGALURU'),\n('RAJSHREE','FEMALE','SALES',9193458625,'VODODARA'),\n('HAIM','MALE','IT',7088573213,'SAMBHAL'),\n('RAKESH','MALE','MARKETING',9645956441,'BOKARO'),\n('MOHINI','FEMALE','SALES',9147844694,'Dehli'); "
},
{
"code": null,
"e": 26597,
"s": 26540,
"text": "Step 5: VERIFY OR VIEW INSERTED DATA IN EMPLOYEET TABLE "
},
{
"code": null,
"e": 26740,
"s": 26597,
"text": "After inserting data into the table We can justify or confirm which data we have to insert correctly or not. With the help of the Below Query."
},
{
"code": null,
"e": 26747,
"s": 26740,
"text": "Query:"
},
{
"code": null,
"e": 26774,
"s": 26747,
"text": " SELECT * FROM EMPLOYEET; "
},
{
"code": null,
"e": 26782,
"s": 26774,
"text": "Output:"
},
{
"code": null,
"e": 26824,
"s": 26782,
"text": "Step 6: Insert Data Into Table ATTENDANCE"
},
{
"code": null,
"e": 27033,
"s": 26824,
"text": "In this table we will not Insert record Manually because the same data exist in another table EMPLOYEET so, therefore, we will copy data from EMPLOYEET table to ATTENDANCE Table. With the below of below query"
},
{
"code": null,
"e": 27040,
"s": 27033,
"text": "Query:"
},
{
"code": null,
"e": 27129,
"s": 27040,
"text": "INSERT INTO ATTENDANCE (EMPNAME,GENDER,DEPT)\nSELECT EMPNAME,GENDER,DEPT FROM EMPLOYEET; "
},
{
"code": null,
"e": 27303,
"s": 27129,
"text": "After Inserting data in the table We can justify or confirm which data we have copied record from EMPLOYEET have inserted correctly or not. With the help of the Below Query."
},
{
"code": null,
"e": 27310,
"s": 27303,
"text": "Query:"
},
{
"code": null,
"e": 27336,
"s": 27310,
"text": "SELECT * FROM ATTENDANCE;"
},
{
"code": null,
"e": 27395,
"s": 27336,
"text": "Now We can see in below Snapshots record Copied Correctly "
},
{
"code": null,
"e": 27403,
"s": 27395,
"text": "Output:"
},
{
"code": null,
"e": 27415,
"s": 27403,
"text": "anikaseth98"
},
{
"code": null,
"e": 27424,
"s": 27415,
"text": "rkbhola5"
},
{
"code": null,
"e": 27435,
"s": 27424,
"text": "SQL-Server"
},
{
"code": null,
"e": 27439,
"s": 27435,
"text": "SQL"
},
{
"code": null,
"e": 27443,
"s": 27439,
"text": "SQL"
},
{
"code": null,
"e": 27541,
"s": 27443,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27607,
"s": 27541,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 27639,
"s": 27607,
"text": "What is Temporary Table in SQL?"
},
{
"code": null,
"e": 27717,
"s": 27639,
"text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter"
},
{
"code": null,
"e": 27734,
"s": 27717,
"text": "SQL using Python"
},
{
"code": null,
"e": 27749,
"s": 27734,
"text": "SQL | Subquery"
},
{
"code": null,
"e": 27785,
"s": 27749,
"text": "SQL Query to Convert VARCHAR to INT"
},
{
"code": null,
"e": 27851,
"s": 27785,
"text": "How to Write a SQL Query For a Specific Date Range and Date Time?"
},
{
"code": null,
"e": 27913,
"s": 27851,
"text": "How to Select Data Between Two Dates and Times in SQL Server?"
},
{
"code": null,
"e": 27966,
"s": 27913,
"text": "SQL - SELECT from Multiple Tables with MS SQL Server"
}
] |
How to pass multiple props in a single event handler in ReactJS? - GeeksforGeeks | 31 Mar, 2021
If we want to pass/call multiple props methods in a single event handler in ReactJS, then there are two ways to make it work.
Method 1: We can make a separate method for the event and call all the props method in that method.
Syntax:
const seperateMethod= () => {
props.method1()
props.method2()
}
Method 2: We can create an anonymous function and call all the props method inside the anonymous method.
Syntax:
<Component onClick={() => {
props.method1();
props.method2()
}}>
</Component>
Creating React Application:
Step 1: Create a React application using the following command:
npx create-react-app foldername
Step 2: After creating your project folder i.e. foldername, move to it using the following command:
cd foldername
Project Structure: It will look like the following.
Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code.
App.js
import React from 'react';

export default class App extends React.Component {

  sayHi = () => {
    alert("Hi from GFG");
  }

  sayHello = () => {
    alert("Hello from GFG");
  }

  render() {
    return (
      <div style={{ marginLeft: 50 }}>
        <Child1 m1={this.sayHi} m2={this.sayHello}>
        </Child1>
        <br></br>
        <Child2 m1={this.sayHi} m2={this.sayHello}>
        </Child2>
      </div>
    )
  }
}

// Method 1
class Child1 extends React.Component {

  seperatemethod = () => {
    this.props.m1();
    this.props.m2();
  }

  render() {
    return (
      <div>
        <button onClick={this.seperatemethod}>
          Hello Hi from GFG
        </button>
      </div>
    )
  }
}

// Method 2
class Child2 extends React.Component {

  render() {
    return (
      <div>
        <button onClick={() => {
          this.props.m1();
          this.props.m2();
        }}
        >Hello hi from GFG</button>
      </div>
    )
  }
}
Step to Run Application: Run the application using the following command from the root directory of the project:
npm start
Output: Now open your browser and go to http://localhost:3000/, you will see the following output:
Explanation: As we can see from the above code, the Child1 component calls the multiple props methods using Method 1, by creating a separate method, and the Child2 component calls the multiple props methods using Method 2, by creating an anonymous function.
Picked
React-Questions
ReactJS
Web Technologies
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Comments
Old Comments
How to set background images in ReactJS ?
How to create a table in ReactJS ?
How to navigate on path by button click in react router ?
How to create a multi-page website using React.js ?
ReactJS useNavigate() Hook
Top 10 Front End Developer Skills That You Need in 2022
Installation of Node.js on Linux
Top 10 Projects For Beginners To Practice HTML and CSS Skills
How to insert spaces/tabs in text using HTML/CSS?
Difference between var, let and const keywords in JavaScript | [
{
"code": null,
"e": 24397,
"s": 24369,
"text": "\n31 Mar, 2021"
},
{
"code": null,
"e": 24526,
"s": 24397,
"text": "If we want to pass/call the multiple props methods in a single event handler in ReactJS then there are two ways to make it work."
},
{
"code": null,
"e": 24699,
"s": 24526,
"text": "Method 1: We can make a separate method for the event and call all the props method in that method. Syntax:const seperateMethod= () => {\n props.method1()\n props.method2()\n}"
},
{
"code": null,
"e": 24800,
"s": 24699,
"text": "Method 1: We can make a separate method for the event and call all the props method in that method. "
},
{
"code": null,
"e": 24808,
"s": 24800,
"text": "Syntax:"
},
{
"code": null,
"e": 24874,
"s": 24808,
"text": "const seperateMethod= () => {\n props.method1()\n props.method2()\n}"
},
{
"code": null,
"e": 25070,
"s": 24874,
"text": "Method 2: We can create an anonymous function and call all the props method inside the anonymous method.Syntax:<Component onClick={() => { \n props.method1(); \n props.method2() \n}}>\n</Component>"
},
{
"code": null,
"e": 25175,
"s": 25070,
"text": "Method 2: We can create an anonymous function and call all the props method inside the anonymous method."
},
{
"code": null,
"e": 25183,
"s": 25175,
"text": "Syntax:"
},
{
"code": null,
"e": 25268,
"s": 25183,
"text": "<Component onClick={() => { \n props.method1(); \n props.method2() \n}}>\n</Component>"
},
{
"code": null,
"e": 25296,
"s": 25268,
"text": "Creating React Application:"
},
{
"code": null,
"e": 25360,
"s": 25296,
"text": "Step 1: Create a React application using the following command:"
},
{
"code": null,
"e": 25392,
"s": 25360,
"text": "npx create-react-app foldername"
},
{
"code": null,
"e": 25492,
"s": 25392,
"text": "Step 2: After creating your project folder i.e. foldername, move to it using the following command:"
},
{
"code": null,
"e": 25506,
"s": 25492,
"text": "cd foldername"
},
{
"code": null,
"e": 25558,
"s": 25506,
"text": "Project Structure: It will look like the following."
},
{
"code": null,
"e": 25688,
"s": 25558,
"text": "Example: Now write down the following code in the App.js file. Here, App is our default component where we have written our code."
},
{
"code": null,
"e": 25695,
"s": 25688,
"text": "App.js"
},
{
"code": "import React from 'react';export default class App extends React.Component { sayHi = () => { alert(\"Hi from GFG\"); } sayHello = () => { alert(\"Hello from GFG\"); } render() { return ( <div style={{ marginLeft: 50 }}> <Child1 m1={this.sayHi} m2={this.sayHello} > </Child1> <br></br> <Child2 m1={this.sayHi} m2={this.sayHello}> </Child2> </div> ) }} // Method 1class Child1 extends React.Component { seperatemethod = () => { this.props.m1(); this.props.m2(); } render() { return ( <div> <button onClick={this.seperatemethod}> Hello Hi from GFG </button> </div> ) }} // Method 2class Child2 extends React.Component { render() { return ( <div> <button onClick={() => { this.props.m1(); this.props.m2(); }} >Hello hi from GFG</button> </div> ) }}",
"e": 26649,
"s": 25695,
"text": null
},
{
"code": null,
"e": 26762,
"s": 26649,
"text": "Step to Run Application: Run the application using the following command from the root directory of the project:"
},
{
"code": null,
"e": 26772,
"s": 26762,
"text": "npm start"
},
{
"code": null,
"e": 26871,
"s": 26772,
"text": "Output: Now open your browser and go to http://localhost:3000/, you will see the following output:"
},
{
"code": null,
"e": 27099,
"s": 26871,
"text": "Explanation: As we can see from the above code Child1 component is calling the multiple props using method 1, by creating a separate method and the child2 component is calling the multiple props by creating an anonymous method."
},
{
"code": null,
"e": 27106,
"s": 27099,
"text": "Picked"
},
{
"code": null,
"e": 27122,
"s": 27106,
"text": "React-Questions"
},
{
"code": null,
"e": 27130,
"s": 27122,
"text": "ReactJS"
},
{
"code": null,
"e": 27147,
"s": 27130,
"text": "Web Technologies"
},
{
"code": null,
"e": 27245,
"s": 27147,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27254,
"s": 27245,
"text": "Comments"
},
{
"code": null,
"e": 27267,
"s": 27254,
"text": "Old Comments"
},
{
"code": null,
"e": 27309,
"s": 27267,
"text": "How to set background images in ReactJS ?"
},
{
"code": null,
"e": 27344,
"s": 27309,
"text": "How to create a table in ReactJS ?"
},
{
"code": null,
"e": 27402,
"s": 27344,
"text": "How to navigate on path by button click in react router ?"
},
{
"code": null,
"e": 27454,
"s": 27402,
"text": "How to create a multi-page website using React.js ?"
},
{
"code": null,
"e": 27481,
"s": 27454,
"text": "ReactJS useNavigate() Hook"
},
{
"code": null,
"e": 27537,
"s": 27481,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 27570,
"s": 27537,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27632,
"s": 27570,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 27682,
"s": 27632,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
C# program to display the next day | To display the next day, use the AddDays() method and a value +1 to get the next day.
Firstly, use the following to get the current day −
DateTime.Today
Now, add 1 to the AddDays() method to get the next day −
DateTime.Today.AddDays(1)
The following is the code −
using System;
using System.Collections.Generic;
using System.Linq;
public class Demo {
   public static void Main() {
      Console.WriteLine("Today = {0}", DateTime.Today);
      Console.WriteLine("Previous Day = {0}", DateTime.Today.AddDays(-1));
      Console.WriteLine("Next Day (Tomorrow) = {0}", DateTime.Today.AddDays(1));
   }
}
Today = 9/4/2018 12:00:00 AM
Previous Day = 9/3/2018 12:00:00 AM
Next Day (Tomorrow) = 9/5/2018 12:00:00 AM | [
{
"code": null,
"e": 1148,
"s": 1062,
"text": "To display the next day, use the AddDays() method and a value +1 to get the next day."
},
{
"code": null,
"e": 1200,
"s": 1148,
"text": "Firstly, use the following to get the current day −"
},
{
"code": null,
"e": 1215,
"s": 1200,
"text": "DateTime.Today"
},
{
"code": null,
"e": 1272,
"s": 1215,
"text": "Now, add 1 to the AddDays() method to get the next day −"
},
{
"code": null,
"e": 1298,
"s": 1272,
"text": "DateTime.Today.AddDays(1)"
},
{
"code": null,
"e": 1326,
"s": 1298,
"text": "The following is the code −"
},
{
"code": null,
"e": 1337,
"s": 1326,
"text": " Live Demo"
},
{
"code": null,
"e": 1674,
"s": 1337,
"text": "using System;\nusing System.Collections.Generic;\nusing System.Linq;\npublic class Demo {\n public static void Main() {\n Console.WriteLine(\"Today = {0}\", DateTime.Today);\n Console.WriteLine(\"Previous Day = {0}\", DateTime.Today.AddDays(-1));\n Console.WriteLine(\"Next Day (Tomorrow) = {0}\", DateTime.Today.AddDays(1));\n }\n}"
},
{
"code": null,
"e": 1782,
"s": 1674,
"text": "Today = 9/4/2018 12:00:00 AM\nPrevious Day = 9/3/2018 12:00:00 AM\nNext Day (Tomorrow) = 9/5/2018 12:00:00 AM"
}
] |
Check if given email address is valid or not in C++ - GeeksforGeeks | 26 Oct, 2020
Given a string email that denotes an Email Address, the task is to check if the given string is a valid email id or not. If found to be true, then print “Valid”. Otherwise, print “Invalid”. A valid email address consists of an email prefix and an email domain, both in acceptable formats:
The email address must start with a letter (no numbers or symbols).
There must be an @ somewhere in the string that is located before the dot.
There must be text after the @ symbol but before the dot.
There must be a dot and text after the dot.
Examples:
Input: email = “[email protected]” Output: Valid. Explanation: The given string follows all the criteria for a valid email string.
Input: email = “[email protected]”Output: Invalid
String Traversal based Approach: Follow the steps below:
Check if the first character of the email id string is an alphabet or not. If not, then the email is Invalid.
Now traverse over the string email to find the position the “@” and “.” If “@” or “.” is not present then the email is Invalid.
If “.” is not present after “@” then the email is Invalid.
If “.” is the last character of the string email then the email id is Invalid.
Otherwise, the email is Valid.
Below is the implementation of the above approach:
C++
// C++ program for the above approach

#include <bits/stdc++.h>
using namespace std;

// Function to check the character
// is an alphabet or not
bool isChar(char c)
{
    return ((c >= 'a' && c <= 'z')
            || (c >= 'A' && c <= 'Z'));
}

// Function to check the character
// is a digit or not
bool isDigit(const char c)
{
    return (c >= '0' && c <= '9');
}

// Function to check email id is
// valid or not
bool is_valid(string email)
{
    // Check the first character
    // is an alphabet or not
    if (!isChar(email[0])) {

        // If it's not an alphabet
        // email id is not valid
        return 0;
    }

    // Variable to store position
    // of At and Dot
    int At = -1, Dot = -1;

    // Traverse over the email id
    // string to find position of
    // Dot and At
    for (int i = 0; i < email.length(); i++) {

        // If the character is '@'
        if (email[i] == '@') {
            At = i;
        }

        // If character is '.'
        else if (email[i] == '.') {
            Dot = i;
        }
    }

    // If At or Dot is not present
    if (At == -1 || Dot == -1)
        return 0;

    // If Dot is present before At
    if (At > Dot)
        return 0;

    // If Dot is present at the end
    return !(Dot >= (email.length() - 1));
}

// Driver Code
int main()
{
    // Given string email
    string email = "[email protected]";

    // Function Call
    bool ans = is_valid(email);

    // Print the result
    if (ans) {
        cout << email << " : "
             << "valid" << endl;
    }
    else {
        cout << email << " : "
             << "invalid" << endl;
    }

    return 0;
}
[email protected] : valid
Time Complexity: O(N)Auxiliary Space: O(1)
Regular Expression based Approach: The given problem can also be solved using Regular Expression. Below are the steps:
Get the email string.
Create a regular expression to check the valid email as mentioned below:
regex = “(\\w+)(\\.|_)?(\\w*)@(\\w+)(\\.(\\w+))+”
Match the given string email with the regular expression. In C++, this can be done by using regex_match().
Print “Valid” if the given string email matches with the given regular expression, else return “Invalid”.
Below is the implementation of the above approach:
C++
// C++ program for the above approach

#include <iostream>
#include <regex>
using namespace std;

// Function to check the email id
// is valid or not
bool isValid(const string& email)
{
    // Regular expression definition
    const regex pattern(
        "(\\w+)(\\.|_)?(\\w*)@(\\w+)(\\.(\\w+))+");

    // Match the string pattern
    // with regular expression
    return regex_match(email, pattern);
}

// Driver Code
int main()
{
    // Given string email
    string email = "[email protected]";

    // Function Call
    bool ans = isValid(email);

    // Print the result
    if (ans) {
        cout << email << " : "
             << "valid" << endl;
    }
    else {
        cout << email << " : "
             << "invalid" << endl;
    }
}
[email protected] : valid
Time Complexity: O(N)Auxiliary Space: O(1)
CPP-regex
regular-expression
Strings
Strings
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
Top 50 String Coding Problems for Interviews
Naive algorithm for Pattern Searching
Vigenère Cipher
Hill Cipher
Count words in a given string
How to Append a Character to a String in C
Convert character array to string in C++
sprintf() in C
Program to count occurrence of a given character in a string
Converting Roman Numerals to Decimal lying between 1 to 3999 | [
{
"code": null,
"e": 24854,
"s": 24826,
"text": "\n26 Oct, 2020"
},
{
"code": null,
"e": 25143,
"s": 24854,
"text": "Given a string email that denotes an Email Address, the task is to check if the given string is a valid email id or not. If found to be true, then print “Valid”. Otherwise, print “Invalid”. A valid email address consists of an email prefix and an email domain, both in acceptable formats:"
},
{
"code": null,
"e": 25211,
"s": 25143,
"text": "The email address must start with a letter (no numbers or symbols)."
},
{
"code": null,
"e": 25286,
"s": 25211,
"text": "There must be an @ somewhere in the string that is located before the dot."
},
{
"code": null,
"e": 25344,
"s": 25286,
"text": "There must be text after the @ symbol but before the dot."
},
{
"code": null,
"e": 25388,
"s": 25344,
"text": "There must be a dot and text after the dot."
},
{
"code": null,
"e": 25398,
"s": 25388,
"text": "Examples:"
},
{
"code": null,
"e": 25537,
"s": 25398,
"text": "Input: email = “[email protected]” Output: ValidExplanation:The given string follow all the criteria for an valid email string."
},
{
"code": null,
"e": 25599,
"s": 25537,
"text": "Input: email = “[email protected]”Output: Invalid"
},
{
"code": null,
"e": 25656,
"s": 25599,
"text": "String Traversal based Approach: Follow the steps below:"
},
{
"code": null,
"e": 26059,
"s": 25656,
"text": "Check if the first character of the email id string is an alphabet or not. If not, then the email is Invalid.Now traverse over the string email to find the position the “@” and “.” If “@” or “.” is not present then the email is Invalid.If “.” is not present after “@” then the email is Invalid.If “.” is the last character of the string email then the email id is Invalid.Otherwise, the email is Valid."
},
{
"code": null,
"e": 26169,
"s": 26059,
"text": "Check if the first character of the email id string is an alphabet or not. If not, then the email is Invalid."
},
{
"code": null,
"e": 26297,
"s": 26169,
"text": "Now traverse over the string email to find the position the “@” and “.” If “@” or “.” is not present then the email is Invalid."
},
{
"code": null,
"e": 26356,
"s": 26297,
"text": "If “.” is not present after “@” then the email is Invalid."
},
{
"code": null,
"e": 26435,
"s": 26356,
"text": "If “.” is the last character of the string email then the email id is Invalid."
},
{
"code": null,
"e": 26466,
"s": 26435,
"text": "Otherwise, the email is Valid."
},
{
"code": null,
"e": 26517,
"s": 26466,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 26521,
"s": 26517,
"text": "C++"
},
{
"code": "// C++ program for the above approach #include <bits/stdc++.h>using namespace std; // Function to check the character// is an alphabet or notbool isChar(char c){ return ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z'));} // Function to check the character// is an digit or notbool isDigit(const char c){ return (c >= '0' && c <= '9');} // Function to check email id is// valid or notbool is_valid(string email){ // Check the first character // is an alphabet or not if (!isChar(email[0])) { // If it's not an alphabet // email id is not valid return 0; } // Variable to store position // of At and Dot int At = -1, Dot = -1; // Traverse over the email id // string to find position of // Dot and At for (int i = 0; i < email.length(); i++) { // If the character is '@' if (email[i] == '@') { At = i; } // If character is '.' else if (email[i] == '.') { Dot = i; } } // If At or Dot is not present if (At == -1 || Dot == -1) return 0; // If Dot is present before At if (At > Dot) return 0; // If Dot is present at the end return !(Dot >= (email.length() - 1));} // Driver Codeint main(){ // Given string email string email = \"[email protected]\"; // Function Call bool ans = is_valid(email); // Print the result if (ans) { cout << email << \" : \" << \"valid\" << endl; } else { cout << email << \" : \" << \"invalid\" << endl; } return 0;}",
"e": 28139,
"s": 26521,
"text": null
},
{
"code": null,
"e": 28177,
"s": 28139,
"text": "[email protected] : valid\n"
},
{
"code": null,
"e": 28220,
"s": 28177,
"text": "Time Complexity: O(N)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 28339,
"s": 28220,
"text": "Regular Expression based Approach: The given problem can also be solved using Regular Expression. Below are the steps:"
},
{
"code": null,
"e": 28361,
"s": 28339,
"text": "Get the email string."
},
{
"code": null,
"e": 28434,
"s": 28361,
"text": "Create a regular expression to check the valid email as mentioned below:"
},
{
"code": null,
"e": 28484,
"s": 28434,
"text": "regex = “(\\\\w+)(\\\\.|_)?(\\\\w*)@(\\\\w+)(\\\\.(\\\\w+))+”"
},
{
"code": null,
"e": 28591,
"s": 28484,
"text": "Match the given string email with the regular expression. In C++, this can be done by using regex_match()."
},
{
"code": null,
"e": 28697,
"s": 28591,
"text": "Print “Valid” if the given string email matches with the given regular expression, else return “Invalid”."
},
{
"code": null,
"e": 28748,
"s": 28697,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 28752,
"s": 28748,
"text": "C++"
},
{
"code": "// C++ program for the above approach #include <iostream>#include <regex>using namespace std; // Function to check the email id// is valid or notbool isValid(const string& email){ // Regular expression definition const regex pattern( \"(\\\\w+)(\\\\.|_)?(\\\\w*)@(\\\\w+)(\\\\.(\\\\w+))+\"); // Match the string pattern // with regular expression return regex_match(email, pattern);} // Driver Codeint main(){ // Given string email string email = \"[email protected]\"; // Function Call bool ans = isValid(email); // Print the result if (ans) { cout << email << \" : \" << \"valid\" << endl; } else { cout << email << \" : \" << \"invalid\" << endl; }}",
"e": 29497,
"s": 28752,
"text": null
},
{
"code": null,
"e": 29535,
"s": 29497,
"text": "[email protected] : valid\n"
},
{
"code": null,
"e": 29578,
"s": 29535,
"text": "Time Complexity: O(N)Auxiliary Space: O(1)"
},
{
"code": null,
"e": 29588,
"s": 29578,
"text": "CPP-regex"
},
{
"code": null,
"e": 29607,
"s": 29588,
"text": "regular-expression"
},
{
"code": null,
"e": 29615,
"s": 29607,
"text": "Strings"
},
{
"code": null,
"e": 29623,
"s": 29615,
"text": "Strings"
},
{
"code": null,
"e": 29721,
"s": 29623,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 29766,
"s": 29721,
"text": "Top 50 String Coding Problems for Interviews"
},
{
"code": null,
"e": 29804,
"s": 29766,
"text": "Naive algorithm for Pattern Searching"
},
{
"code": null,
"e": 29821,
"s": 29804,
"text": "Vigenère Cipher"
},
{
"code": null,
"e": 29833,
"s": 29821,
"text": "Hill Cipher"
},
{
"code": null,
"e": 29863,
"s": 29833,
"text": "Count words in a given string"
},
{
"code": null,
"e": 29906,
"s": 29863,
"text": "How to Append a Character to a String in C"
},
{
"code": null,
"e": 29947,
"s": 29906,
"text": "Convert character array to string in C++"
},
{
"code": null,
"e": 29962,
"s": 29947,
"text": "sprintf() in C"
},
{
"code": null,
"e": 30023,
"s": 29962,
"text": "Program to count occurrence of a given character in a string"
}
] |
C# | NumericUpDown Class - GeeksforGeeks | 05 Sep, 2019
In Windows Forms, NumericUpDown control is used to provide a Windows spin box or an up-down control which displays the numeric values. Or in other words, NumericUpDown control provides an interface which moves using up and down arrow and holds some pre-defined numeric value. The NumericUpDown class is used to represent the windows numeric up-down box and also provide different types of properties, methods, and events. It is defined under System.Windows.Forms namespace. In C# you can create a NumericUpDown in the windows form by using two different ways:
1. Design-Time: It is the easiest way to create a NumericUpDown as shown in the following steps:
Step 1: Create a windows form as shown in the below image: Visual Studio -> File -> New -> Project -> WindowsFormApp
Step 2: Next, drag and drop the NumericUpDown control from the toolbox to the form.
Step 3: After drag and drop you will go to the properties of the NumericUpDown control to modify NumericUpDown according to your requirement.
Output:
2. Run-Time: It is a little bit trickier than the above method. In this method, you can create a NumericUpDown control programmatically with the help of syntax provided by the NumericUpDown class. The following steps show how to set the create NumericUpDown dynamically:
Step 1: Create a NumericUpDown control using the NumericUpDown() constructor provided by the NumericUpDown class.
// Creating a NumericUpDown control
NumericUpDown nbox = new NumericUpDown();
Step 2: After creating a NumericUpDown control, set the properties of the NumericUpDown control provided by the NumericUpDown class.
// Setting the properties of NumericUpDown control
nbox.Location = new Point(386, 130);
nbox.Size = new Size(126, 26);
nbox.Font = new Font("Bodoni MT", 12);
nbox.Value = 18;
nbox.Minimum = 18;
nbox.Maximum = 30;
nbox.BackColor = Color.LightGreen;
nbox.ForeColor = Color.DarkGreen;
nbox.Increment = 1;
nbox.Name = "MySpinBox";
Step 3: And last add this NumericUpDown control to the form using the following statement:// Adding this control
// to the form
this.Controls.Add(nbox);
Example:using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp42 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the labels Label l1 = new Label(); l1.Location = new Point(348, 61); l1.Size = new Size(215, 20); l1.Text = "Form"; l1.Font = new Font("Bodoni MT", 12); this.Controls.Add(l1); Label l2 = new Label(); l2.Location = new Point(242, 136); l2.Size = new Size(103, 20); l2.Text = "Enter Age"; l2.Font = new Font("Bodoni MT", 12); this.Controls.Add(l2); // Creating and setting the // properties of NumericUpDown NumericUpDown nbox = new NumericUpDown(); nbox.Location = new Point(386, 130); nbox.Size = new Size(126, 26); nbox.Font = new Font("Bodoni MT", 12); nbox.Value = 18; nbox.Minimum = 18; nbox.Maximum = 30; nbox.BackColor = Color.LightGreen; nbox.ForeColor = Color.DarkGreen; nbox.Increment = 1; nbox.Name = "MySpinBox"; // Adding this control // to the form this.Controls.Add(nbox); }}}Output:
// Adding this control
// to the form
this.Controls.Add(nbox);
Example:
using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp42 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the labels Label l1 = new Label(); l1.Location = new Point(348, 61); l1.Size = new Size(215, 20); l1.Text = "Form"; l1.Font = new Font("Bodoni MT", 12); this.Controls.Add(l1); Label l2 = new Label(); l2.Location = new Point(242, 136); l2.Size = new Size(103, 20); l2.Text = "Enter Age"; l2.Font = new Font("Bodoni MT", 12); this.Controls.Add(l2); // Creating and setting the // properties of NumericUpDown NumericUpDown nbox = new NumericUpDown(); nbox.Location = new Point(386, 130); nbox.Size = new Size(126, 26); nbox.Font = new Font("Bodoni MT", 12); nbox.Value = 18; nbox.Minimum = 18; nbox.Maximum = 30; nbox.BackColor = Color.LightGreen; nbox.ForeColor = Color.DarkGreen; nbox.Increment = 1; nbox.Name = "MySpinBox"; // Adding this control // to the form this.Controls.Add(nbox); }}}
Output:
CSharp-Windows-Forms-Namespace
C#
[
{
"code": null,
"e": 24416,
"s": 24388,
"text": "\n05 Sep, 2019"
},
{
"code": null,
"e": 24976,
"s": 24416,
"text": "In Windows Forms, NumericUpDown control is used to provide a Windows spin box or an up-down control which displays the numeric values. Or in other words, NumericUpDown control provides an interface which moves using up and down arrow and holds some pre-defined numeric value. The NumericUpDown class is used to represent the windows numeric up-down box and also provide different types of properties, methods, and events. It is defined under System.Windows.Forms namespace. In C# you can create a NumericUpDown in the windows form by using two different ways:"
},
{
"code": null,
"e": 25073,
"s": 24976,
"text": "1. Design-Time: It is the easiest way to create a NumericUpDown as shown in the following steps:"
},
{
"code": null,
"e": 25189,
"s": 25073,
"text": "Step 1: Create a windows form as shown in the below image:Visual Studio -> File -> New -> Project -> WindowsFormApp"
},
{
"code": null,
"e": 25273,
"s": 25189,
"text": "Step 2: Next, drag and drop the NumericUpDown control from the toolbox to the form."
},
{
"code": null,
"e": 25422,
"s": 25273,
"text": "Step 3: After drag and drop you will go to the properties of the NumericUpDown control to modify NumericUpDown according to your requirement.Output:"
},
{
"code": null,
"e": 25430,
"s": 25422,
"text": "Output:"
},
{
"code": null,
"e": 25701,
"s": 25430,
"text": "2. Run-Time: It is a little bit trickier than the above method. In this method, you can create a NumericUpDown control programmatically with the help of syntax provided by the NumericUpDown class. The following steps show how to set the create NumericUpDown dynamically:"
},
{
"code": null,
"e": 25897,
"s": 25701,
"text": "Step 1: Create a NumericUpDown control using the NumericUpDown() constructor is provided by the NumericUpDown class.// Creating a NumericUpDown control\nNumericUpDown nbox = new NumericUpDown(); \n"
},
{
"code": null,
"e": 25977,
"s": 25897,
"text": "// Creating a NumericUpDown control\nNumericUpDown nbox = new NumericUpDown(); \n"
},
{
"code": null,
"e": 26445,
"s": 25977,
"text": "Step 2: After creating a NumericUpDown control, set the property of the NumericUpDown control provided by the NumericUpDown class.// Setting the properties of NumericUpDown control\nnbox.Location = new Point(386, 130); \nnbox.Size = new Size(126, 26); \nnbox.Font = new Font(\"Bodoni MT\", 12); \nnbox.Value = 18; \nnbox.Minimum = 18; \nnbox.Maximum = 30; \nnbox.BackColor = Color.LightGreen; \nnbox.ForeColor = Color.DarkGreen; \nnbox.Increment = 1; \nnbox.Name = \"MySpinBox\"; \n"
},
{
"code": null,
"e": 26783,
"s": 26445,
"text": "// Setting the properties of NumericUpDown control\nnbox.Location = new Point(386, 130); \nnbox.Size = new Size(126, 26); \nnbox.Font = new Font(\"Bodoni MT\", 12); \nnbox.Value = 18; \nnbox.Minimum = 18; \nnbox.Maximum = 30; \nnbox.BackColor = Color.LightGreen; \nnbox.ForeColor = Color.DarkGreen; \nnbox.Increment = 1; \nnbox.Name = \"MySpinBox\"; \n"
},
{
"code": null,
"e": 28405,
"s": 26783,
"text": "Step 3: And last add this NumericUpDown control to the form using the following statement:// Adding this control \n// to the form \nthis.Controls.Add(nbox); \nExample:using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp42 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the labels Label l1 = new Label(); l1.Location = new Point(348, 61); l1.Size = new Size(215, 20); l1.Text = \"Form\"; l1.Font = new Font(\"Bodoni MT\", 12); this.Controls.Add(l1); Label l2 = new Label(); l2.Location = new Point(242, 136); l2.Size = new Size(103, 20); l2.Text = \"Enter Age\"; l2.Font = new Font(\"Bodoni MT\", 12); this.Controls.Add(l2); // Creating and setting the // properties of NumericUpDown NumericUpDown nbox = new NumericUpDown(); nbox.Location = new Point(386, 130); nbox.Size = new Size(126, 26); nbox.Font = new Font(\"Bodoni MT\", 12); nbox.Value = 18; nbox.Minimum = 18; nbox.Maximum = 30; nbox.BackColor = Color.LightGreen; nbox.ForeColor = Color.DarkGreen; nbox.Increment = 1; nbox.Name = \"MySpinBox\"; // Adding this control // to the form this.Controls.Add(nbox); }}}Output:"
},
{
"code": null,
"e": 28472,
"s": 28405,
"text": "// Adding this control \n// to the form \nthis.Controls.Add(nbox); \n"
},
{
"code": null,
"e": 28481,
"s": 28472,
"text": "Example:"
},
{
"code": "using System;using System.Collections.Generic;using System.ComponentModel;using System.Data;using System.Drawing;using System.Linq;using System.Text;using System.Threading.Tasks;using System.Windows.Forms; namespace WindowsFormsApp42 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private void Form1_Load(object sender, EventArgs e) { // Creating and setting the // properties of the labels Label l1 = new Label(); l1.Location = new Point(348, 61); l1.Size = new Size(215, 20); l1.Text = \"Form\"; l1.Font = new Font(\"Bodoni MT\", 12); this.Controls.Add(l1); Label l2 = new Label(); l2.Location = new Point(242, 136); l2.Size = new Size(103, 20); l2.Text = \"Enter Age\"; l2.Font = new Font(\"Bodoni MT\", 12); this.Controls.Add(l2); // Creating and setting the // properties of NumericUpDown NumericUpDown nbox = new NumericUpDown(); nbox.Location = new Point(386, 130); nbox.Size = new Size(126, 26); nbox.Font = new Font(\"Bodoni MT\", 12); nbox.Value = 18; nbox.Minimum = 18; nbox.Maximum = 30; nbox.BackColor = Color.LightGreen; nbox.ForeColor = Color.DarkGreen; nbox.Increment = 1; nbox.Name = \"MySpinBox\"; // Adding this control // to the form this.Controls.Add(nbox); }}}",
"e": 29932,
"s": 28481,
"text": null
},
{
"code": null,
"e": 29940,
"s": 29932,
"text": "Output:"
},
{
"code": null,
"e": 29971,
"s": 29940,
"text": "CSharp-Windows-Forms-Namespace"
},
{
"code": null,
"e": 29974,
"s": 29971,
"text": "C#"
},
{
"code": null,
"e": 30072,
"s": 29974,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30090,
"s": 30072,
"text": "Destructors in C#"
},
{
"code": null,
"e": 30136,
"s": 30090,
"text": "Difference between Ref and Out keywords in C#"
},
{
"code": null,
"e": 30151,
"s": 30136,
"text": "C# | Delegates"
},
{
"code": null,
"e": 30169,
"s": 30151,
"text": "C# | Constructors"
},
{
"code": null,
"e": 30192,
"s": 30169,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 30232,
"s": 30192,
"text": "C# | String.IndexOf( ) Method | Set - 1"
},
{
"code": null,
"e": 30254,
"s": 30232,
"text": "C# | Class and Object"
},
{
"code": null,
"e": 30285,
"s": 30254,
"text": "Introduction to .NET Framework"
},
{
"code": null,
"e": 30307,
"s": 30285,
"text": "C# | Abstract Classes"
}
] |
Graph Coloring Algorithm with Networkx in Python | Towards Data Science | Let’s start with a possible real life challenge and I want to convince you that network theory and the mathematical tools that were developed within this field can solve this for you.
Imagine you have worked in the administration of a school for several years now. Making the exam schedules was painful but manageable with paper and pen up to now. But the school was growing really fast and the whole schedule arranging became overwhelming. There are so many students enrolled and you want them all to be able to take their exams before they go off on holidays. How terrible it would be if students had to come back after the holidays to take their exams. How are you going to solve this?
I claim that by the end of this article you can turn data from the wild which for example can be lists of students and the classes they attend and build an exam schedule where no students have the uneasy situation of colliding exam dates. The coloring problem is not just useful for exam schedules but many different situations. Maybe you come up with another idea, I would be glad to hear that from you in the comment section!
Before we start with this short tutorial, I want to share with you the learning goals for this article that you can check off while working along.
Networks in python with library networkxGraph coloring problem, what is it and how do you solve it?Solve practical example
Networks in python with library networkx
Graph coloring problem, what is it and how do you solve it?
Solve practical example
I assume you are familiar with network theory, also called graph theory. But if not, let's recap the most fundamental building blocks of networks while getting to know the functionality of networkx.
import networkx as nx
If you don’t have networkx on your computer or in your virtual environment use
pip install networkx
or
conda install networkx
Let’s create a network with this library and call it network.
network = nx.Graph()
A network is made up of nodes and edges, which are the connections between the nodes. Let's add three nodes and two edges to our network.
To add multiple nodes at once, we can provide a list of node names. In this case the nodes are called 1, 2 and 3.
network.add_nodes_from([1,2,3])print(f"This network has now {network.number_of_nodes()} nodes.")
This network has now 3 nodes.
To add an edge between two nodes, name the first and the second node that you want to connect with each other.
network.add_edge(1,2)network.add_edge(1,3)
We are also able to plot this network to get a visual understanding too.
nx.draw_networkx(network, with_labels=True)
So to summarize this paragraph, we know what a network consists of and how it can be built and visualized with networkx. If I wanted to describe this graph, I would say, this graph G consists of three nodes called 1, 2 and 3. In which node 2 is adjacent to node 1 and node 3 is adjacent to node 1. Node 1 is adjacent to node 2 and 3. Okay, enough for now. Let’s continue with the coloring problem.
The Graph Coloring Problem is defined as:
Given a graph G and k colors, assign a color to each node so that adjacent nodes get different colors.
In this sense, a color is another word for category. Let's look at our example from before, add four more nodes and assign different colors to them.
network.add_nodes_from([4,5,6,7])network.add_edge(1,4)network.add_edge(1,5)network.add_edge(2,4)network.add_edge(3,5)network.add_edge(5,6)network.add_edge(7,3)network.add_edge(7,6)
Okay let’s color this graph manually to solve the coloring problem.
color_list = ["gold", "violet", "violet", "violet", "limegreen", "limegreen", "darkorange"]nx.draw_networkx(network,node_color=color_list, with_labels=True)
Okay this is not what we want. Node 5 and 6 have the same color but are adjacent, also node 4 and 2 are adjacent but share the same color. We have to fix that.
color_list = ["gold", "red", "violet", "pink", "limegreen", "violet", "darkorange"]
Okay this looks like a possible solution to the graph coloring problem. But you might ask yourself, how do I know how many colors I am gonna need? We now look at the toy example of making an exam schedule where this becomes clearer.
In this practical example, we try to find an optimal solution for the exam schedule of a single semester.
I created an artificial dataset that consists of 250 students which attend our fictive school that offers 5 majors, that also can be attended as minors. Each student is allowed to register for 5 classes, if she or he enrolled for a combination of a minor and a major (which is done by 10% of our students) then she or he takes 3 classes from the major and 2 from the minor. Otherwise if they study only a major, they choose all 5 lectures from their main subject. Each subject offers between 6 to 9 courses (classes).
I uploaded this dataset on kaggle with the following link. We download it and use pandas to read in the csv. The csv has the following structure. The rows show the students, and the column 1 shows the major, column 2 shows the minor if she or he has one. Column 3 to 42 are the different subjects.
import pandas as pd

student_data = pd.read_csv("synthetic_school_enrollment_data.csv")
Up to now, I didn’t tell you how we are gonna prevent students from having colliding exam dates. But now we have everything prepared to tackle this.
As I mentioned before, in the coloring problem, we want to prevent adjacent (neighboring) nodes from having the same color. In our example we want to avoid that students have to write two exams at the same time.
These two problems sound similar!
Thus we can come up with the idea that we want to model the courses as nodes and exam dates as colors, and the nodes are connected if they share participating students. Therefore our exam scheduling is solved when no neighboring courses/nodes have the same date/color. Okay, so let's create a network that has our 40 courses as nodes and makes them connected if they share participating students.
Create a list of our 40 courses.
courses = list(student_data.columns)[2:]
Create a network object with networkx and add a node for each of the 40 courses
class_network = nx.Graph()class_network.add_nodes_from(courses)
Let’s add edges to connect the nodes. An edge is drawn between two classes if class A shares at least one student with class B.
Each student attends 5 courses. I want to pack them into a list, so that I can later make edges between all the possible combinations in this list, since this one student cannot attend any of these 5 exams at the same time. Therefore I loop over the students and make a list for each of them.
without_subj = student_data.drop(['Major', 'Minor'], axis=1)  # We don't need major and minor for the moment
without_subj = without_subj.T  # transpose
# assuming the students form the index of the DataFrame (they become the columns after transposing)
name_list = list(without_subj.columns)
list_of_overlaps = []
for student in name_list:
    list_of_overlaps.append(list(without_subj.loc[without_subj[student]].index))
The next step uses a library (itertools) with a cool function called combinations. First argument is the list from which you want to have combinations, and the second argument says of how many elements a combination is composed. I provided a little example, to make you familiar with the functionality.
import itertools

for pair in itertools.combinations([1,2,3], 2):
    print(pair)

(1, 2)
(1, 3)
(2, 3)
We thus loop over the list of overlaps that were created for each student and then we combine every course with every other course of this list. This enables us to take the pairs and form edges between them.
for sublist in list_of_overlaps: for pair in itertools.combinations(sublist, 2): class_network.add_edge(pair[0], pair[1])
This process resulted in 259 connections between classes.
n_edges_total = len(list(class_network.edges))
print(n_edges_total)

259
The formula that describes how many connections or edges are possible for one single graph is.
n_nodes = len(list(class_network.nodes))n_edges_possible = (n_nodes*(n_nodes-1))/2
There are 780 possible edges in this graph; our particular graph from the school example has 259, thus 33% of the possible edges are realised.
We can have a look at our school network that shows the courses and shared students.
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(12,12))
nx.draw_networkx(class_network, with_labels=True)
Okay, let's build our algorithm for the coloring problem. To start off, I want to say that this is an NP-complete problem, meaning an optimal solution can in general only be guaranteed by brute-force algorithms. So basically, what we do is:
Order the nodes randomlyOrder the colors (if colors should represent dates, start with your first date at the top)Process the nodes one at the time, assign the first legal color from the list to our node
Order the nodes randomly
Order the colors (if colors should represent dates, start with your first date at the top)
Process the nodes one at the time, assign the first legal color from the list to our node
Since it is an NP-complete problem, we cannot expect an efficient exact algorithm and settle for this greedy algorithm.
A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem.
And different orderings (step 1) can give different results for the same graph.
To demonstrate that, we will run the algorithm a couple of times to get an example.
If you are interested, this video gives an explanation of the mathematical fundamentals. One thing I think is important to say is that the algorithm uses at most n+1 colors, where n is the highest degree in our network. The degree of a node v_i says how many nodes that particular node v_i is connected to.
We can find the maximum degree of our network by calling network.degree; this can be converted into a dictionary and we take the max.
max(dict(class_network.degree).values())
In our network this is 23. But remember, this is only an upper bound, maybe we can get better than this. Is 24 (23+1) actually a good number at all? We have to look at how many courses we offer, to see if we could consolidate the exam dates into fewer ones.
len(courses)
Oh, we have 40 classes! So at least 16 dates can be saved, this is encouraging. Maybe fewer dates are possible? To be on the safe side, I will prepare 24 different colors (from matplotlib), which is the maximal number of colors we will need.
colors = ["lightcoral", "gray", "lightgray", "firebrick", "red", "chocolate", "darkorange", "moccasin", "gold", "yellow", "darkolivegreen", "chartreuse", "forestgreen", "lime", "mediumaquamarine", "turquoise", "teal", "cadetblue", "dodgerblue", "blue", "slateblue", "blueviolet", "magenta", "lightsteelblue"]
We need also 24 possible exam dates to which we can assign the exams and a dictionary that translates from colors to datetime objects.
from datetime import datetime

dates = []
calendar = {}
for i in list(range(14,20)):
    for j in list(range(10,18,2)):
        date = datetime(2021, 6, i, j, 0)
        dates.append(date)
        calendar[date] = []
Our translating dictionary:
from_color_to_date = {col: dates[i] for i, col in enumerate(colors)}
And now we can write our greedy algorithm...
import random

def greedy_coloring_algorithm(network, colors):
    nodes = list(network.nodes())
    random.shuffle(nodes)  # step 1: random ordering
    for node in nodes:
        dict_neighbors = dict(network[node])
        # gives names of nodes that are neighbors
        nodes_neighbors = list(dict_neighbors.keys())
        forbidden_colors = []
        for neighbor in nodes_neighbors:
            if len(network.nodes.data()[neighbor].keys()) == 0:
                # if the neighbor has no color yet, proceed
                continue
            else:
                # if the neighbor has a color,
                # this color is forbidden
                forbidden_color = network.nodes.data()[neighbor]
                forbidden_color = forbidden_color['color']
                forbidden_colors.append(forbidden_color)
        # assign the first color that is not forbidden
        for color in colors:
            # step 2: start every time at the top of the colors,
            # so that the smallest number of colors is used
            if color in forbidden_colors:
                continue
            else:
                # step 3: color one node at the time
                network.nodes[node]['color'] = color
                break
run the algorithm
greedy_coloring_algorithm(class_network, colors)
It's time to look at the results. Let me grab the color values of each node and pass them into a list called colors_nodes.
colors_nodes = [data['color'] for v, data in class_network.nodes(data=True)]
nx.draw(class_network, node_color=colors_nodes, with_labels=True)
This graph looks promising. It’s a little bit messy. But we can also look at the colors list to see how many colors or dates we finally need.
len(set(colors_nodes))
10! We only need 10 dates for our 40 exams. Who would have guessed that? For me that is amazing. Such a powerful tool at hand.
But as mentioned before, this algorithm does not reliably generate 10 categories, it will for sure generate 24 or less, but how many there will be, depends on the ordering in which we color the nodes.
Let’s make a short comparison with different orderings and see what we get, if we can get better than 10.
number = []
for i in list(range(0,50)):
    greedy_coloring_algorithm(class_network, colors)
    colors_nodes = [data['color'] for v, data in class_network.nodes(data=True)]
    num_col = len(set(colors_nodes))
    number.append(num_col)

[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
Okay, it seems that this network is easy to solve with regard to the coloring problem. We get the same result every time.
Our final goal is to provide the principal of our school with an exam schedule. We assume the exam lasts 1.5h and the students get a 30-minute break.
for v, data in class_network.nodes(data=True): calendar[from_color_to_date[data['color']]].append(v)max_number_exams_sync = len(max(list(calendar.values()),key=len))rooms = ["Room "+str(i) for i in list(range(max_number_exams_sync))]pd.DataFrame.from_dict(calendar, orient='index', columns=rooms)
Here we go! We can proudly give the exam schedule to the schoolmaster who will be surprised that we can finish all exams after just 2.5 days.
I claimed that you will be able to use some data and the graph coloring algorithm to automate difficult tasks for you. You have seen how you can create graphs with networkx as well as how to apply such a graph coloring algorithm in python. I walked you through this rather theoretical algorithm with a nice application and if you work in a school that wants to prevent exam dates to collide, you have seen a possible solution. But if not I hope you still can transfer this knowledge to your domain. Let me know if it helped you finding a solution in one of the challenges you are facing. | [
{
"code": null,
"e": 356,
"s": 172,
"text": "Let’s start with a possible real life challenge and I want to convince you that network theory and the mathematical tools that were developed within this field can solve this for you."
},
{
"code": null,
"e": 849,
"s": 356,
"text": "Imagine you work in the administration of a school for several years now. To make the exam schedules was painful but manageable with paper and pen up to now. But the school was growing really fast and the whole schedule arranging became overwhelming. There are so many students enrolled and you want that they all can attend their exams before they go off on holidays. How terrible it would be if students have to come back after holidays to take their exams. How are you going to solve this?"
},
{
"code": null,
"e": 1277,
"s": 849,
"text": "I claim that by the end of this article you can turn data from the wild which for example can be lists of students and the classes they attend and build an exam schedule where no students have the uneasy situation of colliding exam dates. The coloring problem is not just useful for exam schedules but many different situations. Maybe you come up with another idea, I would be glad to hear that from you in the comment section!"
},
{
"code": null,
"e": 1424,
"s": 1277,
"text": "Before we start with this short tutorial, I want to share with you the learning goals for this article that you can check off while working along."
},
{
"code": null,
"e": 1547,
"s": 1424,
"text": "Networks in python with library networkxGraph coloring problem, what is it and how do you solve it?Solve practical example"
},
{
"code": null,
"e": 1588,
"s": 1547,
"text": "Networks in python with library networkx"
},
{
"code": null,
"e": 1648,
"s": 1588,
"text": "Graph coloring problem, what is it and how do you solve it?"
},
{
"code": null,
"e": 1672,
"s": 1648,
"text": "Solve practical example"
},
{
"code": null,
"e": 1863,
"s": 1672,
"text": "I assume you are familiar with network theory or also called graph theory. But if not let’s recap the most fundamental blocks in networks while getting to know the functionality of networkx."
},
{
"code": null,
"e": 1885,
"s": 1863,
"text": "import networkx as nx"
},
{
"code": null,
"e": 1964,
"s": 1885,
"text": "If you don’t have networkx on your computer or in your virtual environment use"
},
{
"code": null,
"e": 1985,
"s": 1964,
"text": "pip install networkx"
},
{
"code": null,
"e": 1988,
"s": 1985,
"text": "or"
},
{
"code": null,
"e": 2011,
"s": 1988,
"text": "conda install networkx"
},
{
"code": null,
"e": 2073,
"s": 2011,
"text": "Let’s create a network with this library and call it network."
},
{
"code": null,
"e": 2094,
"s": 2073,
"text": "network = nx.Graph()"
},
{
"code": null,
"e": 2232,
"s": 2094,
"text": "A network is made up from nodes and edges which are the connection between the nodes. Let’s add three nodes and two edges to our network."
},
{
"code": null,
"e": 2342,
"s": 2232,
"text": "To multiple nodes at once, we can provide a list of node names. In this case the nodes are called 1,2 and 3.."
},
{
"code": null,
"e": 2439,
"s": 2342,
"text": "network.add_nodes_from([1,2,3])print(f\"This network has now {network.number_of_nodes()} nodes.\")"
},
{
"code": null,
"e": 2471,
"s": 2439,
"text": "This network has now 3 nodes .."
},
{
"code": null,
"e": 2582,
"s": 2471,
"text": "To add an edge between two nodes, name the first and the second node that you want to connect with each other."
},
{
"code": null,
"e": 2625,
"s": 2582,
"text": "network.add_edge(1,2)network.add_edge(1,3)"
},
{
"code": null,
"e": 2698,
"s": 2625,
"text": "We are also able to plot this network to get a visual understanding too."
},
{
"code": null,
"e": 2742,
"s": 2698,
"text": "nx.draw_networkx(network, with_labels=True)"
},
{
"code": null,
"e": 3140,
"s": 2742,
"text": "So to summarize this paragraph, we know what a network consists of and how it can be built and visualized with networkx. If I wanted to describe this graph, I would say, this graph G consists of three nodes called 1, 2 and 3. In which node 2 is adjacent to node 1 and node 3 is adjacent to node 1. Node 1 is adjacent to node 2 and 3. Okay, enough for now. Let’s continue with the coloring problem."
},
{
"code": null,
"e": 3182,
"s": 3140,
"text": "The Graph Coloring Problem is defined as:"
},
{
"code": null,
"e": 3285,
"s": 3182,
"text": "Given a graph G and k colors, assign a color to each node so that adjacent nodes get different colors."
},
{
"code": null,
"e": 3440,
"s": 3285,
"text": "In this sense, a color is another word for category. Let’s look at our example from before and add two or three nodes and assign different colors to them."
},
{
"code": null,
"e": 3621,
"s": 3440,
"text": "network.add_nodes_from([4,5,6,7])network.add_edge(1,4)network.add_edge(1,5)network.add_edge(2,4)network.add_edge(3,5)network.add_edge(5,6)network.add_edge(7,3)network.add_edge(7,6)"
},
{
"code": null,
"e": 3689,
"s": 3621,
"text": "Okay let’s color this graph manually to solve the coloring problem."
},
{
"code": null,
"e": 3859,
"s": 3689,
"text": "color_list = [\"gold\", \"violet\", \"violet\", \"violet\", \"limegreen\", \"limegreen\", \"darkorange\"]nx.draw_networkx(network,node_color=color_list, with_labels=True)"
},
{
"code": null,
"e": 4019,
"s": 3859,
"text": "Okay this is not what we want. Node 5 and 6 have the same color but are adjacent, also node 4 and 2 are adjacent but share the same color. We have to fix that."
},
{
"code": null,
"e": 4116,
"s": 4019,
"text": "color_list = [“gold”, “red”, “violet”, “pink”, “limegreen”, “violet”, “darkorange”]"
},
{
"code": null,
"e": 4349,
"s": 4116,
"text": "Okay this looks like a possible solution to the graph coloring problem. But you might ask yourself, how do I know how many colors I am gonna need? We now look at the toy example of making an exam schedule where this becomes clearer."
},
{
"code": null,
"e": 4455,
"s": 4349,
"text": "In this practical example, we try to find an optimal solution for the exam schedule of a single semester."
},
{
"code": null,
"e": 4973,
"s": 4455,
"text": "I created an artificial dataset that consists of 250 students which attend our fictive school that offers 5 majors, that also can be attended as minors. Each student is allowed to register for 5 classes, if she or he enrolled for a combination of a minor and a major (which is done by 10% of our students) then she or he takes 3 classes from the major and 2 from the minor. Otherwise if they study only a major, they choose all 5 lectures from their main subject. Each subject offers between 6 to 9 courses (classes)."
},
{
"code": null,
"e": 5271,
"s": 4973,
"text": "I uploaded this dataset on kaggle with the following link. We download it and use pandas to read in the csv. The csv has the following structure. The rows show the students, and the column 1 shows the major, column 2 shows the minor if she or he has one. Column 3 to 42 are the different subjects."
},
{
"code": null,
"e": 5357,
"s": 5271,
"text": "import pandas as pdstudent_data = pd.read_csv(\"synthetic_school_enrollment_data.csv\")"
},
{
"code": null,
"e": 5506,
"s": 5357,
"text": "Up to now, I didn’t tell you how we are gonna prevent students from having colliding exam dates. But now we have everything prepared to tackle this."
},
{
"code": null,
"e": 5718,
"s": 5506,
"text": "As I mentioned before, in the coloring problem, we want to prevent adjacent (neighboring) nodes from having the same color. In our example we want to avoid that students have to write two exams at the same time."
},
{
"code": null,
"e": 5752,
"s": 5718,
"text": "These two problems sound similar!"
},
{
"code": null,
"e": 6150,
"s": 5752,
"text": "Thus we can come up with the idea, that we want to model the courses as nodes and exam dates as colors, and the nodes are connected if they share participating students. Therefore our exam scheduling is solved, when no neighboring courses/nodes have the same date/color. Okay, so let’s create a network that has our 40 courses as nodes and makes them connected if the share participating students."
},
{
"code": null,
"e": 6183,
"s": 6150,
"text": "Create a list of our 40 courses."
},
{
"code": null,
"e": 6224,
"s": 6183,
"text": "courses = list(student_data.columns)[2:]"
},
{
"code": null,
"e": 6304,
"s": 6224,
"text": "Create a network object with networkx and add a node for each of the 40 courses"
},
{
"code": null,
"e": 6368,
"s": 6304,
"text": "class_network = nx.Graph()class_network.add_nodes_from(courses)"
},
{
"code": null,
"e": 6496,
"s": 6368,
"text": "Let’s add edges to connect the nodes. An edge is drawn between two classes if class A shares at least one student with class B."
},
{
"code": null,
"e": 6786,
"s": 6496,
"text": "Each student attends 5 courses. I want to pack them into a list, so that I can later make edges between all the possible combinations in this list, since this one student cannot attend any of these 5 exams at the same time. Therefore I loop over the students and make a list for each them."
},
{
"code": null,
"e": 7057,
"s": 6786,
"text": "without_subj = student_data.drop([‘Major’, ‘Minor’], axis=1) # We don’t need major and minor for the momentwithout_subj = without_subj.T # transposelist_of_overlaps = []for student in name_list:list_of_overlaps.append(list(without_subj.loc[without_subj[student]].index))"
},
{
"code": null,
"e": 7360,
"s": 7057,
"text": "The next step uses a library (itertools) with a cool function called combinations. First argument is the list from which you want to have combinations, and the second argument says of how many elements a combination is composed. I provided a little example, to make you familiar with the functionality."
},
{
"code": null,
"e": 7453,
"s": 7360,
"text": "import itertoolsfor pair in itertools.combinations([1,2,3],2): print(pair)(1, 2)(1, 3)(2, 3)"
},
{
"code": null,
"e": 7661,
"s": 7453,
"text": "We thus loop over the list of overlaps that were created for each student and then we combine every course with every other course of this list. This enables us to take the pairs and form edges between them."
},
{
"code": null,
"e": 7790,
"s": 7661,
"text": "for sublist in list_of_overlaps: for pair in itertools.combinations(sublist, 2): class_network.add_edge(pair[0], pair[1])"
},
{
"code": null,
"e": 7848,
"s": 7790,
"text": "This process resulted in 259 connections between classes."
},
{
"code": null,
"e": 7918,
"s": 7848,
"text": "n_edges_total = len(list(class_network.edges))print(n_edges_total)259"
},
{
"code": null,
"e": 8013,
"s": 7918,
"text": "The formula that describes how many connections or edges are possible for one single graph is."
},
{
"code": null,
"e": 8096,
"s": 8013,
"text": "n_nodes = len(list(class_network.nodes))n_edges_possible = (n_nodes*(n_nodes-1))/2"
},
{
"code": null,
"e": 8240,
"s": 8096,
"text": "There are 780 possible edgees in this graph, our particular graph from the school example has 259, thus 33% of the possible edges are realised."
},
{
"code": null,
"e": 8325,
"s": 8240,
"text": "We can have a look at our school network that shows the courses and shared students."
},
{
"code": null,
"e": 8408,
"s": 8325,
"text": "fig = plt.figure(figsize=(12,12))nx.draw_networkx(class_network, with_labels=True)"
},
{
"code": null,
"e": 8626,
"s": 8408,
"text": "Okay let’s build our algorithm for the coloring problem. To start off, I want to say that this is a NP-complete-problem, meaning the solution can only be found with brute-force algorithms. So basically, what we do is:"
},
{
"code": null,
"e": 8830,
"s": 8626,
"text": "Order the nodes randomlyOrder the colors (if colors should represent dates, start with your first date at the top)Process the nodes one at the time, assign the first legal color from the list to our node"
},
{
"code": null,
"e": 8855,
"s": 8830,
"text": "Order the nodes randomly"
},
{
"code": null,
"e": 8946,
"s": 8855,
"text": "Order the colors (if colors should represent dates, start with your first date at the top)"
},
{
"code": null,
"e": 9036,
"s": 8946,
"text": "Process the nodes one at the time, assign the first legal color from the list to our node"
},
{
"code": null,
"e": 9119,
"s": 9036,
"text": "Since it is a NP-complete-problem we cannot get better than this greedy algorithm."
},
{
"code": null,
"e": 9338,
"s": 9119,
"text": "A greedy algorithm is a simple, intuitive algorithm that is used in optimization problems. The algorithm makes the optimal choice at each step as it attempts to find the overall optimal way to solve the entire problem."
},
{
"code": null,
"e": 9418,
"s": 9338,
"text": "And different orderings (step 1) can give different results for the same graph."
},
{
"code": null,
"e": 9502,
"s": 9418,
"text": "To demonstrate that, we will run the algorithm a couple of times to get an example."
},
{
"code": null,
"e": 9829,
"s": 9502,
"text": "If you are interested, this video gives explanation of the mathematical fundamentals. One thing I want think is important to say, is that the algorithm uses max n+1 colors at max, where n is the number of the highest degree in our network. The degree of a node v_i says to how many nodes a particular node v_i is connected to."
},
{
"code": null,
"e": 9969,
"s": 9829,
"text": "We can find out how many degrees our network has by calling network.degree and this can be converted into a dictionary and we take the max."
},
{
"code": null,
"e": 10010,
"s": 9969,
"text": "max(dict(class_network.degree).values())"
},
{
"code": null,
"e": 10267,
"s": 10010,
"text": "Which is in our network 23. But remember, this is the upper-ceiling, maybe we can get better than this. Is 24 (23+1) actually a good number at all? We have to look at how many courses we offer, to see if we could consolidate the exam dates into fewer ones."
},
{
"code": null,
"e": 10280,
"s": 10267,
"text": "len(courses)"
},
{
"code": null,
"e": 10521,
"s": 10280,
"text": "Oh we have 40 classes! So at least 16 dates can be saved, this is encouraging. Maybe fewer dates are possible? To be on the save side, I will prepare 24 different colors (from matplotlib), which is the maximal number of colors we will need."
},
{
"code": null,
"e": 10829,
"s": 10521,
"text": "colors = [\"lightcoral\", \"gray\", \"lightgray\", \"firebrick\", \"red\", \"chocolate\", \"darkorange\", \"moccasin\", \"gold\", \"yellow\", \"darkolivegreen\", \"chartreuse\", \"forestgreen\", \"lime\", \"mediumaquamarine\", \"turquoise\", \"teal\", \"cadetblue\", \"dogerblue\", \"blue\", \"slateblue\", \"blueviolet\", \"magenta\", \"lightsteelblue\"]"
},
{
"code": null,
"e": 10964,
"s": 10829,
"text": "We need also 24 possible exam dates to which we can assign the exams and a dictionary that translates from colors to datetime objects."
},
{
"code": null,
"e": 11173,
"s": 10964,
"text": "from datetime import datetimedates = []calendar = {}for i in list(range(14,20)): for j in list(range(10,18,2)): date = datetime(2021, 6, i, j, 0) dates.append(date) calendar[date] = []"
},
{
"code": null,
"e": 11201,
"s": 11173,
"text": "Our translating dictionary:"
},
{
"code": null,
"e": 11270,
"s": 11201,
"text": "from_color_to_date = {col: dates[i] for i, col in enumerate(colors)}"
},
{
"code": null,
"e": 11315,
"s": 11270,
"text": "And now we can write our greedy algorithm..."
},
{
"code": null,
"e": 12660,
"s": 11315,
"text": "def greedy_coloring_algorithm(network, colors): nodes = list(network.nodes()) random.shuffle(nodes) # step 1 random ordering for node in nodes: dict_neighbors = dict(network[node])# gives names of nodes that are neighbors nodes_neighbors = list(dict_neighbors.keys()) forbidden_colors = [] for neighbor in nodes_neighbors: example.nodes.data()[1] len(example.nodes.data()[1].keys()) if len(network.nodes.data()[neighbor].keys()) == 0: # if the neighbor has no color, proceed continue else: # if the neighbor has a color, # this color is forbidden example.nodes.data()[1]['color'] forbidden_color = network.nodes.data()[neighbor] forbidden_color = forbidden_color['color'] forbidden_colors.append(forbidden_color) # assign the first color # that is not forbidden for color in colors: # step 2: start everytime at the top of the colors, # so that the smallest number of colors is used if color in forbidden_colors: continue else: # step 3: color one node at the time network.nodes[node]['color'] = color break"
},
{
"code": null,
"e": 12678,
"s": 12660,
"text": "run the algorithm"
},
{
"code": null,
"e": 12727,
"s": 12678,
"text": "greedy_coloring_algorithm(class_network, colors)"
},
{
"code": null,
"e": 12847,
"s": 12727,
"text": "It’s time to look at the results. Let me grab the color values of each node and pass it into a list called colors_node."
},
{
"code": null,
"e": 12989,
"s": 12847,
"text": "colors_nodes = [data[‘color’] for v, data in class_network.nodes(data=True)]nx.draw(class_network, node_color=colors_nodes, with_labels=True)"
},
{
"code": null,
"e": 13131,
"s": 12989,
"text": "This graph looks promising. It’s a little bit messy. But we can also look at the colors list to see how many colors or dates we finally need."
},
{
"code": null,
"e": 13154,
"s": 13131,
"text": "len(set(colors_nodes))"
},
{
"code": null,
"e": 13281,
"s": 13154,
"text": "10! We only need 10 dates for our 40 exams. Who would have guessed that? For me that is amazing. Such a powerful tool at hand."
},
{
"code": null,
"e": 13482,
"s": 13281,
"text": "But as mentioned before, this algorithm does not reliably generate 10 categories, it will for sure generate 24 or less, but how many there will be, depends on the ordering in which we color the nodes."
},
{
"code": null,
"e": 13588,
"s": 13482,
"text": "Let’s make a short comparison with different orderings and see what we get, if we can get better than 10."
},
{
"code": null,
"e": 14021,
"s": 13588,
"text": "number = []for i in list(range(0,50)): greedy_coloring_algorithm(class_network, colors) colors_nodes = [data['color'] for v, data in class_network.nodes(data=True)] num_col = len(set(colors_nodes)) number.append(num_col)[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]"
},
{
"code": null,
"e": 14139,
"s": 14021,
"text": "Okay it seems that this network is an easy to solve regarding the coloring problem. We get the same result everytime."
},
{
"code": null,
"e": 14286,
"s": 14139,
"text": "Our final goal is to provide the principal of our school with a exam schedule. We assume the exam lasts 1.5h and the students get 30-minute break."
},
{
"code": null,
"e": 14586,
"s": 14286,
"text": "for v, data in class_network.nodes(data=True): calendar[from_color_to_date[data['color']]].append(v)max_number_exams_sync = len(max(list(calendar.values()),key=len))rooms = [\"Room \"+str(i) for i in list(range(max_number_exams_sync))]pd.DataFrame.from_dict(calendar, orient='index', columns=rooms)"
},
{
"code": null,
"e": 14728,
"s": 14586,
"text": "Here we go! We can proudly give the exam schedule to the schoolmaster who will be surprised that we can finish all exams after just 2.5 days."
}
] |
Statistics - Reliability Coefficient | A measure of the accuracy of a test or measuring instrument obtained by measuring the same individuals twice and computing the correlation of the two sets of measures.
Reliability Coefficient is defined and given by the following function:
Reliability Coefficient, RC = (N/(N−1)) × ((Total Variance − Sum of Variance)/Total Variance)
Where −
N = Number of Tasks
N = Number of Tasks
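Written out in code, the definition is a direct translation of the formula above. The snippet below is only an illustrative sketch; the function and argument names are assumptions and not part of the original definition:

def reliability_coefficient(n_tasks, total_variance, sum_of_variances):
    # RC = (N / (N - 1)) * ((Total Variance - Sum of Variance) / Total Variance)
    return (n_tasks / (n_tasks - 1)) * (
        (total_variance - sum_of_variances) / total_variance
    )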
Problem Statement:
An experiment was conducted with three Persons (P), who were assigned three distinct Tasks (T). Find the Reliability Coefficient.
P0-T0 = 10
P1-T0 = 20
P0-T1 = 30
P1-T1 = 40
P0-T2 = 50
P1-T2 = 60
Solution:
Given: Number of Persons (P) = 3, Number of Tasks (N) = 3. To find the Reliability Coefficient, follow the steps below:
Let us first calculate the average score for each task.
The average score of Task (T0) = 10 + 20/2 = 15
The average score of Task (T1) = 30 + 40/2 = 35
The average score of Task (T2) = 50 + 60/2 = 55
Next, calculate the variance for:
Variance of P0-T0 and P1-T0:
Variance = [square (10-15) + square (20-15)]/2 = 25
Variance of P0-T1 and P1-T1:
Variance = [square (30-35) + square (40-35)]/2 = 25
Variance of P0-T2 and P1-T2:
Variance = [square (50-55) + square (60-55)]/2 = 25
Now, calculate the sum of the individual variances of P0-T0 and P1-T0, P0-T1 and P1-T1, P0-T2 and P1-T2. To obtain this value, add all of the variance values computed above.
Total of Individual Variance = 25+25+25=75
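As a quick cross-check of the arithmetic above, the same task variances can be recomputed directly from the raw scores. This is only a sketch; the score layout is taken from the problem statement:

# scores per task: [score of P0, score of P1]
scores = {
    "T0": [10, 20],
    "T1": [30, 40],
    "T2": [50, 60],
}

sum_of_variances = 0
for p0, p1 in scores.values():
    mean = (p0 + p1) / 2                                  # 15, 35, 55
    variance = ((p0 - mean) ** 2 + (p1 - mean) ** 2) / 2  # 25 for every task
    sum_of_variances += variance

print(sum_of_variances)  # 75.0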
Compute the Total Variance
Variance = square ((P0-T0)
- average score of Task T0)
= square (10-15) = 25
Variance = square ((P1-T0)
- average score of Task T0)
= square (20-15) = 25
Variance = square ((P0-T1)
- average score of Task T1)
= square (30-35) = 25
Variance = square ((P1-T1)
- average score of Task T1)
= square (40-35) = 25
Variance = square ((P0-T2)
- average score of Task T2)
= square (50-55) = 25
Variance = square ((P1-T2)
- average score of Task T2)
= square (60-55) = 25
Now, add all of these values to obtain the total variance.
Total Variance= 25+25+25+25+25+25 = 150
Finally, substitute the values into the formula given above to find the Reliability Coefficient.
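Substituting N = 3, Total Variance = 150 and Sum of Variance = 75 into the formula (here using the illustrative sketch function defined earlier) gives:

rc = reliability_coefficient(n_tasks=3,
                             total_variance=150,
                             sum_of_variances=75)
print(rc)  # (3 / 2) * ((150 - 75) / 150) = 0.75

So the Reliability Coefficient for this example works out to 0.75.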
[
{
"code": null,
"e": 4491,
"s": 4323,
"text": "A measure of the accuracy of a test or measuring instrument obtained by measuring the same individuals twice and computing the correlation of the two sets of measures."
},
{
"code": null,
"e": 4563,
"s": 4491,
"text": "Reliability Coefficient is defined and given by the following function:"
},
{
"code": null,
"e": 4649,
"s": 4563,
"text": "Reliability Coefficient, RC=(N(N−1))×((Total Variance −Sum of Variance)TotalVariance)"
},
{
"code": null,
"e": 4658,
"s": 4649,
"text": "Where − "
},
{
"code": null,
"e": 4678,
"s": 4658,
"text": "N = Number of Tasks"
},
{
"code": null,
"e": 4698,
"s": 4678,
"text": "N = Number of Tasks"
},
{
"code": null,
"e": 4717,
"s": 4698,
"text": "Problem Statement:"
},
{
"code": null,
"e": 4862,
"s": 4717,
"text": "An undertaking was experienced with three Persons (P) and they are assigned with three distinct Tasks (T). Discover the Reliability Coefficient?"
},
{
"code": null,
"e": 4935,
"s": 4862,
"text": "P0-T0 = 10 \nP1-T0 = 20 \nP0-T1 = 30 \nP1-T1 = 40 \nP0-T2 = 50 \nP1-T2 = 60 \n"
},
{
"code": null,
"e": 4945,
"s": 4935,
"text": "Solution:"
},
{
"code": null,
"e": 5069,
"s": 4945,
"text": "Given, Number of Students (P) = 3 Number of Tasks (N) = 3. To Find, Reliability Coefficient, follow the steps as following:"
},
{
"code": null,
"e": 5151,
"s": 5069,
"text": "Give us a chance to first figure the average score of the persons and their tasks"
},
{
"code": null,
"e": 5299,
"s": 5151,
"text": "The average score of Task (T0) = 10 + 20/2 = 15 \nThe average score of Task (T1) = 30 + 40/2 = 35 \nThe average score of Task (T2) = 50 + 60/2 = 55 \n"
},
{
"code": null,
"e": 5330,
"s": 5299,
"text": "Next, figure the variance for:"
},
{
"code": null,
"e": 5572,
"s": 5330,
"text": "Variance of P0-T0 and P1-T0: \nVariance = square (10-15) + square (20-15)/2 = 25\nVariance of P0-T1 and P1-T1: \nVariance = square (30-35) + square (40-35)/2 = 25\nVariance of P0-T2 and P1-T2: \nVariance = square (50-55) + square (50-55)/2 = 25 \n"
},
{
"code": null,
"e": 5770,
"s": 5572,
"text": "Presently, figure the individual variance of P0-T0 and P1-T0, P0-T1 and P1-T1, P0-T2 and P1-T2. To ascertain the individual variance value, we ought to include all the above computed change values."
},
{
"code": null,
"e": 5814,
"s": 5770,
"text": "Total of Individual Variance = 25+25+25=75 "
},
{
"code": null,
"e": 5839,
"s": 5814,
"text": "Compute the Total change"
},
{
"code": null,
"e": 6323,
"s": 5839,
"text": "Variance= square ((P0-T0) \n - normal score of Person 0) \n = square (10-15) = 25\nVariance= square ((P1-T0) \n - normal score of Person 0) \n = square (20-15) = 25 \nVariance= square ((P0-T1) \n - normal score of Person 1) \n = square (30-35) = 25 \nVariance= square ((P1-T1) \n - normal score of Person 1) \n = square (40-35) = 25\nVariance= square ((P0-T2) \n - normal score of Person 2) \n = square (50-55) = 25 \nVariance= square ((P1-T2) \n- normal score of Person 2) \n = square (60-55) = 25 \n"
},
{
"code": null,
"e": 6395,
"s": 6323,
"text": "Now, include every one of the qualities and figure the aggregate change"
},
{
"code": null,
"e": 6437,
"s": 6395,
"text": "Total Variance= 25+25+25+25+25+25 = 150 "
},
{
"code": null,
"e": 6518,
"s": 6437,
"text": "At last, substitute the qualities in the underneath offered equation to discover"
},
{
"code": null,
"e": 6553,
"s": 6518,
"text": "\n 40 Lectures \n 3.5 hours \n"
},
{
"code": null,
"e": 6567,
"s": 6553,
"text": " Madhu Bhatia"
},
{
"code": null,
"e": 6600,
"s": 6567,
"text": "\n 40 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 6616,
"s": 6600,
"text": " Megha Aggarwal"
},
{
"code": null,
"e": 6651,
"s": 6616,
"text": "\n 66 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 6662,
"s": 6651,
"text": " Mike West"
},
{
"code": null,
"e": 6695,
"s": 6662,
"text": "\n 22 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 6706,
"s": 6695,
"text": " Mike West"
},
{
"code": null,
"e": 6740,
"s": 6706,
"text": "\n 60 Lectures \n 12 hours \n"
},
{
"code": null,
"e": 6756,
"s": 6740,
"text": " Michael Miller"
},
{
"code": null,
"e": 6789,
"s": 6756,
"text": "\n 65 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 6806,
"s": 6789,
"text": " Abhilash Nelson"
},
{
"code": null,
"e": 6813,
"s": 6806,
"text": " Print"
},
{
"code": null,
"e": 6824,
"s": 6813,
"text": " Add Notes"
}
] |
Ruby | Hash each_pair() function - GeeksforGeeks | 07 Jan, 2020
Hash#each_pair() is a Hash class method which calls the given block once for each key-value pair in the hash, passing the key and value as parameters.
Syntax: Hash.each_pair()
Parameter: Hash values
Return: the hash itself after calling the block once for each key-value pair (with the key-value pair as parameter); an Enumerator is returned if no block is given.
Example #1 :
# Ruby code for Hash.each_pair() method # declaring Hash valuea = {a:100, b:200} # declaring Hash valueb = {a:100, c:300, b:200} # declaring Hash valuec = {a:100} # each Valueputs "Hash a each_pair form : #{a.each_pair()}\n\n" puts "Hash b each_pair form : #{b.each_pair {|key| puts "#{key}"}}\n\n" puts "Hash c each_pair form : #{c.each_pair {|value| puts "#{value}"}}\n\n"
Output :
Hash a each_pair form : #<Enumerator: {:a=>100, :b=>200}:each_pair>
[:a, 100]
[:c, 300]
[:b, 200]
Hash b each_pair form : {:a=>100, :c=>300, :b=>200}
[:a, 100]
Hash c each_pair form : {:a=>100}
Example #2 :
# Ruby code for Hash.each_pair() method # declaring Hash valuea = { "a" => 100, "b" => 200 } # declaring Hash valueb = {"a" => 100} # declaring Hash valuec = {"a" => 100, "c" => 300, "b" => 200} # each Valueputs "Hash a each_pair form : #{a.each_pair()}\n\n" puts "Hash b each_pair form : #{b.each_pair {|key| puts "#{key}"}}\n\n" puts "Hash c each_pair form : #{c.each_pair {|value| puts "#{value}"}}\n\n"
Output :
Hash a each_pair form : #<Enumerator: {"a"=>100, "b"=>200}:each_pair>
["a", 100]
Hash b each_pair form : {"a"=>100}
["a", 100]
["c", 300]
["b", 200]
Hash c each_pair form : {"a"=>100, "c"=>300, "b"=>200}
Ruby Hash-class
Ruby-Methods
Ruby
[
{
"code": null,
"e": 23247,
"s": 23219,
"text": "\n07 Jan, 2020"
},
{
"code": null,
"e": 23406,
"s": 23247,
"text": "Hash#each_pair() is a Hash class method which finds the nested value which calls block once for each pair in hash by passing the key_value pair as parameters."
},
{
"code": null,
"e": 23431,
"s": 23406,
"text": "Syntax: Hash.each_pair()"
},
{
"code": null,
"e": 23454,
"s": 23431,
"text": "Parameter: Hash values"
},
{
"code": null,
"e": 23586,
"s": 23454,
"text": "Return: calls block once for key_value pair in hash with key_value pair as parameter otherwise Enumerator if no argument is passed."
},
{
"code": null,
"e": 23599,
"s": 23586,
"text": "Example #1 :"
},
{
"code": "# Ruby code for Hash.each_pair() method # declaring Hash valuea = {a:100, b:200} # declaring Hash valueb = {a:100, c:300, b:200} # declaring Hash valuec = {a:100} # each Valueputs \"Hash a each_pair form : #{a.each_pair()}\\n\\n\" puts \"Hash b each_pair form : #{b.each_pair {|key| puts \"#{key}\"}}\\n\\n\" puts \"Hash c each_pair form : #{c.each_pair {|value| puts \"#{value}\"}}\\n\\n\"",
"e": 23986,
"s": 23599,
"text": null
},
{
"code": null,
"e": 23995,
"s": 23986,
"text": "Output :"
},
{
"code": null,
"e": 24151,
"s": 23995,
"text": "Hash a each_pair form : #\n\n[:a, 100]\n[:c, 300]\n[:b, 200]\nHash b each_pair form : {:a=>100, :c=>300, :b=>200}\n\n[:a, 100]\nHash c each_pair form : {:a=>100}\n\n"
},
{
"code": null,
"e": 24164,
"s": 24151,
"text": "Example #2 :"
},
{
"code": "# Ruby code for Hash.each_pair() method # declaring Hash valuea = { \"a\" => 100, \"b\" => 200 } # declaring Hash valueb = {\"a\" => 100} # declaring Hash valuec = {\"a\" => 100, \"c\" => 300, \"b\" => 200} # each Valueputs \"Hash a each_pair form : #{a.each_pair()}\\n\\n\" puts \"Hash b each_pair form : #{b.each_pair {|key| puts \"#{key}\"}}\\n\\n\" puts \"Hash c each_pair form : #{c.each_pair {|value| puts \"#{value}\"}}\\n\\n\"",
"e": 24583,
"s": 24164,
"text": null
},
{
"code": null,
"e": 24592,
"s": 24583,
"text": "Output :"
},
{
"code": null,
"e": 24756,
"s": 24592,
"text": "Hash a each_pair form : #\n\n[\"a\", 100]\nHash b each_pair form : {\"a\"=>100}\n\n[\"a\", 100]\n[\"c\", 300]\n[\"b\", 200]\nHash c each_pair form : {\"a\"=>100, \"c\"=>300, \"b\"=>200}\n\n"
},
{
"code": null,
"e": 24772,
"s": 24756,
"text": "Ruby Hash-class"
},
{
"code": null,
"e": 24785,
"s": 24772,
"text": "Ruby-Methods"
},
{
"code": null,
"e": 24790,
"s": 24785,
"text": "Ruby"
},
{
"code": null,
"e": 24888,
"s": 24790,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 24897,
"s": 24888,
"text": "Comments"
},
{
"code": null,
"e": 24910,
"s": 24897,
"text": "Old Comments"
},
{
"code": null,
"e": 24942,
"s": 24910,
"text": "Ruby | Array reverse() function"
},
{
"code": null,
"e": 24969,
"s": 24942,
"text": "Method Overloading In Ruby"
},
{
"code": null,
"e": 24996,
"s": 24969,
"text": "Instance Variables in Ruby"
},
{
"code": null,
"e": 25020,
"s": 24996,
"text": "Global Variable in Ruby"
},
{
"code": null,
"e": 25054,
"s": 25020,
"text": "Ruby | Array transpose() function"
},
{
"code": null,
"e": 25086,
"s": 25054,
"text": "Ruby | Array replace() function"
},
{
"code": null,
"e": 25114,
"s": 25086,
"text": "Ruby | Hash select() method"
},
{
"code": null,
"e": 25146,
"s": 25114,
"text": "Ruby | Array unshift() function"
},
{
"code": null,
"e": 25166,
"s": 25146,
"text": "Hello World in Ruby"
}
] |
Count the number of carry operations required to add two numbers in C++ | We are given two numbers num_1 and num_2. The goal is to count the number of carry operations required if the numbers are added. If numbers are 123 and 157 then carry operations will be 1. (7+3=10, 1+2+5=8, 1+1=2 ).
Let us understand with examples
Input − num_1=432 num_2=638
Output − Count of number of carry operations required to add two numbers are − 2
Explanation − From right to left adding digits and counting carry −
(2+9=10, carry 1 ) count=1,
(1+3+3=7, carry 0 ) count=1,
(4+6=10, carry 1 ) count=2
Input − num_1=9999 num_2=111
Output − Count of number of carry operations required to add two numbers are − 4
Explanation − From right to left adding digits and counting carry −
(9+1=10, carry 1 ) count=1,
(1+9+1=11, carry 1 ) count=2,
(1+9+1=11, carry 1 ) count=3,
(1+9=10, carry 1) count=4
We will convert both the numbers into strings. Start traversing the strings from the end, convert each character to an integer, and add the two digits along with the previous carry (0 for the first iteration); if the sum is 10 or more, set the carry to 1. Whenever the carry is 1, increment the count of carries.
Take two numbers as num_1 and num_2.
Take two numbers as num_1 and num_2.
Function carry_add_two_numbers(num_1, num_2) takes both numbers and returns the count of carries required when they are added.
Function carry_add_two_numbers(num_1, num_2) takes both numbers and returns the count of carries required when they are added.
Convert both numbers to string using to_string(x) and store in str_1 and str_2.
Convert both numbers to string using to_string(x) and store in str_1 and str_2.
Take lengths of both strings using length() as length_str_1 and length_str_2.
Take lengths of both strings using length() as length_str_1 and length_str_2.
Take initial count as 0 and initial carry also like 0.
Take initial count as 0 and initial carry also like 0.
While either length is non-zero.
While either length is non-zero.
Keep converting from the last character to integer and store integers in variables i and j.
Keep converting from the last character to integer and store integers in variables i and j.
Reduce lengths of both strings.
Reduce lengths of both strings.
Take variable to add as i+j+carry.
Take variable to add as i+j+carry.
If add >= 10, then increment count (as a carry is generated) and set carry = 1. Otherwise set carry = 0 for the next iteration.
If add >= 10, then increment count (as a carry is generated) and set carry = 1. Otherwise set carry = 0 for the next iteration.
After the end of all iterations, the count will have a total number of carries.
After the end of all iterations, the count will have a total number of carries.
Return count as result.
Return count as result.
Live Demo
#include <bits/stdc++.h>
using namespace std;
int carry_add_two_numbers(int num_1, int num_2){
string str_1 = to_string(num_1);
int length_str_1 = str_1.length();
string str_2 = to_string(num_2);
int length_str_2 = str_2.length();
int count = 0, carr = 0;
while(length_str_1 != 0 || length_str_2 != 0){
int i = 0, j = 0;
if (length_str_1 > 0){
i = str_1[length_str_1 - 1] - '0';
length_str_1--;
}
if (length_str_2 > 0){
j = str_2[length_str_2 - 1] - '0';
length_str_2--;
}
int add = i + j + carr;
if (add >= 10){
carr = 1;
count++;
}
else{
carr = 0;
}
}
return count;
}
int main(){
int num_1 = 234578;
int num_2 = 1234;
int count = carry_add_two_numbers(num_1, num_2);
cout<<"Count of number of carry operations required to add two numbers are: "<<count;
return 0;
}
If we run the above code it will generate the following output −
Count of number of carry operations required to add two numbers are: 2 | [
{
"code": null,
"e": 1278,
"s": 1062,
"text": "We are given two numbers num_1 and num_2. The goal is to count the number of carry operations required if the numbers are added. If numbers are 123 and 157 then carry operations will be 1. (7+3=10, 1+2+5=8, 1+1=2 )."
},
{
"code": null,
"e": 1310,
"s": 1278,
"text": "Let us understand with examples"
},
{
"code": null,
"e": 1338,
"s": 1310,
"text": "Input − num_1=432 num_2=638"
},
{
"code": null,
"e": 1419,
"s": 1338,
"text": "Output − Count of number of carry operations required to add two numbers are − 2"
},
{
"code": null,
"e": 1487,
"s": 1419,
"text": "Explanation − From right to left adding digits and counting carry −"
},
{
"code": null,
"e": 1571,
"s": 1487,
"text": "(2+9=10, carry 1 ) count=1,\n(1+3+3=7, carry 0 ) count=1,\n(4+6=10, carry 1 ) count=2"
},
{
"code": null,
"e": 1600,
"s": 1571,
"text": "Input − num_1=9999 num_2=111"
},
{
"code": null,
"e": 1681,
"s": 1600,
"text": "Output − Count of number of carry operations required to add two numbers are − 4"
},
{
"code": null,
"e": 1749,
"s": 1681,
"text": "Explanation − From right to left adding digits and counting carry −"
},
{
"code": null,
"e": 1863,
"s": 1749,
"text": "(9+1=10, carry 1 ) count=1,\n(1+9+1=11, carry 1 ) count=2,\n(1+9+1=11, carry 1 ) count=3,\n(1+9=10, carry 1) count=4"
},
{
"code": null,
"e": 2112,
"s": 1863,
"text": "We will convert both the numbers into strings. Start traversing strings from the end, convert character to an integer, add both and also previous carry ( 0 for 1st iteration ), if value>10 set carry as 1. If the carry is 1 increment count of carry."
},
{
"code": null,
"e": 2149,
"s": 2112,
"text": "Take two numbers as num_1 and num_2."
},
{
"code": null,
"e": 2186,
"s": 2149,
"text": "Take two numbers as num_1 and num_2."
},
{
"code": null,
"e": 2306,
"s": 2186,
"text": "Function carry_add_two_numbers(num_1, num_2) takes both number and returns count of carry required when both are added."
},
{
"code": null,
"e": 2426,
"s": 2306,
"text": "Function carry_add_two_numbers(num_1, num_2) takes both number and returns count of carry required when both are added."
},
{
"code": null,
"e": 2506,
"s": 2426,
"text": "Convert both numbers to string using to_string(x) and store in str_1 and str_2."
},
{
"code": null,
"e": 2586,
"s": 2506,
"text": "Convert both numbers to string using to_string(x) and store in str_1 and str_2."
},
{
"code": null,
"e": 2664,
"s": 2586,
"text": "Take lengths of both strings using length() as lenght_str_1 and length_str_2."
},
{
"code": null,
"e": 2742,
"s": 2664,
"text": "Take lengths of both strings using length() as lenght_str_1 and length_str_2."
},
{
"code": null,
"e": 2797,
"s": 2742,
"text": "Take initial count as 0 and initial carry also like 0."
},
{
"code": null,
"e": 2852,
"s": 2797,
"text": "Take initial count as 0 and initial carry also like 0."
},
{
"code": null,
"e": 2885,
"s": 2852,
"text": "While both lengths are non-zero."
},
{
"code": null,
"e": 2918,
"s": 2885,
"text": "While both lengths are non-zero."
},
{
"code": null,
"e": 3010,
"s": 2918,
"text": "Keep converting from the last character to integer and store integers in variables i and j."
},
{
"code": null,
"e": 3102,
"s": 3010,
"text": "Keep converting from the last character to integer and store integers in variables i and j."
},
{
"code": null,
"e": 3134,
"s": 3102,
"text": "Reduce lengths of both strings."
},
{
"code": null,
"e": 3166,
"s": 3134,
"text": "Reduce lengths of both strings."
},
{
"code": null,
"e": 3201,
"s": 3166,
"text": "Take variable to add as i+j+carry."
},
{
"code": null,
"e": 3236,
"s": 3201,
"text": "Take variable to add as i+j+carry."
},
{
"code": null,
"e": 3344,
"s": 3236,
"text": "If add>10 then increment count (as it is carried). And set cary=1.Otherwise set carry=0 for next iteration."
},
{
"code": null,
"e": 3452,
"s": 3344,
"text": "If add>10 then increment count (as it is carried). And set cary=1.Otherwise set carry=0 for next iteration."
},
{
"code": null,
"e": 3532,
"s": 3452,
"text": "After the end of all iterations, the count will have a total number of carries."
},
{
"code": null,
"e": 3612,
"s": 3532,
"text": "After the end of all iterations, the count will have a total number of carries."
},
{
"code": null,
"e": 3636,
"s": 3612,
"text": "Return count as result."
},
{
"code": null,
"e": 3660,
"s": 3636,
"text": "Return count as result."
},
{
"code": null,
"e": 3671,
"s": 3660,
"text": " Live Demo"
},
{
"code": null,
"e": 4600,
"s": 3671,
"text": "#include <bits/stdc++.h>\nusing namespace std;\nint carry_add_two_numbers(int num_1, int num_2){\n string str_1 = to_string(num_1);\n int length_str_1 = str_1.length();\n string str_2 = to_string(num_2);\n int length_str_2 = str_2.length();\n int count = 0, carr = 0;\n while(length_str_1 != 0 || length_str_2 != 0){\n int i = 0, j = 0;\n if (length_str_1 > 0){\n i = str_1[length_str_1 - 1] - '0';\n length_str_1--;\n }\n if (length_str_2 > 0){\n j = str_2[length_str_2 - 1] - '0';\n length_str_2--;\n }\n int add = i + j + carr;\n if (add >= 10){\n carr = 1;\n count++;\n }\n else{\n carr = 0;\n }\n }\n return count;\n}\nint main(){\n int num_1 = 234578;\n int num_2 = 1234;\n int count = carry_add_two_numbers(num_1, num_2);\n cout<<\"Count of number of carry operations required to add two numbers are: \"<<count;\n return 0;\n}"
},
{
"code": null,
"e": 4665,
"s": 4600,
"text": "If we run the above code it will generate the following output −"
},
{
"code": null,
"e": 4736,
"s": 4665,
"text": "Count of number of carry operations required to add two numbers are: 2"
}
] |
HTML - <img> Tag | The HTML <img> tag is used to put an image in an HTML document.
<!DOCTYPE html>
<html>
<head>
<title>HTML Tag</title>
</head>
<body>
<img src = "https://www.tutorialspoint.com/images/html.gif"
alt = "HTML Tutorial" height = "150" width = "140" />
</body>
</html>
This will produce the following result −
This tag supports all the global attributes described in − HTML Attribute Reference
The HTML <img> tag also supports the following additional attributes −
This tag supports all the event attributes described in − HTML Events Reference
Bookmark this page | [
{
"code": null,
"e": 2438,
"s": 2374,
"text": "The HTML <img> tag is used to put an image in an HTML document."
},
{
"code": null,
"e": 2674,
"s": 2438,
"text": "<!DOCTYPE html>\n<html>\n\n <head>\n <title>HTML Tag</title>\n </head>\n\n <body>\n <img src = \"https://www.tutorialspoint.com/images/html.gif\"\n alt = \"HTML Tutorial\" height = \"150\" width = \"140\" />\n </body>\n\n</html>"
},
{
"code": null,
"e": 2715,
"s": 2674,
"text": "This will produce the following result −"
},
{
"code": null,
"e": 2799,
"s": 2715,
"text": "This tag supports all the global attributes described in − HTML Attribute Reference"
},
{
"code": null,
"e": 2870,
"s": 2799,
"text": "The HTML <img> tag also supports the following additional attributes −"
},
{
"code": null,
"e": 2950,
"s": 2870,
"text": "This tag supports all the event attributes described in − HTML Events Reference"
},
{
"code": null,
"e": 2983,
"s": 2950,
"text": "\n 19 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 2997,
"s": 2983,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 3032,
"s": 2997,
"text": "\n 16 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 3046,
"s": 3032,
"text": " Anadi Sharma"
},
{
"code": null,
"e": 3081,
"s": 3046,
"text": "\n 18 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 3098,
"s": 3081,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 3133,
"s": 3098,
"text": "\n 57 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3164,
"s": 3133,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3197,
"s": 3164,
"text": "\n 54 Lectures \n 6 hours \n"
},
{
"code": null,
"e": 3228,
"s": 3197,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3263,
"s": 3228,
"text": "\n 45 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3294,
"s": 3263,
"text": " DigiFisk (Programming Is Fun)"
},
{
"code": null,
"e": 3301,
"s": 3294,
"text": " Print"
},
{
"code": null,
"e": 3312,
"s": 3301,
"text": " Add Notes"
}
] |
AWK - Quick Guide | AWK is an interpreted programming language. It is very powerful and specially designed for text processing. Its name is derived from the family names of its authors − Alfred Aho, Peter Weinberger, and Brian Kernighan.
The version of AWK that GNU/Linux distributes is written and maintained by the Free Software Foundation (FSF); it is often referred to as GNU AWK.
Following are the variants of AWK −
AWK − Original AWK from AT & T Laboratory.
AWK − Original AWK from AT & T Laboratory.
NAWK − Newer and improved version of AWK from AT & T Laboratory.
NAWK − Newer and improved version of AWK from AT & T Laboratory.
GAWK − It is GNU AWK. All GNU/Linux distributions ship GAWK. It is fully compatible with AWK and NAWK.
GAWK − It is GNU AWK. All GNU/Linux distributions ship GAWK. It is fully compatible with AWK and NAWK.
A myriad of tasks can be done with AWK. Listed below are just a few of them −
Text processing,
Producing formatted text reports,
Performing arithmetic operations,
Performing string operations, and many more.
This chapter describes how to set up the AWK environment on your GNU/Linux system.
Generally, AWK is available by default on most GNU/Linux distributions. You can use the which command to check whether it is present on your system or not. In case you don’t have AWK, then install it on Debian-based GNU/Linux using the Advanced Package Tool (APT) package manager as follows −
[jerry]$ sudo apt-get update
[jerry]$ sudo apt-get install gawk
Similarly, to install AWK on RPM-based GNU/Linux, use the Yellowdog Updater, Modified (yum) package manager as follows −
[root]# yum install gawk
After installation, ensure that AWK is accessible via command line.
[jerry]$ which awk
On executing the above code, you get the following result −
/usr/bin/awk
As GNU AWK is a part of the GNU project, its source code is available for free download. We have already seen how to install AWK using package manager. Let us now understand how to install AWK from its source code.
The following installation is applicable to any GNU/Linux software, and for most other freely-available programs as well. Here are the installation steps −
Step 1 − Download the source code from an authentic place. The command-line utility wget serves this purpose.
[jerry]$ wget http://ftp.gnu.org/gnu/gawk/gawk-4.1.1.tar.xz
Step 2 − Decompress and extract the downloaded source code.
[jerry]$ tar xvf gawk-4.1.1.tar.xz
Step 3 − Change into the directory and run configure.
[jerry]$ ./configure
Step 4 − Upon successful completion, the configure generates Makefile. To compile the source code, issue a make command.
[jerry]$ make
Step 5 − You can run the test suite to ensure the build is clean. This is an optional step.
[jerry]$ make check
Step 6 − Finally, install AWK. Make sure you have super-user privileges.
[jerry]$ sudo make install
That is it! You have successfully compiled and installed AWK. Verify it by executing the awk command as follows −
[jerry]$ which awk
On executing this code, you get the following result −
/usr/bin/awk
To become an expert AWK programmer, you need to know its internals. AWK follows a simple workflow − Read, Execute, and Repeat. The following diagram depicts the workflow of AWK −
AWK reads a line from the input stream (file, pipe, or stdin) and stores it in memory.
All AWK commands are applied sequentially on the input. By default, AWK executes commands on every line. We can restrict this by providing patterns.
This process repeats until the file reaches its end.
Let us now understand the program structure of AWK.
The syntax of the BEGIN block is as follows −
Syntax
BEGIN {awk-commands}
The BEGIN block gets executed at program start-up. It executes only once. This is a good place to initialize variables. BEGIN is an AWK keyword and hence it must be in upper-case. Please note that this block is optional.
The syntax of the body block is as follows −
Syntax
/pattern/ {awk-commands}
The body block applies AWK commands on every input line. By default, AWK executes commands on every line. We can restrict this by providing patterns. Note that there are no keywords for the Body block.
The syntax of the END block is as follows −
Syntax
END {awk-commands}
The END block executes at the end of the program. END is an AWK keyword and hence it must be in upper-case. Please note that this block is optional.
Let us create a file marks.txt which contains the serial number, name of the student, subject name, and number of marks obtained.
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
Let us now display the file contents with header by using AWK script.
Example
[jerry]$ awk 'BEGIN{printf "Sr No\tName\tSub\tMarks\n"} {print}' marks.txt
When this code is executed, it produces the following result −
Output
Sr No Name Sub Marks
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
At the start, AWK prints the header from the BEGIN block. Then in the body block, it reads a line from the file and executes AWK's print command, which just prints the contents on the standard output stream. This process repeats until the file reaches its end.
AWK is simple to use. We can provide AWK commands either directly from the command line or in the form of a text file containing AWK commands.
We can specify an AWK command within single quotes at command line as shown −
awk [options] file ...
Consider a text file marks.txt with the following content −
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
Let us display the complete content of the file using AWK as follows −
Example
[jerry]$ awk '{print}' marks.txt
On executing this code, you get the following result −
Output
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
We can provide AWK commands in a script file as shown −
awk [options] -f file ....
First, create a text file command.awk containing the AWK command as shown below −
{print}
Now we can instruct the AWK to read commands from the text file and perform the action. Here, we achieve the same result as shown in the above example.
Example
[jerry]$ awk -f command.awk marks.txt
On executing this code, you get the following result −
Output
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
AWK supports the following standard options which can be provided from the command line.
This option assigns a value to a variable. It allows assignment before the program execution. The following example describes the usage of the -v option.
Example
[jerry]$ awk -v name=Jerry 'BEGIN{printf "Name = %s\n", name}'
On executing this code, you get the following result −
Output
Name = Jerry
It prints a sorted list of global variables and their final values to file. The default file is awkvars.out.
Example
[jerry]$ awk --dump-variables ''
[jerry]$ cat awkvars.out
On executing the above code, you get the following result −
Output
ARGC: 1
ARGIND: 0
ARGV: array, 1 elements
BINMODE: 0
CONVFMT: "%.6g"
ERRNO: ""
FIELDWIDTHS: ""
FILENAME: ""
FNR: 0
FPAT: "[^[:space:]]+"
FS: " "
IGNORECASE: 0
LINT: 0
NF: 0
NR: 0
OFMT: "%.6g"
OFS: " "
ORS: "\n"
RLENGTH: 0
RS: "\n"
RSTART: 0
RT: ""
SUBSEP: "\034"
TEXTDOMAIN: "messages"
This option prints the help message on standard output.
Example
[jerry]$ awk --help
On executing this code, you get the following result −
Output
Usage: awk [POSIX or GNU style options] -f progfile [--] file ...
Usage: awk [POSIX or GNU style options] [--] 'program' file ...
POSIX options : GNU long options: (standard)
-f progfile --file=progfile
-F fs --field-separator=fs
-v var=val --assign=var=val
Short options : GNU long options: (extensions)
-b --characters-as-bytes
-c --traditional
-C --copyright
-d[file] --dump-variables[=file]
-e 'program-text' --source='program-text'
-E file --exec=file
-g --gen-pot
-h --help
-L [fatal] --lint[=fatal]
-n --non-decimal-data
-N --use-lc-numeric
-O --optimize
-p[file] --profile[=file]
-P --posix
-r --re-interval
-S --sandbox
-t --lint-old
-V --version
This option enables checking of non-portable or dubious constructs. When an argument fatal is provided, it treats warning messages as errors. The following example demonstrates this −
Example
[jerry]$ awk --lint '' /bin/ls
On executing this code, you get the following result −
Output
awk: cmd. line:1: warning: empty program text on command line
awk: cmd. line:1: warning: source file does not end in newline
awk: warning: no program text at all!
This option turns on strict POSIX compatibility, in which all common and gawk-specific extensions are disabled.
This option generates a pretty-printed version of the program in file. Default file is awkprof.out. Below simple example illustrates this −
Example
[jerry]$ awk --profile 'BEGIN{printf"---|Header|--\n"} {print}
END{printf"---|Footer|---\n"}' marks.txt > /dev/null
[jerry]$ cat awkprof.out
On executing this code, you get the following result −
Output
# gawk profile, created Sun Oct 26 19:50:48 2014
# BEGIN block(s)
BEGIN {
printf "---|Header|--\n"
}
# Rule(s) {
print $0
}
# END block(s)
END {
printf "---|Footer|---\n"
}
This option disables all gawk-specific extensions.
This option displays the version information of the AWK program.
Example
[jerry]$ awk --version
When this code is executed, it produces the following result −
Output
GNU Awk 4.0.1
Copyright (C) 1989, 1991-2012 Free Software Foundation.
This chapter describes several useful AWK commands and their appropriate examples. Consider a text file marks.txt to be processed with the following content −
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
You can instruct AWK to print only certain columns from the input field. The following example demonstrates this −
[jerry]$ awk '{print $3 "\t" $4}' marks.txt
On executing this code, you get the following result −
Physics 80
Maths 90
Biology 87
English 85
History 89
In the file marks.txt, the third column contains the subject name and the fourth column contains the marks obtained in a particular subject. Let us print these two columns using AWK print command. In the above example, $3 and $4 represent the third and the fourth fields respectively from the input record.
By default, AWK prints all the lines that match pattern.
[jerry]$ awk '/a/ {print $0}' marks.txt
On executing this code, you get the following result −
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
In the above example, we are searching for the pattern a. When a pattern match succeeds, it executes a command from the body block. In the absence of a body block, the default action is taken, which is to print the record. Hence, the following command produces the same result −
[jerry]$ awk '/a/' marks.txt
When a pattern match succeeds, AWK prints the entire record by default. But you can instruct AWK to print only certain fields. For instance, the following example prints the third and fourth field when a pattern match succeeds.
[jerry]$ awk '/a/ {print $3 "\t" $4}' marks.txt
On executing this code, you get the following result −
Maths 90
Biology 87
English 85
History 89
You can print columns in any order. For instance, the following example prints the fourth column followed by the third column.
[jerry]$ awk '/a/ {print $4 "\t" $3}' marks.txt
On executing the above code, you get the following result −
90 Maths
87 Biology
85 English
89 History
Let us see an example where you can count and print the number of lines for which a pattern match succeeded.
[jerry]$ awk '/a/{++cnt} END {print "Count = ", cnt}' marks.txt
On executing this code, you get the following result −
Count = 4
In this example, we increment the value of counter when a pattern match succeeds and we print this value in the END block. Note that unlike other programming languages, there is no need to declare a variable before using it.
Let us print only those lines that contain more than 18 characters.
[jerry]$ awk 'length($0) > 18' marks.txt
On executing this code, you get the following result −
3) Shyam Biology 87
4) Kedar English 85
AWK provides a built-in length function that returns the length of the string. $0 variable stores the entire line and in the absence of a body block, default action is taken, i.e., the print action. Hence, if a line has more than 18 characters, then the comparison results true and the line gets printed.
AWK provides several built-in variables. They play an important role while writing AWK scripts. This chapter demonstrates the usage of built-in variables.
The standard AWK variables are discussed below.
It implies the number of arguments provided at the command line.
Example
[jerry]$ awk 'BEGIN {print "Arguments =", ARGC}' One Two Three Four
On executing this code, you get the following result −
Output
Arguments = 5
But why does AWK show 5 when you passed only 4 arguments? It is because ARGC also counts the program name (awk) as the first argument. Just check the following example to clear your doubt.
It is an array that stores the command-line arguments. The array's valid index ranges from 0 to ARGC-1.
Example
[jerry]$ awk 'BEGIN {
for (i = 0; i < ARGC - 1; ++i) {
printf "ARGV[%d] = %s\n", i, ARGV[i]
}
}' one two three four
On executing this code, you get the following result −
Output
ARGV[0] = awk
ARGV[1] = one
ARGV[2] = two
ARGV[3] = three
It represents the conversion format for numbers. Its default value is %.6g.
Example
[jerry]$ awk 'BEGIN { print "Conversion Format =", CONVFMT }'
On executing this code, you get the following result −
Output
Conversion Format = %.6g
It is an associative array of environment variables.
Example
[jerry]$ awk 'BEGIN { print ENVIRON["USER"] }'
On executing this code, you get the following result −
Output
jerry
To find names of other environment variables, use env command.
It represents the current file name.
Example
[jerry]$ awk 'END {print FILENAME}' marks.txt
On executing this code, you get the following result −
Output
marks.txt
Please note that FILENAME is undefined in the BEGIN block.
It represents the (input) field separator and its default value is space. You can also change this by using the -F command line option.
Example
[jerry]$ awk 'BEGIN {print "FS = " FS}' | cat -vte
On executing this code, you get the following result −
Output
FS = $
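As a quick additional sketch, FS can also be set from the command line with the -F option; the colon-separated input string here is just an assumed example −
[jerry]$ echo "one:two:three" | awk -F ':' '{ print $2 }'
On executing this code, you get the following result −
two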
It represents the number of fields in the current record. For instance, the following example prints only those lines that contain more than two fields.
Example
[jerry]$ echo -e "One Two\nOne Two Three\nOne Two Three Four" | awk 'NF > 2'
On executing this code, you get the following result −
Output
One Two Three
One Two Three Four
It represents the number of the current record. For instance, the following example prints the record if the current record number is less than three.
Example
[jerry]$ echo -e "One Two\nOne Two Three\nOne Two Three Four" | awk 'NR < 3'
On executing this code, you get the following result −
Output
One Two
One Two Three
It is similar to NR, but relative to the current file. It is useful when AWK is operating on multiple files. The value of FNR resets with each new file.
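As a minimal sketch (file1.txt and file2.txt are hypothetical files assumed to contain a few lines each), the difference between NR and FNR can be observed as follows −
[jerry]$ awk '{ print FILENAME, "NR =", NR, "FNR =", FNR }' file1.txt file2.txt
Here NR keeps increasing across both files, while FNR restarts from 1 when the second file begins.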
It represents the output format number and its default value is %.6g.
Example
[jerry]$ awk 'BEGIN {print "OFMT = " OFMT}'
On executing this code, you get the following result −
Output
OFMT = %.6g
It represents the output field separator and its default value is space.
Example
[jerry]$ awk 'BEGIN {print "OFS = " OFS}' | cat -vte
On executing this code, you get the following result −
Output
OFS = $
It represents the output record separator and its default value is newline.
Example
[jerry]$ awk 'BEGIN {print "ORS = " ORS}' | cat -vte
On executing the above code, you get the following result −
Output
ORS = $
$
It represents the length of the string matched by match function. AWK's match function searches for a given string in the input-string.
Example
[jerry]$ awk 'BEGIN { if (match("One Two Three", "re")) { print RLENGTH } }'
On executing this code, you get the following result −
Output
2
It represents (input) record separator and its default value is newline.
Example
[jerry]$ awk 'BEGIN {print "RS = " RS}' | cat -vte
On executing this code, you get the following result −
Output
RS = $
$
It represents the first position in the string matched by match function.
Example
[jerry]$ awk 'BEGIN { if (match("One Two Three", "Thre")) { print RSTART } }'
On executing this code, you get the following result −
Output
9
It represents the separator character for array subscripts and its default value is \034.
Example
[jerry]$ awk 'BEGIN { print "SUBSEP = " SUBSEP }' | cat -vte
On executing this code, you get the following result −
Output
SUBSEP = ^\$
It represents the entire input record.
Example
[jerry]$ awk '{print $0}' marks.txt
On executing this code, you get the following result −
Output
1) Amit Physics 80
2) Rahul Maths 90
3) Shyam Biology 87
4) Kedar English 85
5) Hari History 89
It represents the nth field in the current record where the fields are separated by FS.
Example
[jerry]$ awk '{print $3 "\t" $4}' marks.txt
On executing this code, you get the following result −
Output
Physics 80
Maths 90
Biology 87
English 85
History 89
GNU AWK specific variables are as follows −
It represents the index in ARGV of the current file being processed.
Example
[jerry]$ awk '{
print "ARGIND = ", ARGIND; print "Filename = ", ARGV[ARGIND]
}' junk1 junk2 junk3
On executing this code, you get the following result −
Output
ARGIND = 1
Filename = junk1
ARGIND = 2
Filename = junk2
ARGIND = 3
Filename = junk3
It is used to specify binary mode for all file I/O on non-POSIX systems. Numeric values of 1, 2, or 3 specify that input files, output files, or all files, respectively, should use binary I/O. String values of r or w specify that input files or output files, respectively, should use binary I/O. String values of rw or wr specify that all files should use binary I/O.
A string that indicates an error when a redirection fails for getline or if a close call fails.
Example
[jerry]$ awk 'BEGIN { ret = getline < "junk.txt"; if (ret == -1) print "Error:", ERRNO }'
On executing this code, you get the following result −
Output
Error: No such file or directory
When this variable is set to a space-separated list of field widths, GAWK parses the input into fields of fixed width, instead of using the value of the FS variable as the field separator.
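The following sketch is an assumed illustration (GNU AWK only): the record is split into two fixed-width fields of five characters each −
[jerry]$ echo "ABCDE12345" | awk 'BEGIN { FIELDWIDTHS = "5 5" } { print $1; print $2 }'
On executing this code, you get the following result −
ABCDE
12345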
When this variable is set, GAWK becomes case-insensitive. The following example demonstrates this −
Example
[jerry]$ awk 'BEGIN{IGNORECASE = 1} /amit/' marks.txt
On executing this code, you get the following result −
Output
1) Amit Physics 80
It provides dynamic control of the --lint option from the GAWK program. When this variable is set, GAWK prints lint warnings. When assigned the string value fatal, lint warnings become fatal errors, exactly like --lint=fatal.
Example
[jerry]$ awk 'BEGIN {LINT = 1; a}'
On executing this code, you get the following result −
Output
awk: cmd. line:1: warning: reference to uninitialized variable `a'
awk: cmd. line:1: warning: statement has no effect
This is an associative array containing information about the process, such as real and effective UID numbers, process ID number, and so on.
Example
[jerry]$ awk 'BEGIN { print PROCINFO["pid"] }'
On executing this code, you get the following result −
Output
4316
It represents the text domain of the AWK program. It is used to find the localized translations for the program's strings.
Example
[jerry]$ awk 'BEGIN { print TEXTDOMAIN }'
On executing this code, you get the following result −
Output
messages
The above output shows English text due to the en_IN locale.
Like other programming languages, AWK also provides a large set of operators. This chapter explains AWK operators with suitable examples.
AWK supports the following arithmetic operators.
AWK supports the following increment and decrement operators.
AWK supports the following assignment operators.
AWK supports the following relational operators.
AWK supports the following logical operators.
We can easily implement a conditional expression using the ternary operator.
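For instance, the following one-liner (an illustrative sketch with assumed values) picks the larger of two numbers with the ternary operator −
[jerry]$ awk 'BEGIN { a = 10; b = 20; max = (a > b) ? a : b; print "Max =", max }'
On executing this code, you get the following result −
Max = 20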
AWK supports the following unary operators.
There are two formats of exponential operators.
Space is a string concatenation operator that merges two strings.
It is represented by in. It is used while accessing array elements.
The match operator (~) and the not-match operator (!~) are the two forms of regular expression operators.
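A minimal sketch using the marks.txt file from the earlier chapters: the first command prints the lines whose record matches Maths, while the second prints the lines that do not −
[jerry]$ awk '$0 ~ /Maths/' marks.txt
[jerry]$ awk '$0 !~ /Maths/' marks.txt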
AWK is very powerful and efficient in handling regular expressions. A number of complex tasks can be solved with simple regular expressions. Any command-line expert knows the power of regular expressions.
This chapter covers standard regular expressions with suitable examples.
It matches any single character except the end of line character. For instance, the following example matches fin, fun, fan etc.
[jerry]$ echo -e "cat\nbat\nfun\nfin\nfan" | awk '/f.n/'
On executing the above code, you get the following result −
fun
fin
fan
It matches the start of line. For instance, the following example prints all the lines that start with pattern The.
[jerry]$ echo -e "This\nThat\nThere\nTheir\nthese" | awk '/^The/'
On executing this code, you get the following result −
There
Their
It matches the end of line. For instance, the following example prints the lines that end with the letter n.
[jerry]$ echo -e "knife\nknow\nfun\nfin\nfan\nnine" | awk '/n$/'
On executing this code, you get the following result −
fun
fin
fan
It is used to match only one out of several characters. For instance, the following example matches pattern Call and Tall but not Ball.
[jerry]$ echo -e "Call\nTall\nBall" | awk '/[CT]all/'
On executing this code, you get the following result −
Call
Tall
In an exclusive set, the caret negates the set of characters in the square brackets. For instance, the following example prints only Ball.
[jerry]$ echo -e "Call\nTall\nBall" | awk '/[^CT]all/'
On executing this code, you get the following result −
Ball
A vertical bar allows regular expressions to be logically ORed. For instance, the following example prints Ball and Call.
[jerry]$ echo -e "Call\nTall\nBall\nSmall\nShall" | awk '/Call|Ball/'
On executing this code, you get the following result −
Call
Ball
It matches zero or one occurrence of the preceding character. For instance, the following example matches Colour as well as Color. We have made u as an optional character by using ?.
[jerry]$ echo -e "Colour\nColor" | awk '/Colou?r/'
On executing this code, you get the following result −
Colour
Color
It matches zero or more occurrences of the preceding character. For instance, the following example matches ca, cat, catt, and so on.
[jerry]$ echo -e "ca\ncat\ncatt" | awk '/cat*/'
On executing this code, you get the following result −
ca
cat
catt
It matches one or more occurrences of the preceding character. For instance, the following example matches lines containing one or more occurrences of 2.
[jerry]$ echo -e "111\n22\n123\n234\n456\n222" | awk '/2+/'
On executing the above code, you get the following result −
22
123
234
222
Parentheses () are used for grouping and the character | is used for alternatives. For instance, the following regular expression matches the lines containing either Apple Juice or Apple Cake.
[jerry]$ echo -e "Apple Juice\nApple Pie\nApple Tart\nApple Cake" | awk
'/Apple (Juice|Cake)/'
On executing this code, you get the following result −
Apple Juice
Apple Cake
AWK has associative arrays, and one of the best things about them is that the indexes need not be a continuous set of numbers; you can use either a string or a number as an array index. Also, there is no need to declare the size of an array in advance; arrays can expand or shrink at runtime.
Its syntax is as follows −
array_name[index] = value
Where array_name is the name of the array, index is the array index, and value is the value assigned to that element of the array.
To gain more insight on array, let us create and access the elements of an array.
[jerry]$ awk 'BEGIN {
fruits["mango"] = "yellow";
fruits["orange"] = "orange"
print fruits["orange"] "\n" fruits["mango"]
}'
On executing this code, you get the following result −
orange
yellow
In the above example, we declare an array named fruits whose index is the fruit name and whose value is the color of the fruit. To access array elements, we use the array_name[index] format.
For insertion, we used assignment operator. Similarly, we can use delete statement to remove an element from the array. The syntax of delete statement is as follows −
delete array_name[index]
The following example deletes the element orange. Hence the command does not show any output.
[jerry]$ awk 'BEGIN {
fruits["mango"] = "yellow";
fruits["orange"] = "orange";
delete fruits["orange"];
print fruits["orange"]
}'
AWK only supports one-dimensional arrays. But you can easily simulate a multi-dimensional array using the one-dimensional array itself.
For instance, given below is a 3x3 two-dimensional array −
100 200 300
400 500 600
700 800 900
In the above example, array[0][0] stores 100, array[0][1] stores 200, and so on. To store 100 at array location [0][0], we can use the following syntax −
array["0,0"] = 100
Though we gave 0,0 as index, these are not two indexes. In reality, it is just one index with the string 0,0.
The following example simulates a 2-D array −
[jerry]$ awk 'BEGIN {
array["0,0"] = 100;
array["0,1"] = 200;
array["0,2"] = 300;
array["1,0"] = 400;
array["1,1"] = 500;
array["1,2"] = 600;
# print array elements
print "array[0,0] = " array["0,0"];
print "array[0,1] = " array["0,1"];
print "array[0,2] = " array["0,2"];
print "array[1,0] = " array["1,0"];
print "array[1,1] = " array["1,1"];
print "array[1,2] = " array["1,2"];
}'
On executing this code, you get the following result −
array[0,0] = 100
array[0,1] = 200
array[0,2] = 300
array[1,0] = 400
array[1,1] = 500
array[1,2] = 600
You can also perform a variety of operations on an array, such as sorting its elements or indexes. For that purpose, you can use the asort and asorti functions.
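A brief sketch of asort (a GNU AWK function that sorts the values and re-indexes them from 1), with assumed array contents, is given below −
[jerry]$ awk 'BEGIN {
   arr[1] = "banana"; arr[2] = "apple"; arr[3] = "cherry"
   n = asort(arr)
   for (i = 1; i <= n; ++i) print arr[i]
}'
On executing this code, you get the following result −
apple
banana
cherry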
Like other programming languages, AWK provides conditional statements to control the flow of a program. This chapter explains AWK's control statements with suitable examples.
It simply tests the condition and performs certain actions depending upon the condition. Given below is the syntax of if statement −
if (condition)
action
We can also use a pair of curly braces as given below to execute multiple actions −
if (condition) {
action-1
action-2
.
.
action-n
}
For instance, the following example checks whether a number is even or not −
[jerry]$ awk 'BEGIN {num = 10; if (num % 2 == 0) printf "%d is even number.\n", num }'
On executing the above code, you get the following result −
10 is even number.
In if-else syntax, we can provide a list of actions to be performed when a condition becomes false.
The syntax of if-else statement is as follows −
if (condition)
action-1
else
action-2
In the above syntax, action-1 is performed when the condition evaluates to true and action-2 is performed when the condition evaluates to false. For instance, the following example checks whether a number is even or not −
[jerry]$ awk 'BEGIN {
num = 11; if (num % 2 == 0) printf "%d is even number.\n", num;
else printf "%d is odd number.\n", num
}'
On executing this code, you get the following result −
11 is odd number.
We can easily create an if-else-if ladder by using multiple if-else statements. The following example demonstrates this −
[jerry]$ awk 'BEGIN {
a = 30;
if (a==10)
print "a = 10";
else if (a == 20)
print "a = 20";
else if (a == 30)
print "a = 30";
}'
On executing this code, you get the following result −
a = 30
This chapter explains AWK's loops with suitable example. Loops are used to execute a set of actions in a repeated manner. The loop execution continues as long as the loop condition is true.
The syntax of for loop is −
for (initialization; condition; increment/decrement)
action
Initially, the for statement performs initialization action, then it checks the condition. If the condition is true, it executes actions, thereafter it performs increment or decrement operation. The loop execution continues as long as the condition is true. For instance, the following example prints 1 to 5 using for loop −
[jerry]$ awk 'BEGIN { for (i = 1; i <= 5; ++i) print i }'
On executing this code, you get the following result −
1
2
3
4
5
The while loop keeps executing the action as long as a particular logical condition evaluates to true. Here is the syntax of the while loop −
while (condition)
action
AWK first checks the condition; if the condition is true, it executes the action. This process repeats as long as the loop condition evaluates to true. For instance, the following example prints 1 to 5 using while loop −
[jerry]$ awk 'BEGIN {i = 1; while (i < 6) { print i; ++i } }'
On executing this code, you get the following result −
1
2
3
4
5
The do-while loop is similar to the while loop, except that the test condition is evaluated at the end of the loop. Here is the syntax of the do-while loop −
do
action
while (condition)
In a do-while loop, the action statement gets executed at least once even when the condition statement evaluates to false. For instance, the following example prints the numbers 1 to 5 using a do-while loop −
[jerry]$ awk 'BEGIN {i = 1; do { print i; ++i } while (i < 6) }'
On executing this code, you get the following result −
1
2
3
4
5
As its name suggests, it is used to end the loop execution. Here is an example which ends the loop when the sum becomes greater than 50.
[jerry]$ awk 'BEGIN {
sum = 0; for (i = 0; i < 20; ++i) {
sum += i; if (sum > 50) break; else print "Sum =", sum
}
}'
On executing this code, you get the following result −
Sum = 0
Sum = 1
Sum = 3
Sum = 6
Sum = 10
Sum = 15
Sum = 21
Sum = 28
Sum = 36
Sum = 45
The continue statement is used inside a loop to skip to the next iteration of the loop. It is useful when you wish to skip the processing of some data inside the loop. For instance, the following example uses continue statement to print the even numbers between 1 to 20.
[jerry]$ awk 'BEGIN {
for (i = 1; i <= 20; ++i) {
if (i % 2 == 0) print i ; else continue
}
}'
On executing this code, you get the following result −
2
4
6
8
10
12
14
16
18
20
It is used to stop the execution of the script. It accepts an integer as an argument, which is the exit status code for the AWK process. If no argument is supplied, exit returns status zero. Here is an example that stops the execution when the sum becomes greater than 50.
[jerry]$ awk 'BEGIN {
sum = 0; for (i = 0; i < 20; ++i) {
sum += i; if (sum > 50) exit(10); else print "Sum =", sum
}
}'
On executing this code, you get the following result −
Sum = 0
Sum = 1
Sum = 3
Sum = 6
Sum = 10
Sum = 15
Sum = 21
Sum = 28
Sum = 36
Sum = 45
Let us check the return status of the script.
[jerry]$ echo $?
On executing this code, you get the following result −
10
AWK has a number of functions built into it that are always available to the programmer. This chapter describes Arithmetic, String, Time, Bit manipulation, and other miscellaneous functions with suitable examples.
AWK has the following built-in arithmetic functions.
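As a quick illustrative sketch (not an exhaustive list), two of them, sqrt and int, are shown below −
[jerry]$ awk 'BEGIN { print "sqrt(16) =", sqrt(16); print "int(8.7) =", int(8.7) }'
On executing this code, you get the following result −
sqrt(16) = 4
int(8.7) = 8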
AWK has the following built-in String functions.
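As an illustrative sketch, a few of the string functions (length, toupper, substr, and index) are shown below with an assumed input string −
[jerry]$ awk 'BEGIN {
   str = "Hello, World"
   print "Length =", length(str)
   print "Upper =", toupper(str)
   print "Substr =", substr(str, 1, 5)
   print "Index =", index(str, "World")
}'
On executing this code, you get the following result −
Length = 12
Upper = HELLO, WORLD
Substr = Hello
Index = 8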
AWK has the following built-in time functions.
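As an illustrative sketch (the exact output depends on the current date and time), two of GNU AWK's time functions, systime and strftime, can be used as follows −
[jerry]$ awk 'BEGIN { now = systime(); print "Epoch seconds =", now; print "Formatted =", strftime("%Y-%m-%d %H:%M:%S", now) }'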
AWK has the following built-in bit manipulation functions.
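For illustration, here is a minimal sketch using some of GNU AWK's bit manipulation functions (and, or, xor, and lshift) with assumed operands −
[jerry]$ awk 'BEGIN { print "AND =", and(6, 3); print "OR =", or(6, 3); print "XOR =", xor(6, 3); print "LSHIFT =", lshift(1, 4) }'
On executing this code, you get the following result −
AND = 2
OR = 7
XOR = 5
LSHIFT = 16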
AWK has the following miscellaneous functions.
Functions are basic building blocks of a program. AWK allows us to define our own functions. A large program can be divided into functions and each function can be written/tested independently. It provides re-usability of code.
Given below is the general format of a user-defined function −
function function_name(argument1, argument2, ...) {
function body
}
In this syntax, the function_name is the name of the user-defined function. The function name should begin with a letter and the rest of the characters can be any combination of numbers, alphabetic characters, or underscores. AWK's reserved words cannot be used as function names.
Functions can accept multiple arguments separated by comma. Arguments are not mandatory. You can also create a user-defined function without any argument.
The function body consists of one or more AWK statements.
Let us write two functions that calculate the minimum and the maximum number and call these functions from another function called main. The functions.awk file contains −
# Returns minimum number
function find_min(num1, num2){
if (num1 < num2)
return num1
return num2
}
# Returns maximum number
function find_max(num1, num2){
if (num1 > num2)
return num1
return num2
}
# Main function
function main(num1, num2){
# Find minimum number
result = find_min(10, 20)
print "Minimum =", result
# Find maximum number
result = find_max(10, 20)
print "Maximum =", result
}
# Script execution starts here
BEGIN {
main(10, 20)
}
On executing this code, you get the following result −
Minimum = 10
Maximum = 20
So far, we displayed data on standard output stream. We can also redirect data to a file. A redirection appears after the print or printf statement. Redirections in AWK are written just like redirection in shell commands, except that they are written inside the AWK program. This chapter explains redirection with suitable examples.
The syntax of the redirection operator is −
print DATA > output-file
It writes the data into the output-file. If the output-file does not exist, then it creates one. When this type of redirection is used, the output-file is erased before the first output is written to it. Subsequent write operations to the same output-file do not erase the output-file, but append to it. For instance, the following example writes Hello, World !!! to the file.
Let us create a file with some text data.
[jerry]$ echo "Old data" > /tmp/message.txt
[jerry]$ cat /tmp/message.txt
On executing this code, you get the following result −
Old data
Now let us redirect some contents into it using AWK's redirection operator.
[jerry]$ awk 'BEGIN { print "Hello, World !!!" > "/tmp/message.txt" }'
[jerry]$ cat /tmp/message.txt
On executing this code, you get the following result −
Hello, World !!!
The syntax of append operator is as follows −
print DATA >> output-file
It appends the data into the output-file. If the output-file does not exist, then it creates one. When this type of redirection is used, new contents are appended at the end of file. For instance, the following example appends Hello, World !!! to the file.
Let us create a file with some text data.
[jerry]$ echo "Old data" > /tmp/message.txt
[jerry]$ cat /tmp/message.txt
On executing this code, you get the following result −
Old data
Now let us append some contents to it using AWK's append operator.
[jerry]$ awk 'BEGIN { print "Hello, World !!!" >> "/tmp/message.txt" }'
[jerry]$ cat /tmp/message.txt
On executing this code, you get the following result −
Old data
Hello, World !!!
It is possible to send output to another program through a pipe instead of using a file. This redirection opens a pipe to command, and writes the values of items through this pipe to another process to execute the command. The redirection argument command is actually an AWK expression. Here is the syntax of pipe −
print items | command
Let us use tr command to convert lowercase letters to uppercase.
[jerry]$ awk 'BEGIN { print "hello, world !!!" | "tr [a-z] [A-Z]" }'
On executing this code, you get the following result −
HELLO, WORLD !!!
AWK can communicate to an external process using |&, which is two-way communication. For instance, the following example uses tr command to convert lowercase letters to uppercase. Our command.awk file contains −
BEGIN {
cmd = "tr [a-z] [A-Z]"
print "hello, world !!!" |& cmd
close(cmd, "to")
cmd |& getline out
print out;
close(cmd);
}
On executing this code, you get the following result −
HELLO, WORLD !!!
Does the script look cryptic? Let us demystify it.
The first statement, cmd = "tr [a-z] [A-Z]", is the command to which we establish the two-way communication from AWK.
The first statement, cmd = "tr [a-z] [A-Z]", is the command to which we establish the two-way communication from AWK.
The next statement, i.e., the print command, provides input to the tr command. Here |& indicates two-way communication.
The next statement, i.e., the print command, provides input to the tr command. Here |& indicates two-way communication.
The third statement, i.e., close(cmd, "to"), closes the to end of the two-way pipe after completing its execution.
The third statement, i.e., close(cmd, "to"), closes the to end of the two-way pipe after completing its execution.
The next statement cmd |& getline out stores the output into out variable with the aid of getline function.
The next statement cmd |& getline out stores the output into out variable with the aid of getline function.
The next print statement prints the output and finally the close function closes the command.
The next print statement prints the output and finally the close function closes the command.
So far we have used AWK's print and printf functions to display data on standard output. But printf is much more powerful than what we have seen before. This function is borrowed from the C language and is very helpful while producing formatted output. Below is the syntax of the printf statement −
printf fmt, expr-list
In the above syntax fmt is a string of format specifications and constants. expr-list is a list of arguments corresponding to format specifiers.
Similar to any string, format can contain embedded escape sequences. Discussed below are the escape sequences supported by AWK −
The following example prints Hello and World in separate lines using newline character −
Example
[jerry]$ awk 'BEGIN { printf "Hello\nWorld\n" }'
On executing this code, you get the following result −
Output
Hello
World
The following example uses horizontal tab to display different field −
Example
[jerry]$ awk 'BEGIN { printf "Sr No\tName\tSub\tMarks\n" }'
On executing the above code, you get the following result −
Output
Sr No Name Sub Marks
The following example uses a vertical tab after each field −
Example
[jerry]$ awk 'BEGIN { printf "Sr No\vName\vSub\vMarks\n" }'
On executing this code, you get the following result −
Output
Sr No
Name
Sub
Marks
The following example prints a backspace after every field except the last one. It erases the last number from the first three fields. For instance, Field 1 is displayed as Field, because the last character is erased with backspace. However, the last field Field 4 is displayed as it is, as we did not have a \b after Field 4.
Example
[jerry]$ awk 'BEGIN { printf "Field 1\bField 2\bField 3\bField 4\n" }'
On executing this code, you get the following result −
Output
Field Field Field Field 4
In the following example, after printing every field, we do a Carriage Return and print the next value on top of the current printed value. It means, in the final output, you can see only Field 4, as it was the last thing to be printed on top of all the previous fields.
Example
[jerry]$ awk 'BEGIN { printf "Field 1\rField 2\rField 3\rField 4\n" }'
On executing this code, you get the following result −
Output
Field 4
The following example uses form feed after printing each field.
Example
[jerry]$ awk 'BEGIN { printf "Sr No\fName\fSub\fMarks\n" }'
On executing this code, you get the following result −
Output
Sr No
Name
Sub
Marks
As in C-language, AWK also has format specifiers. The AWK version of the printf statement accepts the following conversion specification formats −
It prints a single character. If the argument used for %c is numeric, it is treated as a character and printed. Otherwise, the argument is assumed to be a string, and only the first character of that string is printed.
Example
[jerry]$ awk 'BEGIN { printf "ASCII value 65 = character %c\n", 65 }'
On executing this code, you get the following result −
Output
ASCII value 65 = character A
It prints only the integer part of a decimal number.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %d\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 80
It prints a floating point number of the form [-]d.dddddde[+-]dd.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %E\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 8.066000e+01
The %E format uses E instead of e.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %e\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 8.066000E+01
It prints a floating point number of the form [-]ddd.dddddd.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %f\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 80.660000
Uses %e or %f conversion, whichever is shorter, with non-significant zeros suppressed.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %g\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 80.66
The %G format uses %E instead of %e.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %G\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 80.66
It prints an unsigned octal number.
Example
[jerry]$ awk 'BEGIN { printf "Octal representation of decimal number 10 = %o\n", 10}'
On executing this code, you get the following result −
Output
Octal representation of decimal number 10 = 12
It prints an unsigned decimal number.
Example
[jerry]$ awk 'BEGIN { printf "Unsigned 10 = %u\n", 10 }'
On executing this code, you get the following result −
Output
Unsigned 10 = 10
It prints a character string.
Example
[jerry]$ awk 'BEGIN { printf "Name = %s\n", "Sherlock Holmes" }'
On executing this code, you get the following result −
Output
Name = Sherlock Holmes
It prints an unsigned hexadecimal number. The %X format uses uppercase letters instead of lowercase.
Example
[jerry]$ awk 'BEGIN {
printf "Hexadecimal representation of decimal number 15 = %x\n", 15
}'
On executing this code, you get the following result −
Output
Hexadecimal representation of decimal number 15 = f
Now let us use %X and observe the result −
Example
[jerry]$ awk 'BEGIN {
printf "Hexadecimal representation of decimal number 15 = %X\n", 15
}'
On executing this code, you get the following result −
Output
Hexadecimal representation of decimal number 15 = F
It prints a single % character and no argument is converted.
Example
[jerry]$ awk 'BEGIN { printf "Percentags = %d%%\n", 80.66 }'
On executing this code, you get the following result −
Output
Percentags = 80%
With %, we can use the following optional parameters −
The field is padded to the width. By default, the field is padded with spaces but when 0 flag is used, it is padded with zeroes.
Example
[jerry]$ awk 'BEGIN {
num1 = 10; num2 = 20; printf "Num1 = %10d\nNum2 = %10d\n", num1, num2
}'
On executing this code, you get the following result −
Output
Num1 = 10
Num2 = 20
A leading zero acts as a flag, which indicates that the output should be padded with zeroes instead of spaces. Please note that this flag only has an effect when the field is wider than the value to be printed. The following example describes this −
Example
[jerry]$ awk 'BEGIN {
num1 = -10; num2 = 20; printf "Num1 = %05d\nNum2 = %05d\n", num1, num2
}'
On executing this code, you get the following result −
Output
Num1 = -0010
Num2 = 00020
The expression should be left-justified within its field. When the input-string is less than the number of characters specified, and you want it to be left justified, i.e., by adding spaces to the right, use a minus symbol (-) immediately after the % and before the number.
In the following example, output of the AWK command is piped to the cat command to display the END OF LINE($) character.
Example
[jerry]$ awk 'BEGIN { num = 10; printf "Num = %-5d\n", num }' | cat -vte
On executing this code, you get the following result −
Output
Num = 10 $
It always prefixes numeric values with a sign, even if the value is positive.
Example
[jerry]$ awk 'BEGIN {
num1 = -10; num2 = 20; printf "Num1 = %+d\nNum2 = %+d\n", num1, num2
}'
On executing this code, you get the following result −
Output
Num1 = -10
Num2 = +20
For %o, it supplies a leading zero. For %x and %X, it supplies a leading 0x or 0X respectively, only if the result is non-zero. For %e, %E, %f, and %F, the result always contains a decimal point. For %g and %G, trailing zeros are not removed from the result. The following example describes this −
Example
[jerry]$ awk 'BEGIN {
printf "Octal representation = %#o\nHexadecimal representaion = %#X\n", 10, 10
}'
On executing this code, you get the following result −
Output
Octal representation = 012
Hexadecimal representation = 0XA
Bookmark this page | [
{
"code": null,
"e": 2075,
"s": 1857,
"text": "AWK is an interpreted programming language. It is very powerful and specially designed for text processing. Its name is derived from the family names of its authors − Alfred Aho, Peter Weinberger, and Brian Kernighan."
},
{
"code": null,
"e": 2222,
"s": 2075,
"text": "The version of AWK that GNU/Linux distributes is written and maintained by the Free Software Foundation (FSF); it is often referred to as GNU AWK."
},
{
"code": null,
"e": 2258,
"s": 2222,
"text": "Following are the variants of AWK −"
},
{
"code": null,
"e": 2301,
"s": 2258,
"text": "AWK − Original AWK from AT & T Laboratory."
},
{
"code": null,
"e": 2409,
"s": 2344,
"text": "NAWK − Newer and improved version of AWK from AT & T Laboratory."
},
{
"code": null,
"e": 2577,
"s": 2474,
"text": "GAWK − It is GNU AWK. All GNU/Linux distributions ship GAWK. It is fully compatible with AWK and NAWK."
},
{
"code": null,
"e": 2756,
"s": 2680,
"text": "Myriad of tasks can be done with AWK. Listed below are just a few of them −"
},
{
"code": null,
"e": 2773,
"s": 2756,
"text": "Text processing,"
},
{
"code": null,
"e": 2807,
"s": 2773,
"text": "Producing formatted text reports,"
},
{
"code": null,
"e": 2841,
"s": 2807,
"text": "Performing arithmetic operations,"
},
{
"code": null,
"e": 2886,
"s": 2841,
"text": "Performing string operations, and many more."
},
{
"code": null,
"e": 2969,
"s": 2886,
"text": "This chapter describes how to set up the AWK environment on your GNU/Linux system."
},
{
"code": null,
"e": 3253,
"s": 2969,
"text": "Generally, AWK is available by default on most GNU/Linux distributions. You can use which command to check whether it is present on your system or not. In case you don’t have AWK, then install it on Debian based GNU/Linux using Advance Package Tool (APT) package manager as follows −"
},
{
"code": null,
"e": 3317,
"s": 3253,
"text": "[jeryy]$ sudo apt-get update\n[jeryy]$ sudo apt-get install gawk"
},
{
"code": null,
"e": 3431,
"s": 3317,
"text": "Similarly, to install AWK on RPM based GNU/Linux, use Yellowdog Updator Modifier yum package manager as follows −"
},
{
"code": null,
"e": 3456,
"s": 3431,
"text": "[root]# yum install gawk"
},
{
"code": null,
"e": 3524,
"s": 3456,
"text": "After installation, ensure that AWK is accessible via command line."
},
{
"code": null,
"e": 3543,
"s": 3524,
"text": "[jerry]$ which awk"
},
{
"code": null,
"e": 3603,
"s": 3543,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 3617,
"s": 3603,
"text": "/usr/bin/awk\n"
},
{
"code": null,
"e": 3832,
"s": 3617,
"text": "As GNU AWK is a part of the GNU project, its source code is available for free download. We have already seen how to install AWK using package manager. Let us now understand how to install AWK from its source code."
},
{
"code": null,
"e": 3988,
"s": 3832,
"text": "The following installation is applicable to any GNU/Linux software, and for most other freely-available programs as well. Here are the installation steps −"
},
{
"code": null,
"e": 4098,
"s": 3988,
"text": "Step 1 − Download the source code from an authentic place. The command-line utility wget serves this purpose."
},
{
"code": null,
"e": 4158,
"s": 4098,
"text": "[jerry]$ wget http://ftp.gnu.org/gnu/gawk/gawk-4.1.1.tar.xz"
},
{
"code": null,
"e": 4218,
"s": 4158,
"text": "Step 2 − Decompress and extract the downloaded source code."
},
{
"code": null,
"e": 4253,
"s": 4218,
"text": "[jerry]$ tar xvf gawk-4.1.1.tar.xz"
},
{
"code": null,
"e": 4307,
"s": 4253,
"text": "Step 3 − Change into the directory and run configure."
},
{
"code": null,
"e": 4328,
"s": 4307,
"text": "[jerry]$ ./configure"
},
{
"code": null,
"e": 4449,
"s": 4328,
"text": "Step 4 − Upon successful completion, the configure generates Makefile. To compile the source code, issue a make command."
},
{
"code": null,
"e": 4463,
"s": 4449,
"text": "[jerry]$ make"
},
{
"code": null,
"e": 4555,
"s": 4463,
"text": "Step 5 − You can run the test suite to ensure the build is clean. This is an optional step."
},
{
"code": null,
"e": 4575,
"s": 4555,
"text": "[jerry]$ make check"
},
{
"code": null,
"e": 4648,
"s": 4575,
"text": "Step 6 − Finally, install AWK. Make sure you have super-user privileges."
},
{
"code": null,
"e": 4675,
"s": 4648,
"text": "[jerry]$ sudo make install"
},
{
"code": null,
"e": 4789,
"s": 4675,
"text": "That is it! You have successfully compiled and installed AWK. Verify it by executing the awk command as follows −"
},
{
"code": null,
"e": 4808,
"s": 4789,
"text": "[jerry]$ which awk"
},
{
"code": null,
"e": 4863,
"s": 4808,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 4877,
"s": 4863,
"text": "/usr/bin/awk\n"
},
{
"code": null,
"e": 5056,
"s": 4877,
"text": "To become an expert AWK programmer, you need to know its internals. AWK follows a simple workflow − Read, Execute, and Repeat. The following diagram depicts the workflow of AWK −"
},
{
"code": null,
"e": 5143,
"s": 5056,
"text": "AWK reads a line from the input stream (file, pipe, or stdin) and stores it in memory."
},
{
"code": null,
"e": 5290,
"s": 5143,
"text": "All AWK commands are applied sequentially on the input. By default AWK execute commands on every line. We can restrict this by providing patterns."
},
{
"code": null,
"e": 5343,
"s": 5290,
"text": "This process repeats until the file reaches its end."
},
{
"code": null,
"e": 5395,
"s": 5343,
"text": "Let us now understand the program structure of AWK."
},
{
"code": null,
"e": 5441,
"s": 5395,
"text": "The syntax of the BEGIN block is as follows −"
},
{
"code": null,
"e": 5448,
"s": 5441,
"text": "Syntax"
},
{
"code": null,
"e": 5470,
"s": 5448,
"text": "BEGIN {awk-commands}\n"
},
{
"code": null,
"e": 5689,
"s": 5470,
"text": "The BEGIN block gets executed at program start-up. It executes only once. This is good place to initialize variables. BEGIN is an AWK keyword and hence it must be in upper-case. Please note that this block is optional."
},
{
"code": null,
"e": 5734,
"s": 5689,
"text": "The syntax of the body block is as follows −"
},
{
"code": null,
"e": 5741,
"s": 5734,
"text": "Syntax"
},
{
"code": null,
"e": 5767,
"s": 5741,
"text": "/pattern/ {awk-commands}\n"
},
{
"code": null,
"e": 5969,
"s": 5767,
"text": "The body block applies AWK commands on every input line. By default, AWK executes commands on every line. We can restrict this by providing patterns. Note that there are no keywords for the Body block."
},
{
"code": null,
"e": 6013,
"s": 5969,
"text": "The syntax of the END block is as follows −"
},
{
"code": null,
"e": 6020,
"s": 6013,
"text": "Syntax"
},
{
"code": null,
"e": 6040,
"s": 6020,
"text": "END {awk-commands}\n"
},
{
"code": null,
"e": 6189,
"s": 6040,
"text": "The END block executes at the end of the program. END is an AWK keyword and hence it must be in upper-case. Please note that this block is optional."
},
{
"code": null,
"e": 6319,
"s": 6189,
"text": "Let us create a file marks.txt which contains the serial number, name of the student, subject name, and number of marks obtained."
},
{
"code": null,
"e": 6440,
"s": 6319,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 6510,
"s": 6440,
"text": "Let us now display the file contents with header by using AWK script."
},
{
"code": null,
"e": 6518,
"s": 6510,
"text": "Example"
},
{
"code": null,
"e": 6593,
"s": 6518,
"text": "[jerry]$ awk 'BEGIN{printf \"Sr No\\tName\\tSub\\tMarks\\n\"} {print}' marks.txt"
},
{
"code": null,
"e": 6656,
"s": 6593,
"text": "When this code is executed, it produces the following result −"
},
{
"code": null,
"e": 6663,
"s": 6656,
"text": "Output"
},
{
"code": null,
"e": 6781,
"s": 6663,
"text": "Sr No Name Sub Marks\n1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 7035,
"s": 6781,
"text": "At the start, AWK prints the header from the BEGIN block. Then in the body block, it reads a line from a file and executes AWK's print command which just prints the contents on the standard output stream. This process repeats until file reaches the end."
},
{
"code": null,
"e": 7178,
"s": 7035,
"text": "AWK is simple to use. We can provide AWK commands either directly from the command line or in the form of a text file containing AWK commands."
},
{
"code": null,
"e": 7256,
"s": 7178,
"text": "We can specify an AWK command within single quotes at command line as shown −"
},
{
"code": null,
"e": 7280,
"s": 7256,
"text": "awk [options] file ...\n"
},
{
"code": null,
"e": 7340,
"s": 7280,
"text": "Consider a text file marks.txt with the following content −"
},
{
"code": null,
"e": 7471,
"s": 7340,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 7542,
"s": 7471,
"text": "Let us display the complete content of the file using AWK as follows −"
},
{
"code": null,
"e": 7550,
"s": 7542,
"text": "Example"
},
{
"code": null,
"e": 7584,
"s": 7550,
"text": "[jerry]$ awk '{print}' marks.txt "
},
{
"code": null,
"e": 7639,
"s": 7584,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 7646,
"s": 7639,
"text": "Output"
},
{
"code": null,
"e": 7777,
"s": 7646,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 7833,
"s": 7777,
"text": "We can provide AWK commands in a script file as shown −"
},
{
"code": null,
"e": 7861,
"s": 7833,
"text": "awk [options] -f file ....\n"
},
{
"code": null,
"e": 7943,
"s": 7861,
"text": "First, create a text file command.awk containing the AWK command as shown below −"
},
{
"code": null,
"e": 7952,
"s": 7943,
"text": "{print}\n"
},
{
"code": null,
"e": 8104,
"s": 7952,
"text": "Now we can instruct the AWK to read commands from the text file and perform the action. Here, we achieve the same result as shown in the above example."
},
{
"code": null,
"e": 8112,
"s": 8104,
"text": "Example"
},
{
"code": null,
"e": 8150,
"s": 8112,
"text": "[jerry]$ awk -f command.awk marks.txt"
},
{
"code": null,
"e": 8205,
"s": 8150,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 8212,
"s": 8205,
"text": "Output"
},
{
"code": null,
"e": 8313,
"s": 8212,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 8402,
"s": 8313,
"text": "AWK supports the following standard options which can be provided from the command line."
},
{
"code": null,
"e": 8556,
"s": 8402,
"text": "This option assigns a value to a variable. It allows assignment before the program execution. The following example describes the usage of the -v option."
},
{
"code": null,
"e": 8564,
"s": 8556,
"text": "Example"
},
{
"code": null,
"e": 8627,
"s": 8564,
"text": "[jerry]$ awk -v name=Jerry 'BEGIN{printf \"Name = %s\\n\", name}'"
},
{
"code": null,
"e": 8682,
"s": 8627,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 8689,
"s": 8682,
"text": "Output"
},
{
"code": null,
"e": 8703,
"s": 8689,
"text": "Name = Jerry\n"
},
{
"code": null,
"e": 8812,
"s": 8703,
"text": "It prints a sorted list of global variables and their final values to file. The default file is awkvars.out."
},
{
"code": null,
"e": 8820,
"s": 8812,
"text": "Example"
},
{
"code": null,
"e": 8879,
"s": 8820,
"text": "[jerry]$ awk --dump-variables ''\n[jerry]$ cat awkvars.out "
},
{
"code": null,
"e": 8939,
"s": 8879,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 8946,
"s": 8939,
"text": "Output"
},
{
"code": null,
"e": 9233,
"s": 8946,
"text": "ARGC: 1\nARGIND: 0\nARGV: array, 1 elements\nBINMODE: 0\nCONVFMT: \"%.6g\"\nERRNO: \"\"\nFIELDWIDTHS: \"\"\nFILENAME: \"\"\nFNR: 0\nFPAT: \"[^[:space:]]+\"\nFS: \" \"\nIGNORECASE: 0\nLINT: 0\nNF: 0\nNR: 0\nOFMT: \"%.6g\"\nOFS: \" \"\nORS: \"\\n\"\nRLENGTH: 0\nRS: \"\\n\"\nRSTART: 0\nRT: \"\"\nSUBSEP: \"\\034\"\nTEXTDOMAIN: \"messages\"\n"
},
{
"code": null,
"e": 9289,
"s": 9233,
"text": "This option prints the help message on standard output."
},
{
"code": null,
"e": 9297,
"s": 9289,
"text": "Example"
},
{
"code": null,
"e": 9317,
"s": 9297,
"text": "[jerry]$ awk --help"
},
{
"code": null,
"e": 9372,
"s": 9317,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 9379,
"s": 9372,
"text": "Output"
},
{
"code": null,
"e": 10543,
"s": 9379,
"text": "Usage: awk [POSIX or GNU style options] -f progfile [--] file ...\nUsage: awk [POSIX or GNU style options] [--] 'program' file ...\nPOSIX options : GNU long options: (standard)\n -f progfile --file=progfile\n -F fs --field-separator=fs\n -v var=val --assign=var=val\nShort options : GNU long options: (extensions)\n -b --characters-as-bytes\n -c --traditional\n -C --copyright\n -d[file] --dump-variables[=file]\n -e 'program-text' --source='program-text'\n -E file --exec=file\n -g --gen-pot\n -h --help\n -L [fatal] --lint[=fatal]\n -n --non-decimal-data\n -N --use-lc-numeric\n -O --optimize\n -p[file] --profile[=file]\n -P --posix\n -r --re-interval\n -S --sandbox\n -t --lint-old\n -V --version\n"
},
{
"code": null,
"e": 10727,
"s": 10543,
"text": "This option enables checking of non-portable or dubious constructs. When an argument fatal is provided, it treats warning messages as errors. The following example demonstrates this −"
},
{
"code": null,
"e": 10735,
"s": 10727,
"text": "Example"
},
{
"code": null,
"e": 10766,
"s": 10735,
"text": "[jerry]$ awk --lint '' /bin/ls"
},
{
"code": null,
"e": 10821,
"s": 10766,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 10828,
"s": 10821,
"text": "Output"
},
{
"code": null,
"e": 10992,
"s": 10828,
"text": "awk: cmd. line:1: warning: empty program text on command line\nawk: cmd. line:1: warning: source file does not end in newline\nawk: warning: no program text at all!\n"
},
{
"code": null,
"e": 11104,
"s": 10992,
"text": "This option turns on strict POSIX compatibility, in which all common and gawk-specific extensions are disabled."
},
{
"code": null,
"e": 11244,
"s": 11104,
"text": "This option generates a pretty-printed version of the program in file. Default file is awkprof.out. Below simple example illustrates this −"
},
{
"code": null,
"e": 11252,
"s": 11244,
"text": "Example"
},
{
"code": null,
"e": 11395,
"s": 11252,
"text": "[jerry]$ awk --profile 'BEGIN{printf\"---|Header|--\\n\"} {print} \nEND{printf\"---|Footer|---\\n\"}' marks.txt > /dev/null \n[jerry]$ cat awkprof.out"
},
{
"code": null,
"e": 11450,
"s": 11395,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 11457,
"s": 11450,
"text": "Output"
},
{
"code": null,
"e": 11678,
"s": 11457,
"text": "# gawk profile, created Sun Oct 26 19:50:48 2014\n\n # BEGIN block(s)\n\n BEGIN {\n printf \"---|Header|--\\n\"\n }\n\n # Rule(s) {\n print $0\n }\n\n # END block(s)\n\n END {\n printf \"---|Footer|---\\n\"\n }\n"
},
{
"code": null,
"e": 11729,
"s": 11678,
"text": "This option disables all gawk-specific extensions."
},
{
"code": null,
"e": 11794,
"s": 11729,
"text": "This option displays the version information of the AWK program."
},
{
"code": null,
"e": 11802,
"s": 11794,
"text": "Example"
},
{
"code": null,
"e": 11825,
"s": 11802,
"text": "[jerry]$ awk --version"
},
{
"code": null,
"e": 11888,
"s": 11825,
"text": "When this code is executed, it produces the following result −"
},
{
"code": null,
"e": 11895,
"s": 11888,
"text": "Output"
},
{
"code": null,
"e": 11966,
"s": 11895,
"text": "GNU Awk 4.0.1\nCopyright (C) 1989, 1991-2012 Free Software Foundation.\n"
},
{
"code": null,
"e": 12125,
"s": 11966,
"text": "This chapter describes several useful AWK commands and their appropriate examples. Consider a text file marks.txt to be processed with the following content −"
},
{
"code": null,
"e": 12251,
"s": 12125,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 12366,
"s": 12251,
"text": "You can instruct AWK to print only certain columns from the input field. The following example demonstrates this −"
},
{
"code": null,
"e": 12410,
"s": 12366,
"text": "[jerry]$ awk '{print $3 \"\\t\" $4}' marks.txt"
},
{
"code": null,
"e": 12465,
"s": 12410,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 12531,
"s": 12465,
"text": "Physics 80\nMaths 90\nBiology 87\nEnglish 85\nHistory 89\n"
},
{
"code": null,
"e": 12838,
"s": 12531,
"text": "In the file marks.txt, the third column contains the subject name and the fourth column contains the marks obtained in a particular subject. Let us print these two columns using AWK print command. In the above example, $3 and $4 represent the third and the fourth fields respectively from the input record."
},
{
"code": null,
"e": 12895,
"s": 12838,
"text": "By default, AWK prints all the lines that match pattern."
},
{
"code": null,
"e": 12935,
"s": 12895,
"text": "[jerry]$ awk '/a/ {print $0}' marks.txt"
},
{
"code": null,
"e": 12990,
"s": 12935,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 13091,
"s": 12990,
"text": "2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 13360,
"s": 13091,
"text": "In the above example, we are searching form pattern a. When a pattern match succeeds, it executes a command from the body block. In the absence of a body block − default action is taken which is print the record. Hence, the following command produces the same result −"
},
{
"code": null,
"e": 13389,
"s": 13360,
"text": "[jerry]$ awk '/a/' marks.txt"
},
{
"code": null,
"e": 13617,
"s": 13389,
"text": "When a pattern match succeeds, AWK prints the entire record by default. But you can instruct AWK to print only certain fields. For instance, the following example prints the third and fourth field when a pattern match succeeds."
},
{
"code": null,
"e": 13665,
"s": 13617,
"text": "[jerry]$ awk '/a/ {print $3 \"\\t\" $4}' marks.txt"
},
{
"code": null,
"e": 13720,
"s": 13665,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 13769,
"s": 13720,
"text": "Maths 90\nBiology 87\nEnglish 85\nHistory 89\n"
},
{
"code": null,
"e": 13896,
"s": 13769,
"text": "You can print columns in any order. For instance, the following example prints the fourth column followed by the third column."
},
{
"code": null,
"e": 13944,
"s": 13896,
"text": "[jerry]$ awk '/a/ {print $4 \"\\t\" $3}' marks.txt"
},
{
"code": null,
"e": 14004,
"s": 13944,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 14055,
"s": 14004,
"text": "90 Maths\n87 Biology\n85 English\n89 History\n"
},
{
"code": null,
"e": 14164,
"s": 14055,
"text": "Let us see an example where you can count and print the number of lines for which a pattern match succeeded."
},
{
"code": null,
"e": 14228,
"s": 14164,
"text": "[jerry]$ awk '/a/{++cnt} END {print \"Count = \", cnt}' marks.txt"
},
{
"code": null,
"e": 14283,
"s": 14228,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 14294,
"s": 14283,
"text": "Count = 4\n"
},
{
"code": null,
"e": 14519,
"s": 14294,
"text": "In this example, we increment the value of counter when a pattern match succeeds and we print this value in the END block. Note that unlike other programming languages, there is no need to declare a variable before using it."
},
{
"code": null,
"e": 14587,
"s": 14519,
"text": "Let us print only those lines that contain more than 18 characters."
},
{
"code": null,
"e": 14628,
"s": 14587,
"text": "[jerry]$ awk 'length($0) > 18' marks.txt"
},
{
"code": null,
"e": 14683,
"s": 14628,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 14732,
"s": 14683,
"text": "3) Shyam Biology 87\n4) Kedar English 85\n"
},
{
"code": null,
"e": 15038,
"s": 14732,
"text": "AWK provides a built-in length function that returns the length of the string. $0 variable stores the entire line and in the absence of a body block, default action is taken, i.e., the print action. Hence, if a line has more than 18 characters, then the comparison results true and the line gets printed."
},
{
"code": null,
"e": 15193,
"s": 15038,
"text": "AWK provides several built-in variables. They play an important role while writing AWK scripts. This chapter demonstrates the usage of built-in variables."
},
{
"code": null,
"e": 15241,
"s": 15193,
"text": "The standard AWK variables are discussed below."
},
{
"code": null,
"e": 15306,
"s": 15241,
"text": "It implies the number of arguments provided at the command line."
},
{
"code": null,
"e": 15314,
"s": 15306,
"text": "Example"
},
{
"code": null,
"e": 15382,
"s": 15314,
"text": "[jerry]$ awk 'BEGIN {print \"Arguments =\", ARGC}' One Two Three Four"
},
{
"code": null,
"e": 15437,
"s": 15382,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 15444,
"s": 15437,
"text": "Output"
},
{
"code": null,
"e": 15459,
"s": 15444,
"text": "Arguments = 5\n"
},
{
"code": null,
"e": 15567,
"s": 15459,
"text": "But why AWK shows 5 when you passed only 4 arguments? Just check the following example to clear your doubt."
},
{
"code": null,
"e": 15671,
"s": 15567,
"text": "It is an array that stores the command-line arguments. The array's valid index ranges from 0 to ARGC-1."
},
{
"code": null,
"e": 15679,
"s": 15671,
"text": "Example"
},
{
"code": null,
"e": 15811,
"s": 15679,
"text": "[jerry]$ awk 'BEGIN { \n for (i = 0; i < ARGC - 1; ++i) { \n printf \"ARGV[%d] = %s\\n\", i, ARGV[i] \n } \n}' one two three four"
},
{
"code": null,
"e": 15866,
"s": 15811,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 15873,
"s": 15866,
"text": "Output"
},
{
"code": null,
"e": 15932,
"s": 15873,
"text": "ARGV[0] = awk\nARGV[1] = one\nARGV[2] = two\nARGV[3] = three\n"
},
{
"code": null,
"e": 16008,
"s": 15932,
"text": "It represents the conversion format for numbers. Its default value is %.6g."
},
{
"code": null,
"e": 16016,
"s": 16008,
"text": "Example"
},
{
"code": null,
"e": 16078,
"s": 16016,
"text": "[jerry]$ awk 'BEGIN { print \"Conversion Format =\", CONVFMT }'"
},
{
"code": null,
"e": 16133,
"s": 16078,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 16140,
"s": 16133,
"text": "Output"
},
{
"code": null,
"e": 16166,
"s": 16140,
"text": "Conversion Format = %.6g\n"
},
{
"code": null,
"e": 16219,
"s": 16166,
"text": "It is an associative array of environment variables."
},
{
"code": null,
"e": 16227,
"s": 16219,
"text": "Example"
},
{
"code": null,
"e": 16274,
"s": 16227,
"text": "[jerry]$ awk 'BEGIN { print ENVIRON[\"USER\"] }'"
},
{
"code": null,
"e": 16329,
"s": 16274,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 16336,
"s": 16329,
"text": "Output"
},
{
"code": null,
"e": 16343,
"s": 16336,
"text": "jerry\n"
},
{
"code": null,
"e": 16406,
"s": 16343,
"text": "To find names of other environment variables, use env command."
},
{
"code": null,
"e": 16443,
"s": 16406,
"text": "It represents the current file name."
},
{
"code": null,
"e": 16451,
"s": 16443,
"text": "Example"
},
{
"code": null,
"e": 16497,
"s": 16451,
"text": "[jerry]$ awk 'END {print FILENAME}' marks.txt"
},
{
"code": null,
"e": 16552,
"s": 16497,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 16559,
"s": 16552,
"text": "Output"
},
{
"code": null,
"e": 16570,
"s": 16559,
"text": "marks.txt\n"
},
{
"code": null,
"e": 16629,
"s": 16570,
"text": "Please note that FILENAME is undefined in the BEGIN block."
},
{
"code": null,
"e": 16761,
"s": 16629,
"text": "It represents the (input) field separator and its default value is space. You can also change this by using -F command line option."
},
{
"code": null,
"e": 16769,
"s": 16761,
"text": "Example"
},
{
"code": null,
"e": 16820,
"s": 16769,
"text": "[jerry]$ awk 'BEGIN {print \"FS = \" FS}' | cat -vte"
},
{
"code": null,
"e": 16875,
"s": 16820,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 16882,
"s": 16875,
"text": "Output"
},
{
"code": null,
"e": 16891,
"s": 16882,
"text": "FS = $\n"
},
{
"code": null,
"e": 17044,
"s": 16891,
"text": "It represents the number of fields in the current record. For instance, the following example prints only those lines that contain more than two fields."
},
{
"code": null,
"e": 17052,
"s": 17044,
"text": "Example"
},
{
"code": null,
"e": 17129,
"s": 17052,
"text": "[jerry]$ echo -e \"One Two\\nOne Two Three\\nOne Two Three Four\" | awk 'NF > 2'"
},
{
"code": null,
"e": 17184,
"s": 17129,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 17191,
"s": 17184,
"text": "Output"
},
{
"code": null,
"e": 17225,
"s": 17191,
"text": "One Two Three\nOne Two Three Four\n"
},
{
"code": null,
"e": 17376,
"s": 17225,
"text": "It represents the number of the current record. For instance, the following example prints the record if the current record number is less than three."
},
{
"code": null,
"e": 17384,
"s": 17376,
"text": "Example"
},
{
"code": null,
"e": 17461,
"s": 17384,
"text": "[jerry]$ echo -e \"One Two\\nOne Two Three\\nOne Two Three Four\" | awk 'NR < 3'"
},
{
"code": null,
"e": 17516,
"s": 17461,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 17523,
"s": 17516,
"text": "Output"
},
{
"code": null,
"e": 17546,
"s": 17523,
"text": "One Two\nOne Two Three\n"
},
{
"code": null,
"e": 17690,
"s": 17546,
"text": "It is similar to NR, but relative to the current file. It is useful when AWK is operating on multiple files. Value of FNR resets with new file."
},
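{
"code": null,
"e": null,
"s": null,
"text": "For instance, a minimal sketch (it simply reuses marks.txt twice, purely for illustration) prints NR and FNR side by side so you can see FNR reset to 1 when the second file begins − [jerry]$ awk '{ print NR, FNR, FILENAME }' marks.txt marks.txt"
},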
{
"code": null,
"e": 17760,
"s": 17690,
"text": "It represents the output format number and its default value is %.6g."
},
{
"code": null,
"e": 17768,
"s": 17760,
"text": "Example"
},
{
"code": null,
"e": 17812,
"s": 17768,
"text": "[jerry]$ awk 'BEGIN {print \"OFMT = \" OFMT}'"
},
{
"code": null,
"e": 17867,
"s": 17812,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 17874,
"s": 17867,
"text": "Output"
},
{
"code": null,
"e": 17887,
"s": 17874,
"text": "OFMT = %.6g\n"
},
{
"code": null,
"e": 17960,
"s": 17887,
"text": "It represents the output field separator and its default value is space."
},
{
"code": null,
"e": 17968,
"s": 17960,
"text": "Example"
},
{
"code": null,
"e": 18021,
"s": 17968,
"text": "[jerry]$ awk 'BEGIN {print \"OFS = \" OFS}' | cat -vte"
},
{
"code": null,
"e": 18076,
"s": 18021,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 18083,
"s": 18076,
"text": "Output"
},
{
"code": null,
"e": 18093,
"s": 18083,
"text": "OFS = $\n"
},
{
"code": null,
"e": 18169,
"s": 18093,
"text": "It represents the output record separator and its default value is newline."
},
{
"code": null,
"e": 18177,
"s": 18169,
"text": "Example"
},
{
"code": null,
"e": 18230,
"s": 18177,
"text": "[jerry]$ awk 'BEGIN {print \"ORS = \" ORS}' | cat -vte"
},
{
"code": null,
"e": 18290,
"s": 18230,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 18297,
"s": 18290,
"text": "Output"
},
{
"code": null,
"e": 18308,
"s": 18297,
"text": "ORS = $\n$\n"
},
{
"code": null,
"e": 18444,
"s": 18308,
"text": "It represents the length of the string matched by match function. AWK's match function searches for a given string in the input-string."
},
{
"code": null,
"e": 18452,
"s": 18444,
"text": "Example"
},
{
"code": null,
"e": 18529,
"s": 18452,
"text": "[jerry]$ awk 'BEGIN { if (match(\"One Two Three\", \"re\")) { print RLENGTH } }'"
},
{
"code": null,
"e": 18584,
"s": 18529,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 18591,
"s": 18584,
"text": "Output"
},
{
"code": null,
"e": 18594,
"s": 18591,
"text": "2\n"
},
{
"code": null,
"e": 18667,
"s": 18594,
"text": "It represents (input) record separator and its default value is newline."
},
{
"code": null,
"e": 18675,
"s": 18667,
"text": "Example"
},
{
"code": null,
"e": 18726,
"s": 18675,
"text": "[jerry]$ awk 'BEGIN {print \"RS = \" RS}' | cat -vte"
},
{
"code": null,
"e": 18781,
"s": 18726,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 18788,
"s": 18781,
"text": "Output"
},
{
"code": null,
"e": 18798,
"s": 18788,
"text": "RS = $\n$\n"
},
{
"code": null,
"e": 18872,
"s": 18798,
"text": "It represents the first position in the string matched by match function."
},
{
"code": null,
"e": 18880,
"s": 18872,
"text": "Example"
},
{
"code": null,
"e": 18958,
"s": 18880,
"text": "[jerry]$ awk 'BEGIN { if (match(\"One Two Three\", \"Thre\")) { print RSTART } }'"
},
{
"code": null,
"e": 19013,
"s": 18958,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 19020,
"s": 19013,
"text": "Output"
},
{
"code": null,
"e": 19023,
"s": 19020,
"text": "9\n"
},
{
"code": null,
"e": 19113,
"s": 19023,
"text": "It represents the separator character for array subscripts and its default value is \\034."
},
{
"code": null,
"e": 19121,
"s": 19113,
"text": "Example"
},
{
"code": null,
"e": 19182,
"s": 19121,
"text": "[jerry]$ awk 'BEGIN { print \"SUBSEP = \" SUBSEP }' | cat -vte"
},
{
"code": null,
"e": 19237,
"s": 19182,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 19244,
"s": 19237,
"text": "Output"
},
{
"code": null,
"e": 19258,
"s": 19244,
"text": "SUBSEP = ^\\$\n"
},
{
"code": null,
"e": 19297,
"s": 19258,
"text": "It represents the entire input record."
},
{
"code": null,
"e": 19305,
"s": 19297,
"text": "Example"
},
{
"code": null,
"e": 19341,
"s": 19305,
"text": "[jerry]$ awk '{print $0}' marks.txt"
},
{
"code": null,
"e": 19396,
"s": 19341,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 19403,
"s": 19396,
"text": "Output"
},
{
"code": null,
"e": 19529,
"s": 19403,
"text": "1) Amit Physics 80\n2) Rahul Maths 90\n3) Shyam Biology 87\n4) Kedar English 85\n5) Hari History 89\n"
},
{
"code": null,
"e": 19617,
"s": 19529,
"text": "It represents the nth field in the current record where the fields are separated by FS."
},
{
"code": null,
"e": 19625,
"s": 19617,
"text": "Example"
},
{
"code": null,
"e": 19669,
"s": 19625,
"text": "[jerry]$ awk '{print $3 \"\\t\" $4}' marks.txt"
},
{
"code": null,
"e": 19724,
"s": 19669,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 19731,
"s": 19724,
"text": "Output"
},
{
"code": null,
"e": 19797,
"s": 19731,
"text": "Physics 80\nMaths 90\nBiology 87\nEnglish 85\nHistory 89\n"
},
{
"code": null,
"e": 19841,
"s": 19797,
"text": "GNU AWK specific variables are as follows −"
},
{
"code": null,
"e": 19910,
"s": 19841,
"text": "It represents the index in ARGV of the current file being processed."
},
{
"code": null,
"e": 19918,
"s": 19910,
"text": "Example"
},
{
"code": null,
"e": 20023,
"s": 19918,
"text": "[jerry]$ awk '{ \n print \"ARGIND = \", ARGIND; print \"Filename = \", ARGV[ARGIND] \n}' junk1 junk2 junk3"
},
{
"code": null,
"e": 20078,
"s": 20023,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 20085,
"s": 20078,
"text": "Output"
},
{
"code": null,
"e": 20182,
"s": 20085,
"text": "ARGIND = 1\nFilename = junk1\nARGIND = 2\nFilename = junk2\nARGIND = 3\nFilename = junk3\n"
},
{
"code": null,
"e": 20550,
"s": 20182,
"text": "It is used to specify binary mode for all file I/O on non-POSIX systems. Numeric values of 1, 2, or 3 specify that input files, output files, or all files, respectively, should use binary I/O. String values of r or w specify that input files or output files, respectively, should use binary I/O. String values of rw or wr specify that all files should use binary I/O."
},
{
"code": null,
"e": 20639,
"s": 20550,
"text": "A string indicates an error when a redirection fails for getline or if close call fails."
},
{
"code": null,
"e": 20647,
"s": 20639,
"text": "Example"
},
{
"code": null,
"e": 20737,
"s": 20647,
"text": "[jerry]$ awk 'BEGIN { ret = getline < \"junk.txt\"; if (ret == -1) print \"Error:\", ERRNO }'"
},
{
"code": null,
"e": 20792,
"s": 20737,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 20799,
"s": 20792,
"text": "Output"
},
{
"code": null,
"e": 20833,
"s": 20799,
"text": "Error: No such file or directory\n"
},
{
"code": null,
"e": 21009,
"s": 20833,
"text": "A space separated list of field widths variable is set, GAWK parses the input into fields of fixed width, instead of using the value of the FS variable as the field separator."
},
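{
"code": null,
"e": null,
"s": null,
"text": "For instance, the following sketch (the input string is just an assumed example) splits a record into fixed-width fields of 4, 2, and 4 characters − [jerry]$ echo \"202301Amit\" | awk 'BEGIN { FIELDWIDTHS = \"4 2 4\" } { print $1, $2, $3 }' − which prints 2023 01 Amit."
},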
{
"code": null,
"e": 21109,
"s": 21009,
"text": "When this variable is set, GAWK becomes case-insensitive. The following example demonstrates this −"
},
{
"code": null,
"e": 21117,
"s": 21109,
"text": "Example"
},
{
"code": null,
"e": 21171,
"s": 21117,
"text": "[jerry]$ awk 'BEGIN{IGNORECASE = 1} /amit/' marks.txt"
},
{
"code": null,
"e": 21226,
"s": 21171,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 21233,
"s": 21226,
"text": "Output"
},
{
"code": null,
"e": 21256,
"s": 21233,
"text": "1) Amit Physics 80\n"
},
{
"code": null,
"e": 21482,
"s": 21256,
"text": "It provides dynamic control of the --lint option from the GAWK program. When this variable is set, GAWK prints lint warnings. When assigned the string value fatal, lint warnings become fatal errors, exactly like --lint=fatal."
},
{
"code": null,
"e": 21490,
"s": 21482,
"text": "Example"
},
{
"code": null,
"e": 21525,
"s": 21490,
"text": "[jerry]$ awk 'BEGIN {LINT = 1; a}'"
},
{
"code": null,
"e": 21580,
"s": 21525,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 21587,
"s": 21580,
"text": "Output"
},
{
"code": null,
"e": 21706,
"s": 21587,
"text": "awk: cmd. line:1: warning: reference to uninitialized variable `a'\nawk: cmd. line:1: warning: statement has no effect\n"
},
{
"code": null,
"e": 21847,
"s": 21706,
"text": "This is an associative array containing information about the process, such as real and effective UID numbers, process ID number, and so on."
},
{
"code": null,
"e": 21855,
"s": 21847,
"text": "Example"
},
{
"code": null,
"e": 21902,
"s": 21855,
"text": "[jerry]$ awk 'BEGIN { print PROCINFO[\"pid\"] }'"
},
{
"code": null,
"e": 21957,
"s": 21902,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 21964,
"s": 21957,
"text": "Output"
},
{
"code": null,
"e": 21970,
"s": 21964,
"text": "4316\n"
},
{
"code": null,
"e": 22093,
"s": 21970,
"text": "It represents the text domain of the AWK program. It is used to find the localized translations for the program's strings."
},
{
"code": null,
"e": 22101,
"s": 22093,
"text": "Example"
},
{
"code": null,
"e": 22143,
"s": 22101,
"text": "[jerry]$ awk 'BEGIN { print TEXTDOMAIN }'"
},
{
"code": null,
"e": 22198,
"s": 22143,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 22205,
"s": 22198,
"text": "Output"
},
{
"code": null,
"e": 22215,
"s": 22205,
"text": "messages\n"
},
{
"code": null,
"e": 22271,
"s": 22215,
"text": "The above output shows English text due to en_IN locale"
},
{
"code": null,
"e": 22409,
"s": 22271,
"text": "Like other programming languages, AWK also provides a large set of operators. This chapter explains AWK operators with suitable examples."
},
{
"code": null,
"e": 22458,
"s": 22409,
"text": "AWK supports the following arithmetic operators."
},
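{
"code": null,
"e": null,
"s": null,
"text": "For instance, a short sketch illustrating the +, -, *, / and % operators − [jerry]$ awk 'BEGIN { a = 50; b = 20; print a + b, a - b, a * b, a / b, a % b }' − which prints 70 30 1000 2.5 10."
},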
{
"code": null,
"e": 22520,
"s": 22458,
"text": "AWK supports the following increment and decrement operators."
},
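{
"code": null,
"e": null,
"s": null,
"text": "For instance, the following sketch shows the difference between post-increment and pre-increment − [jerry]$ awk 'BEGIN { a = 10; print a++; print a; print ++a }' − which prints 10, 11, and 12 on separate lines."
},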
{
"code": null,
"e": 22569,
"s": 22520,
"text": "AWK supports the following assignment operators."
},
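{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { cnt = 10; cnt += 5; print cnt; cnt *= 2; print cnt; cnt %= 7; print cnt }' − which prints 15, 30, and 2, demonstrating the shorthand assignment forms."
},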
{
"code": null,
"e": 22618,
"s": 22569,
"text": "AWK supports the following relational operators."
},
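{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { a = 10; b = 20; if (a < b) print \"a is less than b\"; if (a != b) print \"a is not equal to b\" }' − which prints both messages, since both comparisons hold."
},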
{
"code": null,
"e": 22664,
"s": 22618,
"text": "AWK supports the following logical operators."
},
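{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { num = 5; if (num >= 1 && num <= 10) print \"num is in range\"; if (num < 0 || num > 100) print \"num is out of range\" }' − which prints only num is in range."
},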
{
"code": null,
"e": 22735,
"s": 22664,
"text": "We can easily implement a condition expression using ternary operator."
},
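{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { a = 10; b = 20; max = (a > b) ? a : b; print \"Max =\", max }' − which prints Max = 20."
},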
{
"code": null,
"e": 22779,
"s": 22735,
"text": "AWK supports the following unary operators."
},
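{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { a = 10; print -a; print +a; print !a }' − which prints -10, 10, and 0 (logical negation of a non-zero value yields 0)."
},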
{
"code": null,
"e": 22827,
"s": 22779,
"text": "There are two formats of exponential operators."
},
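{
"code": null,
"e": null,
"s": null,
"text": "For instance, both ^ and the GAWK form ** compute the same result − [jerry]$ awk 'BEGIN { a = 2; print a ^ 3; print a ** 3 }' − each print statement outputs 8."
},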
{
"code": null,
"e": 22893,
"s": 22827,
"text": "Space is a string concatenation operator that merges two strings."
},
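{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { str1 = \"Hello, \"; str2 = \"World\"; str3 = str1 str2; print str3 }' − which prints Hello, World."
},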
{
"code": null,
"e": 22961,
"s": 22893,
"text": "It is represented by in. It is used while accessing array elements."
},
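{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { fruits[\"mango\"] = \"yellow\"; if (\"mango\" in fruits) print \"mango exists\"; if (!(\"apple\" in fruits)) print \"apple does not exist\" }' − which prints both messages."
},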
{
"code": null,
"e": 23031,
"s": 22961,
"text": "This example explains the two forms of regular expressions operators."
},
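{
"code": null,
"e": null,
"s": null,
"text": "For instance, ~ selects the records that match a regular expression while !~ selects those that do not − [jerry]$ awk '$0 ~ 9' marks.txt prints the lines for Rahul and Hari (the only records containing a 9), whereas [jerry]$ awk '$0 !~ 9' marks.txt prints the remaining lines."
},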
{
"code": null,
"e": 23236,
"s": 23031,
"text": "AWK is very powerful and efficient in handling regular expressions. A number of complex tasks can be solved with simple regular expressions. Any command-line expert knows the power of regular expressions."
},
{
"code": null,
"e": 23309,
"s": 23236,
"text": "This chapter covers standard regular expressions with suitable examples."
},
{
"code": null,
"e": 23438,
"s": 23309,
"text": "It matches any single character except the end of line character. For instance, the following example matches fin, fun, fan etc."
},
{
"code": null,
"e": 23495,
"s": 23438,
"text": "[jerry]$ echo -e \"cat\\nbat\\nfun\\nfin\\nfan\" | awk '/f.n/'"
},
{
"code": null,
"e": 23555,
"s": 23495,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 23568,
"s": 23555,
"text": "fun\nfin\nfan\n"
},
{
"code": null,
"e": 23684,
"s": 23568,
"text": "It matches the start of line. For instance, the following example prints all the lines that start with pattern The."
},
{
"code": null,
"e": 23750,
"s": 23684,
"text": "[jerry]$ echo -e \"This\\nThat\\nThere\\nTheir\\nthese\" | awk '/^The/'"
},
{
"code": null,
"e": 23805,
"s": 23750,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 23818,
"s": 23805,
"text": "There\nTheir\n"
},
{
"code": null,
"e": 23927,
"s": 23818,
"text": "It matches the end of line. For instance, the following example prints the lines that end with the letter n."
},
{
"code": null,
"e": 23992,
"s": 23927,
"text": "[jerry]$ echo -e \"knife\\nknow\\nfun\\nfin\\nfan\\nnine\" | awk '/n$/'"
},
{
"code": null,
"e": 24047,
"s": 23992,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 24060,
"s": 24047,
"text": "fun\nfin\nfan\n"
},
{
"code": null,
"e": 24196,
"s": 24060,
"text": "It is used to match only one out of several characters. For instance, the following example matches pattern Call and Tall but not Ball."
},
{
"code": null,
"e": 24250,
"s": 24196,
"text": "[jerry]$ echo -e \"Call\\nTall\\nBall\" | awk '/[CT]all/'"
},
{
"code": null,
"e": 24305,
"s": 24250,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 24316,
"s": 24305,
"text": "Call\nTall\n"
},
{
"code": null,
"e": 24452,
"s": 24316,
"text": "In exclusive set, the carat negates the set of characters in the square brackets. For instance, the following example prints only Ball."
},
{
"code": null,
"e": 24507,
"s": 24452,
"text": "[jerry]$ echo -e \"Call\\nTall\\nBall\" | awk '/[^CT]all/'"
},
{
"code": null,
"e": 24562,
"s": 24507,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 24568,
"s": 24562,
"text": "Ball\n"
},
{
"code": null,
"e": 24690,
"s": 24568,
"text": "A vertical bar allows regular expressions to be logically ORed. For instance, the following example prints Ball and Call."
},
{
"code": null,
"e": 24760,
"s": 24690,
"text": "[jerry]$ echo -e \"Call\\nTall\\nBall\\nSmall\\nShall\" | awk '/Call|Ball/'"
},
{
"code": null,
"e": 24815,
"s": 24760,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 24826,
"s": 24815,
"text": "Call\nBall\n"
},
{
"code": null,
"e": 25009,
"s": 24826,
"text": "It matches zero or one occurrence of the preceding character. For instance, the following example matches Colour as well as Color. We have made u as an optional character by using ?."
},
{
"code": null,
"e": 25060,
"s": 25009,
"text": "[jerry]$ echo -e \"Colour\\nColor\" | awk '/Colou?r/'"
},
{
"code": null,
"e": 25115,
"s": 25060,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 25129,
"s": 25115,
"text": "Colour\nColor\n"
},
{
"code": null,
"e": 25263,
"s": 25129,
"text": "It matches zero or more occurrences of the preceding character. For instance, the following example matches ca, cat, catt, and so on."
},
{
"code": null,
"e": 25311,
"s": 25263,
"text": "[jerry]$ echo -e \"ca\\ncat\\ncatt\" | awk '/cat*/'"
},
{
"code": null,
"e": 25366,
"s": 25311,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 25379,
"s": 25366,
"text": "ca\ncat\ncatt\n"
},
{
"code": null,
"e": 25510,
"s": 25379,
"text": "It matches one or more occurrence of the preceding character. For instance below example matches one or more occurrences of the 2."
},
{
"code": null,
"e": 25571,
"s": 25510,
"text": "[jerry]$ echo -e \"111\\n22\\n123\\n234\\n456\\n222\" | awk '/2+/'"
},
{
"code": null,
"e": 25631,
"s": 25571,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 25647,
"s": 25631,
"text": "22\n123\n234\n222\n"
},
{
"code": null,
"e": 25840,
"s": 25647,
"text": "Parentheses () are used for grouping and the character | is used for alternatives. For instance, the following regular expression matches the lines containing either Apple Juice or Apple Cake."
},
{
"code": null,
"e": 25939,
"s": 25840,
"text": "[jerry]$ echo -e \"Apple Juice\\nApple Pie\\nApple Tart\\nApple Cake\" | awk \n '/Apple (Juice|Cake)/'"
},
{
"code": null,
"e": 25994,
"s": 25939,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 26018,
"s": 25994,
"text": "Apple Juice\nApple Cake\n"
},
{
"code": null,
"e": 26298,
"s": 26018,
"text": "AWK has associative arrays and one of the best thing about it is – the indexes need not to be continuous set of number; you can use either string or number as an array index. Also, there is no need to declare the size of an array in advance – arrays can expand/shrink at runtime."
},
{
"code": null,
"e": 26325,
"s": 26298,
"text": "Its syntax is as follows −"
},
{
"code": null,
"e": 26352,
"s": 26325,
"text": "array_name[index] = value\n"
},
{
"code": null,
"e": 26479,
"s": 26352,
"text": "Where array_name is the name of array, index is the array index, and value is any value assigning to the element of the array."
},
{
"code": null,
"e": 26561,
"s": 26479,
"text": "To gain more insight on array, let us create and access the elements of an array."
},
{
"code": null,
"e": 26695,
"s": 26561,
"text": "[jerry]$ awk 'BEGIN {\n fruits[\"mango\"] = \"yellow\";\n fruits[\"orange\"] = \"orange\"\n print fruits[\"orange\"] \"\\n\" fruits[\"mango\"]\n}'"
},
{
"code": null,
"e": 26750,
"s": 26695,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 26765,
"s": 26750,
"text": "orange\nyellow\n"
},
{
"code": null,
"e": 26944,
"s": 26765,
"text": "In the above example, we declare the array as fruits whose index is fruit name and the value is the color of the fruit. To access array elements, we use array_name[index] format."
},
{
"code": null,
"e": 27111,
"s": 26944,
"text": "For insertion, we used assignment operator. Similarly, we can use delete statement to remove an element from the array. The syntax of delete statement is as follows −"
},
{
"code": null,
"e": 27137,
"s": 27111,
"text": "delete array_name[index]\n"
},
{
"code": null,
"e": 27231,
"s": 27137,
"text": "The following example deletes the element orange. Hence the command does not show any output."
},
{
"code": null,
"e": 27373,
"s": 27231,
"text": "[jerry]$ awk 'BEGIN {\n fruits[\"mango\"] = \"yellow\";\n fruits[\"orange\"] = \"orange\";\n delete fruits[\"orange\"];\n print fruits[\"orange\"]\n}'"
},
{
"code": null,
"e": 27509,
"s": 27373,
"text": "AWK only supports one-dimensional arrays. But you can easily simulate a multi-dimensional array using the one-dimensional array itself."
},
{
"code": null,
"e": 27568,
"s": 27509,
"text": "For instance, given below is a 3x3 two-dimensional array −"
},
{
"code": null,
"e": 27617,
"s": 27568,
"text": "100 200 300\n400 500 600\n700 800 900\n"
},
{
"code": null,
"e": 27771,
"s": 27617,
"text": "In the above example, array[0][0] stores 100, array[0][1] stores 200, and so on. To store 100 at array location [0][0], we can use the following syntax −"
},
{
"code": null,
"e": 27791,
"s": 27771,
"text": "array[\"0,0\"] = 100\n"
},
{
"code": null,
"e": 27901,
"s": 27791,
"text": "Though we gave 0,0 as index, these are not two indexes. In reality, it is just one index with the string 0,0."
},
{
"code": null,
"e": 27947,
"s": 27901,
"text": "The following example simulates a 2-D array −"
},
{
"code": null,
"e": 28371,
"s": 27947,
"text": "[jerry]$ awk 'BEGIN {\n array[\"0,0\"] = 100;\n array[\"0,1\"] = 200;\n array[\"0,2\"] = 300;\n array[\"1,0\"] = 400;\n array[\"1,1\"] = 500;\n array[\"1,2\"] = 600;\n\n # print array elements\n print \"array[0,0] = \" array[\"0,0\"];\n print \"array[0,1] = \" array[\"0,1\"];\n print \"array[0,2] = \" array[\"0,2\"];\n print \"array[1,0] = \" array[\"1,0\"];\n print \"array[1,1] = \" array[\"1,1\"];\n print \"array[1,2] = \" array[\"1,2\"];\n}'"
},
{
"code": null,
"e": 28426,
"s": 28371,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 28529,
"s": 28426,
"text": "array[0,0] = 100\narray[0,1] = 200\narray[0,2] = 300\narray[1,0] = 400\narray[1,1] = 500\narray[1,2] = 600\n"
},
{
"code": null,
"e": 28682,
"s": 28529,
"text": "You can also perform a variety of operations on an array such as sorting its elements/indexes. For that purpose, you can use assort and asorti functions"
},
{
"code": null,
"e": 28857,
"s": 28682,
"text": "Like other programming languages, AWK provides conditional statements to control the flow of a program. This chapter explains AWK's control statements with suitable examples."
},
{
"code": null,
"e": 28990,
"s": 28857,
"text": "It simply tests the condition and performs certain actions depending upon the condition. Given below is the syntax of if statement −"
},
{
"code": null,
"e": 29016,
"s": 28990,
"text": "if (condition)\n action\n"
},
{
"code": null,
"e": 29100,
"s": 29016,
"text": "We can also use a pair of curly braces as given below to execute multiple actions −"
},
{
"code": null,
"e": 29166,
"s": 29100,
"text": "if (condition) {\n action-1\n action-1\n .\n .\n action-n\n}\n"
},
{
"code": null,
"e": 29243,
"s": 29166,
"text": "For instance, the following example checks whether a number is even or not −"
},
{
"code": null,
"e": 29330,
"s": 29243,
"text": "[jerry]$ awk 'BEGIN {num = 10; if (num % 2 == 0) printf \"%d is even number.\\n\", num }'"
},
{
"code": null,
"e": 29390,
"s": 29330,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 29410,
"s": 29390,
"text": "10 is even number.\n"
},
{
"code": null,
"e": 29510,
"s": 29410,
"text": "In if-else syntax, we can provide a list of actions to be performed when a condition becomes false."
},
{
"code": null,
"e": 29558,
"s": 29510,
"text": "The syntax of if-else statement is as follows −"
},
{
"code": null,
"e": 29603,
"s": 29558,
"text": "if (condition)\n action-1\nelse\n action-2\n"
},
{
"code": null,
"e": 29825,
"s": 29603,
"text": "In the above syntax, action-1 is performed when the condition evaluates to true and action-2 is performed when the condition evaluates to false. For instance, the following example checks whether a number is even or not −"
},
{
"code": null,
"e": 29964,
"s": 29825,
"text": "[jerry]$ awk 'BEGIN {\n num = 11; if (num % 2 == 0) printf \"%d is even number.\\n\", num; \n else printf \"%d is odd number.\\n\", num \n}'"
},
{
"code": null,
"e": 30019,
"s": 29964,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 30038,
"s": 30019,
"text": "11 is odd number.\n"
},
{
"code": null,
"e": 30160,
"s": 30038,
"text": "We can easily create an if-else-if ladder by using multiple if-else statements. The following example demonstrates this −"
},
{
"code": null,
"e": 30313,
"s": 30160,
"text": "[jerry]$ awk 'BEGIN {\n a = 30;\n \n if (a==10)\n print \"a = 10\";\n else if (a == 20)\n print \"a = 20\";\n else if (a == 30)\n print \"a = 30\";\n}'"
},
{
"code": null,
"e": 30368,
"s": 30313,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 30376,
"s": 30368,
"text": "a = 30\n"
},
{
"code": null,
"e": 30566,
"s": 30376,
"text": "This chapter explains AWK's loops with suitable example. Loops are used to execute a set of actions in a repeated manner. The loop execution continues as long as the loop condition is true."
},
{
"code": null,
"e": 30594,
"s": 30566,
"text": "The syntax of for loop is −"
},
{
"code": null,
"e": 30658,
"s": 30594,
"text": "for (initialization; condition; increment/decrement)\n action\n"
},
{
"code": null,
"e": 30983,
"s": 30658,
"text": "Initially, the for statement performs initialization action, then it checks the condition. If the condition is true, it executes actions, thereafter it performs increment or decrement operation. The loop execution continues as long as the condition is true. For instance, the following example prints 1 to 5 using for loop −"
},
{
"code": null,
"e": 31041,
"s": 30983,
"text": "[jerry]$ awk 'BEGIN { for (i = 1; i <= 5; ++i) print i }'"
},
{
"code": null,
"e": 31096,
"s": 31041,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 31107,
"s": 31096,
"text": "1\n2\n3\n4\n5\n"
},
{
"code": null,
"e": 31240,
"s": 31107,
"text": "The while loop keeps executing the action until a particular logical condition evaluates to true. Here is the syntax of while loop −"
},
{
"code": null,
"e": 31269,
"s": 31240,
"text": "while (condition)\n action\n"
},
{
"code": null,
"e": 31490,
"s": 31269,
"text": "AWK first checks the condition; if the condition is true, it executes the action. This process repeats as long as the loop condition evaluates to true. For instance, the following example prints 1 to 5 using while loop −"
},
{
"code": null,
"e": 31552,
"s": 31490,
"text": "[jerry]$ awk 'BEGIN {i = 1; while (i < 6) { print i; ++i } }'"
},
{
"code": null,
"e": 31607,
"s": 31552,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 31618,
"s": 31607,
"text": "1\n2\n3\n4\n5\n"
},
{
"code": null,
"e": 31771,
"s": 31618,
"text": "The do-while loop is similar to the while loop, except that the test condition is evaluated at the end of the loop. Here is the syntax of do-whileloop −"
},
{
"code": null,
"e": 31803,
"s": 31771,
"text": "do\n action\nwhile (condition)\n"
},
{
"code": null,
"e": 32006,
"s": 31803,
"text": "In a do-while loop, the action statement gets executed at least once even when the condition statement evaluates to false. For instance, the following example prints 1 to 5 numbers using do-while loop −"
},
{
"code": null,
"e": 32071,
"s": 32006,
"text": "[jerry]$ awk 'BEGIN {i = 1; do { print i; ++i } while (i < 6) }'"
},
{
"code": null,
"e": 32126,
"s": 32071,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 32137,
"s": 32126,
"text": "1\n2\n3\n4\n5\n"
},
{
"code": null,
"e": 32274,
"s": 32137,
"text": "As its name suggests, it is used to end the loop execution. Here is an example which ends the loop when the sum becomes greater than 50."
},
{
"code": null,
"e": 32407,
"s": 32274,
"text": "[jerry]$ awk 'BEGIN {\n sum = 0; for (i = 0; i < 20; ++i) { \n sum += i; if (sum > 50) break; else print \"Sum =\", sum \n } \n}'"
},
{
"code": null,
"e": 32462,
"s": 32407,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 32549,
"s": 32462,
"text": "Sum = 0\nSum = 1\nSum = 3\nSum = 6\nSum = 10\nSum = 15\nSum = 21\nSum = 28\nSum = 36\nSum = 45\n"
},
{
"code": null,
"e": 32820,
"s": 32549,
"text": "The continue statement is used inside a loop to skip to the next iteration of the loop. It is useful when you wish to skip the processing of some data inside the loop. For instance, the following example uses continue statement to print the even numbers between 1 to 20."
},
{
"code": null,
"e": 32928,
"s": 32820,
"text": "[jerry]$ awk 'BEGIN {\n for (i = 1; i <= 20; ++i) {\n if (i % 2 == 0) print i ; else continue\n } \n}'"
},
{
"code": null,
"e": 32983,
"s": 32928,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 33010,
"s": 32983,
"text": "2\n4\n6\n8\n10\n12\n14\n16\n18\n20\n"
},
{
"code": null,
"e": 33278,
"s": 33010,
"text": "It is used to stop the execution of the script. It accepts an integer as an argument which is the exit status code for AWK process. If no argument is supplied, exit returns status zero. Here is an example that stops the execution when the sum becomes greater than 50."
},
{
"code": null,
"e": 33413,
"s": 33278,
"text": "[jerry]$ awk 'BEGIN {\n sum = 0; for (i = 0; i < 20; ++i) {\n sum += i; if (sum > 50) exit(10); else print \"Sum =\", sum \n } \n}'"
},
{
"code": null,
"e": 33468,
"s": 33413,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 33555,
"s": 33468,
"text": "Sum = 0\nSum = 1\nSum = 3\nSum = 6\nSum = 10\nSum = 15\nSum = 21\nSum = 28\nSum = 36\nSum = 45\n"
},
{
"code": null,
"e": 33601,
"s": 33555,
"text": "Let us check the return status of the script."
},
{
"code": null,
"e": 33618,
"s": 33601,
"text": "[jerry]$ echo $?"
},
{
"code": null,
"e": 33673,
"s": 33618,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 33677,
"s": 33673,
"text": "10\n"
},
{
"code": null,
"e": 33891,
"s": 33677,
"text": "AWK has a number of functions built into it that are always available to the programmer. This chapter describes Arithmetic, String, Time, Bit manipulation, and other miscellaneous functions with suitable examples."
},
{
"code": null,
"e": 33944,
"s": 33891,
"text": "AWK has the following built-in arithmetic functions."
},
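{
"code": null,
"e": null,
"s": null,
"text": "For instance, a few of them in action − [jerry]$ awk 'BEGIN { print sqrt(16); print int(7.9); print exp(0) }' − which prints 4, 7, and 1 on separate lines."
},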
{
"code": null,
"e": 33993,
"s": 33944,
"text": "AWK has the following built-in String functions."
},
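{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { str = \"Hello, World\"; print length(str); print toupper(str); print substr(str, 1, 5); print index(str, \"World\") }' − which prints 12, then HELLO, WORLD, then Hello, then 8."
},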
{
"code": null,
"e": 34040,
"s": 33993,
"text": "AWK has the following built-in time functions."
},
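{
"code": null,
"e": null,
"s": null,
"text": "For instance, systime and strftime can be combined to print a formatted timestamp (the exact output depends on when and where it is run) − [jerry]$ awk 'BEGIN { print strftime(\"%Y-%m-%d %H:%M:%S\", systime()) }'"
},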
{
"code": null,
"e": 34099,
"s": 34040,
"text": "AWK has the following built-in bit manipulation functions."
},
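{
"code": null,
"e": null,
"s": null,
"text": "For instance − [jerry]$ awk 'BEGIN { print and(6, 3); print or(6, 3); print xor(6, 3); print lshift(1, 4) }' − which prints 2, 7, 5, and 16."
},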
{
"code": null,
"e": 34146,
"s": 34099,
"text": "AWK has the following miscellaneous functions."
},
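{
"code": null,
"e": null,
"s": null,
"text": "For instance, getline and close are often used together to capture the output of an external command (the date command here is just an assumed example) − [jerry]$ awk 'BEGIN { \"date\" | getline today; close(\"date\"); print \"Today is\", today }'"
},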
{
"code": null,
"e": 34374,
"s": 34146,
"text": "Functions are basic building blocks of a program. AWK allows us to define our own functions. A large program can be divided into functions and each function can be written/tested independently. It provides re-usability of code."
},
{
"code": null,
"e": 34437,
"s": 34374,
"text": "Given below is the general format of a user-defined function −"
},
{
"code": null,
"e": 34510,
"s": 34437,
"text": "function function_name(argument1, argument2, ...) { \n function body\n}\n"
},
{
"code": null,
"e": 34785,
"s": 34510,
"text": "In this syntax, the function_name is the name of the user-defined function. Function name should begin with a letter and the rest of the characters can be any combination of numbers, alphabetic characters, or underscore. AWK's reserve words cannot be used as function names."
},
{
"code": null,
"e": 34940,
"s": 34785,
"text": "Functions can accept multiple arguments separated by comma. Arguments are not mandatory. You can also create a user-defined function without any argument."
},
{
"code": null,
"e": 34994,
"s": 34940,
"text": "function body consists of one or more AWK statements."
},
{
"code": null,
"e": 35165,
"s": 34994,
"text": "Let us write two functions that calculate the minimum and the maximum number and call these functions from another function called main. The functions.awk file contains −"
},
{
"code": null,
"e": 35652,
"s": 35165,
"text": "# Returns minimum number\nfunction find_min(num1, num2){\n if (num1 < num2)\n return num1\n return num2\n}\n# Returns maximum number\nfunction find_max(num1, num2){\n if (num1 > num2)\n return num1\n return num2\n}\n# Main function\nfunction main(num1, num2){\n # Find minimum number\n result = find_min(10, 20)\n print \"Minimum =\", result\n \n # Find maximum number\n result = find_max(10, 20)\n print \"Maximum =\", result\n}\n# Script execution starts here\nBEGIN {\n main(10, 20)\n}"
},
{
"code": null,
"e": 35707,
"s": 35652,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 35734,
"s": 35707,
"text": "Minimum = 10\nMaximum = 20\n"
},
{
"code": null,
"e": 36067,
"s": 35734,
"text": "So far, we displayed data on standard output stream. We can also redirect data to a file. A redirection appears after the print or printf statement. Redirections in AWK are written just like redirection in shell commands, except that they are written inside the AWK program. This chapter explains redirection with suitable examples."
},
{
"code": null,
"e": 36111,
"s": 36067,
"text": "The syntax of the redirection operator is −"
},
{
"code": null,
"e": 36137,
"s": 36111,
"text": "print DATA > output-file\n"
},
{
"code": null,
"e": 36514,
"s": 36137,
"text": "It writes the data into the output-file. If the output-file does not exist, then it creates one. When this type of redirection is used, the output-file is erased before the first output is written to it. Subsequent write operations to the same output-file do not erase the output-file, but append to it. For instance, the following example writes Hello, World !!! to the file."
},
{
"code": null,
"e": 36556,
"s": 36514,
"text": "Let us create a file with some text data."
},
{
"code": null,
"e": 36630,
"s": 36556,
"text": "[jerry]$ echo \"Old data\" > /tmp/message.txt\n[jerry]$ cat /tmp/message.txt"
},
{
"code": null,
"e": 36685,
"s": 36630,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 36695,
"s": 36685,
"text": "Old data\n"
},
{
"code": null,
"e": 36771,
"s": 36695,
"text": "Now let us redirect some contents into it using AWK's redirection operator."
},
{
"code": null,
"e": 36872,
"s": 36771,
"text": "[jerry]$ awk 'BEGIN { print \"Hello, World !!!\" > \"/tmp/message.txt\" }'\n[jerry]$ cat /tmp/message.txt"
},
{
"code": null,
"e": 36927,
"s": 36872,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 36945,
"s": 36927,
"text": "Hello, World !!!\n"
},
{
"code": null,
"e": 36991,
"s": 36945,
"text": "The syntax of append operator is as follows −"
},
{
"code": null,
"e": 37018,
"s": 36991,
"text": "print DATA >> output-file\n"
},
{
"code": null,
"e": 37275,
"s": 37018,
"text": "It appends the data into the output-file. If the output-file does not exist, then it creates one. When this type of redirection is used, new contents are appended at the end of file. For instance, the following example appends Hello, World !!! to the file."
},
{
"code": null,
"e": 37317,
"s": 37275,
"text": "Let us create a file with some text data."
},
{
"code": null,
"e": 37392,
"s": 37317,
"text": "[jerry]$ echo \"Old data\" > /tmp/message.txt \n[jerry]$ cat /tmp/message.txt"
},
{
"code": null,
"e": 37447,
"s": 37392,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 37457,
"s": 37447,
"text": "Old data\n"
},
{
"code": null,
"e": 37524,
"s": 37457,
"text": "Now let us append some contents to it using AWK's append operator."
},
{
"code": null,
"e": 37626,
"s": 37524,
"text": "[jerry]$ awk 'BEGIN { print \"Hello, World !!!\" >> \"/tmp/message.txt\" }'\n[jerry]$ cat /tmp/message.txt"
},
{
"code": null,
"e": 37681,
"s": 37626,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 37708,
"s": 37681,
"text": "Old data\nHello, World !!!\n"
},
{
"code": null,
"e": 38024,
"s": 37708,
"text": "It is possible to send output to another program through a pipe instead of using a file. This redirection opens a pipe to command, and writes the values of items through this pipe to another process to execute the command. The redirection argument command is actually an AWK expression. Here is the syntax of pipe −"
},
{
"code": null,
"e": 38047,
"s": 38024,
"text": "print items | command\n"
},
{
"code": null,
"e": 38112,
"s": 38047,
"text": "Let us use tr command to convert lowercase letters to uppercase."
},
{
"code": null,
"e": 38181,
"s": 38112,
"text": "[jerry]$ awk 'BEGIN { print \"hello, world !!!\" | \"tr [a-z] [A-Z]\" }'"
},
{
"code": null,
"e": 38236,
"s": 38181,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 38254,
"s": 38236,
"text": "HELLO, WORLD !!!\n"
},
{
"code": null,
"e": 38466,
"s": 38254,
"text": "AWK can communicate to an external process using |&, which is two-way communication. For instance, the following example uses tr command to convert lowercase letters to uppercase. Our command.awk file contains −"
},
{
"code": null,
"e": 38612,
"s": 38466,
"text": "BEGIN {\n cmd = \"tr [a-z] [A-Z]\"\n print \"hello, world !!!\" |& cmd\n close(cmd, \"to\")\n \n cmd |& getline out\n print out;\n close(cmd);\n}"
},
{
"code": null,
"e": 38667,
"s": 38612,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 38685,
"s": 38667,
"text": "HELLO, WORLD !!!\n"
},
{
"code": null,
"e": 38736,
"s": 38685,
"text": "Does the script look cryptic? Let us demystify it."
},
{
"code": null,
"e": 38854,
"s": 38736,
"text": "The first statement, cmd = \"tr [a-z] [A-Z]\", is the command to which we establish the two-way communication from AWK."
},
{
"code": null,
"e": 38972,
"s": 38854,
"text": "The first statement, cmd = \"tr [a-z] [A-Z]\", is the command to which we establish the two-way communication from AWK."
},
{
"code": null,
"e": 39091,
"s": 38972,
"text": "The next statement, i.e., the print command, provides input to the tr command. Here |& indicates two-way communication."
},
{
"code": null,
"e": 39210,
"s": 39091,
"text": "The next statement, i.e., the print command, provides input to the tr command. Here |& indicates two-way communication."
},
{
"code": null,
"e": 39308,
"s": 39210,
"text": "The third statement, i.e., close(cmd, \"to\"), closes the to end of the pipe after completing its execution."
},
{
"code": null,
"e": 39406,
"s": 39308,
"text": "The third statement, i.e., close(cmd, \"to\"), closes the to end of the pipe after completing its execution."
},
{
"code": null,
"e": 39514,
"s": 39406,
"text": "The next statement cmd |& getline out stores the output into out variable with the aid of getline function."
},
{
"code": null,
"e": 39622,
"s": 39514,
"text": "The next statement cmd |& getline out stores the output into out variable with the aid of getline function."
},
{
"code": null,
"e": 39716,
"s": 39622,
"text": "The next print statement prints the output and finally the close function closes the command."
},
{
"code": null,
"e": 39810,
"s": 39716,
"text": "The next print statement prints the output and finally the close function closes the command."
},
{
"code": null,
"e": 40109,
"s": 39810,
"text": "So far we have used AWK's print and printf functions to display data on standard output. But printf is much more powerful than what we have seen before. This function is borrowed from the C language and is very helpful while producing formatted output. Below is the syntax of the printf statement −"
},
{
"code": null,
"e": 40132,
"s": 40109,
"text": "printf fmt, expr-list\n"
},
{
"code": null,
"e": 40277,
"s": 40132,
"text": "In the above syntax fmt is a string of format specifications and constants. expr-list is a list of arguments corresponding to format specifiers."
},
{
"code": null,
"e": 40406,
"s": 40277,
"text": "Similar to any string, format can contain embedded escape sequences. Discussed below are the escape sequences supported by AWK −"
},
{
"code": null,
"e": 40495,
"s": 40406,
"text": "The following example prints Hello and World in separate lines using newline character −"
},
{
"code": null,
"e": 40503,
"s": 40495,
"text": "Example"
},
{
"code": null,
"e": 40552,
"s": 40503,
"text": "[jerry]$ awk 'BEGIN { printf \"Hello\\nWorld\\n\" }'"
},
{
"code": null,
"e": 40607,
"s": 40552,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 40614,
"s": 40607,
"text": "Output"
},
{
"code": null,
"e": 40627,
"s": 40614,
"text": "Hello\nWorld\n"
},
{
"code": null,
"e": 40698,
"s": 40627,
"text": "The following example uses a horizontal tab to display different fields −"
},
{
"code": null,
"e": 40706,
"s": 40698,
"text": "Example"
},
{
"code": null,
"e": 40766,
"s": 40706,
"text": "[jerry]$ awk 'BEGIN { printf \"Sr No\\tName\\tSub\\tMarks\\n\" }'"
},
{
"code": null,
"e": 40826,
"s": 40766,
"text": "On executing the above code, you get the following result −"
},
{
"code": null,
"e": 40833,
"s": 40826,
"text": "Output"
},
{
"code": null,
"e": 40860,
"s": 40833,
"text": "Sr No Name Sub Marks\n"
},
{
"code": null,
"e": 40919,
"s": 40860,
"text": "The following example uses a vertical tab after each field −"
},
{
"code": null,
"e": 40927,
"s": 40919,
"text": "Example"
},
{
"code": null,
"e": 40987,
"s": 40927,
"text": "[jerry]$ awk 'BEGIN { printf \"Sr No\\vName\\vSub\\vMarks\\n\" }'"
},
{
"code": null,
"e": 41042,
"s": 40987,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 41049,
"s": 41042,
"text": "Output"
},
{
"code": null,
"e": 41089,
"s": 41049,
"text": "Sr No\n Name\n Sub\n Marks\n"
},
{
"code": null,
"e": 41416,
"s": 41089,
"text": "The following example prints a backspace after every field except the last one. It erases the last number from the first three fields. For instance, Field 1 is displayed as Field, because the last character is erased with backspace. However, the last field Field 4 is displayed as it is, as we did not have a \\b after Field 4."
},
{
"code": null,
"e": 41424,
"s": 41416,
"text": "Example"
},
{
"code": null,
"e": 41495,
"s": 41424,
"text": "[jerry]$ awk 'BEGIN { printf \"Field 1\\bField 2\\bField 3\\bField 4\\n\" }'"
},
{
"code": null,
"e": 41550,
"s": 41495,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 41557,
"s": 41550,
"text": "Output"
},
{
"code": null,
"e": 41584,
"s": 41557,
"text": "Field Field Field Field 4\n"
},
{
"code": null,
"e": 41855,
"s": 41584,
"text": "In the following example, after printing every field, we do a Carriage Return and print the next value on top of the current printed value. It means, in the final output, you can see only Field 4, as it was the last thing to be printed on top of all the previous fields."
},
{
"code": null,
"e": 41863,
"s": 41855,
"text": "Example"
},
{
"code": null,
"e": 41934,
"s": 41863,
"text": "[jerry]$ awk 'BEGIN { printf \"Field 1\\rField 2\\rField 3\\rField 4\\n\" }'"
},
{
"code": null,
"e": 41989,
"s": 41934,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 41996,
"s": 41989,
"text": "Output"
},
{
"code": null,
"e": 42005,
"s": 41996,
"text": "Field 4\n"
},
{
"code": null,
"e": 42069,
"s": 42005,
"text": "The following example uses form feed after printing each field."
},
{
"code": null,
"e": 42077,
"s": 42069,
"text": "Example"
},
{
"code": null,
"e": 42137,
"s": 42077,
"text": "[jerry]$ awk 'BEGIN { printf \"Sr No\\fName\\fSub\\fMarks\\n\" }'"
},
{
"code": null,
"e": 42192,
"s": 42137,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 42199,
"s": 42192,
"text": "Output"
},
{
"code": null,
"e": 42239,
"s": 42199,
"text": "Sr No\n Name\n Sub\n Marks\n"
},
{
"code": null,
"e": 42386,
"s": 42239,
"text": "As in C-language, AWK also has format specifiers. The AWK version of the printf statement accepts the following conversion specification formats −"
},
{
"code": null,
"e": 42605,
"s": 42386,
"text": "It prints a single character. If the argument used for %c is numeric, it is treated as a character and printed. Otherwise, the argument is assumed to be a string, and only the first character of that string is printed."
},
{
"code": null,
"e": 42613,
"s": 42605,
"text": "Example"
},
{
"code": null,
"e": 42683,
"s": 42613,
"text": "[jerry]$ awk 'BEGIN { printf \"ASCII value 65 = character %c\\n\", 65 }'"
},
{
"code": null,
"e": 42690,
"s": 42683,
"text": "Output"
},
{
"code": null,
"e": 42745,
"s": 42690,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 42775,
"s": 42745,
"text": "ASCII value 65 = character A\n"
},
{
"code": null,
"e": 42828,
"s": 42775,
"text": "It prints only the integer part of a decimal number."
},
{
"code": null,
"e": 42836,
"s": 42828,
"text": "Example"
},
{
"code": null,
"e": 42895,
"s": 42836,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %d\\n\", 80.66 }'"
},
{
"code": null,
"e": 42950,
"s": 42895,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 42957,
"s": 42950,
"text": "Output"
},
{
"code": null,
"e": 42974,
"s": 42957,
"text": "Percentags = 80\n"
},
{
"code": null,
"e": 43040,
"s": 42974,
"text": "It prints a floating point number of the form [-]d.dddddde[+-]dd."
},
{
"code": null,
"e": 43048,
"s": 43040,
"text": "Example"
},
{
"code": null,
"e": 43107,
"s": 43048,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %e\\n\", 80.66 }'"
},
{
"code": null,
"e": 43162,
"s": 43107,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 43169,
"s": 43162,
"text": "Output"
},
{
"code": null,
"e": 43196,
"s": 43169,
"text": "Percentags = 8.066000e+01\n"
},
{
"code": null,
"e": 43231,
"s": 43196,
"text": "The %E format uses E instead of e."
},
{
"code": null,
"e": 43239,
"s": 43231,
"text": "Example"
},
{
"code": null,
"e": 43298,
"s": 43239,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %E\\n\", 80.66 }'"
},
{
"code": null,
"e": 43353,
"s": 43298,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 43360,
"s": 43353,
"text": "Output"
},
{
"code": null,
"e": 43387,
"s": 43360,
"text": "Percentags = 8.066000E+01\n"
},
{
"code": null,
"e": 43448,
"s": 43387,
"text": "It prints a floating point number of the form [-]ddd.dddddd."
},
{
"code": null,
"e": 43456,
"s": 43448,
"text": "Example"
},
{
"code": null,
"e": 43515,
"s": 43456,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %f\\n\", 80.66 }'"
},
{
"code": null,
"e": 43570,
"s": 43515,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 43577,
"s": 43570,
"text": "Output"
},
{
"code": null,
"e": 43601,
"s": 43577,
"text": "Percentags = 80.660000\n"
},
{
"code": null,
"e": 43688,
"s": 43601,
"text": "Uses %e or %f conversion, whichever is shorter, with non-significant zeros suppressed."
},
{
"code": null,
"e": 43696,
"s": 43688,
"text": "Example"
},
{
"code": null,
"e": 43755,
"s": 43696,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %g\\n\", 80.66 }'"
},
{
"code": null,
"e": 43762,
"s": 43755,
"text": "Output"
},
{
"code": null,
"e": 43817,
"s": 43762,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 43837,
"s": 43817,
"text": "Percentags = 80.66\n"
},
{
"code": null,
"e": 43874,
"s": 43837,
"text": "The %G format uses %E instead of %e."
},
{
"code": null,
"e": 43882,
"s": 43874,
"text": "Example"
},
{
"code": null,
"e": 43941,
"s": 43882,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %G\\n\", 80.66 }'"
},
{
"code": null,
"e": 43996,
"s": 43941,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 44003,
"s": 43996,
"text": "Output"
},
{
"code": null,
"e": 44023,
"s": 44003,
"text": "Percentags = 80.66\n"
},
{
"code": null,
"e": 44059,
"s": 44023,
"text": "It prints an unsigned octal number."
},
{
"code": null,
"e": 44067,
"s": 44059,
"text": "Example"
},
{
"code": null,
"e": 44153,
"s": 44067,
"text": "[jerry]$ awk 'BEGIN { printf \"Octal representation of decimal number 10 = %o\\n\", 10}'"
},
{
"code": null,
"e": 44208,
"s": 44153,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 44215,
"s": 44208,
"text": "Output"
},
{
"code": null,
"e": 44263,
"s": 44215,
"text": "Octal representation of decimal number 10 = 12\n"
},
{
"code": null,
"e": 44301,
"s": 44263,
"text": "It prints an unsigned decimal number."
},
{
"code": null,
"e": 44309,
"s": 44301,
"text": "Example"
},
{
"code": null,
"e": 44366,
"s": 44309,
"text": "[jerry]$ awk 'BEGIN { printf \"Unsigned 10 = %u\\n\", 10 }'"
},
{
"code": null,
"e": 44421,
"s": 44366,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 44428,
"s": 44421,
"text": "Output"
},
{
"code": null,
"e": 44446,
"s": 44428,
"text": "Unsigned 10 = 10\n"
},
{
"code": null,
"e": 44476,
"s": 44446,
"text": "It prints a character string."
},
{
"code": null,
"e": 44484,
"s": 44476,
"text": "Example"
},
{
"code": null,
"e": 44549,
"s": 44484,
"text": "[jerry]$ awk 'BEGIN { printf \"Name = %s\\n\", \"Sherlock Holmes\" }'"
},
{
"code": null,
"e": 44604,
"s": 44549,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 44611,
"s": 44604,
"text": "Output"
},
{
"code": null,
"e": 44635,
"s": 44611,
"text": "Name = Sherlock Holmes\n"
},
{
"code": null,
"e": 44736,
"s": 44635,
"text": "It prints an unsigned hexadecimal number. The %X format uses uppercase letters instead of lowercase."
},
{
"code": null,
"e": 44744,
"s": 44736,
"text": "Example"
},
{
"code": null,
"e": 44841,
"s": 44744,
"text": "[jerry]$ awk 'BEGIN { \n printf \"Hexadecimal representation of decimal number 15 = %x\\n\", 15\n}'"
},
{
"code": null,
"e": 44896,
"s": 44841,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 44903,
"s": 44896,
"text": "Output"
},
{
"code": null,
"e": 44956,
"s": 44903,
"text": "Hexadecimal representation of decimal number 15 = f\n"
},
{
"code": null,
"e": 44996,
"s": 44956,
"text": "Now let us use %X and observe the result −"
},
{
"code": null,
"e": 45004,
"s": 44996,
"text": "Example"
},
{
"code": null,
"e": 45101,
"s": 45004,
"text": "[jerry]$ awk 'BEGIN { \n printf \"Hexadecimal representation of decimal number 15 = %X\\n\", 15\n}'"
},
{
"code": null,
"e": 45156,
"s": 45101,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 45163,
"s": 45156,
"text": "Output"
},
{
"code": null,
"e": 45216,
"s": 45163,
"text": "Hexadecimal representation of decimal number 15 = F\n"
},
{
"code": null,
"e": 45277,
"s": 45216,
"text": "It prints a single % character and no argument is converted."
},
{
"code": null,
"e": 45285,
"s": 45277,
"text": "Example"
},
{
"code": null,
"e": 45346,
"s": 45285,
"text": "[jerry]$ awk 'BEGIN { printf \"Percentags = %d%%\\n\", 80.66 }'"
},
{
"code": null,
"e": 45401,
"s": 45346,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 45408,
"s": 45401,
"text": "Output"
},
{
"code": null,
"e": 45426,
"s": 45408,
"text": "Percentags = 80%\n"
},
{
"code": null,
"e": 45476,
"s": 45426,
"text": "With % we can use following optional parameters −"
},
{
"code": null,
"e": 45605,
"s": 45476,
"text": "The field is padded to the width. By default, the field is padded with spaces but when 0 flag is used, it is padded with zeroes."
},
{
"code": null,
"e": 45613,
"s": 45605,
"text": "Example"
},
{
"code": null,
"e": 45713,
"s": 45613,
"text": "[jerry]$ awk 'BEGIN { \n num1 = 10; num2 = 20; printf \"Num1 = %10d\\nNum2 = %10d\\n\", num1, num2 \n}'"
},
{
"code": null,
"e": 45768,
"s": 45713,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 45775,
"s": 45768,
"text": "Output"
},
{
"code": null,
"e": 45812,
"s": 45775,
"text": "Num1 = 10\nNum2 = 20\n"
},
{
"code": null,
"e": 46062,
"s": 45812,
"text": "A leading zero acts as a flag, which indicates that the output should be padded with zeroes instead of spaces. Please note that this flag only has an effect when the field is wider than the value to be printed. The following example describes this −"
},
{
"code": null,
"e": 46070,
"s": 46062,
"text": "Example"
},
{
"code": null,
"e": 46171,
"s": 46070,
"text": "[jerry]$ awk 'BEGIN { \n num1 = -10; num2 = 20; printf \"Num1 = %05d\\nNum2 = %05d\\n\", num1, num2 \n}'"
},
{
"code": null,
"e": 46226,
"s": 46171,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 46233,
"s": 46226,
"text": "Output"
},
{
"code": null,
"e": 46260,
"s": 46233,
"text": "Num1 = -0010\nNum2 = 00020\n"
},
{
"code": null,
"e": 46534,
"s": 46260,
"text": "The expression should be left-justified within its field. When the input string is shorter than the number of characters specified, and you want it to be left-justified, i.e., by adding spaces to the right, use a minus sign (-) immediately after the % and before the number."
},
{
"code": null,
"e": 46655,
"s": 46534,
"text": "In the following example, output of the AWK command is piped to the cat command to display the END OF LINE($) character."
},
{
"code": null,
"e": 46663,
"s": 46655,
"text": "Example"
},
{
"code": null,
"e": 46736,
"s": 46663,
"text": "[jerry]$ awk 'BEGIN { num = 10; printf \"Num = %-5d\\n\", num }' | cat -vte"
},
{
"code": null,
"e": 46791,
"s": 46736,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 46798,
"s": 46791,
"text": "Output"
},
{
"code": null,
"e": 46812,
"s": 46798,
"text": "Num = 10 $\n"
},
{
"code": null,
"e": 46890,
"s": 46812,
"text": "It always prefixes numeric values with a sign, even if the value is positive."
},
{
"code": null,
"e": 46898,
"s": 46890,
"text": "Example"
},
{
"code": null,
"e": 46997,
"s": 46898,
"text": "[jerry]$ awk 'BEGIN { \n num1 = -10; num2 = 20; printf \"Num1 = %+d\\nNum2 = %+d\\n\", num1, num2 \n}'"
},
{
"code": null,
"e": 47052,
"s": 46997,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 47059,
"s": 47052,
"text": "Output"
},
{
"code": null,
"e": 47082,
"s": 47059,
"text": "Num1 = -10\nNum2 = +20\n"
},
{
"code": null,
"e": 47380,
"s": 47082,
"text": "For %o, it supplies a leading zero. For %x and %X, it supplies a leading 0x or 0X respectively, only if the result is non-zero. For %e, %E, %f, and %F, the result always contains a decimal point. For %g and %G, trailing zeros are not removed from the result. The following example describes this −"
},
{
"code": null,
"e": 47388,
"s": 47380,
"text": "Example"
},
{
"code": null,
"e": 47496,
"s": 47388,
"text": "[jerry]$ awk 'BEGIN { \n printf \"Octal representation = %#o\\nHexadecimal representation = %#X\\n\", 10, 10\n}'"
},
{
"code": null,
"e": 47551,
"s": 47496,
"text": "On executing this code, you get the following result −"
},
{
"code": null,
"e": 47558,
"s": 47551,
"text": "Output"
},
{
"code": null,
"e": 47619,
"s": 47558,
"text": "Octal representation = 012\nHexadecimal representation = 0XA\n"
}
] |
Infinite Steps CartPole Problem With Variable Reward | by Suraj Regmi | Towards Data Science | In the last blog post, we wrote our first reinforcement learning application — CartPole problem. We used Deep Q-Network to train the algorithm. As we can see in the blog, the fixed reward of +1 was used for all the stable states and when the CartPole loses its balance, a reward of 0 was given. We saw at the end: when the CartPole approaches 200 steps, it tends to lose balance. We ended the blog with a remark: the maximum number of steps (which we defined as 200) and the fixed reward may have led to such behavior. Today, let’s not limit the number of steps and modify the reward and see how the CartPole behaves.
The CartPole problem is considered to be solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials. This assumes the fixed reward of 1.0. Given this definition, it makes sense to keep a fixed reward of 1.0 for every balanced state and limit the maximum number of steps to 200. It is delightful to know that the problem was solved in the previous blog.
The CartPole problem has the following conditions for episode termination:
Pole angle is more than 12 degrees. Cart position is more than 2.4 — center of the cart reaches the edge of the display.
Pole angle is more than 12 degrees.
Cart position is more than 2.4 — center of the cart reaches the edge of the display.
Our goal here is to remove the number of steps limitation and give a variable reward to each state.
If x and θ represent the cart position and pole angle respectively, we define the reward as:
reward = (1 - (x ** 2) / 11.52 - (θ ** 2) / 288)
Here, both the cart position and pole angle components are normalized to [0, 1] interval to give equal weightage to them. Let’s see the screenshot of the 2D view of the 3D graph.
We see in the graph that when the CartPole is perfectly balanced (i.e. x = 0 and θ = 0), the maximum reward is achieved (i.e. 1). With increase in the absolute values of x and θ, the reward decreases and reaches 0 when |x| = 2.4 and |θ| = 12.
Let’s make our custom class, CustomCartPoleEnv, inherit from the CartPole environment gym class (CartPoleEnv) and override the step method. In the step method, we write the variable reward instead of the fixed reward.
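The original post embedded this as a gist; the following is only a minimal sketch of such a class, assuming the classic gym step API (4-tuple return) and converting the pole angle from radians to degrees before applying the reward formula above.
import numpy as np
from gym.envs.classic_control.cartpole import CartPoleEnv

class CustomCartPoleEnv(CartPoleEnv):
    # Override step() to replace the fixed +1 reward with the variable reward
    def step(self, action):
        state, _, done, info = super().step(action)
        x, _, theta, _ = state
        theta_deg = np.degrees(theta)
        # Both penalty terms are normalized so the reward is 1 at perfect balance
        # and falls to 0 at the extreme values |x| = 2.4 and |theta| = 12 degrees
        reward = 1.0 - (x ** 2) / 11.52 - (theta_deg ** 2) / 288.0
        return state, reward, done, info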
By using the above block of code, the components of TF-Agents are made and the Deep Q-Network is trained. We see that the CartPole is even more balanced and stable over a large number of steps.
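As a rough, hedged sketch of how those TF-Agents components might be created for the custom environment above (module paths and signatures can vary across TF-Agents versions; this is not the author's original code):
import tensorflow as tf
from tf_agents.environments import gym_wrapper, tf_py_environment
from tf_agents.networks import q_network
from tf_agents.agents.dqn import dqn_agent
from tf_agents.utils import common

# Wrap the custom gym environment for TF-Agents
train_py_env = gym_wrapper.GymWrapper(CustomCartPoleEnv())
train_env = tf_py_environment.TFPyEnvironment(train_py_env)

# Q-network and DQN agent
q_net = q_network.QNetwork(train_env.observation_spec(),
                           train_env.action_spec(),
                           fc_layer_params=(100,))
agent = dqn_agent.DqnAgent(train_env.time_step_spec(),
                           train_env.action_spec(),
                           q_network=q_net,
                           optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                           td_errors_loss_fn=common.element_wise_squared_loss,
                           train_step_counter=tf.Variable(0))
agent.initialize()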
Let’s see the video of how our CartPole behaves after using the variable reward.
One episode lasts 35.4 seconds on average. Impressive, isn’t it?
Here, the reward becomes zero only when both of the expressions (pole angle and cart position) reach the extreme values. We can employ a different reward function that returns zero when either of the extreme conditions is reached. I expect such a reward function to do even better. Therefore, readers are encouraged to try such a reward function and comment on how the CartPole behaved. Happy RLing! | [
{
"code": null,
"e": 793,
"s": 171,
"text": "In the last blog post, we wrote our first reinforcement learning application — CartPole problem. We used Deep Q-Network to train the algorithm. As we can see in the blog, the fixed reward of +1 was used for all the stable states and when the CartPole loses its balance, a reward of 0 was given. We saw at the end: when the CartPole approaches 200 steps, it tends to lose balance. We ended the blog with a remark: the maximum number of steps (which we defined as 200) and the fixed reward may have led to such behavior. Today, let’s not limit the number of steps and modify the reward and see how the CartPole behaves."
},
{
"code": null,
"e": 1184,
"s": 793,
"text": "The CartPole problem is considered to be solved when the average reward is greater than or equal to 195.0 over 100 consecutive trials. This is considering the fixed reward of 1.0. Thanks to its definition, it makes sense to keep a fixed reward of 1.0 for every balance state and limit the maximum number of steps to 200. It delights to know that the problem was solved in the previous blog."
},
{
"code": null,
"e": 1259,
"s": 1184,
"text": "The CartPole problem has the following conditions for episode termination:"
},
{
"code": null,
"e": 1379,
"s": 1259,
"text": "Pole angle is more than 12 degrees.Cart position is more than 2.4 — center of the cart reaches the edge of the display."
},
{
"code": null,
"e": 1415,
"s": 1379,
"text": "Pole angle is more than 12 degrees."
},
{
"code": null,
"e": 1500,
"s": 1415,
"text": "Cart position is more than 2.4 — center of the cart reaches the edge of the display."
},
{
"code": null,
"e": 1600,
"s": 1500,
"text": "Our goal here is to remove the number of steps limitation and give a variable reward to each state."
},
{
"code": null,
"e": 1690,
"s": 1600,
"text": "If x and θ represents cart position and pole angle respectively, we define the reward as:"
},
{
"code": null,
"e": 1739,
"s": 1690,
"text": "reward = (1 - (x ** 2) / 11.52 - (θ ** 2) / 288)"
},
{
"code": null,
"e": 1918,
"s": 1739,
"text": "Here, both the cart position and pole angle components are normalized to [0, 1] interval to give equal weightage to them. Let’s see the screenshot of the 2D view of the 3D graph."
},
{
"code": null,
"e": 2161,
"s": 1918,
"text": "We see in the graph that when the CartPole is perfectly balanced (i.e. x = 0 and θ = 0), the maximum reward is achieved (i.e. 1). With increase in the absolute values of x and θ, the reward decreases and reaches 0 when |x| = 2.4 and |θ| = 12."
},
{
"code": null,
"e": 2373,
"s": 2161,
"text": "Let’s inherit the CartPole environment gym class (CartPoleEnv) to our custom class, CustomCartPoleEnv, and overwrite the step method. In the step method, we write the variable reward instead of the fixed reward."
},
{
"code": null,
"e": 2567,
"s": 2373,
"text": "By using the above block of code, the components of TF-Agents are made and the Deep Q-Network is trained. We see that the CartPole is even more balanced and stable over a large number of steps."
},
{
"code": null,
"e": 2648,
"s": 2567,
"text": "Let’s see the video of how our CartPole behaves after using the variable reward."
},
{
"code": null,
"e": 2716,
"s": 2648,
"text": "One episode lasts 35.4 seconds on an average. Impressive, isn’t it?"
}
] |
Page Rank Algorithm and Implementation using Python | The PageRank algorithm is applicable to web pages. The web is a directed graph, and we know that the two components of directed graphs are nodes and connections. The pages are the nodes and the hyperlinks are the connections between two nodes.
We can find out the importance of each page from its PageRank, and it is accurate. The value of the PageRank is a probability, so it will be between 0 and 1.
The PageRank value of an individual node in a graph depends on the PageRank values of all the nodes which connect to it. Since those nodes may in turn be cyclically connected to the node whose ranking we want, we use a converging iterative method for assigning values to the PageRank.
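Concretely, the code below implements the damped power-iteration update r ← (β·M + (1 − β)·E)·r, where M is the column-stochastic link matrix of the 3-page graph, E is the teleportation matrix whose every entry is 1/3, and β = 0.7 is the damping factor; the iteration stops once r no longer changes.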
import numpy as np
import scipy as sc
import pandas as pd
from fractions import Fraction

# Helper to round the rank vector for display
def display_format(my_vector, my_decimal):
   return np.round((my_vector).astype(np.float), decimals=my_decimal)

my_dp = Fraction(1,3)

# Column-stochastic link matrix of the 3-page graph
Mat = np.matrix([[0, 0, 1],
                 [Fraction(1,2), 0, 0],
                 [Fraction(1,2), 1, 0]])

# Teleportation matrix: every entry is 1/3
Ex = np.zeros((3,3))
Ex[:] = my_dp

# Damping factor and the combined iteration matrix
beta = 0.7
Al = beta * Mat + ((1-beta) * Ex)

# Start from the uniform rank vector (1/3, 1/3, 1/3)
r = np.matrix([my_dp, my_dp, my_dp])
r = np.transpose(r)
previous_r = r

# Power iteration: stop once the rank vector no longer changes
for i in range(1,100):
   r = Al * r
   print(display_format(r,3))
   if (previous_r==r).all():
      break
   previous_r = r

print("Final:\n", display_format(r,3))
print("sum", np.sum(r))
[[0.333]
[0.217]
[0.45 ]]
[[0.415]
[0.217]
[0.368]]
[[0.358]
[0.245]
[0.397]]
[[0.378]
[0.225]
[0.397]]
[[0.378]
[0.232]
[0.39 ]]
[[0.373]
[0.232]
[0.395]]
[[0.376]
[0.231]
[0.393]]
[[0.375]
[0.232]
[0.393]]
[[0.375]
[0.231]
[0.394]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
[[0.375]
[0.231]
[0.393]]
Final:
[[0.375]
[0.231]
[0.393]]
sum 0.9999999999999951 | [
{
"code": null,
"e": 1311,
"s": 1062,
"text": "The PageRank algorithm is applicable to web pages. The web is a directed graph, and we know that the two components of directed graphs are nodes and connections. The pages are the nodes and the hyperlinks are the connections between two nodes."
},
{
"code": null,
"e": 1461,
"s": 1311,
"text": "We can find out the importance of each page from its PageRank, and it is accurate. The value of the PageRank is a probability, so it will be between 0 and 1."
},
{
"code": null,
"e": 1723,
"s": 1461,
"text": "The PageRank value of an individual node in a graph depends on the PageRank values of all the nodes which connect to it. Since those nodes may in turn be cyclically connected to the node whose ranking we want, we use a converging iterative method for assigning values to the PageRank."
},
{
"code": null,
"e": 2432,
"s": 1723,
"text": "import numpy as np\nimport scipy as sc\nimport pandas as pd\nfrom fractions import Fraction\n\ndef display_format(my_vector, my_decimal):\n    return np.round((my_vector).astype(np.float), decimals=my_decimal)\n\nmy_dp = Fraction(1,3)\nMat = np.matrix([[0,0,1],\n    [Fraction(1,2),0,0],\n    [Fraction(1,2),1,0]])\nEx = np.zeros((3,3))\nEx[:] = my_dp\nbeta = 0.7\nAl = beta * Mat + ((1-beta) * Ex)\nr = np.matrix([my_dp, my_dp, my_dp])\nr = np.transpose(r)\nprevious_r = r\nfor i in range(1,100):\n    r = Al * r\n    print(display_format(r,3))\n    if (previous_r==r).all():\n        break\n    previous_r = r\nprint(\"Final:\\n\", display_format(r,3))\nprint(\"sum\", np.sum(r))"
},
{
"code": null,
"e": 4204,
"s": 2432,
"text": "[[0.333]\n[0.217]\n[0.45 ]]\n[[0.415]\n[0.217]\n[0.368]]\n[[0.358]\n[0.245]\n[0.397]]\n[[0.378]\n[0.225]\n[0.397]]\n[[0.378]\n[0.232]\n[0.39 ]]\n[[0.373]\n[0.232]\n[0.395]]\n[[0.376]\n[0.231]\n[0.393]]\n[[0.375]\n[0.232]\n[0.393]]\n[[0.375]\n[0.231]\n[0.394]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\n[[0.375]\n[0.231]\n[0.393]]\nFinal:\n[[0.375]\n[0.231]\n[0.393]]\nsum 0.9999999999999951"
}
] |
Deep Learning for image classification w/ implementation in PyTorch | by Amine Hadj-Youcef, PhD. | Towards Data Science | This article will explain the Convolutional Neural Network (CNN) with an illustration of image classification. It provides a simple implementation of the CNN algorithm using the framework PyTorch on Python. There are many free courses that can be found on the internet. Personally, I suggest the course of Andrej Karpathy (@karpathy) at Stanford. You will learn a lot; it is a step-by-step course. In addition, it provides many practical strategies to implement the CNN architecture.
cs231n.github.io
Before going deep into Convolutional Neural Network, it is worth understanding their concept. CNN falls in the category of the supervised algorithms. The algorithm learns from training data, e,g, a set of images in the input and their associated labels at the output.
It consists of feeding the convolutional neural network with images of the training set, x, and their associated labels (targets), y, in order to learn the network’s function, y=f(x). After learning the parameters of the network’s function (namely weight and bias), we test the network with unseen images in order to predict their labels.
The architecture of a Convolutional Neural Network (CNN or ConvNet)
The CNN architecture we used in this article is proposed in this paper.
The network is implemented as a class called CNN. It contains two main methods. The first method (__init__) defines layers components of the network. In the second method (forward) we wire the network and put every component in the desired order (as shown in the picture).
The Python code below is straightforward. The network is defined using the neural network module of Torch. Notice that we have already chosen the hyper-parameters of the network, such as Padding (P), Stride (S) and Kernel_size (F), as well as the number of filters at each layer.
The input image has four dimensions, (batch_size, num_channel, height, width). The algorithm outputs an array with ten values, corresponding to the score (or amount of energy) of the predicted labels of the image. Therefore, the maximum score is the predicted label (or class) to retain for the tested image.
In the following bullet we will explain the role of each layer of the algorithm:
Conv layer: This is the main layer of the algorithm. It consists of extracting the key features in the input image (sharp edge, smoothness, rounded shape, ...). This is done through a set of 2-dimensional convolutions of the image in the input with one or many filters. Note that the convolution is performed simultaneously for each channel of the input image, e.g. a color image has C=3 channels, RGB: Red, Green, and Blue. The filters are set to have odd size for practical purposes, CxFxF, e.g., 3x3x3, 3x5x5. The output of this operation is one scalar value, an artificial neuron. An illustrative animation for the convolution layer is given in http://cs231n.github.io/convolutional-networks/#conv.
Furthermore, the Conv layer is applied repeatedly to extract fine features that characterize the input image. The outputs of the Conv layer are called features map (or activation map), where each spatial position (or pixel) represents an artificial neuron.
ReLU (Rectified Linear Units): It performs hard thresholding of negative values to zero, and leaves positive values untouched, i.e., ReLU(x)=max(0, x). This layer preserves the dynamic range of the feature map.
Maxpooling layer: It performs spatial down-sampling of the feature map and retains only the most relevant information. See the picture below for a visual illustration of this operation. From a practical point of view, a pooling of size 2x2 with a stride of 2 gives good results on most applications. Having said that, other types of pooling exist, e.g., average pooling, median pooling, sum pooling, ...
For this article, I used the neural network framework PyTorch to implement the CNN architecture detailed above.
The full code is available in my GitHub repository:
github.com
The code is quite simple to understand, hopefully, since it mentions all the layers we discussed earlier in an intuitive way.
Note that all the numbers mentioned in the input of the methods are parameters. They define the CNN architecture: kernel_size, stride, padding, input/output of each Conv layer. The code below defines a class called CNN where we define the CNN architecture in order.
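The class itself was embedded as a gist in the original post; below is a hedged reconstruction inferred from the architecture summary that follows (3x3 convolutions with stride 1 and padding 1, 2x2 max-pooling, dropout of 0.30, and 2048 = 128 x 4 x 4 flattened features for a 28x28 MNIST input).
import torch
import torch.nn as nn

class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        # Input: (batch_size, 1, 28, 28) MNIST images
        self.layer1 = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout(p=0.30))
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Dropout(p=0.30))
        self.layer3 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2, stride=2, padding=1),
            nn.Dropout(p=0.30))
        # After the three pooling stages a 28x28 input becomes 128 x 4 x 4 = 2048 features
        self.fc1 = nn.Linear(2048, 625)
        self.layer4 = nn.Sequential(
            self.fc1,
            nn.ReLU(),
            nn.Dropout(p=0.30))
        self.fc2 = nn.Linear(625, 10)

    def forward(self, x):
        out = self.layer1(x)
        out = self.layer2(out)
        out = self.layer3(out)
        out = out.view(out.size(0), -1)   # flatten to (batch_size, 2048)
        out = self.layer4(out)
        return self.fc2(out)              # ten scores, one per digit class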
The output of the above code summarizes the network architecture:
CNN( (layer1): Sequential( (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (layer2): Sequential( (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (layer3): Sequential( (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (fc1): Linear(in_features=2048, out_features=625, bias=True) (layer4): Sequential( (0): Linear(in_features=2048, out_features=625, bias=True) (1): ReLU() (2): Dropout(p=0.30) ) (fc2): Linear(in_features=625, out_features=10, bias=True))
In order to acquire the MNIST images, we use a method of the torchvision library. Just copy-paste this code to download the data. Basically, two datasets are loaded. The training dataset serves as ground truth to compute the network parameters. The testing images are used to evaluate how well the trained network predicts labels for unseen data.
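That snippet was also an embedded gist; a minimal sketch of the usual torchvision pattern (the root path and batch size are illustrative assumptions, not the author's choices) looks like this:
import torch
from torchvision import datasets, transforms

# Download MNIST once and wrap both splits in data loaders
transform = transforms.ToTensor()
train_set = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
test_set = datasets.MNIST(root='./data', train=False, download=True, transform=transform)

train_loader = torch.utils.data.DataLoader(train_set, batch_size=100, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=100, shuffle=False)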
This is an essential stage of a supervised algorithm. By feeding the algorithm with many examples of image and their associated labels, we teach the algorithm to find patterns of each class. This is done by computing the filter’s parameters (weight and bias).
The training of the network is composed of two major steps, forward and backward:
During the forward pass, we feed the network with images of the training set and compute the feature maps till the end of the network; then we compute the loss function to measure how far/close the solution (predicted label) is from the ground truth label.
The backward pass performs the computation of the loss function’s gradient and updates the filters' parameters.
We also need to define a loss function, e.g. Cross-entropy loss function, and an optimization algorithm, such as Gradient descent, SGD, Adam (Adaptive moment estimation)...
What to remember for training a deep learning algorithm:
— Initialize the parameters (weights: w, bias: b)
— Optimize the loss iteratively to learn parameters (w, b)
— Compute the loss function and its gradients
— Update parameters using an optimization algorithm (e.g., Adam)
— Use the learned parameters to predict the label for a given input image
Python implementation of training CNN in Python using PyTorch
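The training gist is likewise not reproduced here; a hedged sketch of such a loop, using the cross-entropy loss and Adam optimizer mentioned above together with the CNN class and train_loader sketched earlier (the learning rate and number of epochs are illustrative assumptions):
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = CNN().to(device)

criterion = nn.CrossEntropyLoss()                          # loss function
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # optimization algorithm

num_epochs = 5
for epoch in range(num_epochs):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)

        # Forward pass: compute predictions and the loss
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass: compute gradients and update the parameters
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()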
Plotting these curves helps monitor and understand the convergence of the algorithm. The loss plot is decreasing during the training, which is what we want since the goal of the optimization algorithm (Adam) is to minimize the loss function. On the right, the plot shows the evolution of the classification accuracy during the training. The more we train the algorithm, the better the classification accuracy. Notice the fluctuation of the accuracy between ~90 and 100 %. Better tuning of hyper-parameters will provide a precise classification.
The algorithm now is able to understand the content of these images thanks to the training of CNN.
This article attempts to explain briefly the convolutional neural network without going deep into mathematical development. An illustration is provided at each step with a visual explanation, as well as an application of image classification of MNIST dataset. Finally, a python implementation using PyTorch library is presented in order to provide a concrete example of application. Hopefully, you will find it interesting and easy to read.
What to do next?
Let me know what you think in the comment section and/or direct message me on LinkedIn.
Read my other article on medium: Have you Optimized your Deep Learning Model Before Deployment? | [
{
"code": null,
"e": 655,
"s": 172,
"text": "This article will explain the Convolutional Neural Network (CNN) with an illustration of image classification. It provides a simple implementation of the CNN algorithm using the framework PyTorch on Python.There are many free courses that can be found on the internet. Personally, I suggest the course of Andrej Karpathy (@karpathy) at Stanford. You will learn a lot, it is a step by step course. In addition, it provides many practical strategies to implement the CNN architecture."
},
{
"code": null,
"e": 672,
"s": 655,
"text": "cs231n.github.io"
},
{
"code": null,
"e": 940,
"s": 672,
"text": "Before going deep into Convolutional Neural Network, it is worth understanding their concept. CNN falls in the category of the supervised algorithms. The algorithm learns from training data, e,g, a set of images in the input and their associated labels at the output."
},
{
"code": null,
"e": 1342,
"s": 940,
"text": "It consists of feeding the convolutional neural network with images of the training set, x, and their associated labels (targets), y, in order to learn network’s function, y=f(x). After learning the parameter of the network’s function (namely weight and bias), we test the network with unseen images in order to predict their labels. The architecture of a Convolutional Neural Network (CNN or ConvNet)"
},
{
"code": null,
"e": 1414,
"s": 1342,
"text": "The CNN architecture we used in this article is proposed in this paper."
},
{
"code": null,
"e": 1687,
"s": 1414,
"text": "The network is implemented as a class called CNN. It contains two main methods. The first method (__init__) defines layers components of the network. In the second method (forward) we wire the network and put every component in the desired order (as shown in the picture)."
},
{
"code": null,
"e": 1956,
"s": 1687,
"text": "The python code below is straightforward. The network is defined using the neural network module of Torch. Notice that we already choose hyper-parameters of the network, such as Padding (P), Stride (S) and Kernel_size (F). Also the number of filters at each layer,...."
},
{
"code": null,
"e": 2265,
"s": 1956,
"text": "The input image has four dimensions, (batch_size, num_channel, height, width). The algorithm outputs an array with ten values, corresponding to the score (or amount of energy) of the predicted labels of the image. Therefore, the maximum score is the predicted label (or class) to retain for the tested image."
},
{
"code": null,
"e": 2346,
"s": 2265,
"text": "In the following bullet we will explain the role of each layer of the algorithm:"
},
{
"code": null,
"e": 3045,
"s": 2346,
"text": "Conv layer: This is the main layer of the algorithm. It consists of extracting the key features in the input image (sharp edge, smoothness, rounded shape, ...). This is done through a set of 2-dimensional convolutions of the image inthe input with one or many filters. Note that the convolution is performed simultaneously for each channel of the input image, e.g. a color image has C=3 channels, RGB: Red, Green, and Blue. The filters are set to have odd size for practical purpose CxFxF, e.g, 3x3x3, 3x5x5. The output of this operation is one scalar value, an artificial neuron. An illustrative animation for the convolution layer is given in http://cs231n.github.io/convolutional-networks/#conv."
},
{
"code": null,
"e": 3302,
"s": 3045,
"text": "Furthermore, the Conv layer is applied repeatedly to extract fine features that characterize the input image. The outputs of the Conv layer are called features map (or activation map), where each spatial position (or pixel) represents an artificial neuron."
},
{
"code": null,
"e": 3512,
"s": 3302,
"text": "ReLU (Rectifier Linear Units): It performs hard thresholding for negative values to zero, and leave positive values untouched, i,e, ReLU(x)=max(0, x). This layer preserves the dynamic range of the feature map."
},
{
"code": null,
"e": 3915,
"s": 3512,
"text": "Maxpooling layer: It performs spatial down-sampling of the feature map and retains only the most relevant information. See the picture below for a visual illustration of this operation. From a practical point of view, a pooling of size 2x2 with a stride of 2 gives good results on most applications. Having said that, other types of pooling exist, e,g, average pooling, median pooling, sum pooling, ..."
},
{
"code": null,
"e": 4027,
"s": 3915,
"text": "For this article, I used the neural network framework PyTorch to implement the CNN architecture detailed above."
},
{
"code": null,
"e": 4079,
"s": 4027,
"text": "The full code is available in my GitHub repository:"
},
{
"code": null,
"e": 4090,
"s": 4079,
"text": "github.com"
},
{
"code": null,
"e": 4216,
"s": 4090,
"text": "The code is quite simple to understand, hopefully, since it mentions all the layers we discussed earlier in an intuitive way."
},
{
"code": null,
"e": 4480,
"s": 4216,
"text": "Note that all the number mentioned in the input of the methods is parameters. They define the CNN architecture: kernel_size, stride, padding, input/output of each Conv layer. The code below defines a class called CNN where we define the CNN architecture in order."
},
{
"code": null,
"e": 4546,
"s": 4480,
"text": "The output of the above code summarizes the network architecture:"
},
{
"code": null,
"e": 5467,
"s": 4546,
"text": "CNN( (layer1): Sequential( (0): Conv2d(1, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (layer2): Sequential( (0): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (layer3): Sequential( (0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)) (1): ReLU() (2): MaxPool2d(kernel_size=2, stride=2, padding=1, dilation=1, ceil_mode=False) (3): Dropout(p=0.30) ) (fc1): Linear(in_features=2048, out_features=625, bias=True) (layer4): Sequential( (0): Linear(in_features=2048, out_features=625, bias=True) (1): ReLU() (2): Dropout(p=0.30) ) (fc2): Linear(in_features=625, out_features=10, bias=True))"
},
{
"code": null,
"e": 5727,
"s": 5467,
"text": "In order to acquire the MNIST images, we use a method of torchvision library. Just copy-paste this code to download the data. Basically, two datasets are loaded. The training dataset serves as ground truth to compute the network parameters. The testing images"
},
{
"code": null,
"e": 5987,
"s": 5727,
"text": "This is an essential stage of a supervised algorithm. By feeding the algorithm with many examples of image and their associated labels, we teach the algorithm to find patterns of each class. This is done by computing the filter’s parameters (weight and bias)."
},
{
"code": null,
"e": 6069,
"s": 5987,
"text": "The training of the network is composed of two major steps, forward and backward:"
},
{
"code": null,
"e": 6326,
"s": 6069,
"text": "During the forward pass, we feed the network by images of the training set and compute the features map till the end of the network, then we compute the loss function to measure how far / close the solution (predicted label) is from the ground truth label."
},
{
"code": null,
"e": 6438,
"s": 6326,
"text": "The backward pass performs the computation of the loss function’s gradient and updates the filters' parameters."
},
{
"code": null,
"e": 6611,
"s": 6438,
"text": "We also need to define a loss function, e.g. Cross-entropy loss function, and an optimization algorithm, such as Gradient descent, SGD, Adam (Adaptive moment estimation)..."
},
{
"code": null,
"e": 6668,
"s": 6611,
"text": "What to remember for training a deep learning algorithm:"
},
{
"code": null,
"e": 6960,
"s": 6668,
"text": "— Initialize the parameters (weights: w, bias: b) — Optimize the loss iteratively to learn parameters (w,b) — Compute the loss function and its gradients — Update parameters using an optimization algorithm (e,g, Adam) — Use the learned parameters to predict the label for a given input image"
},
{
"code": null,
"e": 7022,
"s": 6960,
"text": "Python implementation of training CNN in Python using PyTorch"
},
{
"code": null,
"e": 7562,
"s": 7022,
"text": "Plotting these curves helps monitor and understand the convergence of the algorithm. The loss plot is decreasing during the training, which is what we want since the goal of the optimization algorithm (Adam) is to minimize the loss function. On the right, the plot shows the evolution of the classification accuracy during the training. The more we train the algorithm, the better the classification accuracy. Notice the fluctuation of the accuracy between ~90 and 100 %. Better tuning of hyper-parameters will provide a precise classification."
},
{
"code": null,
"e": 7661,
"s": 7562,
"text": "The algorithm now is able to understand the content of these images thanks to the training of CNN."
},
{
"code": null,
"e": 8102,
"s": 7661,
"text": "This article attempts to explain briefly the convolutional neural network without going deep into mathematical development. An illustration is provided at each step with a visual explanation, as well as an application of image classification of MNIST dataset. Finally, a python implementation using PyTorch library is presented in order to provide a concrete example of application. Hopefully, you will find it interesting and easy to read."
},
{
"code": null,
"e": 8119,
"s": 8102,
"text": "What to do next?"
},
{
"code": null,
"e": 8207,
"s": 8119,
"text": "Let me know what you think in the comment section and/or direct message me on LinkedIn."
}
] |
p5.js | random() Function - GeeksforGeeks | 22 Apr, 2019
The random() function in p5.js is used to return a random floating point number within the range given as the parameter.
Syntax:
random(Min, Max)
or
random(Array)
Parameters: This function accepts three parameters as mentioned above and described below:
Min: This is the lower bound of the random number which is going to be created. This is an inclusive number with the created random number.
Max: This is the upper bound of the random number which is going to be created. This is an exclusive number with the created random number.
Array: This is an array of elements from which a random element is returned.
Return Value: It returns the random number.
Below programs illustrate the random() function in p5.js:
Example 1: This example uses the random() function to return a random floating point number within the given range.
function setup() {
    // Creating Canvas size
    createCanvas(550, 140);

    // Set the background color
    background(220);

    // Calling to random() function with
    // min and max parameters
    let A = random(1, 2);
    let B = random(0, 1);
    let C = random(2);
    let D = random(2, 10);

    // Set the size of text
    textSize(16);

    // Set the text color
    fill(color('red'));

    // Getting random number
    text("Random number between 1 and 2 is: " + A, 50, 30);
    text("Random number between 0 and 1 is: " + B, 50, 60);
    text("Random number between 0 and 2 is: " + C, 50, 90);
    text("Random number between 2 and 10 is: " + D, 50, 110);
}
Output:
Note: In the above code, only one parameter is passed for variable “C”; in that case, it returns a random number between the lower bound 0 and the upper bound given by that number.
Example 2: This example uses the random() function to return a random element from the given array.
function setup() {
    // Creating Canvas size
    createCanvas(550, 140);

    // Set the background color
    background(220);

    // Calling to random() function with
    // parameter array of some elements
    let A = random([1, 2, 3, 4]);
    let B = random([0, 1]);
    let C = random([2, 6, 7, 9]);
    let D = random([2, 10]);

    // Set the size of text
    textSize(16);

    // Set the text color
    fill(color('red'));

    // Getting random number
    text("Random number is: " + A, 50, 30);
    text("Random number is: " + B, 50, 60);
    text("Random number is: " + C, 50, 90);
    text("Random number is: " + D, 50, 110);
}
Output:
Reference: https://p5js.org/reference/#/p5/random
JavaScript-p5.js
JavaScript
Web Technologies | [
{
"code": null,
"e": 24341,
"s": 24313,
"text": "\n22 Apr, 2019"
},
{
"code": null,
"e": 24460,
"s": 24341,
"text": "The random() function in p5.js is used to return a random floating point number between ranges given as the parameter."
},
{
"code": null,
"e": 24468,
"s": 24460,
"text": "Syntax:"
},
{
"code": null,
"e": 24485,
"s": 24468,
"text": "random(Min, Max)"
},
{
"code": null,
"e": 24488,
"s": 24485,
"text": "or"
},
{
"code": null,
"e": 24502,
"s": 24488,
"text": "random(Array)"
},
{
"code": null,
"e": 24593,
"s": 24502,
"text": "Parameters: This function accepts three parameters as mentioned above and described below:"
},
{
"code": null,
"e": 24733,
"s": 24593,
"text": "Min: This is the lower bound of the random number which is going to be created. This is an inclusive number with the created random number."
},
{
"code": null,
"e": 24873,
"s": 24733,
"text": "Max: This is the upper bound of the random number which is going to be created. This is an exclusive number with the created random number."
},
{
"code": null,
"e": 24956,
"s": 24873,
"text": "Array: This is an array of some elements from which any random number is returned."
},
{
"code": null,
"e": 25000,
"s": 24956,
"text": "Return Value: It returns the random number."
},
{
"code": null,
"e": 25058,
"s": 25000,
"text": "Below programs illustrate the random() function in p5.js:"
},
{
"code": null,
"e": 25171,
"s": 25058,
"text": "Example 1: This example uses random() function to return a random floating point number between the given range."
},
{
"code": "function setup() { // Creating Canvas size createCanvas(550, 140); // Set the background color background(220); // Calling to random() function with // min and max parameters let A = random(1, 2); let B = random(0, 1); let C = random(2); let D = random(2, 10); // Set the size of text textSize(16); // Set the text color fill(color('red')); // Getting random number text(\"Random number between 1 and 2 is: \" + A, 50, 30); text(\"Random number between 0 and 1 is: \" + B, 50, 60); text(\"Random number between 0 and 2 is: \" + C, 50, 90); text(\"Random number between 2 and 10 is: \" + D, 50, 110);} ",
"e": 25864,
"s": 25171,
"text": null
},
{
"code": null,
"e": 25872,
"s": 25864,
"text": "Output:"
},
{
"code": null,
"e": 26024,
"s": 25872,
"text": "Note: In the above code, in variable “C” only one parameter is passed then it returns a random number from lower bound 0 to upper bound of that number."
},
{
"code": null,
"e": 26137,
"s": 26024,
"text": "Example 2: This example uses random() function to return a random floating point number between the given range."
},
{
"code": "function setup() { // Creating Canvas size createCanvas(550, 140); // Set the background color background(220); // Calling to random() function with // parameter array of some elements let A = random([1, 2, 3, 4]); let B = random([0, 1]); let C = random([2, 6, 7, 9]); let D = random([2, 10]); // Set the size of text textSize(16); // Set the text color fill(color('red')); // Getting random number text(\"Random number is: \" + A, 50, 30); text(\"Random number is: \" + B, 50, 60); text(\"Random number is: \" + C, 50, 90); text(\"Random number is: \" + D, 50, 110);} ",
"e": 26798,
"s": 26137,
"text": null
},
{
"code": null,
"e": 26806,
"s": 26798,
"text": "Output:"
},
{
"code": null,
"e": 26856,
"s": 26806,
"text": "Reference: https://p5js.org/reference/#/p5/random"
},
{
"code": null,
"e": 26873,
"s": 26856,
"text": "JavaScript-p5.js"
},
{
"code": null,
"e": 26884,
"s": 26873,
"text": "JavaScript"
},
{
"code": null,
"e": 26901,
"s": 26884,
"text": "Web Technologies"
},
{
"code": null,
"e": 26999,
"s": 26901,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27008,
"s": 26999,
"text": "Comments"
},
{
"code": null,
"e": 27021,
"s": 27008,
"text": "Old Comments"
},
{
"code": null,
"e": 27082,
"s": 27021,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 27127,
"s": 27082,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 27199,
"s": 27127,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 27245,
"s": 27199,
"text": "How to Open URL in New Tab using JavaScript ?"
},
{
"code": null,
"e": 27286,
"s": 27245,
"text": "JavaScript | console.log() with Examples"
},
{
"code": null,
"e": 27342,
"s": 27286,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 27375,
"s": 27342,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 27437,
"s": 27375,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 27480,
"s": 27437,
"text": "How to fetch data from an API in ReactJS ?"
}
] |
What are generic methods in C#? | Generics allow you to write a class or method that can work with any data type. Declare a generic method with a type parameter −
static void Swap<T>(ref T lhs, ref T rhs) {}
To call the above shown generic method, here is an example −
Swap(ref a, ref b);
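Here the compiler infers T from the arguments. If you prefer to be explicit, or when inference is not possible, you can supply the type argument yourself; the two lines below are an illustrative sketch and are not part of the original example:
Swap<int>(ref a, ref b);   // explicit type argument for two ints
Swap<char>(ref c, ref d);  // the same method works for char (or any other type)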
Let us see how to create a generic method in C# −
using System;
using System.Collections.Generic;
namespace Demo {
class Program {
      static void Swap<T>(ref T lhs, ref T rhs) {
T temp;
temp = lhs;
lhs = rhs;
rhs = temp;
}
static void Main(string[] args) {
int a, b;
char c, d;
a = 45;
b = 60;
c = 'K';
d = 'P';
Console.WriteLine("Int values before calling swap:");
Console.WriteLine("a = {0}, b = {1}", a, b);
Console.WriteLine("Char values before calling swap:");
Console.WriteLine("c = {0}, d = {1}", c, d);
Swap(ref a, ref b);
Swap(ref c, ref d);
Console.WriteLine("Int values after calling swap:");
Console.WriteLine("a = {0}, b = {1}", a, b);
Console.WriteLine("Char values after calling swap:");
Console.WriteLine("c = {0}, d = {1}", c, d);
Console.ReadKey();
}
}
}
Int values before calling swap:
a = 45, b = 60
Char values before calling swap:
c = K, d = P
Int values after calling swap:
a = 60, b = 45
Char values after calling swap:
c = P, d = K | [
{
"code": null,
"e": 1191,
"s": 1062,
"text": "Generics allow you to write a class or method that can work with any data type. Declare a generic method with a type parameter −"
},
{
"code": null,
"e": 1233,
"s": 1191,
"text": "static void Swap(ref T lhs, ref T rhs) {}"
},
{
"code": null,
"e": 1294,
"s": 1233,
"text": "To call the above shown generic method, here is an example −"
},
{
"code": null,
"e": 1314,
"s": 1294,
"text": "Swap(ref a, ref b);"
},
{
"code": null,
"e": 1364,
"s": 1314,
"text": "Let us see how to create a generic method in C# −"
},
{
"code": null,
"e": 1375,
"s": 1364,
"text": " Live Demo"
},
{
"code": null,
"e": 2313,
"s": 1375,
"text": "using System;\nusing System.Collections.Generic;\n\nnamespace Demo {\n class Program {\n static void Swap(ref T lhs, ref T rhs) {\n T temp;\n temp = lhs;\n lhs = rhs;\n rhs = temp;\n }\n\n static void Main(string[] args) {\n int a, b;\n char c, d;\n a = 45;\n b = 60;\n c = 'K';\n d = 'P';\n Console.WriteLine(\"Int values before calling swap:\");\n Console.WriteLine(\"a = {0}, b = {1}\", a, b);\n Console.WriteLine(\"Char values before calling swap:\");\n Console.WriteLine(\"c = {0}, d = {1}\", c, d);\n Swap(ref a, ref b);\n Swap(ref c, ref d);\n Console.WriteLine(\"Int values after calling swap:\");\n Console.WriteLine(\"a = {0}, b = {1}\", a, b);\n Console.WriteLine(\"Char values after calling swap:\");\n Console.WriteLine(\"c = {0}, d = {1}\", c, d);\n Console.ReadKey();\n }\n }\n}"
},
{
"code": null,
"e": 2497,
"s": 2313,
"text": "Int values before calling swap:\na = 45, b = 60\nChar values before calling swap:\nc = K, d = P\nInt values after calling swap:\na = 60, b = 45\nChar values after calling swap:\nc = P, d = K"
}
] |
Check data type in NumPy - GeeksforGeeks | 09 Aug, 2021
NumPy is a module in Python; the name is short for Numerical Python. NumPy is a general-purpose array-processing package that provides high-performance multidimensional data structures, such as array objects, and tools for working with these arrays. It enables fast and efficient calculations on matrices and arrays.
NumPy also provides almost all of the familiar mathematical functions. In NumPy, these are called universal functions (ufuncs).
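For example, ufuncs such as np.sqrt() and np.add() operate element-wise on a whole array. A small sketch (not part of the original article):
import numpy as np

a = np.array([1, 4, 9])
print(np.sqrt(a))    # element-wise square root -> [1. 2. 3.]
print(np.add(a, 1))  # element-wise addition -> [ 2  5 10]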
Method #1
Checking datatype using dtype.
Example 1:
Python3
# importing numpy library
import numpy as np

# creating and initializing an array
arr = np.array([1, 2, 3, 23, 56, 100])

# printing the array and checking datatype
print('Array:', arr)
print('Datatype:', arr.dtype)
Output:
Array: [ 1 2 3 23 56 100]
Datatype: int32
Example 2:
Python3
import numpy as np

# creating and initializing array of strings
arr_1 = np.array(['apple', 'ball', 'cat', 'dog'])

# printing array and its datatype
print('Array:', arr_1)
print('Datatype:', arr_1.dtype)
Output:
Array: ['apple' 'ball' 'cat' 'dog']
Datatype: <U5
Method #2
Creating the array with a defined datatype. We create the NumPy array using the array() function, which accepts a dtype argument that lets us define the expected data type of the array elements:
Example 1:
Python3
import numpy as np

# Creating and initializing array with datatype
arr = np.array([1, 2, 3, 8, 7, 5], dtype='S')

# printing array and its datatype
print("Array:", arr)
print("Datatype:", arr.dtype)
Output:
Array: [b'1' b'2' b'3' b'8' b'7' b'5']
Datatype: |S1
S is used for defining string datatype. We use i, u, f, S and U for defining various other data types along with their size.
Example 2:
Python3
import numpy as np

# creating and initialising array along with datatype
# 'i4' means 4-byte (32-bit) integers
arr = np.array([1, 2, 3, 4], dtype='i4')

# printing array and datatype
print('Array:', arr)
print('Datatype:', arr.dtype)
Output:
Array: [1 2 3 4]
Datatype: int32
In the above example, each integer element occupies 4 bytes, i.e. 32 bits.
Example 3:
Python3
import numpy as np

# creating and initialising array along with datatype
# 'i8' means 8-byte (64-bit) integers
arr = np.array([1, 2, 3, 4], dtype='i8')

# printing array and datatype
print('Array:', arr)
print('Datatype:', arr.dtype)
Output:
Array: [1 2 3 4]
Datatype: int64
And in this example each element occupies 8 bytes, i.e. 64 bits.
Example 4:
Python3
import numpy as np

# creating and initialising array along with datatype
# 'f4' means 4-byte (32-bit) floats
arr = np.array([1, 2, 3, 4, 8, 9, 7], dtype='f4')

# printing array and datatype
print('Array:', arr)
print('Datatype:', arr.dtype)
Output:
Array: [1. 2. 3. 4. 8. 9. 7.]
Datatype: float32
In the above example, the data type is float and each element occupies 4 bytes, i.e. 32 bits.
Example 5:
Python3
import numpy as np

# creating and initialising array along with
# datatype and its size 2
arr = np.array([1, 2, 3, 4, 8, 9, 7], dtype='S2')

# printing array and datatype
print('Array:', arr)
print('Datatype:', arr.dtype)
Output:
Array: [b'1' b'2' b'3' b'4' b'8' b'9' b'7']
Datatype: |S2
In the above example, the datatype is a byte string of length 2.
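Besides printing arr.dtype, you can also check the data type programmatically. A minimal sketch (the variable names are illustrative):
import numpy as np

arr = np.array([1, 2, 3], dtype='i4')

print(arr.dtype == np.int32)                 # True: compare against an exact dtype
print(np.issubdtype(arr.dtype, np.integer))  # True: check the broader dtype family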
anikakapoor
Python numpy-DataType
Python-numpy
Technical Scripter 2020
Python
Technical Scripter
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Install PIP on Windows ?
How To Convert Python Dictionary To JSON?
How to drop one or multiple columns in Pandas Dataframe
Check if element exists in list in Python
Selecting rows in pandas DataFrame based on conditions
Python | os.path.join() method
Defaultdict in Python
Create a directory in Python
Python | Get unique values from a list
Python | Pandas dataframe.groupby() | [
{
"code": null,
"e": 24292,
"s": 24264,
"text": "\n09 Aug, 2021"
},
{
"code": null,
"e": 24661,
"s": 24292,
"text": "Numpy is a module in python. It is originally called numerical python, but in short, we pronounce it as numpy. NumPy is a general-purpose array-processing package in python. It provides high-performance multidimensional data structures like array objects and tools for working with these arrays. Numpy provides faster and efficient calculations of matrices and arrays."
},
{
"code": null,
"e": 24790,
"s": 24661,
"text": "NumPy provides familiarity with almost all mathematical functions. In numpy these functions are called universal function ufunc."
},
{
"code": null,
"e": 24800,
"s": 24790,
"text": "Method #1"
},
{
"code": null,
"e": 24831,
"s": 24800,
"text": "Checking datatype using dtype."
},
{
"code": null,
"e": 24842,
"s": 24831,
"text": "Example 1:"
},
{
"code": null,
"e": 24850,
"s": 24842,
"text": "Python3"
},
{
"code": "# importing numpy libraryimport numpy as np # creating and initializing an arrayarr = np.array([1, 2, 3, 23, 56, 100]) # printing the array and checking datatypeprint('Array:', arr) print('Datatype:', arr.dtype)",
"e": 25062,
"s": 24850,
"text": null
},
{
"code": null,
"e": 25070,
"s": 25062,
"text": "Output:"
},
{
"code": null,
"e": 25119,
"s": 25070,
"text": "Array: [ 1 2 3 23 56 100]\nDatatype: int32"
},
{
"code": null,
"e": 25130,
"s": 25119,
"text": "Example 2:"
},
{
"code": null,
"e": 25138,
"s": 25130,
"text": "Python3"
},
{
"code": "import numpy as np # creating and initializing array of stringarr_1 = np.array(['apple', 'ball', 'cat', 'dog']) # printing array and its datatypeprint('Array:', arr_1) print('Datatype:', arr_1.dtype)",
"e": 25338,
"s": 25138,
"text": null
},
{
"code": null,
"e": 25346,
"s": 25338,
"text": "Output:"
},
{
"code": null,
"e": 25385,
"s": 25346,
"text": "Array: ['a' 'b' 'c' 'd']\nDatatype: <U1"
},
{
"code": null,
"e": 25395,
"s": 25385,
"text": "Method #2"
},
{
"code": null,
"e": 25602,
"s": 25395,
"text": "Creating the array with a defined datatype. Creating numpy array by using an array function array(). This function takes argument dtype that allows us to define the expected data type of the array elements:"
},
{
"code": null,
"e": 25613,
"s": 25602,
"text": "Example 1:"
},
{
"code": null,
"e": 25621,
"s": 25613,
"text": "Python3"
},
{
"code": "import numpy as np # Creating and initializing array with datatypearr = np.array([1, 2, 3, 8, 7, 5], dtype='S') # printing array and its datatypeprint(\"Array:\", arr)print(\"Datatype:\", arr.dtype)",
"e": 25816,
"s": 25621,
"text": null
},
{
"code": null,
"e": 25824,
"s": 25816,
"text": "Output:"
},
{
"code": null,
"e": 25877,
"s": 25824,
"text": "Array: [b'1' b'2' b'3' b'8' b'7' b'5']\nDatatype: |S1"
},
{
"code": null,
"e": 26002,
"s": 25877,
"text": "S is used for defining string datatype. We use i, u, f, S and U for defining various other data types along with their size."
},
{
"code": null,
"e": 26013,
"s": 26002,
"text": "Example 2:"
},
{
"code": null,
"e": 26021,
"s": 26013,
"text": "Python3"
},
{
"code": "import numpy as np # creating and initialising array along# with datatype and its size 4 i.e. 32bytesarr = np.array([1, 2, 3, 4], dtype='i4') # printing array and datatypeprint('Array:', arr)print('Datatype:', arr.dtype)",
"e": 26242,
"s": 26021,
"text": null
},
{
"code": null,
"e": 26250,
"s": 26242,
"text": "Output:"
},
{
"code": null,
"e": 26289,
"s": 26250,
"text": "Array: [1 2 3 4 8 9 5]\nDatatype: int32"
},
{
"code": null,
"e": 26358,
"s": 26289,
"text": "In the above example, the size of integer elements is 4 i.e. 32bytes"
},
{
"code": null,
"e": 26369,
"s": 26358,
"text": "Example 3:"
},
{
"code": null,
"e": 26377,
"s": 26369,
"text": "Python3"
},
{
"code": "import numpy as np # creating and initialising array along# with datatype and its size 8 i.e. 64bytesarr = np.array([1, 2, 3, 4], dtype='i8') # printing array and datatypeprint('Array:', arr)print('Datatype:', arr.dtype)",
"e": 26598,
"s": 26377,
"text": null
},
{
"code": null,
"e": 26606,
"s": 26598,
"text": "Output:"
},
{
"code": null,
"e": 26645,
"s": 26606,
"text": "Array: [1 2 3 4 8 9 7]\nDatatype: int64"
},
{
"code": null,
"e": 26698,
"s": 26645,
"text": "And in this example the size of elements is 64bytes."
},
{
"code": null,
"e": 26709,
"s": 26698,
"text": "Example 4:"
},
{
"code": null,
"e": 26717,
"s": 26709,
"text": "Python3"
},
{
"code": "import numpy as np # creating and initialising array along# with datatype and its size 4 i.e. 32bytesarr = np.array([1, 2, 3, 4, 8, 9, 7], dtype='f4') # printing array and datatypeprint('Array:', arr)print('Datatype:', arr.dtype)",
"e": 26947,
"s": 26717,
"text": null
},
{
"code": null,
"e": 26955,
"s": 26947,
"text": "Output:"
},
{
"code": null,
"e": 27003,
"s": 26955,
"text": "Array: [1. 2. 3. 4. 8. 9. 7.]\nDatatype: float32"
},
{
"code": null,
"e": 27073,
"s": 27003,
"text": "In the above example, the data type is float and the size is 32bytes."
},
{
"code": null,
"e": 27084,
"s": 27073,
"text": "Example 5:"
},
{
"code": null,
"e": 27092,
"s": 27084,
"text": "Python3"
},
{
"code": "import numpy as np # creating and initialising array along# with datatype and its size 2arr = np.array([1, 2, 3, 4, 8, 9, 7], dtype='S2') # printing array and datatypeprint('Array:', arr)print('Datatype:', arr.dtype)",
"e": 27309,
"s": 27092,
"text": null
},
{
"code": null,
"e": 27317,
"s": 27309,
"text": "Output:"
},
{
"code": null,
"e": 27375,
"s": 27317,
"text": "Array: [b'1' b'2' b'3' b'4' b'8' b'9' b'7']\nDatatype: |S2"
},
{
"code": null,
"e": 27441,
"s": 27375,
"text": "In the above example, the datatype is a string and the size is 2."
},
{
"code": null,
"e": 27453,
"s": 27441,
"text": "anikakapoor"
},
{
"code": null,
"e": 27475,
"s": 27453,
"text": "Python numpy-DataType"
},
{
"code": null,
"e": 27488,
"s": 27475,
"text": "Python-numpy"
},
{
"code": null,
"e": 27512,
"s": 27488,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 27519,
"s": 27512,
"text": "Python"
},
{
"code": null,
"e": 27538,
"s": 27519,
"text": "Technical Scripter"
},
{
"code": null,
"e": 27636,
"s": 27538,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27668,
"s": 27636,
"text": "How to Install PIP on Windows ?"
},
{
"code": null,
"e": 27710,
"s": 27668,
"text": "How To Convert Python Dictionary To JSON?"
},
{
"code": null,
"e": 27766,
"s": 27710,
"text": "How to drop one or multiple columns in Pandas Dataframe"
},
{
"code": null,
"e": 27808,
"s": 27766,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 27863,
"s": 27808,
"text": "Selecting rows in pandas DataFrame based on conditions"
},
{
"code": null,
"e": 27894,
"s": 27863,
"text": "Python | os.path.join() method"
},
{
"code": null,
"e": 27916,
"s": 27894,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 27945,
"s": 27916,
"text": "Create a directory in Python"
},
{
"code": null,
"e": 27984,
"s": 27945,
"text": "Python | Get unique values from a list"
}
] |
JavaMail API - Deleting Emails | In this chapter we will see how to delete an email using JavaMail API. Deleting messages involves working with the Flags associated with the messages. There are different flags for different states, some system-defined and some user-defined. The predefined flags are defined in the inner class Flags.Flag and are listed below:
Flags.Flag.ANSWERED
Flags.Flag.DELETED
Flags.Flag.DRAFT
Flags.Flag.FLAGGED
Flags.Flag.RECENT
Flags.Flag.SEEN
Flags.Flag.USER
The POP3 protocol supports only the deletion of messages.
Basic steps followed in the delete program are:
Get the Session object with POP and SMTP server details in the properties. We would need POP details to retrieve messages and SMTP details to send messages.
Create POP3 store object and connect to the store.
Create Folder object and open the appropriate folder in your mailbox in READ_WRITE mode.
Retrieve messages from the inbox folder.
Iterate through the messages and type "Y" or "y" if you want to delete a message by invoking the method setFlag(Flags.Flag.DELETED, true) on the Message object (see the snippet after this list).
The messages marked DELETED are not actually deleted, until we call the expunge() method on the Folder object, or close the folder with expunge set to true.
Close the store object.
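The two calls at the heart of this flow are shown below as a minimal sketch; the message and emailFolder variables are placeholders for the objects obtained in the full program that follows:
// mark the message for deletion
message.setFlag(Flags.Flag.DELETED, true);

// actually remove all messages marked DELETED:
// closing the folder with expunge = true performs the expunge
emailFolder.close(true);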
Create a Java class file DeleteEmail, the contents of which are as follows:
package com.tutorialspoint;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.Properties;
import javax.mail.Flags;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.NoSuchProviderException;
import javax.mail.Session;
import javax.mail.Store;
public class DeleteEmail {
public static void delete(String pop3Host, String storeType, String user,
String password)
{
try
{
// get the session object
Properties properties = new Properties();
properties.put("mail.store.protocol", "pop3");
properties.put("mail.pop3s.host", pop3Host);
properties.put("mail.pop3s.port", "995");
properties.put("mail.pop3.starttls.enable", "true");
Session emailSession = Session.getDefaultInstance(properties);
// emailSession.setDebug(true);
// create the POP3 store object and connect with the pop server
Store store = emailSession.getStore("pop3s");
store.connect(pop3Host, user, password);
// create the folder object and open it
Folder emailFolder = store.getFolder("INBOX");
emailFolder.open(Folder.READ_WRITE);
BufferedReader reader = new BufferedReader(new InputStreamReader(
System.in));
// retrieve the messages from the folder in an array and print it
Message[] messages = emailFolder.getMessages();
System.out.println("messages.length---" + messages.length);
for (int i = 0; i < messages.length; i++) {
Message message = messages[i];
System.out.println("---------------------------------");
System.out.println("Email Number " + (i + 1));
System.out.println("Subject: " + message.getSubject());
System.out.println("From: " + message.getFrom()[0]);
String subject = message.getSubject();
System.out.print("Do you want to delete this message [y/n] ? ");
String ans = reader.readLine();
if ("Y".equals(ans) || "y".equals(ans)) {
// set the DELETE flag to true
message.setFlag(Flags.Flag.DELETED, true);
System.out.println("Marked DELETE for message: " + subject);
} else if ("n".equals(ans)) {
break;
}
}
// expunges the folder to remove messages which are marked deleted
emailFolder.close(true);
store.close();
} catch (NoSuchProviderException e) {
e.printStackTrace();
} catch (MessagingException e) {
e.printStackTrace();
} catch (IOException io) {
io.printStackTrace();
}
}
public static void main(String[] args) {
String host = "pop.gmail.com";// change accordingly
String mailStoreType = "pop3";
String username = "[email protected]";// change accordingly
String password = "*****";// change accordingly
delete(host, mailStoreType, username, password);
}
}
Now that our class is ready, let us compile the above class. I've saved the class DeleteEmail.java to directory : /home/manisha/JavaMailAPIExercise. We would need the jars javax.mail.jar and activation.jar in the classpath. Execute the command below to compile the class (both the jars are placed in /home/manisha/ directory) from command prompt:
javac -cp /home/manisha/activation.jar:/home/manisha/javax.mail.jar: DeleteEmail.java
Now that the class is compiled, execute the following command to run:
java -cp /home/manisha/activation.jar:/home/manisha/javax.mail.jar: DeleteEmail
You should see the following message on the command console:
messages.length---1
---------------------------------
Email Number 1
Subject: Testing
From: ABC <[email protected]>
Do you want to delete this message [y/n] ? y
Marked DELETE for message: Testing
Bookmark this page | [
{
"code": null,
"e": 2398,
"s": 2071,
"text": "In this chapter we will see how to delete an email using JavaMail API. Deleting messages involves working with the Flags associated with the messages. There are different flags for different states, some system-defined and some user-defined. The predefined flags are defined in the inner class Flags.Flag and are listed below:"
},
{
"code": null,
"e": 2418,
"s": 2398,
"text": "Flags.Flag.ANSWERED"
},
{
"code": null,
"e": 2438,
"s": 2418,
"text": "Flags.Flag.ANSWERED"
},
{
"code": null,
"e": 2457,
"s": 2438,
"text": "Flags.Flag.DELETED"
},
{
"code": null,
"e": 2476,
"s": 2457,
"text": "Flags.Flag.DELETED"
},
{
"code": null,
"e": 2493,
"s": 2476,
"text": "Flags.Flag.DRAFT"
},
{
"code": null,
"e": 2510,
"s": 2493,
"text": "Flags.Flag.DRAFT"
},
{
"code": null,
"e": 2529,
"s": 2510,
"text": "Flags.Flag.FLAGGED"
},
{
"code": null,
"e": 2548,
"s": 2529,
"text": "Flags.Flag.FLAGGED"
},
{
"code": null,
"e": 2566,
"s": 2548,
"text": "Flags.Flag.RECENT"
},
{
"code": null,
"e": 2584,
"s": 2566,
"text": "Flags.Flag.RECENT"
},
{
"code": null,
"e": 2600,
"s": 2584,
"text": "Flags.Flag.SEEN"
},
{
"code": null,
"e": 2616,
"s": 2600,
"text": "Flags.Flag.SEEN"
},
{
"code": null,
"e": 2632,
"s": 2616,
"text": "Flags.Flag.USER"
},
{
"code": null,
"e": 2648,
"s": 2632,
"text": "Flags.Flag.USER"
},
{
"code": null,
"e": 2701,
"s": 2648,
"text": "POP protocol supports only deleting of the messages."
},
{
"code": null,
"e": 2749,
"s": 2701,
"text": "Basic steps followed in the delete program are:"
},
{
"code": null,
"e": 2907,
"s": 2749,
"text": "Get the Session object with POP and SMPT server details in the properties. We would need POP details to retrieve messages and SMPT details to send messages."
},
{
"code": null,
"e": 3065,
"s": 2907,
"text": "Get the Session object with POP and SMPT server details in the properties. We would need POP details to retrieve messages and SMPT details to send messages."
},
{
"code": null,
"e": 3116,
"s": 3065,
"text": "Create POP3 store object and connect to the store."
},
{
"code": null,
"e": 3167,
"s": 3116,
"text": "Create POP3 store object and connect to the store."
},
{
"code": null,
"e": 3256,
"s": 3167,
"text": "Create Folder object and open the appropriate folder in your mailbox in READ_WRITE mode."
},
{
"code": null,
"e": 3345,
"s": 3256,
"text": "Create Folder object and open the appropriate folder in your mailbox in READ_WRITE mode."
},
{
"code": null,
"e": 3383,
"s": 3345,
"text": "Retrieves messages from inbox folder."
},
{
"code": null,
"e": 3421,
"s": 3383,
"text": "Retrieves messages from inbox folder."
},
{
"code": null,
"e": 3584,
"s": 3421,
"text": "Iterate through the messages and type \"Y\" or \"y\" if you want to delete the message by invoking the method setFlag(Flags.Flag.DELETED, true) on the Message object."
},
{
"code": null,
"e": 3747,
"s": 3584,
"text": "Iterate through the messages and type \"Y\" or \"y\" if you want to delete the message by invoking the method setFlag(Flags.Flag.DELETED, true) on the Message object."
},
{
"code": null,
"e": 3904,
"s": 3747,
"text": "The messages marked DELETED are not actually deleted, until we call the expunge() method on the Folder object, or close the folder with expunge set to true."
},
{
"code": null,
"e": 4061,
"s": 3904,
"text": "The messages marked DELETED are not actually deleted, until we call the expunge() method on the Folder object, or close the folder with expunge set to true."
},
{
"code": null,
"e": 4085,
"s": 4061,
"text": "Close the store object."
},
{
"code": null,
"e": 4109,
"s": 4085,
"text": "Close the store object."
},
{
"code": null,
"e": 4186,
"s": 4109,
"text": "Create a java class file ForwardEmail, the contents of which are as follows:"
},
{
"code": null,
"e": 7261,
"s": 4186,
"text": "package com.tutorialspoint;\n\nimport java.io.BufferedReader;\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.util.Properties;\n\nimport javax.mail.Flags;\nimport javax.mail.Folder;\nimport javax.mail.Message;\nimport javax.mail.MessagingException;\nimport javax.mail.NoSuchProviderException;\nimport javax.mail.Session;\nimport javax.mail.Store;\n\npublic class DeleteEmail {\n\n public static void delete(String pop3Host, String storeType, String user,\n String password) \n {\n try \n {\n // get the session object\n Properties properties = new Properties();\n properties.put(\"mail.store.protocol\", \"pop3\");\n properties.put(\"mail.pop3s.host\", pop3Host);\n properties.put(\"mail.pop3s.port\", \"995\");\n properties.put(\"mail.pop3.starttls.enable\", \"true\");\n Session emailSession = Session.getDefaultInstance(properties);\n // emailSession.setDebug(true);\n\n // create the POP3 store object and connect with the pop server\n Store store = emailSession.getStore(\"pop3s\");\n\n store.connect(pop3Host, user, password);\n\n // create the folder object and open it\n Folder emailFolder = store.getFolder(\"INBOX\");\n emailFolder.open(Folder.READ_WRITE);\n\n BufferedReader reader = new BufferedReader(new InputStreamReader(\n System.in));\n // retrieve the messages from the folder in an array and print it\n Message[] messages = emailFolder.getMessages();\n System.out.println(\"messages.length---\" + messages.length);\n for (int i = 0; i < messages.length; i++) {\n Message message = messages[i];\n System.out.println(\"---------------------------------\");\n System.out.println(\"Email Number \" + (i + 1));\n System.out.println(\"Subject: \" + message.getSubject());\n System.out.println(\"From: \" + message.getFrom()[0]);\n\n String subject = message.getSubject();\n System.out.print(\"Do you want to delete this message [y/n] ? \");\n String ans = reader.readLine();\n if (\"Y\".equals(ans) || \"y\".equals(ans)) {\n\t // set the DELETE flag to true\n\t message.setFlag(Flags.Flag.DELETED, true);\n\t System.out.println(\"Marked DELETE for message: \" + subject);\n } else if (\"n\".equals(ans)) {\n\t break;\n }\n }\n // expunges the folder to remove messages which are marked deleted\n emailFolder.close(true);\n store.close();\n\n } catch (NoSuchProviderException e) {\n e.printStackTrace();\n } catch (MessagingException e) {\n e.printStackTrace();\n } catch (IOException io) {\n io.printStackTrace();\n }\n }\n\n public static void main(String[] args) {\n\n String host = \"pop.gmail.com\";// change accordingly\n String mailStoreType = \"pop3\";\n String username = \"[email protected]\";// change accordingly\n String password = \"*****\";// change accordingly\n\n delete(host, mailStoreType, username, password);\n\n }\n\n}"
},
{
"code": null,
"e": 7608,
"s": 7261,
"text": "Now that our class is ready, let us compile the above class. I've saved the class DeleteEmail.java to directory : /home/manisha/JavaMailAPIExercise. We would need the jars javax.mail.jar and activation.jar in the classpath. Execute the command below to compile the class (both the jars are placed in /home/manisha/ directory) from command prompt:"
},
{
"code": null,
"e": 7694,
"s": 7608,
"text": "javac -cp /home/manisha/activation.jar:/home/manisha/javax.mail.jar: DeleteEmail.java"
},
{
"code": null,
"e": 7764,
"s": 7694,
"text": "Now that the class is compiled, execute the following command to run:"
},
{
"code": null,
"e": 7844,
"s": 7764,
"text": "java -cp /home/manisha/activation.jar:/home/manisha/javax.mail.jar: DeleteEmail"
},
{
"code": null,
"e": 7905,
"s": 7844,
"text": "You should see the following message on the command console:"
},
{
"code": null,
"e": 8097,
"s": 7905,
"text": "messages.length---1\n---------------------------------\nEmail Number 1\nSubject: Testing\nFrom: ABC <[email protected]>\nDo you want to delete this message [y/n] ? y\nMarked DELETE for message: Testing"
},
{
"code": null,
"e": 8104,
"s": 8097,
"text": " Print"
},
{
"code": null,
"e": 8115,
"s": 8104,
"text": " Add Notes"
}
] |
GETUTCDATE() Function in SQL Server - GeeksforGeeks | 21 Jan, 2021
GETUTCDATE() :
This function in SQL Server is used to return the UTC date and time of the present database system in a ‘YYYY-MM-DD hh:mm:ss.mmm’ pattern.
Features :
This function is used to find the UTC date and time of the current database system.
This function comes under Date Functions.
This function doesn’t accept any parameter.
This function returns the output in ‘YYYY-MM-DD hh:mm:ss.mmm’ format.
Syntax :
GETUTCDATE()
Parameter :
This method doesn’t accept any parameter.
Returns :
It returns the UTC date and time of the current database system in a ‘YYYY-MM-DD hh:mm:ss.mmm’ format.
Example-1 :
Using the GETUTCDATE() function and getting the output.
SELECT GETUTCDATE();
Output :
2021-01-03 15:34:14.403
Here, the output will vary each time the query is executed, as this function returns the current UTC date and time.
Example-2 :
Using GETUTCDATE() as a default value in the below example and getting the output.
CREATE TABLE get_utc_date
(
id_num INT IDENTITY,
message VARCHAR(150) NOT NULL,
generated_at DATETIME NOT NULL
DEFAULT GETUTCDATE(),
PRIMARY KEY(id_num)
);
INSERT INTO get_utc_date(message)
VALUES('Its the first message.');
INSERT INTO get_utc_date(message)
VALUES('get_utc_date');
SELECT
id_num,
message,
generated_at
FROM
get_utc_date;
Output :
id_num | message                | generated_at
-------+------------------------+--------------------
1      | Its the first message. | 03.01.2021 17:32:16
2      | get_utc_date           | 03.01.2021 17:32:16
Here, you first create a table, then insert values into it, and then generate the required output; the GETUTCDATE() function is used as the default value for the generated_at column.
Note: To run the above code, use a SQL Server instance; you can also use an online compiler.
Example-3 :
Using the CONVERT() function to extract only the current date from the output of the GETUTCDATE() function.
SELECT CONVERT(DATE, GETUTCDATE());
Output :
2021-01-07
Here, the output may vary every time the query is executed, as it returns the current UTC date.
Example-4 :
Using the CONVERT() function to extract only the current time from the output of the GETUTCDATE() function.
SELECT CONVERT(TIME, GETUTCDATE());
Output :
06:40:14.4700000
Here, the output may vary every time the query is executed, as it returns the current UTC time.
Application :
This function is used to return the current UTC date and time of the database system.
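To see what "UTC" means in practice, you can compare GETUTCDATE() with GETDATE(), which returns the server's local date and time. The offset in the last column depends on the server's time zone, so treat this as an illustrative sketch rather than part of the original article:
SELECT GETDATE()     AS server_local_time,
       GETUTCDATE()  AS utc_time,
       DATEDIFF(HOUR, GETUTCDATE(), GETDATE()) AS offset_in_hours;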
DBMS-SQL
SQL-Server
SQL
SQL
Writing code in comment?
Please use ide.geeksforgeeks.org,
generate link and share the link here.
How to Update Multiple Columns in Single Update Statement in SQL?
What is Temporary Table in SQL?
SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter
SQL using Python
SQL | Subquery
How to Write a SQL Query For a Specific Date Range and Date Time?
SQL Query to Convert VARCHAR to INT
How to Select Data Between Two Dates and Times in SQL Server?
SQL - SELECT from Multiple Tables with MS SQL Server
SQL Query to Delete Duplicate Rows | [
{
"code": null,
"e": 24268,
"s": 24240,
"text": "\n21 Jan, 2021"
},
{
"code": null,
"e": 24283,
"s": 24268,
"text": "GETUTCDATE() :"
},
{
"code": null,
"e": 24422,
"s": 24283,
"text": "This function in SQL Server is used to return the UTC date and time of the present database system in a ‘YYYY-MM-DD hh:mm:ss.mmm’ pattern."
},
{
"code": null,
"e": 24433,
"s": 24422,
"text": "Features :"
},
{
"code": null,
"e": 24517,
"s": 24433,
"text": "This function is used to find the UTC date and time of the current database system."
},
{
"code": null,
"e": 24559,
"s": 24517,
"text": "This function comes under Date Functions."
},
{
"code": null,
"e": 24603,
"s": 24559,
"text": "This function doesn’t accept any parameter."
},
{
"code": null,
"e": 24673,
"s": 24603,
"text": "This function returns the output in ‘YYYY-MM-DD hh:mm:ss.mmm’ format."
},
{
"code": null,
"e": 24682,
"s": 24673,
"text": "Syntax :"
},
{
"code": null,
"e": 24695,
"s": 24682,
"text": "GETUTCDATE()"
},
{
"code": null,
"e": 24707,
"s": 24695,
"text": "Parameter :"
},
{
"code": null,
"e": 24749,
"s": 24707,
"text": "This method doesn’t accept any parameter."
},
{
"code": null,
"e": 24759,
"s": 24749,
"text": "Returns :"
},
{
"code": null,
"e": 24862,
"s": 24759,
"text": "It returns the UTC date and time of the current database system in a ‘YYYY-MM-DD hh:mm:ss.mmm’ format."
},
{
"code": null,
"e": 24874,
"s": 24862,
"text": "Example-1 :"
},
{
"code": null,
"e": 24930,
"s": 24874,
"text": "Using the GETUTCDATE() function and getting the output."
},
{
"code": null,
"e": 24951,
"s": 24930,
"text": "SELECT GETUTCDATE();"
},
{
"code": null,
"e": 24960,
"s": 24951,
"text": "Output :"
},
{
"code": null,
"e": 24984,
"s": 24960,
"text": "2021-01-03 15:34:14.403"
},
{
"code": null,
"e": 25096,
"s": 24984,
"text": "Here, the output will vary each time the code is compiled as this method returns the current UTC date and time."
},
{
"code": null,
"e": 25108,
"s": 25096,
"text": "Example-2 :"
},
{
"code": null,
"e": 25191,
"s": 25108,
"text": "Using GETUTCDATE() as a default value in the below example and getting the output."
},
{
"code": null,
"e": 25580,
"s": 25191,
"text": "CREATE TABLE get_utc_date\n(\n id_num INT IDENTITY,\n message VARCHAR(150) NOT NULL,\n generated_at DATETIME NOT NULL\n DEFAULT GETUTCDATE(),\n PRIMARY KEY(id_num)\n);\nINSERT INTO get_utc_date(message)\nVALUES('Its the first message.');\n\nINSERT INTO get_utc_date(message)\nVALUES('get_utc_date');\n\nSELECT\n id_num,\n message,\n generated_at\nFROM\n get_utc_date;"
},
{
"code": null,
"e": 25589,
"s": 25580,
"text": "Output :"
},
{
"code": null,
"e": 25878,
"s": 25589,
"text": " |id_num | message | generated_at\n------------------------------------------------------------- \n1 | 1 | Its the first message.| 03.01.2021 17:32:16\n-------------------------------------------------------------\n2 | 2 | get_utc_date | 03.01.2021 17:32:16"
},
{
"code": null,
"e": 26032,
"s": 25878,
"text": "Here, firstly you need to create a table then insert values into it then generate the required output using the GETUTCDATE() function as a default value."
},
{
"code": null,
"e": 26127,
"s": 26032,
"text": "Note: For running the above code use SQL server compiler, you can also use an online compiler."
},
{
"code": null,
"e": 26139,
"s": 26127,
"text": "Example-3 :"
},
{
"code": null,
"e": 26254,
"s": 26139,
"text": "Using CONVERT() function in order to translate the output of the GETUTCDATE() function into the current date only."
},
{
"code": null,
"e": 26290,
"s": 26254,
"text": "SELECT CONVERT(DATE, GETUTCDATE());"
},
{
"code": null,
"e": 26299,
"s": 26290,
"text": "Output :"
},
{
"code": null,
"e": 26310,
"s": 26299,
"text": "2021-01-07"
},
{
"code": null,
"e": 26396,
"s": 26310,
"text": "Here, the output may vary every time the code is compiled as it returns current date."
},
{
"code": null,
"e": 26408,
"s": 26396,
"text": "Example-4 :"
},
{
"code": null,
"e": 26523,
"s": 26408,
"text": "Using CONVERT() function in order to translate the output of the GETUTCDATE() function into the current time only."
},
{
"code": null,
"e": 26559,
"s": 26523,
"text": "SELECT CONVERT(TIME, GETUTCDATE());"
},
{
"code": null,
"e": 26568,
"s": 26559,
"text": "Output :"
},
{
"code": null,
"e": 26585,
"s": 26568,
"text": "06:40:14.4700000"
},
{
"code": null,
"e": 26671,
"s": 26585,
"text": "Here, the output may vary every time the code is compiled as it returns current time."
},
{
"code": null,
"e": 26685,
"s": 26671,
"text": "Application :"
},
{
"code": null,
"e": 26771,
"s": 26685,
"text": "This function is used to return the current UTC date and time of the database system."
},
{
"code": null,
"e": 26780,
"s": 26771,
"text": "DBMS-SQL"
},
{
"code": null,
"e": 26791,
"s": 26780,
"text": "SQL-Server"
},
{
"code": null,
"e": 26795,
"s": 26791,
"text": "SQL"
},
{
"code": null,
"e": 26799,
"s": 26795,
"text": "SQL"
},
{
"code": null,
"e": 26897,
"s": 26799,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26963,
"s": 26897,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 26995,
"s": 26963,
"text": "What is Temporary Table in SQL?"
},
{
"code": null,
"e": 27073,
"s": 26995,
"text": "SQL Query to Find the Name of a Person Whose Name Starts with Specific Letter"
},
{
"code": null,
"e": 27090,
"s": 27073,
"text": "SQL using Python"
},
{
"code": null,
"e": 27105,
"s": 27090,
"text": "SQL | Subquery"
},
{
"code": null,
"e": 27171,
"s": 27105,
"text": "How to Write a SQL Query For a Specific Date Range and Date Time?"
},
{
"code": null,
"e": 27207,
"s": 27171,
"text": "SQL Query to Convert VARCHAR to INT"
},
{
"code": null,
"e": 27269,
"s": 27207,
"text": "How to Select Data Between Two Dates and Times in SQL Server?"
},
{
"code": null,
"e": 27322,
"s": 27269,
"text": "SQL - SELECT from Multiple Tables with MS SQL Server"
}
] |
Creating Containerized Workflows with Argo | by George Novack | Towards Data Science | In a previous article, I explored Kubeflow Pipelines and walked through the process of creating and executing a simple Machine Learning Pipeline. In this article, I will take a closer look at Argo, the open-source workflow orchestration engine that is used as the default orchestration platform for Kubeflow Pipelines.
In order to start playing with Argo, we will first need a running Kubernetes cluster. I will be using EKS, the managed Kubernetes service on AWS, and will walk through the steps required to get Argo running on EKS. Any Kubernetes cluster will work, so long as Argo is deployed and has the permissions it needs to run Workflows on the cluster.
The easiest way to get started with EKS, is to use the eksctl command-line tool. The instructions for installing eksctl can be found here: https://github.com/weaveworks/eksctl/blob/main/README.md
Once eksctl is installed, we’ll create the Kubernetes cluster by running the following command:
eksctl create cluster
Note: The command above creates AWS resources that will incur costs on your account. By default, the eksctl create cluster command creates a Kubernetes cluster with 2 m5.large type EC2 Instances, each of which, at the time of this writing, cost around $0.10 per hour.
This command may take 15–20 minutes to finish provisioning the cluster. Once it is complete, the first thing we will do is create a Namespace that will house our Argo resources. We’ll use kubectl to do this:
kubectl create namespace argo
Next, we will download the Argo installation manifest from the Argo GitHub repository:
wget https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml
This will download a YAML manifest file called install.yaml that describes all of the Kubernetes resources that we will need to get Argo up and running.
We’ll have to make a few modifications to this file in order to access the Argo server UI.
1. Update the argo-server Service to be of type LoadBalancer. This will provision a Network Load Balancer in AWS that will route requests to the argo-server Service. The resulting resource definition should look like this:
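(The manifest snippet embedded in the original post does not render in this text; the sketch below shows the relevant portion of the Service with its type changed to LoadBalancer. The port and selector values follow the stock Argo install manifest and are assumptions here.)
apiVersion: v1
kind: Service
metadata:
  name: argo-server
spec:
  type: LoadBalancer          # changed from the default service type
  ports:
  - name: web
    port: 2746
    targetPort: 2746
  selector:
    app: argo-server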
2. Update the argo-server deployment to add an environment variable called BASE_HREF with a value of “/”. The result should look like this (the resource definition is truncated for brevity):
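(Again, the embedded snippet is missing here; the truncated sketch below only shows where the BASE_HREF variable goes, and everything apart from that env entry is assumed from the stock manifest.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argo-server
spec:
  template:
    spec:
      containers:
      - name: argo-server
        env:
        - name: BASE_HREF
          value: "/"
        # ...rest of the container spec unchanged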
After making these modifications, we can create the resources defined in this manifest:
kubectl apply -n argo -f install.yaml
Next, we will need to create an Ingress resource:
Finally, in order for Argo to be able to access other Kubernetes resources, we will need to assign a role to the Argo Service Account. On a real production cluster, we would want to assign this Service Account to a role with specific limited permissions, but for now, we will just grant admin access to this account. We can accomplish this with the following command:
kubectl -n argo create rolebinding default-admin --clusterrole=admin --serviceaccount=argo:default
And that’s all the setup we will need to perform to get started. Now, we will get the public URL for the Argo server UI by running the following:
kubectl get svc argo-server -n argo
The public URL will be returned by this command in the EXTERNAL-IP column. It should end in .elb.amazonaws.com.
Now open a browser and navigate to: https://{YOUR_EXTERNAL_IP}:2746
Note: If you are using Chrome on MacOS and are having difficulty bypassing the security warning raised by your browser when navigating to the Argo Server UI, there is a somewhat hidden workaround in Chrome: Simply click anywhere in the browser window to ensure the window is in focus, then type thisisunsafe. For more details on this workaround, see This is unsafe — Bypassing the Google Chrome “Your connection is not private” Warning
If everything is working properly, you should see an Argo Server landing page like the one shown below:
With the setup out of the way, let’s execute our first workflow with Argo.
Navigate to Workflows (this is the top icon in the sidebar) > Submit New Workflow > Edit using full workflow options
This will open the Workflow creation experience, which includes a text editor populated with a basic Workflow YAML manifest, and additional tabs for Workflow parameters and metadata.
Let’s replace this default Workflow manifest with a simple Workflow that runs the Whalesay container as shown below, then click Create to run the Workflow:
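(The embedded manifest does not render in this text. A minimal Workflow consistent with the description in the following paragraphs would look roughly like this; the generateName prefix and the argument string are assumptions.)
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  namespace: argo
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["Hello World"]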
Once you kick off the Workflow, you will be shown a Workflow diagram that will update as the workflow progresses (in this case, it will just be a single step). From this page, you will be able to view the details, inputs, outputs, and logs of your Workflow. There isn’t much to see here with our barebones Hello World Workflow, but you can navigate to Main Logs to confirm that the Whalesay ASCII art was logged.
Now that we know how to execute a Workflow, let’s take a closer look at the YAML manifest that we used to define the Workflow.
The first few lines of the manifest define Workflow metadata, specifically the generateName field, which will be combined with a randomly generated string to form the name of our Workflow (we could also have used the name field to explicitly define the name, but this name would have to be unique), and the namespace field, which defines the Kubernetes namespace in which the Workflow will be executed.
The rest of the manifest defines the Workflow Spec, which is where the actually Workflow logic is defined. In the example above, the Workflow Spec has 2 top-level fields:
entrypoint: This specifies the template that will be executed first.
templates: This defines all of the templates used in the Workflow. As we will see shortly, template is a bit of a loaded term in Argo, but here template simply means a unit of Workflow execution. In our Whalesay Workflow, we have just a single template named whalesay of type Container. Since this template defines a container to execute, we specify the container image to use, the command to execute, and the arguments to pass in.
Now let’s introduce a more complicated Workflow that executes a few different steps in sequence. This Workflow will accept 2 arguments, a salutation, and a username, and it will end by executing the Whalesay container using the provided arguments to create a greeting message.
Although we could easily accomplish this with just the Whalesay container, we will take a more roundabout approach here for the sake of demonstrating some of the other features that Argo offers.
Here is the YAML manifest for our new Workflow:
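(The gist with this manifest does not render in this text, so the sketch below is reconstructed from the walkthrough that follows. The step names get-greeting and get-title, the container images, the entrypoint name main, and the default parameter values are all assumptions, and the line numbers cited in the walkthrough refer to the author's original gist rather than to this sketch.)
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: greeting-
  namespace: argo
spec:
  entrypoint: main
  arguments:
    parameters:
    - name: salutation
      value: "Hello"
    - name: username
      value: "George"
  templates:
  - name: greeting
    inputs:
      parameters:
      - name: salutation
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        salutation = "{{inputs.parameters.salutation}}"
        print(salutation)
  - name: user
    inputs:
      parameters:
      - name: username
    script:
      image: alpine:3.12
      command: [sh]
      source: |
        echo "{{inputs.parameters.username}}"
  - name: whalesay
    inputs:
      parameters:
      - name: salutation
      - name: username
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["{{inputs.parameters.salutation}}, {{inputs.parameters.username}}"]
  - name: main
    steps:
    - - name: get-greeting
        template: greeting
        arguments:
          parameters:
          - name: salutation
            value: "{{workflow.parameters.salutation}}"
      - name: get-title
        template: user
        arguments:
          parameters:
          - name: username
            value: "{{workflow.parameters.username}}"
    - - name: whalesay
        template: whalesay
        arguments:
          parameters:
          - name: salutation
            value: "{{steps.get-greeting.outputs.result}}"
          - name: username
            value: "{{steps.get-title.outputs.result}}"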
Let’s break this down:
The first difference you’ll notice is the addition of the arguments field on line 6. This is where we define the salutation and username parameters that will be passed into our Workflow.
The first template defined in this Workflow is on line 14. This template is called greeting, accepts a parameter called salutation, and is of type Script. Script templates are similar to Container templates in that they require a container image and command; however, Script templates allow us to pass in an inline script to execute on the container. This template executes a simple Python script that retrieves the value of the salutation parameter, then prints it out.
The next template is called user and is also of type Script. This template is just like the greeting template, except that it uses a shell script instead of a Python script (there is no particular reason not to use Python here; this is just to illustrate that we are not limited to Python as our scripting language).
The third template is the same Whalesay Container template that we used earlier, except now it accepts 2 parameters, salutation and username, and uses these parameters to create the greeting.
The fourth template is a Steps template. Steps templates allow us to chain together a series of other templates, creating a multi-step workflow where the outputs of one template can be used as the inputs of another. The first step uses the greeting template and passes in the Workflow parameter salutation, the second step uses the user template and passes in the Workflow parameter username, and the final template uses the whalesay template and passes in the outputs of the previous two steps.
Here is what the workflow diagram looks like for this workflow:
Note: If you executed this workflow (or just looked closely at the diagram) you may have noticed that the get-title and get-greeting steps are executed in parallel, and then after both complete the whalesay step runs. This is because the Steps template actually accepts a list of lists of steps. Steps in the inner lists will be run in parallel, and steps in the outer list will be run in sequence. For example, the list of lists structure for the Workflow above looks like:
[[get-title, get-greeting], [whalesay]]
This is why get-title and get-greeting are run in parallel, and whalesay is run after both complete.
One of the most important features of a workflow orchestration platform is the ability to share frequently-used steps across multiple workflows. This allows users of the platform to quickly get started creating workflows by using a library of common steps that have already been implemented by other users. For example, you might create reusable steps for common data transformations or for integrations with external services.
The construct in Argo that represents a reusable step (or set of steps) is called a Workflow Template. This is not to be confused with the templates that we have discussed earlier when defining our Workflow YAML manifests. For clarity, here is the difference:
template: represents a single unit of execution in a Workflow. There are a number of different types of templates, including Container, Script, and Steps.
Workflow Template: A Workflow definition that is saved in your cluster, which can then be executed as a standalone Workflow, or as part of any other Workflow.
Let’s create a Workflow Template to see how this works. Our Workflow Template will be a random number generator that accepts a min and max value, and returns a random integer in the specified range.
The Workflow Templates tab in the Argo Server UI is where we can view and create Workflow Templates.
Creating a Workflow Template is very similar to creating a Workflow, only when we create a Workflow Template, it is saved for later use rather than executed immediately in the way that a Workflow is.
Here is the YAML manifest for the Workflow Template we will be working with:
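(The embedded gist is missing here, so below is a sketch reconstructed from the surrounding description. The image tag and the default parameter values are assumptions; random.randrange() is used because, as noted later, the range excludes the max value.)
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: random-number-generator
  namespace: argo
spec:
  entrypoint: generator
  arguments:
    parameters:
    - name: min
      value: "0"
    - name: max
      value: "100"
  templates:
  - name: generator
    inputs:
      parameters:
      - name: min
      - name: max
    script:
      image: python:alpine3.6
      command: [python]
      source: |
        import random
        print(random.randrange({{inputs.parameters.min}}, {{inputs.parameters.max}}))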
As you can see, this manifest is just like the Workflow manifests that we created earlier: it takes 2 parameters, min and max, and has a single Python script template which generates a random number in the specified range.
Once this Workflow Template is created, we will be able to view it from the Workflow Templates tab in the UI, and submit it as a standalone Workflow by selecting it and clicking Submit
We can also use this Workflow Template from within other Workflows. Let’s see how this works. We’ll go back to the Workflows tab and choose Submit New Workflow, then enter the following YAML manifest:
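(Once more, the embedded manifest does not render here. The sketch below shows the one detail the surrounding text relies on, the templateRef step named call-random-number-generator; the second step, the min/max values, and the image are assumptions added only to make the sketch self-contained.)
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: random-number-consumer-
  namespace: argo
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: call-random-number-generator
        templateRef:
          name: random-number-generator
          template: generator
        arguments:
          parameters:
          - name: min
            value: "1"
          - name: max
            value: "10"
    - - name: print-number
        template: print-number
        arguments:
          parameters:
          - name: number
            value: "{{steps.call-random-number-generator.outputs.result}}"
  - name: print-number
    inputs:
      parameters:
      - name: number
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["The random number is {{inputs.parameters.number}}"]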
This Workflow is a lot like the multi-step Workflow that we already discussed, only this time the Steps template is not just chaining together templates defined directly in this Workflow. The first step in the Step template (line 19 above) uses the templateRef field instead of the template field to reference a template defined in a Workflow Template. In this particular case, we are referencing the generator template, which is defined in the random-number-generator Workflow Template.
In case this is still a little confusing, here is a diagram that illustrates the relationships between the Workflow, Workflow Template, and templates in this example:
When we execute this Workflow, the call-random-number-generator step will run just like it is a template defined directly within the Workflow, when in reality it’s implementation is encapsulated in the random-number-generator Workflow Template.
This means that we can create many other Workflows (or other Workflow Templates) that consume this functionality without worrying about the details of its implementation. And if we ever need to modify this functionality (for example, the current random range excludes the max value, which may or may not be the desired behavior; or we may want to refactor the inline script into a container image to make it easier to test), we can do so in the Workflow Template, knowing that all consumers will pick up these modifications without any additional effort on their end.
In this article, we saw how to get started with Argo Workflows by creating a few simple Workflows. We looked at how to pass parameters between multiple templates in a Workflow, and then how to encapsulate shared logic using Workflow Templates.
Although the examples we explored here were very simplistic, these same core concepts can be used to build and execute powerful containerized workflows on Kubernetes, and they will act as a strong foundation to build upon if you decide to continue exploring the other features that Argo has to offer.
You can find all of the YAML manifests that we created in the argo-workflows-getting-started repository: https://github.com/gnovack/argo-workflows-getting-started
https://argoproj.github.io/argo-workflows/quick-start/
https://www.eksworkshop.com/advanced/410_batch/
Thanks for reading. Feel free to reach out with any questions or comments. | [
{
"code": null,
"e": 491,
"s": 172,
"text": "In a previous article, I explored Kubeflow Pipelines and walked through the process of creating and executing a simple Machine Learning Pipeline. In this article, I will take a closer look at Argo, the open-source workflow orchestration engine that is used as the default orchestration platform for Kubeflow Pipelines."
},
{
"code": null,
"e": 834,
"s": 491,
"text": "In order to start playing with Argo, we will first need a running Kubernetes cluster. I will be using EKS, the managed Kubernetes service on AWS, and will walk through the steps required to get Argo running on EKS. Any Kubernetes cluster will work, so long as Argo is deployed and has the permissions it needs to run Workflows on the cluster."
},
{
"code": null,
"e": 1030,
"s": 834,
"text": "The easiest way to get started with EKS, is to use the eksctl command-line tool. The instructions for installing eksctl can be found here: https://github.com/weaveworks/eksctl/blob/main/README.md"
},
{
"code": null,
"e": 1126,
"s": 1030,
"text": "Once eksctl is installed, we’ll create the Kubernetes cluster by running the following command:"
},
{
"code": null,
"e": 1148,
"s": 1126,
"text": "eksctl create cluster"
},
{
"code": null,
"e": 1416,
"s": 1148,
"text": "Note: The command above creates AWS resources that will incur costs on your account. By default, the eksctl create cluster command creates a Kubernetes cluster with 2 m5.large type EC2 Instances, each of which, at the time of this writing, cost around $0.10 per hour."
},
{
"code": null,
"e": 1624,
"s": 1416,
"text": "This command may take 15–20 minutes to finish provisioning the cluster. Once it is complete, the first thing we will do is create a Namespace that will house our Argo resources. We’ll use kubectl to do this:"
},
{
"code": null,
"e": 1654,
"s": 1624,
"text": "kubectl create namespace argo"
},
{
"code": null,
"e": 1741,
"s": 1654,
"text": "Next, we will download the Argo installation manifest from the Argo GitHub repository:"
},
{
"code": null,
"e": 1836,
"s": 1741,
"text": "wget https://raw.githubusercontent.com/argoproj/argo-workflows/stable/manifests/install.yaml ."
},
{
"code": null,
"e": 1989,
"s": 1836,
"text": "This will download a YAML manifest file called install.yaml that describes all of the Kubernetes resources that we will need to get Argo up and running."
},
{
"code": null,
"e": 2080,
"s": 1989,
"text": "We’ll have to make a few modifications to this file in order to access the Argo server UI."
},
{
"code": null,
"e": 2300,
"s": 2080,
"text": "Update the argo-server Service to be of type LoadBalancer. This will provision a Network Load Balancer in AWS that will route requests to the argo-server Service. The resulting resource definition should look like this:"
},
{
"code": null,
"e": 2520,
"s": 2300,
"text": "Update the argo-server Service to be of type LoadBalancer. This will provision a Network Load Balancer in AWS that will route requests to the argo-server Service. The resulting resource definition should look like this:"
},
{
"code": null,
"e": 2711,
"s": 2520,
"text": "2. Update the argo-server deployment to add an environment variable called BASE_HREF with a value of “/”. The result should look like this (the resource definition is truncated for brevity):"
},
{
"code": null,
"e": 2799,
"s": 2711,
"text": "After making these modifications, we can create the resources defined in this manifest:"
},
{
"code": null,
"e": 2837,
"s": 2799,
"text": "kubectl apply -n argo -f install.yaml"
},
{
"code": null,
"e": 2887,
"s": 2837,
"text": "Next, we will need to create an Ingress resource:"
},
{
"code": null,
"e": 3255,
"s": 2887,
"text": "Finally, in order for Argo to be able to access other Kubernetes resources, we will need to assign a role to the Argo Service Account. On a real production cluster, we would want to assign this Service Account to a role with specific limited permissions, but for now, we will just grant admin access to this account. We can accomplish this with the following command:"
},
{
"code": null,
"e": 3354,
"s": 3255,
"text": "kubectl -n argo create rolebinding default-admin --clusterrole=admin --serviceaccount=argo:default"
},
{
"code": null,
"e": 3500,
"s": 3354,
"text": "And that’s all the setup we will need to perform to get started. Now, we will get the public URL for the Argo server UI by running the following:"
},
{
"code": null,
"e": 3536,
"s": 3500,
"text": "kubectl get svc argo-server -n argo"
},
{
"code": null,
"e": 3648,
"s": 3536,
"text": "The public URL will be returned by this command in the EXTERNAL-IP column. It should end in .elb.amazonaws.com."
},
{
"code": null,
"e": 3716,
"s": 3648,
"text": "Now open a browser and navigate to: https://{YOUR_EXTERNAL_IP}:2746"
},
{
"code": null,
"e": 4152,
"s": 3716,
"text": "Note: If you are using Chrome on MacOS and are having difficulty bypassing the security warning raised by your browser when navigating to the Argo Server UI, there is a somewhat hidden workaround in Chrome: Simply click anywhere in the browser window to ensure the window is in focus, then type thisisunsafe. For more details on this workaround, see This is unsafe — Bypassing the Google Chrome “Your connection is not private” Warning"
},
{
"code": null,
"e": 4256,
"s": 4152,
"text": "If everything is working properly, you should see an Argo Server landing page like the one shown below:"
},
{
"code": null,
"e": 4331,
"s": 4256,
"text": "With the setup out of the way, let’s execute our first workflow with Argo."
},
{
"code": null,
"e": 4448,
"s": 4331,
"text": "Navigate to Workflows (this is the top icon in the sidebar) > Submit New Workflow > Edit using full workflow options"
},
{
"code": null,
"e": 4631,
"s": 4448,
"text": "This will open the Workflow creation experience, which includes a text editor populated with a basic Workflow YAML manifest, and additional tabs for Workflow parameters and metadata."
},
{
"code": null,
"e": 4787,
"s": 4631,
"text": "Let’s replace this default Workflow manifest with a simple Workflow that runs the Whalesay container as shown below, then click Create to run the Workflow:"
},
{
"code": null,
"e": 5200,
"s": 4787,
"text": "Once you kick off the Workflow, you will be shown a Workflow diagram that will update as the workflow progresses (in this case, it will just be a single step). From this page, you will be able to view the details, inputs, outputs, and logs of your Workflow. There isn’t much to see here with our barebones Hello World Workflow, but you can navigate to Main Logs to confirm that the Whalesay ASCII art was logged."
},
{
"code": null,
"e": 5327,
"s": 5200,
"text": "Now that we know how to execute a Workflow, let’s take a closer look at the YAML manifest that we used to define the Workflow."
},
{
"code": null,
"e": 5730,
"s": 5327,
"text": "The first few lines of the manifest define Workflow metadata, specifically the generateName field, which will be combined with a randomly generated string to form the name of our Workflow (we could also have used the name field to explicitly define the name, but this name would have to be unique), and the namespace field, which defines the Kubernetes namespace in which the Workflow will be executed."
},
{
"code": null,
"e": 5901,
"s": 5730,
"text": "The rest of the manifest defines the Workflow Spec, which is where the actually Workflow logic is defined. In the example above, the Workflow Spec has 2 top-level fields:"
},
{
"code": null,
"e": 5970,
"s": 5901,
"text": "entrypoint: This specifies the template that will be executed first."
},
{
"code": null,
"e": 6402,
"s": 5970,
"text": "templates: This defines all of the templates used in the Workflow. As we will see shortly, template is a bit of a loaded term in Argo, but here template simply means a unit of Workflow execution. In our Whalesay Workflow, we have just a single template named whalesay of type Container. Since this template defines a container to execute, we specify the container image to use, the command to execute, and the arguments to pass in."
},
{
"code": null,
"e": 6679,
"s": 6402,
"text": "Now let’s introduce a more complicated Workflow that executes a few different steps in sequence. This Workflow will accept 2 arguments, a salutation, and a username, and it will end by executing the Whalesay container using the provided arguments to create a greeting message."
},
{
"code": null,
"e": 6874,
"s": 6679,
"text": "Although we could easily accomplish this with just the Whalesay container, we will take a more roundabout approach here for the sake of demonstrating some of the other features that Argo offers."
},
{
"code": null,
"e": 6922,
"s": 6874,
"text": "Here is the YAML manifest for our new Workflow:"
},
{
"code": null,
"e": 6945,
"s": 6922,
"text": "Let’s break this down:"
},
{
"code": null,
"e": 7132,
"s": 6945,
"text": "The first difference you’ll notice is the addition of the arguments field on line 6. This is where we define the salutation and username parameters that will be passed into our Workflow."
},
{
"code": null,
"e": 7603,
"s": 7132,
"text": "The first template defined in this Workflow is on line 14. This template is called greeting, accepts a parameter called salutation, and is of type Script. Script templates are similar to Container templates in that they require a container image and command; however, Script templates allow us to pass in an inline script to execute on the container. This template executes a simple Python script that retrieves the value of the salutation parameter, then prints it out."
},
{
"code": null,
"e": 7937,
"s": 7603,
"text": "The next template is called user and is also of type Script. This template is just like the salutation template, except for that this template uses a shell script instead of a Python script (There is no particular reason not to use Python here; this is just to illustrate that we are not limited to Python as our scripting language)."
},
{
"code": null,
"e": 8129,
"s": 7937,
"text": "The third template is the same Whalesay Container template that we used earlier, except now it accepts 2 parameters, salutation and username, and uses these parameters to create the greeting."
},
{
"code": null,
"e": 8625,
"s": 8129,
"text": "The fourth template is a Steps template. Steps templates allow us to chain together a series of other templates, creating a multi-step workflow where the outputs of one template can be used as the inputs of another. The first step uses the greeting template and passes in the Workflow parameter salutation, the second step uses the user template and passes in the Workflow parameter username, and the final template uses the whalesay template and passes in the outputs of the previous two steps."
},
{
"code": null,
"e": 8689,
"s": 8625,
"text": "Here is what the workflow diagram looks like for this workflow:"
},
{
"code": null,
"e": 9164,
"s": 8689,
"text": "Note: If you executed this workflow (or just looked closely at the diagram) you may have noticed that the get-title and get-greeting steps are executed in parallel, and then after both complete the whalesay step runs. This is because the Steps template actually accepts a list of lists of steps. Steps in the inner lists will be run in parallel, and steps in the outer list will be run in sequence. For example, the list of lists structure for the Workflow above looks like:"
},
{
"code": null,
"e": 9204,
"s": 9164,
"text": "[[get-title, get-greeting], [whalesay]]"
},
{
"code": null,
"e": 9305,
"s": 9204,
"text": "This is why get-title and get-greeting are run in parallel, and whalesay is run after both complete."
},
{
"code": null,
"e": 9733,
"s": 9305,
"text": "One of the most important features of a workflow orchestration platform is the ability to share frequently-used steps across multiple workflows. This allows users of the platform to quickly get started creating workflows by using a library of common steps that have already been implemented by other users. For example, you might create reusable steps for common data transformations or for integrations with external services."
},
{
"code": null,
"e": 9993,
"s": 9733,
"text": "The construct in Argo that represents a reusable step (or set of steps) is called a Workflow Template. This is not to be confused with the templates that we have discussed earlier when defining our Workflow YAML manifests. For clarity, here is the difference:"
},
{
"code": null,
"e": 10148,
"s": 9993,
"text": "template: represents a single unit of execution in a Workflow. There are a number of different types of templates, including Container, Script, and Steps."
},
{
"code": null,
"e": 10307,
"s": 10148,
"text": "Workflow Template: A Workflow definition that is saved in your cluster, which can then be executed as a standalone Workflow, or as part of any other Workflow."
},
{
"code": null,
"e": 10506,
"s": 10307,
"text": "Let’s create a Workflow Template to see how this works. Our Workflow Template will be a random number generator that accepts a min and max value, and returns a random integer in the specified range."
},
{
"code": null,
"e": 10607,
"s": 10506,
"text": "The Workflow Templates tab in the Argo Server UI is where we can view and create Workflow Templates."
},
{
"code": null,
"e": 10807,
"s": 10607,
"text": "Creating a Workflow Template is very similar to creating a Workflow, only when we create a Workflow Template, it is saved for later use rather than executed immediately in the way that a Workflow is."
},
{
"code": null,
"e": 10884,
"s": 10807,
"text": "Here is the YAML manifest for the Workflow Template we will be working with:"
},
{
"code": null,
"e": 11107,
"s": 10884,
"text": "As you can see, this manifest is just like the Workflow manifests that we created earlier: it takes 2 parameters, min and max, and has a single Python script template which generates a random number in the specified range."
},
{
"code": null,
"e": 11292,
"s": 11107,
"text": "Once this Workflow Template is created, we will be able to view it from the Workflow Templates tab in the UI, and submit it as a standalone Workflow by selecting it and clicking Submit"
},
{
"code": null,
"e": 11493,
"s": 11292,
"text": "We can also use this Workflow Template from within other Workflows. Let’s see how this works. We’ll go back to the Workflows tab and choose Submit New Workflow, then enter the following YAML manifest:"
},
{
"code": null,
"e": 11981,
"s": 11493,
"text": "This Workflow is a lot like the multi-step Workflow that we already discussed, only this time the Steps template is not just chaining together templates defined directly in this Workflow. The first step in the Step template (line 19 above) uses the templateRef field instead of the template field to reference a template defined in a Workflow Template. In this particular case, we are referencing the generator template, which is defined in the random-number-generator Workflow Template."
},
{
"code": null,
"e": 12148,
"s": 11981,
"text": "In case this is still a little confusing, here is a diagram that illustrates the relationships between the Workflow, Workflow Template, and templates in this example:"
},
{
"code": null,
"e": 12393,
"s": 12148,
"text": "When we execute this Workflow, the call-random-number-generator step will run just like it is a template defined directly within the Workflow, when in reality it’s implementation is encapsulated in the random-number-generator Workflow Template."
},
{
"code": null,
"e": 12996,
"s": 12393,
"text": "This means that we can create many other Workflows (or other Workflow Templates) that consume this functionality without concern about the details of its implementation, and if we ever need to modify this functionality (for example, the current random range excludes the max value, this may or may not be the desired behavior; or we may want to refactor this into a container image rather than an inline script in order to more easily test this function), we can do so in the Workflow Template, knowing that all consumers will start to use these modification without any additional effort on their end."
},
{
"code": null,
"e": 13240,
"s": 12996,
"text": "In this article, we saw how to get started with Argo Workflows by creating a few simple Workflows. We looked at how to pass parameters between multiple templates in a Workflow, and then how to encapsulate shared logic using Workflow Templates."
},
{
"code": null,
"e": 13541,
"s": 13240,
"text": "Although the examples we explored here were very simplistic, these same core concepts can be used to build and execute powerful containerized workflows on Kubernetes, and they will act as a strong foundation to build upon if you decide to continue exploring the other features that Argo has to offer."
},
{
"code": null,
"e": 13705,
"s": 13541,
"text": "You can find all of the YAML manifests that we created in the argo-workflows-getting-started respository: https://github.com/gnovack/argo-workflows-getting-started"
},
{
"code": null,
"e": 13760,
"s": 13705,
"text": "https://argoproj.github.io/argo-workflows/quick-start/"
},
{
"code": null,
"e": 13808,
"s": 13760,
"text": "https://www.eksworkshop.com/advanced/410_batch/"
}
] |
URL getPath() method in Java with Examples | 31 Dec, 2018
The getPath() function is a part of the URL class. The function getPath() returns the path component of a specified URL.
Function Signature:
public String getPath()
Syntax:
url.getPath()
Parameter: This function does not require any parameter.
Return Type: The function returns a String type value.
The below programs illustrate the use of the getPath() function:
Example 1: Given a URL, we will get its path using the getPath() function.
// Java program to show the
// use of the function getPath()

import java.net.*;

class Solution {
    public static void main(String args[])
    {
        // url object
        URL url = null;

        try {
            // create a URL
            url = new URL(
                "https://www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples/");

            // get the Path
            String _Path = url.getPath();

            // display the URL
            System.out.println("URL = " + url);

            // display the Path
            System.out.println(" Path= " + _Path);
        }

        // if any error occurs
        catch (Exception e) {

            // display the error
            System.out.println(e);
        }
    }
}
URL = https://www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples/
 Path= /url-getprotocol-method-in-java-with-examples/
Example 2: Now let us see how getPath() is different from getFile(): getPath() excludes the query string, whereas getFile() includes it.
// Java program to show the
// use of the function getPath()

import java.net.*;

class Solution {
    public static void main(String args[])
    {
        // url object
        URL url = null;

        try {
            // create a URL
            url = new URL(
                "https://www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples?title=protocol");

            // get the Path
            String _Path = url.getPath();

            // display the URL
            System.out.println("URL = " + url);

            // display the Path
            System.out.println(" Path= " + _Path);
        }

        // if any error occurs
        catch (Exception e) {

            // display the error
            System.out.println(e);
        }
    }
}
URL = https://www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples?title=protocol
 Path= /url-getprotocol-method-in-java-with-examples
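To make the difference explicit, here is a small additional sketch (not part of the original article; the class name PathVsFile is just an illustrative choice) that prints both getPath() and getFile() for the same URL:

// Minimal sketch: contrast getPath() and getFile() on one URL
import java.net.*;

class PathVsFile {
    public static void main(String[] args) throws Exception
    {
        URL url = new URL(
            "https://www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples?title=protocol");

        // getPath() stops before the query string
        System.out.println("getPath() = " + url.getPath());

        // getFile() is the path plus the query string
        System.out.println("getFile() = " + url.getFile());
    }
}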
Java-Functions
Java-net-package
Java-URL
Java
Java
{
"code": null,
"e": 28,
"s": 0,
"text": "\n31 Dec, 2018"
},
{
"code": null,
"e": 140,
"s": 28,
"text": "The getPath() function is a part of URL class. The function getPath() returns the Path name of a specified URL."
},
{
"code": null,
"e": 160,
"s": 140,
"text": "Function Signature:"
},
{
"code": null,
"e": 184,
"s": 160,
"text": "public String getPath()"
},
{
"code": null,
"e": 192,
"s": 184,
"text": "Syntax:"
},
{
"code": null,
"e": 206,
"s": 192,
"text": "url.getPath()"
},
{
"code": null,
"e": 307,
"s": 206,
"text": "Parameter: This function does not require any parameterReturn Type: The function returns String Type"
},
{
"code": null,
"e": 365,
"s": 307,
"text": "Below programs illustrates the use of getPath() function:"
},
{
"code": null,
"e": 439,
"s": 365,
"text": "Example 1: Given a URL we will get the Path using the getPath() function."
},
{
"code": "// Java program to show the// use of the function getPath() import java.net.*; class Solution { public static void main(String args[]) { // url object URL url = null; try { // create a URL url = new URL( \"https:// www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples/\"); // get the Path String _Path = url.getPath(); // display the URL System.out.println(\"URL = \" + url); // display the Path System.out.println(\" Path= \" + _Path); } // if any error occurs catch (Exception e) { // display the error System.out.println(e); } }}",
"e": 1178,
"s": 439,
"text": null
},
{
"code": null,
"e": 1316,
"s": 1178,
"text": "URL = https:// www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples/\n Path= /url-getprotocol-method-in-java-with-examples/\n"
},
{
"code": null,
"e": 1448,
"s": 1316,
"text": "Example 2: Now see how getPath() is different from getFile(). getPath() will exclude the query but getFile() will include the query"
},
{
"code": "// Java program to show the// use of the function getPath() import java.net.*; class Solution { public static void main(String args[]) { // url object URL url = null; try { // create a URL url = new URL( \"https:// www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples?title=protocol\"); // get the Path String _Path = url.getPath(); // display the URL System.out.println(\"URL = \" + url); // display the Path System.out.println(\" Path= \" + _Path); } // if any error occurs catch (Exception e) { // display the error System.out.println(e); } }}",
"e": 2201,
"s": 1448,
"text": null
},
{
"code": null,
"e": 2352,
"s": 2201,
"text": "URL = https:// www.geeksforgeeks.org/url-getprotocol-method-in-java-with-examples?title=protocol\n Path= /url-getprotocol-method-in-java-with-examples\n"
},
{
"code": null,
"e": 2367,
"s": 2352,
"text": "Java-Functions"
},
{
"code": null,
"e": 2384,
"s": 2367,
"text": "Java-net-package"
},
{
"code": null,
"e": 2393,
"s": 2384,
"text": "Java-URL"
},
{
"code": null,
"e": 2398,
"s": 2393,
"text": "Java"
},
{
"code": null,
"e": 2403,
"s": 2398,
"text": "Java"
},
{
"code": null,
"e": 2501,
"s": 2403,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2552,
"s": 2501,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 2583,
"s": 2552,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 2602,
"s": 2583,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 2632,
"s": 2602,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 2650,
"s": 2632,
"text": "ArrayList in Java"
},
{
"code": null,
"e": 2665,
"s": 2650,
"text": "Stream In Java"
},
{
"code": null,
"e": 2685,
"s": 2665,
"text": "Collections in Java"
},
{
"code": null,
"e": 2709,
"s": 2685,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 2741,
"s": 2709,
"text": "Multidimensional Arrays in Java"
}
] |
Pie plot using Plotly in Python | 28 Jun, 2021
Plotly is a Python library that is used to design graphs, especially interactive graphs. It can plot various graphs and charts like histograms, barplots, boxplots, spreadplots, and many more, and it is mainly used in data analysis as well as financial analysis.
A pie chart is a circular analytical chart that is divided into regions to represent numerical percentages. In px.pie, the data represented by the sectors of the pie is set with the values argument, and the sectors are labelled with the names argument. A pie chart is usually used to show each category's percentage as a corresponding slice of the pie, and its distinct portions and color coding make the data easy to understand.
Syntax: plotly.express.pie(data_frame=None, names=None, values=None, color=None, color_discrete_sequence=None, color_discrete_map={}, hover_name=None, hover_data=None, custom_data=None, labels={}, title=None, template=None, width=None, height=None, opacity=None, hole=None)
Parameters: The parameters used in this article are described briefly below; the full list matches the signature above.
data_frame: the DataFrame containing the data to be plotted.
values: the column or array used to set the size of each sector.
names: the column or array used as labels for the sectors.
title: the title of the figure.
hover_data: extra columns whose values are shown in the hover tooltip.
color_discrete_sequence: a list of colors to cycle through when coloring the sectors.
hole: the fraction of the radius to cut out of the center, which turns the pie into a donut chart.
Example:
Python3
import plotly.express as px
import numpy

# Random Data
random_x = [100, 2000, 550]
names = ['A', 'B', 'C']

fig = px.pie(values=random_x, names=names)
fig.show()
Output:
Rows that have the same value for the names parameter are grouped together into a single sector. Grouping repeated labels in this way makes the data easier to understand. Let's see one example given below.
Example: The iris dataset contains many rows but only 3 species, so the data is grouped according to the species.
Python3
import plotly.express as px

# Loading the iris dataset
df = px.data.iris()

fig = px.pie(df, values="sepal_width",
             names="species")
fig.show()
Output:
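To see that the grouping really aggregates the rows, the sector sizes can be cross-checked against a plain pandas aggregation. A minimal sketch (not from the original article), assuming the same iris dataset as above:

import plotly.express as px

# Loading the iris dataset
df = px.data.iris()

# Each sector's size is the sum of sepal_width per species,
# which can be verified directly with pandas
print(df.groupby("species")["sepal_width"].sum())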
The pie chart can be customized using some of the px.pie parameters, such as title, hover_data, and labels. Let's see the below example for a better understanding.
Example:
Python3
import plotly.express as px

# Loading the iris dataset
df = px.data.iris()

fig = px.pie(df, values="sepal_width",
             names="species",
             title='Iris Dataset',
             hover_data=['sepal_length'])
fig.show()
Output:
The colors of the pie sectors can be changed in Plotly. Different colors help to distinguish the sectors from one another, which helps in understanding the data more efficiently.
Example:
Python3
import plotly.express as px

# Loading the iris dataset
df = px.data.iris()

fig = px.pie(df, values="sepal_width",
             names="species",
             color_discrete_sequence=px.colors.sequential.RdBu)
fig.show()
Output:
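As one further customization, the hole parameter from the syntax above cuts a fraction of the radius out of the center, turning the pie into a donut chart. A minimal sketch (the value 0.4 is an arbitrary illustrative choice):

import plotly.express as px

# Loading the iris dataset
df = px.data.iris()

# hole is the fraction of the radius removed from the
# center, which turns the pie into a donut chart
fig = px.pie(df, values="sepal_width",
             names="species", hole=0.4)
fig.show()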
Python-Plotly
Python
{
"code": null,
"e": 28,
"s": 0,
"text": "\n28 Jun, 2021"
},
{
"code": null,
"e": 331,
"s": 28,
"text": "Plotly is a Python library which is used to design graphs, especially interactive graphs. It can plot various graphs and charts like histogram, barplot, boxplot, spreadplot and many more. It is mainly used in data analysis as well as financial analysis. plotly is an interactive visualization library. "
},
{
"code": null,
"e": 728,
"s": 331,
"text": "A pie chart is a circular analytical chart, which is divided into region to symbolize numerical percentage. In px.pie, data anticipated by the sectors of the pie to set the values. All sector are classify in names. Pie chart is used usually to show the percentage with next corresponding slice of pie. Pie chart helps to make understand well because of its different portions and color codings."
},
{
"code": null,
"e": 1002,
"s": 728,
"text": "Syntax: plotly.express.pie(data_frame=None, names=None, values=None, color=None, color_discrete_sequence=None, color_discrete_map={}, hover_name=None, hover_data=None, custom_data=None, labels={}, title=None, template=None, width=None, height=None, opacity=None, hole=None)"
},
{
"code": null,
"e": 1014,
"s": 1002,
"text": "Parameters:"
},
{
"code": null,
"e": 1023,
"s": 1014,
"text": "Example:"
},
{
"code": null,
"e": 1031,
"s": 1023,
"text": "Python3"
},
{
"code": "import plotly.express as pximport numpy # Random Datarandom_x = [100, 2000, 550]names = ['A', 'B', 'C'] fig = px.pie(values=random_x, names=names)fig.show()",
"e": 1188,
"s": 1031,
"text": null
},
{
"code": null,
"e": 1200,
"s": 1192,
"text": "Output:"
},
{
"code": null,
"e": 1396,
"s": 1204,
"text": "The same value for the names parameter are grouped together. Repeated labels visually groups rows or columns together to make the data easier to understand. Let’s see one example given below."
},
{
"code": null,
"e": 1511,
"s": 1398,
"text": "Example: The iris dataset contains many rows but only 3 species so the data is grouped according to the species."
},
{
"code": null,
"e": 1521,
"s": 1513,
"text": "Python3"
},
{
"code": "import plotly.express as px # Loading the iris datasetdf = px.data.iris() fig = px.pie(df, values=\"sepal_width\", names=\"species\")fig.show()",
"e": 1661,
"s": 1521,
"text": null
},
{
"code": null,
"e": 1674,
"s": 1665,
"text": "Output: "
},
{
"code": null,
"e": 1845,
"s": 1678,
"text": "The pie chart can be customized by using the px.pie, using some of its parameters such as hover_data and labels. Let’s see the below example for better understanding."
},
{
"code": null,
"e": 1856,
"s": 1847,
"text": "Example:"
},
{
"code": null,
"e": 1866,
"s": 1858,
"text": "Python3"
},
{
"code": "import plotly.express as px # Loading the iris datasetdf = px.data.iris() fig = px.pie(df, values=\"sepal_width\", names=\"species\", title='Iris Dataset', hover_data=['sepal_length'])fig.show()",
"e": 2069,
"s": 1866,
"text": null
},
{
"code": null,
"e": 2081,
"s": 2073,
"text": "Output:"
},
{
"code": null,
"e": 2253,
"s": 2085,
"text": "The color of pie can be change in plotly module. Different colors help in to distinguish the data from each other, which help to understand the data more efficiently."
},
{
"code": null,
"e": 2264,
"s": 2255,
"text": "Example:"
},
{
"code": null,
"e": 2274,
"s": 2266,
"text": "Python3"
},
{
"code": "import plotly.express as px # Loading the iris datasetdf = px.data.iris() fig = px.pie(df, values=\"sepal_width\", names=\"species\", color_discrete_sequence=px.colors.sequential.RdBu)fig.show()",
"e": 2489,
"s": 2274,
"text": null
},
{
"code": null,
"e": 2501,
"s": 2493,
"text": "Output:"
},
{
"code": null,
"e": 2514,
"s": 2505,
"text": "sooda367"
},
{
"code": null,
"e": 2528,
"s": 2514,
"text": "Python-Plotly"
},
{
"code": null,
"e": 2535,
"s": 2528,
"text": "Python"
},
{
"code": null,
"e": 2633,
"s": 2535,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2661,
"s": 2633,
"text": "Read JSON file using Python"
},
{
"code": null,
"e": 2711,
"s": 2661,
"text": "Adding new column to existing DataFrame in Pandas"
},
{
"code": null,
"e": 2733,
"s": 2711,
"text": "Python map() function"
},
{
"code": null,
"e": 2751,
"s": 2733,
"text": "Python Dictionary"
},
{
"code": null,
"e": 2795,
"s": 2751,
"text": "How to get column names in Pandas dataframe"
},
{
"code": null,
"e": 2837,
"s": 2795,
"text": "Different ways to create Pandas Dataframe"
},
{
"code": null,
"e": 2860,
"s": 2837,
"text": "Taking input in Python"
},
{
"code": null,
"e": 2882,
"s": 2860,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 2917,
"s": 2882,
"text": "Read a file line by line in Python"
}
] |
C# | SortedDictionary.Add() Method | 21 Jan, 2019
This method is used to add an element with the specified key and value to the sorted dictionary. The elements are sorted according to TKey.
Syntax:
public void Add (TKey key, TValue value);
Parameters:
key: It is the key of the element to add.
value: It is the value of the element to add. The value can be null for reference types.
Exceptions:
ArgumentNullException : If the key is null.
ArgumentException : If an element with the same key already exists in the Dictionary.
Below are the programs to illustrate the use of the SortedDictionary<TKey, TValue>.Add() method:
Example 1:
// C# code to add the specified key
// and value into the SortedDictionary
using System;
using System.Collections.Generic;

class GFG {

    // Driver code
    public static void Main()
    {
        // Create a new SortedDictionary
        // of strings, with string keys.
        SortedDictionary<string, string> myDict =
            new SortedDictionary<string, string>();

        // Adding key/value pairs in myDict
        myDict.Add("Australia", "Canberra");
        myDict.Add("Belgium", "Brussels");
        myDict.Add("Netherlands", "Amsterdam");
        myDict.Add("China", "Beijing");
        myDict.Add("Russia", "Moscow");
        myDict.Add("India", "New Delhi");

        // To get count of key/value
        // pairs in myDict
        Console.WriteLine("Total key/value pairs in"
                          + " myDict are : " + myDict.Count);

        // Displaying the key/value
        // pairs in myDict
        Console.WriteLine("The key/value pairs"
                          + " in myDict are : ");

        foreach(KeyValuePair<string, string> kvp in myDict)
        {
            Console.WriteLine("Key = {0}, Value = {1}",
                              kvp.Key, kvp.Value);
        }
    }
}
Total key/value pairs in myDict are : 6
The key/value pairs in myDict are :
Key = Australia, Value = Canberra
Key = Belgium, Value = Brussels
Key = China, Value = Beijing
Key = India, Value = New Delhi
Key = Netherlands, Value = Amsterdam
Key = Russia, Value = Moscow
Example 2:
// C# code to add the specified key
// and value into the SortedDictionary
using System;
using System.Collections.Generic;

class GFG {

    // Driver code
    public static void Main()
    {
        // Create a new SortedDictionary
        // of strings, with string keys.
        SortedDictionary<string, string> myDict =
            new SortedDictionary<string, string>();

        // Adding key/value pairs in myDict
        myDict.Add("Australia", "Canberra");
        myDict.Add("Belgium", "Brussels");
        myDict.Add("Netherlands", "Amsterdam");
        myDict.Add("China", "Beijing");
        myDict.Add("Russia", "Moscow");
        myDict.Add("India", "New Delhi");

        // The Add method throws an
        // exception if the new key is
        // already in the dictionary.
        try {
            myDict.Add("Russia", "Moscow");
        }
        catch (ArgumentException) {
            Console.WriteLine("An element with Key "
                              + "= \"Russia\" already exists.");
        }
    }
}
An element with Key = "Russia" already exists.
Note:
A key cannot be null, but a value can be, provided the value type TValue is a reference type.
This method is an O(log n) operation, where n is Count of elements in the SortedDictionary.
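As a quick illustration of the notes above, the following minimal sketch (not from the original article; the key/value strings and the class name are illustrative) shows that a null value is accepted for a reference-type TValue, while a null key throws an ArgumentNullException:

// Minimal sketch: null values are allowed, null keys are not
using System;
using System.Collections.Generic;

class NullKeyDemo {
    public static void Main()
    {
        SortedDictionary<string, string> dict =
            new SortedDictionary<string, string>();

        // Allowed: the value is null (string is a reference type)
        dict.Add("Bhutan", null);

        try {
            // Not allowed: the key is null
            dict.Add(null, "Thimphu");
        }
        catch (ArgumentNullException) {
            Console.WriteLine("A null key is not allowed.");
        }
    }
}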
Reference:
https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.sorteddictionary-2.add?view=netframework-4.7.2
CSharp SortedDictionary Class
CSharp-Generic-Namespace
CSharp-method
C#
{
"code": null,
"e": 28,
"s": 0,
"text": "\n21 Jan, 2019"
},
{
"code": null,
"e": 143,
"s": 28,
"text": "This is used to add a specified key and value to the sorted dictionary. The elements are sorted according to TKey."
},
{
"code": null,
"e": 151,
"s": 143,
"text": "Syntax:"
},
{
"code": null,
"e": 194,
"s": 151,
"text": "public void Add (TKey key, TValue value);\n"
},
{
"code": null,
"e": 206,
"s": 194,
"text": "Parameters:"
},
{
"code": null,
"e": 336,
"s": 206,
"text": "key: It is the key of the element to add.value: It is the value of the element to add. The value can be null for reference types."
},
{
"code": null,
"e": 348,
"s": 336,
"text": "Exceptions:"
},
{
"code": null,
"e": 392,
"s": 348,
"text": "ArgumentNullException : If the key is null."
},
{
"code": null,
"e": 478,
"s": 392,
"text": "ArgumentException : If an element with the same key already exists in the Dictionary."
},
{
"code": null,
"e": 565,
"s": 478,
"text": "Below are the programs to illustrate the use of Dictionary<TKey, TValue>.Add() Method:"
},
{
"code": null,
"e": 576,
"s": 565,
"text": "Example 1:"
},
{
"code": "// C# code to add the specified key// and value into the SortedDictionaryusing System;using System.Collections.Generic; class GFG { // Driver code public static void Main() { // Create a new SortedDictionary // of strings, with string keys. SortedDictionary<string, string> myDict = new SortedDictionary<string, string>(); // Adding key/value pairs in myDict myDict.Add(\"Australia\", \"Canberra\"); myDict.Add(\"Belgium\", \"Brussels\"); myDict.Add(\"Netherlands\", \"Amsterdam\"); myDict.Add(\"China\", \"Beijing\"); myDict.Add(\"Russia\", \"Moscow\"); myDict.Add(\"India\", \"New Delhi\"); // To get count of key/value // pairs in myDict Console.WriteLine(\"Total key/value pairs in\" + \" myDict are : \" + myDict.Count); // Displaying the key/value // pairs in myDict Console.WriteLine(\"The key/value pairs\" + \" in myDict are : \"); foreach(KeyValuePair<string, string> kvp in myDict) { Console.WriteLine(\"Key = {0}, Value = {1}\", kvp.Key, kvp.Value); } }}",
"e": 1759,
"s": 576,
"text": null
},
{
"code": null,
"e": 2029,
"s": 1759,
"text": "Total key/value pairs in myDict are : 6\nThe key/value pairs in myDict are : \nKey = Australia, Value = Canberra\nKey = Belgium, Value = Brussels\nKey = China, Value = Beijing\nKey = India, Value = New Delhi\nKey = Netherlands, Value = Amsterdam\nKey = Russia, Value = Moscow\n"
},
{
"code": null,
"e": 2040,
"s": 2029,
"text": "Example 2:"
},
{
"code": "// C# code to add the specified key// and value into the SortedDictionaryusing System;using System.Collections.Generic; class GFG { // Driver code public static void Main() { // Create a new SortedDictionary // of strings, with string keys. SortedDictionary<string, string> myDict = new SortedDictionary<string, string>(); // Adding key/value pairs in myDict myDict.Add(\"Australia\", \"Canberra\"); myDict.Add(\"Belgium\", \"Brussels\"); myDict.Add(\"Netherlands\", \"Amsterdam\"); myDict.Add(\"China\", \"Beijing\"); myDict.Add(\"Russia\", \"Moscow\"); myDict.Add(\"India\", \"New Delhi\"); // The Add method throws an // exception if the new key is // already in the dictionary. try { myDict.Add(\"Russia\", \"Moscow\"); } catch (ArgumentException) { Console.WriteLine(\"An element with Key \" + \"= \\\"Russia\\\" already exists.\"); } }}",
"e": 3042,
"s": 2040,
"text": null
},
{
"code": null,
"e": 3090,
"s": 3042,
"text": "An element with Key = \"Russia\" already exists.\n"
},
{
"code": null,
"e": 3096,
"s": 3090,
"text": "Note:"
},
{
"code": null,
"e": 3184,
"s": 3096,
"text": "A key cannot be null, but a value can be. If the value type TValue is a reference type."
},
{
"code": null,
"e": 3276,
"s": 3184,
"text": "This method is an O(log n) operation, where n is Count of elements in the SortedDictionary."
},
{
"code": null,
"e": 3287,
"s": 3276,
"text": "Reference:"
},
{
"code": null,
"e": 3405,
"s": 3287,
"text": "https://docs.microsoft.com/en-us/dotnet/api/system.collections.generic.sorteddictionary-2.add?view=netframework-4.7.2"
},
{
"code": null,
"e": 3435,
"s": 3405,
"text": "CSharp SortedDictionary Class"
},
{
"code": null,
"e": 3460,
"s": 3435,
"text": "CSharp-Generic-Namespace"
},
{
"code": null,
"e": 3474,
"s": 3460,
"text": "CSharp-method"
},
{
"code": null,
"e": 3477,
"s": 3474,
"text": "C#"
},
{
"code": null,
"e": 3575,
"s": 3477,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3618,
"s": 3575,
"text": "C# | Multiple inheritance using interfaces"
},
{
"code": null,
"e": 3667,
"s": 3618,
"text": "Differences Between .NET Core and .NET Framework"
},
{
"code": null,
"e": 3690,
"s": 3667,
"text": "Extension Method in C#"
},
{
"code": null,
"e": 3706,
"s": 3690,
"text": "C# | List Class"
},
{
"code": null,
"e": 3734,
"s": 3706,
"text": "HashSet in C# with Examples"
},
{
"code": null,
"e": 3795,
"s": 3734,
"text": "C# | .NET Framework (Basic Architecture and Component Stack)"
},
{
"code": null,
"e": 3818,
"s": 3795,
"text": "Switch Statement in C#"
},
{
"code": null,
"e": 3840,
"s": 3818,
"text": "Partial Classes in C#"
},
{
"code": null,
"e": 3865,
"s": 3840,
"text": "Lambda Expressions in C#"
}
] |
TreeSet in Java | 07 Jul, 2022
TreeSet is one of the most important implementations of the SortedSet interface in Java that uses a tree for storage. By default, the set keeps its elements in their natural ordering (when no explicit comparator is provided), and this ordering must be consistent with equals if the set is to correctly implement the Set interface.
It can also be ordered by a Comparator provided at set creation time, depending on which constructor is used. The TreeSet class implements the NavigableSet interface and inherits from the AbstractSet class.
It can clearly be perceived from the above image that the navigable set extends the sorted set interface. Since a set doesn’t retain the insertion order, the navigable set interface provides the implementation to navigate through the Set. The class which implements the navigable set is a TreeSet which is an implementation of a self-balancing tree. Therefore, this interface provides us with a way to navigate through this tree.
Note:
An object is said to be comparable if and only if the corresponding class implements a Comparable interface.
The String class and all the wrapper classes already implement the Comparable interface, and the StringBuffer class also implements the Comparable interface. Hence, we DO NOT get a ClassCastException in the above example.
For an empty tree-set, when trying to insert null as the first value, one will get NPE from JDK 7. From JDK 7 onwards, null is not at all accepted by TreeSet. However, up to JDK 6, null was accepted as the first value, but any insertion of more null values in the TreeSet resulted in NullPointerException. Hence, it was considered a bug and thus removed in JDK 7.
TreeSet serves as an excellent choice for storing large amounts of sorted information which are supposed to be accessed quickly because of its faster access and retrieval time.
The insertion of null values into a TreeSet throws NullPointerException because while insertion of null, it gets compared to the existing elements, and null cannot be compared to any value.
How does TreeSet work Internally?
TreeSet is basically an implementation of a self-balancing binary search tree like a Red-Black Tree. Therefore operations like add, remove, and search takes O(log(N)) time. The reason is that in a self-balancing tree, it is made sure that the height of the tree is always O(log(N)) for all the operations. Therefore, this is considered as one of the most efficient data structures in order to store the huge sorted data and perform operations on it. However, operations like printing N elements in the sorted order take O(N) time.
Now let us discuss the synchronized TreeSet before moving ahead. The implementation of a TreeSet is not synchronized. This means that if multiple threads access a tree set concurrently, and at least one of the threads modifies the set, it must be synchronized externally. This is typically accomplished by synchronizing on some object that naturally encapsulates the set. If no such object exists, the set should be “wrapped” using the Collections.synchronizedSortedSet method. This is best done at creation time, to prevent accidental unsynchronized access to the set. It can be achieved as shown below:
TreeSet ts = new TreeSet(); 
SortedSet syncSet = Collections.synchronizedSortedSet(ts);
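A slightly fuller sketch of the same idea is shown below (the element values and class name are illustrative); note that iteration over the wrapped set still has to be guarded by synchronizing on the wrapper:

// Minimal sketch: wrapping a TreeSet so it can be shared between threads
import java.util.*;

class SyncTreeSetDemo {
    public static void main(String[] args)
    {
        SortedSet<String> syncSet =
            Collections.synchronizedSortedSet(new TreeSet<String>());

        syncSet.add("Geek");
        syncSet.add("For");
        syncSet.add("Geeks");

        // Iteration is not atomic, so it must be guarded manually
        synchronized (syncSet) {
            for (String value : syncSet)
                System.out.println(value);
        }
    }
}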
Constructors of TreeSet Class are as follows:
In order to create a TreeSet, we need to create an object of the TreeSet class. The TreeSet class consists of various constructors which allow the possible creation of the TreeSet. The following are the constructors available in this class:
TreeSet(): This constructor is used to build an empty TreeSet object in which elements will get stored in default natural sorting order.
Syntax: If we wish to create an empty TreeSet with the name ts, then, it can be created as:
TreeSet ts = new TreeSet();
TreeSet(Comparator): This constructor is used to build an empty TreeSet object in which elements will need an external specification of the sorting order.
Syntax: If we wish to create an empty TreeSet with the name ts with an external sorting phenomenon, then, it can be created as:
TreeSet ts = new TreeSet(Comparator comp);
TreeSet(Collection): This constructor is used to build a TreeSet object containing all the elements from the given collection in which elements will get stored in default natural sorting order. In short, this constructor is used when any conversion is needed from any Collection object to TreeSet object.
Syntax: If we wish to create a TreeSet with the name ts, then, it can be created as follows:
TreeSet t = new TreeSet(Collection col);
TreeSet(SortedSet): This constructor is used to build a TreeSet object containing all the elements from the given sortedset in which elements will get stored in default natural sorting order. In short, this constructor is used to convert the SortedSet object to the TreeSet object.
Syntax: If we wish to create a TreeSet with the name ts, then, it can be created as follows:
TreeSet t = new TreeSet(SortedSet s);
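To make the Comparator-based and Collection-based constructors above more concrete, here is a minimal sketch (the reverse-order comparator, element values, and class name are purely illustrative choices):

// Minimal sketch: using the Comparator and Collection constructors
import java.util.*;

class TreeSetConstructorsDemo {
    public static void main(String[] args)
    {
        // TreeSet(Comparator): store elements in reverse (descending) order
        Set<String> reversed = new TreeSet<>(Comparator.reverseOrder());
        reversed.add("A");
        reversed.add("C");
        reversed.add("B");
        System.out.println(reversed);   // [C, B, A]

        // TreeSet(Collection): copy an existing collection into sorted order
        List<String> source = Arrays.asList("Geeks", "For", "Geek");
        Set<String> copied = new TreeSet<>(source);
        System.out.println(copied);     // [For, Geek, Geeks]
    }
}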
The methods of the TreeSet class are listed below in tabular format; several of them are used later in the implementation part. TreeSet implements SortedSet, so all the methods of the Collection, Set, and SortedSet interfaces are available on it. In the methods table, the “?” signifies that the method works with any type of object, including user-defined objects.
Illustration: The following implementation demonstrates how to create and use a TreeSet.
Java
// Java program to Illustrate Working of TreeSet

// Importing required utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating a Set interface with reference to
        // TreeSet
        Set<String> ts1 = new TreeSet<>();

        // Elements are added using add() method
        ts1.add("A");
        ts1.add("B");
        ts1.add("C");

        // Duplicates will not get insert
        ts1.add("C");

        // Elements get stored in default natural
        // Sorting Order(Ascending)
        System.out.println(ts1);
    }
}
[A, B, C]
Implementation:
Here we will be performing various operations over the TreeSet object to get familiar with the methods and concepts of TreeSet in java. Let’s see how to perform a few frequently used operations on the TreeSet. They are listed as follows:
Adding elements
Accessing elements
Removing elements
Iterating through elements
Now let us discuss each operation individually, alongside a clean Java program that demonstrates it.
Operation 1: Adding Elements
In order to add an element to the TreeSet, we can use the add() method. However, the insertion order is not retained in the TreeSet. Internally, for every element, the values are compared and sorted in ascending order. We need to keep a note that duplicate elements are not allowed and all the duplicate elements are ignored. And also, Null values are not accepted by the TreeSet.
Example
Java
// Java code to Illustrate Addition of Elements to TreeSet

// Importing utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating a Set interface with
        // reference to TreeSet class
        // Declaring object of string type
        Set<String> ts = new TreeSet<>();

        // Elements are added using add() method
        ts.add("Geek");
        ts.add("For");
        ts.add("Geeks");

        // Print all elements inside object
        System.out.println(ts);
    }
}
[For, Geek, Geeks]
Operation 2: Accessing the Elements
After adding the elements, if we wish to access the elements, we can use inbuilt methods like contains(), first(), last(), etc.
Example
Java
// Java code to Illustrate Working of TreeSet by
// Accessing the Element of TreeSet

// Importing utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating a NavigableSet object with
        // reference to TreeSet class
        NavigableSet<String> ts = new TreeSet<>();

        // Elements are added using add() method
        ts.add("Geek");
        ts.add("For");
        ts.add("Geeks");

        // Printing the elements inside the TreeSet object
        System.out.println("Tree Set is " + ts);

        String check = "Geeks";

        // Check if the above string exists in
        // the treeset or not
        System.out.println("Contains " + check + " "
                           + ts.contains(check));

        // Print the first element in
        // the TreeSet
        System.out.println("First Value " + ts.first());

        // Print the last element in
        // the TreeSet
        System.out.println("Last Value " + ts.last());

        String val = "Geek";

        // Find the values just greater
        // and smaller than the above string
        System.out.println("Higher " + ts.higher(val));
        System.out.println("Lower " + ts.lower(val));
    }
}
Tree Set is [For, Geek, Geeks]
Contains Geeks true
First Value For
Last Value Geeks
Higher Geeks
Lower For
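Besides contains(), first(), last(), higher(), and lower(), the NavigableSet view also exposes range-view methods such as headSet(), tailSet(), and subSet(). A minimal sketch, reusing the same element values as above purely for illustration:

// Minimal sketch: range views on a TreeSet
import java.util.*;

class RangeViewDemo {
    public static void main(String[] args)
    {
        NavigableSet<String> ts = new TreeSet<>();
        ts.add("Geek");
        ts.add("For");
        ts.add("Geeks");

        // Elements strictly less than "Geeks"
        System.out.println("headSet: " + ts.headSet("Geeks"));

        // Elements greater than or equal to "Geek"
        System.out.println("tailSet: " + ts.tailSet("Geek"));

        // Elements in the range ["For", "Geeks")
        System.out.println("subSet: " + ts.subSet("For", "Geeks"));
    }
}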
Operation 3: Removing the Values
The values can be removed from the TreeSet using the remove() method. There are various other methods that are used to remove the first value or the last value.
Example
Java
// Java Program to Illustrate Removal of Elements
// in a TreeSet

// Importing utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an object of NavigableSet
        // with reference to TreeSet class
        // Declaring object of string type
        NavigableSet<String> ts = new TreeSet<>();

        // Elements are added
        // using add() method
        ts.add("Geek");
        ts.add("For");
        ts.add("Geeks");
        ts.add("A");
        ts.add("B");
        ts.add("Z");

        // Print and display initial elements of TreeSet
        System.out.println("Initial TreeSet " + ts);

        // Removing a specific existing element inserted
        // above
        ts.remove("B");

        // Printing the updated TreeSet
        System.out.println("After removing element " + ts);

        // Now removing the first element
        // using pollFirst() method
        ts.pollFirst();

        // Again printing the updated TreeSet
        System.out.println("After removing first " + ts);

        // Removing the last element
        // using pollLast() method
        ts.pollLast();

        // Lastly printing the elements of TreeSet remaining
        // to figure out pollLast() method
        System.out.println("After removing last " + ts);
    }
}
Initial TreeSet [A, B, For, Geek, Geeks, Z]
After removing element [A, For, Geek, Geeks, Z]
After removing first [For, Geek, Geeks, Z]
After removing last [For, Geek, Geeks]
Operation 4: Iterating through the TreeSet
There are various ways to iterate through a TreeSet. The most common one is to use the enhanced for loop, and this is the approach you will mostly use while practicing questions on TreeSet, since it is the most frequently used approach for tree, map, and graph problems.
Example
Java
// Java Program to Illustrate Working of TreeSet

// Importing utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Creating an object of Set with reference to
        // TreeSet class
        // Note: You can refer above media if geek
        // is confused in programs why we are not
        // directly creating TreeSet object
        Set<String> ts = new TreeSet<>();

        // Adding elements in above object
        // using add() method
        ts.add("Geek");
        ts.add("For");
        ts.add("Geeks");
        ts.add("A");
        ts.add("B");
        ts.add("Z");

        // Now we will be using for each loop in order
        // to iterate through the TreeSet
        for (String value : ts)

            // Printing the values inside the object
            System.out.print(value + ", ");

        System.out.println();
    }
}
A, B, For, Geek, Geeks, Z,
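As mentioned above, there are other ways to iterate as well; the following minimal sketch (element values and class name are illustrative) uses an explicit Iterator and the descendingIterator() of the NavigableSet view:

// Minimal sketch: iterating a TreeSet with explicit iterators
import java.util.*;

class IteratorDemo {
    public static void main(String[] args)
    {
        NavigableSet<String> ts = new TreeSet<>(Arrays.asList("A", "B", "Z"));

        // Ascending order using iterator()
        Iterator<String> it = ts.iterator();
        while (it.hasNext())
            System.out.print(it.next() + " ");
        System.out.println();

        // Descending order using descendingIterator()
        Iterator<String> dit = ts.descendingIterator();
        while (dit.hasNext())
            System.out.print(dit.next() + " ");
        System.out.println();
    }
}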
To summarize, the important features of a TreeSet are:
TreeSet implements the SortedSet interface. So, duplicate values are not allowed.
Objects in a TreeSet are stored in a sorted and ascending order.
TreeSet does not preserve the insertion order of elements but elements are sorted by keys.
If we are depending on the default natural sorting order, the objects that are being inserted into the tree should be homogeneous and comparable. TreeSet does not allow the insertion of heterogeneous objects. It will throw a classCastException at Runtime if we try to add heterogeneous objects.
The TreeSet can only accept generic types which are comparable. For example, the StringBuffer class implements the Comparable interface.
Java
// Java code to illustrate What if Heterogeneous
// Objects are Inserted

// Importing all utility classes
import java.util.*;

// Main class
class GFG {

    // Main driver method
    public static void main(String[] args)
    {
        // Object creation
        Set<StringBuffer> ts = new TreeSet<>();

        // Adding elements to above object
        // using add() method
        ts.add(new StringBuffer("A"));
        ts.add(new StringBuffer("Z"));
        ts.add(new StringBuffer("L"));
        ts.add(new StringBuffer("B"));
        ts.add(new StringBuffer("O"));
        ts.add(new StringBuffer(1));

        // Note: StringBuffer implements Comparable
        // interface

        // Printing the elements
        System.out.println(ts);
    }
}
[, A, B, L, O, Z]
Java - util package
Java-Collections
java-treeset
Java
Java
Java-Collections
{
"code": null,
"e": 52,
"s": 24,
"text": "\n07 Jul, 2022"
},
{
"code": null,
"e": 390,
"s": 52,
"text": "TreeSet is one of the most important implementations of the SortedSet interface in Java that uses a Tree for storage. The ordering of the elements is maintained by a set using their natural ordering whether or not an explicit comparator is provided. This must be consistent with equals if it is to correctly implement the Set interface. "
},
{
"code": null,
"e": 581,
"s": 390,
"text": "It can also be ordered by a Comparator provided at set creation time, depending on which constructor is used. The TreeSet implements a NavigableSet interface by inheriting AbstractSet class."
},
{
"code": null,
"e": 1012,
"s": 581,
"text": "It can clearly be perceived from the above image that the navigable set extends the sorted set interface. Since a set doesn’t retain the insertion order, the navigable set interface provides the implementation to navigate through the Set. The class which implements the navigable set is a TreeSet which is an implementation of a self-balancing tree. Therefore, this interface provides us with a way to navigate through this tree. "
},
{
"code": null,
"e": 1018,
"s": 1012,
"text": "Note:"
},
{
"code": null,
"e": 1127,
"s": 1018,
"text": "An object is said to be comparable if and only if the corresponding class implements a Comparable interface."
},
{
"code": null,
"e": 1328,
"s": 1127,
"text": "String class and all the Wrapper classes already implement Comparable interface but StringBuffer class implements Comparable interface. Hence, we DO NOT get a ClassCastException in the above example."
},
{
"code": null,
"e": 1692,
"s": 1328,
"text": "For an empty tree-set, when trying to insert null as the first value, one will get NPE from JDK 7. From JDK 7 onwards, null is not at all accepted by TreeSet. However, up to JDK 6, null was accepted as the first value, but any insertion of more null values in the TreeSet resulted in NullPointerException. Hence, it was considered a bug and thus removed in JDK 7."
},
{
"code": null,
"e": 1869,
"s": 1692,
"text": "TreeSet serves as an excellent choice for storing large amounts of sorted information which are supposed to be accessed quickly because of its faster access and retrieval time."
},
{
"code": null,
"e": 2059,
"s": 1869,
"text": "The insertion of null values into a TreeSet throws NullPointerException because while insertion of null, it gets compared to the existing elements, and null cannot be compared to any value."
},
{
"code": null,
"e": 2093,
"s": 2059,
"text": "How does TreeSet work Internally?"
},
{
"code": null,
"e": 2624,
"s": 2093,
"text": "TreeSet is basically an implementation of a self-balancing binary search tree like a Red-Black Tree. Therefore operations like add, remove, and search takes O(log(N)) time. The reason is that in a self-balancing tree, it is made sure that the height of the tree is always O(log(N)) for all the operations. Therefore, this is considered as one of the most efficient data structures in order to store the huge sorted data and perform operations on it. However, operations like printing N elements in the sorted order take O(N) time."
},
{
"code": null,
"e": 3236,
"s": 2624,
"text": "Now let us discuss Synchronized TreeSet prior moving ahead. The implementation of a TreeSet is not synchronized. This means that if multiple threads access a tree set concurrently, and at least one of the threads modifies the set, it must be synchronized externally. This is typically accomplished by synchronizing some object that naturally encapsulates the set. If no such object exists, the set should be “wrapped” using the Collections.synchronizedSortedSet method. This is best done at the creation time, to prevent accidental unsynchronized access to the set. It can be achieved as shown below as follows:"
},
{
"code": null,
"e": 3313,
"s": 3236,
"text": "TreeSet ts = new TreeSet(); \nSet syncSet = Collections.synchronziedSet(ts); "
},
{
"code": null,
"e": 3359,
"s": 3313,
"text": "Constructors of TreeSet Class are as follows:"
},
{
"code": null,
"e": 3600,
"s": 3359,
"text": "In order to create a TreeSet, we need to create an object of the TreeSet class. The TreeSet class consists of various constructors which allow the possible creation of the TreeSet. The following are the constructors available in this class:"
},
{
"code": null,
"e": 3737,
"s": 3600,
"text": "TreeSet(): This constructor is used to build an empty TreeSet object in which elements will get stored in default natural sorting order."
},
{
"code": null,
"e": 3830,
"s": 3737,
"text": "Syntax: If we wish to create an empty TreeSet with the name ts, then, it can be created as: "
},
{
"code": null,
"e": 3859,
"s": 3830,
"text": "TreeSet ts = new TreeSet(); "
},
{
"code": null,
"e": 4014,
"s": 3859,
"text": "TreeSet(Comparator): This constructor is used to build an empty TreeSet object in which elements will need an external specification of the sorting order."
},
{
"code": null,
"e": 4142,
"s": 4014,
"text": "Syntax: If we wish to create an empty TreeSet with the name ts with an external sorting phenomenon, then, it can be created as:"
},
{
"code": null,
"e": 4186,
"s": 4142,
"text": "TreeSet ts = new TreeSet(Comparator comp); "
},
{
"code": null,
"e": 4491,
"s": 4186,
"text": "TreeSet(Collection): This constructor is used to build a TreeSet object containing all the elements from the given collection in which elements will get stored in default natural sorting order. In short, this constructor is used when any conversion is needed from any Collection object to TreeSet object."
},
{
"code": null,
"e": 4584,
"s": 4491,
"text": "Syntax: If we wish to create a TreeSet with the name ts, then, it can be created as follows:"
},
{
"code": null,
"e": 4627,
"s": 4584,
"text": "TreeSet t = new TreeSet(Collection col); "
},
{
"code": null,
"e": 4909,
"s": 4627,
"text": "TreeSet(SortedSet): This constructor is used to build a TreeSet object containing all the elements from the given sortedset in which elements will get stored in default natural sorting order. In short, this constructor is used to convert the SortedSet object to the TreeSet object."
},
{
"code": null,
"e": 5002,
"s": 4909,
"text": "Syntax: If we wish to create a TreeSet with the name ts, then, it can be created as follows:"
},
{
"code": null,
"e": 5040,
"s": 5002,
"text": "TreeSet t = new TreeSet(SortedSet s);"
},
{
"code": null,
"e": 5181,
"s": 5040,
"text": "Methods in TreeSet Class are depicted below in tabular format which later on we will be implementing to showcase in the implementation part."
},
{
"code": null,
"e": 5467,
"s": 5181,
"text": "TreeSet implements SortedSet so it has the availability of all methods in Collection, Set, and SortedSet interfaces. Following are the methods in the Treeset interface. In the table below, the “?” signifies that the method works with any type of object including user-defined objects. "
},
{
"code": null,
"e": 5556,
"s": 5467,
"text": "Illustration: The following implementation demonstrates how to create and use a TreeSet."
},
{
"code": null,
"e": 5561,
"s": 5556,
"text": "Java"
},
{
"code": "// Java program to Illustrate Working of TreeSet // Importing required utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating a Set interface with reference to // TreeSet Set<String> ts1 = new TreeSet<>(); // Elements are added using add() method ts1.add(\"A\"); ts1.add(\"B\"); ts1.add(\"C\"); // Duplicates will not get insert ts1.add(\"C\"); // Elements get stored in default natural // Sorting Order(Ascending) System.out.println(ts1); }}",
"e": 6184,
"s": 5561,
"text": null
},
{
"code": null,
"e": 6194,
"s": 6184,
"text": "[A, B, C]"
},
{
"code": null,
"e": 6212,
"s": 6196,
"text": "Implementation:"
},
{
"code": null,
"e": 6450,
"s": 6212,
"text": "Here we will be performing various operations over the TreeSet object to get familiar with the methods and concepts of TreeSet in java. Let’s see how to perform a few frequently used operations on the TreeSet. They are listed as follows:"
},
{
"code": null,
"e": 6466,
"s": 6450,
"text": "Adding elements"
},
{
"code": null,
"e": 6485,
"s": 6466,
"text": "Accessing elements"
},
{
"code": null,
"e": 6503,
"s": 6485,
"text": "Removing elements"
},
{
"code": null,
"e": 6530,
"s": 6503,
"text": "Iterating through elements"
},
{
"code": null,
"e": 6652,
"s": 6530,
"text": "Now let us discuss each operation individually one by one later alongside grasping with the help of a clean java program."
},
{
"code": null,
"e": 6681,
"s": 6652,
"text": "Operation 1: Adding Elements"
},
{
"code": null,
"e": 7062,
"s": 6681,
"text": "In order to add an element to the TreeSet, we can use the add() method. However, the insertion order is not retained in the TreeSet. Internally, for every element, the values are compared and sorted in ascending order. We need to keep a note that duplicate elements are not allowed and all the duplicate elements are ignored. And also, Null values are not accepted by the TreeSet."
},
{
"code": null,
"e": 7070,
"s": 7062,
"text": "Example"
},
{
"code": null,
"e": 7075,
"s": 7070,
"text": "Java"
},
{
"code": "// Java code to Illustrate Addition of Elements to TreeSet // Importing utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating a Set interface with // reference to TreeSet class // Declaring object of string type Set<String> ts = new TreeSet<>(); // Elements are added using add() method ts.add(\"Geek\"); ts.add(\"For\"); ts.add(\"Geeks\"); // Print all elements inside object System.out.println(ts); }}",
"e": 7644,
"s": 7075,
"text": null
},
{
"code": null,
"e": 7663,
"s": 7644,
"text": "[For, Geek, Geeks]"
},
{
"code": null,
"e": 7701,
"s": 7665,
"text": "Operation 2: Accessing the Elements"
},
{
"code": null,
"e": 7830,
"s": 7701,
"text": "After adding the elements, if we wish to access the elements, we can use inbuilt methods like contains(), first(), last(), etc. "
},
{
"code": null,
"e": 7838,
"s": 7830,
"text": "Example"
},
{
"code": null,
"e": 7843,
"s": 7838,
"text": "Java"
},
{
"code": "// Java code to Illustrate Working of TreeSet by// Accessing the Element of TreeSet // Importing utility classesimport java.util.*; // Main class class GFG { // Main driver method public static void main(String[] args) { // Creating a NavigableSet object with // reference to TreeSet class NavigableSet<String> ts = new TreeSet<>(); // Elements are added using add() method ts.add(\"Geek\"); ts.add(\"For\"); ts.add(\"Geeks\"); // Printing the elements inside the TreeSet object System.out.println(\"Tree Set is \" + ts); String check = \"Geeks\"; // Check if the above string exists in // the treeset or not System.out.println(\"Contains \" + check + \" \" + ts.contains(check)); // Print the first element in // the TreeSet System.out.println(\"First Value \" + ts.first()); // Print the last element in // the TreeSet System.out.println(\"Last Value \" + ts.last()); String val = \"Geek\"; // Find the values just greater // and smaller than the above string System.out.println(\"Higher \" + ts.higher(val)); System.out.println(\"Lower \" + ts.lower(val)); }}",
"e": 9112,
"s": 7843,
"text": null
},
{
"code": null,
"e": 9219,
"s": 9112,
"text": "Tree Set is [For, Geek, Geeks]\nContains Geeks true\nFirst Value For\nLast Value Geeks\nHigher Geeks\nLower For"
},
{
"code": null,
"e": 9254,
"s": 9221,
"text": "Operation 3: Removing the Values"
},
{
"code": null,
"e": 9416,
"s": 9254,
"text": "The values can be removed from the TreeSet using the remove() method. There are various other methods that are used to remove the first value or the last value. "
},
{
"code": null,
"e": 9424,
"s": 9416,
"text": "Example"
},
{
"code": null,
"e": 9429,
"s": 9424,
"text": "Java"
},
{
"code": "// Java Program to Illustrate Removal of Elements// in a TreeSet // Importing utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating an object of NavigableSet // with reference to TreeSet class // Declaring object of string type NavigableSet<String> ts = new TreeSet<>(); // Elements are added // using add() method ts.add(\"Geek\"); ts.add(\"For\"); ts.add(\"Geeks\"); ts.add(\"A\"); ts.add(\"B\"); ts.add(\"Z\"); // Print and display initial elements of TreeSet System.out.println(\"Initial TreeSet \" + ts); // Removing a specific existing element inserted // above ts.remove(\"B\"); // Printing the updated TreeSet System.out.println(\"After removing element \" + ts); // Now removing the first element // using pollFirst() method ts.pollFirst(); // Again printing the updated TreeSet System.out.println(\"After removing first \" + ts); // Removing the last element // using pollLast() method ts.pollLast(); // Lastly printing the elements of TreeSet remaining // to figure out pollLast() method System.out.println(\"After removing last \" + ts); }}",
"e": 10783,
"s": 9429,
"text": null
},
{
"code": null,
"e": 10957,
"s": 10783,
"text": "Initial TreeSet [A, B, For, Geek, Geeks, Z]\nAfter removing element [A, For, Geek, Geeks, Z]\nAfter removing first [For, Geek, Geeks, Z]\nAfter removing last [For, Geek, Geeks]"
},
{
"code": null,
"e": 11002,
"s": 10959,
"text": "Operation 4: Iterating through the TreeSet"
},
{
"code": null,
"e": 11305,
"s": 11002,
"text": "There are various ways to iterate through the TreeSet. The most famous one is to use the enhanced for loop. and geeks mostly you would be iterating the elements with this approach while practicing questions over TreeSet as this is most frequently used when it comes to tree, maps, and graphs problems. "
},
{
"code": null,
"e": 11313,
"s": 11305,
"text": "Example"
},
{
"code": null,
"e": 11318,
"s": 11313,
"text": "Java"
},
{
"code": "// Java Program to Illustrate Working of TreeSet // Importing utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Creating an object of Set with reference to // TreeSet class // Note: You can refer above media if geek // is confused in programs why we are not // directly creating TreeSet object Set<String> ts = new TreeSet<>(); // Adding elements in above object // using add() method ts.add(\"Geek\"); ts.add(\"For\"); ts.add(\"Geeks\"); ts.add(\"A\"); ts.add(\"B\"); ts.add(\"Z\"); // Now we will be using for each loop in order // to iterate through the TreeSet for (String value : ts) // Printing the values inside the object System.out.print(value + \", \"); System.out.println(); }}",
"e": 12243,
"s": 11318,
"text": null
},
{
"code": null,
"e": 12270,
"s": 12243,
"text": "A, B, For, Geek, Geeks, Z,"
},
{
"code": null,
"e": 12936,
"s": 12272,
"text": "TreeSet implements the SortedSet interface. So, duplicate values are not allowed.Objects in a TreeSet are stored in a sorted and ascending order.TreeSet does not preserve the insertion order of elements but elements are sorted by keys.If we are depending on the default natural sorting order, the objects that are being inserted into the tree should be homogeneous and comparable. TreeSet does not allow the insertion of heterogeneous objects. It will throw a classCastException at Runtime if we try to add heterogeneous objects.The TreeSet can only accept generic types which are comparable.For example, the StringBuffer class implements the Comparable interface"
},
{
"code": null,
"e": 13018,
"s": 12936,
"text": "TreeSet implements the SortedSet interface. So, duplicate values are not allowed."
},
{
"code": null,
"e": 13083,
"s": 13018,
"text": "Objects in a TreeSet are stored in a sorted and ascending order."
},
{
"code": null,
"e": 13174,
"s": 13083,
"text": "TreeSet does not preserve the insertion order of elements but elements are sorted by keys."
},
{
"code": null,
"e": 13469,
"s": 13174,
"text": "If we are depending on the default natural sorting order, the objects that are being inserted into the tree should be homogeneous and comparable. TreeSet does not allow the insertion of heterogeneous objects. It will throw a classCastException at Runtime if we try to add heterogeneous objects."
},
{
"code": null,
"e": 13604,
"s": 13469,
"text": "The TreeSet can only accept generic types which are comparable.For example, the StringBuffer class implements the Comparable interface"
},
{
"code": null,
"e": 13609,
"s": 13604,
"text": "Java"
},
{
"code": "// Java code to illustrate What if Heterogeneous // Objects are Inserted // Importing all utility classesimport java.util.*; // Main classclass GFG { // Main driver method public static void main(String[] args) { // Object creation Set<StringBuffer> ts = new TreeSet<>(); // Adding elements to above object // using add() method ts.add(new StringBuffer(\"A\")); ts.add(new StringBuffer(\"Z\")); ts.add(new StringBuffer(\"L\")); ts.add(new StringBuffer(\"B\")); ts.add(new StringBuffer(\"O\")); ts.add(new StringBuffer(1)); // Note: StringBuffer implements Comparable // interface // Printing the elements System.out.println(ts); }}",
"e": 14353,
"s": 13609,
"text": null
},
{
"code": null,
"e": 14371,
"s": 14353,
"text": "[, A, B, L, O, Z]"
},
{
"code": null,
"e": 14385,
"s": 14371,
"text": "Chinmoy Lenka"
},
{
"code": null,
"e": 14398,
"s": 14385,
"text": "Vinay Pathak"
},
{
"code": null,
"e": 14410,
"s": 14398,
"text": "KaashyapMSK"
},
{
"code": null,
"e": 14427,
"s": 14410,
"text": "adarsh thimmappa"
},
{
"code": null,
"e": 14443,
"s": 14427,
"text": "Anshul_Aggarwal"
},
{
"code": null,
"e": 14454,
"s": 14443,
"text": "subtlyrude"
},
{
"code": null,
"e": 14470,
"s": 14454,
"text": "Priya_Bhimjyani"
},
{
"code": null,
"e": 14482,
"s": 14470,
"text": "anikakapoor"
},
{
"code": null,
"e": 14499,
"s": 14482,
"text": "akshaysingh98088"
},
{
"code": null,
"e": 14512,
"s": 14499,
"text": "simmytarika5"
},
{
"code": null,
"e": 14532,
"s": 14512,
"text": "Java - util package"
},
{
"code": null,
"e": 14549,
"s": 14532,
"text": "Java-Collections"
},
{
"code": null,
"e": 14562,
"s": 14549,
"text": "java-treeset"
},
{
"code": null,
"e": 14567,
"s": 14562,
"text": "Java"
},
{
"code": null,
"e": 14572,
"s": 14567,
"text": "Java"
},
{
"code": null,
"e": 14589,
"s": 14572,
"text": "Java-Collections"
},
{
"code": null,
"e": 14687,
"s": 14589,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 14731,
"s": 14687,
"text": "Split() String method in Java with examples"
},
{
"code": null,
"e": 14767,
"s": 14731,
"text": "Arrays.sort() in Java with examples"
},
{
"code": null,
"e": 14792,
"s": 14767,
"text": "Reverse a string in Java"
},
{
"code": null,
"e": 14823,
"s": 14792,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 14838,
"s": 14823,
"text": "Stream In Java"
},
{
"code": null,
"e": 14862,
"s": 14838,
"text": "Singleton Class in Java"
},
{
"code": null,
"e": 14894,
"s": 14862,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 14922,
"s": 14894,
"text": "Initializing a List in Java"
},
{
"code": null,
"e": 14939,
"s": 14922,
"text": "Generics in Java"
}
] |
Maximize count of non overlapping substrings which contains all occurrences of its characters | 10 Mar, 2022
Given string str consisting of lowercase letters, the task is to find the maximum number of non-overlapping substrings such that each substring contains all occurrences of its characters from the entire string. If multiple solutions with the same number of substrings exist, then print the one with the minimum total length.
Examples:
Input: str = “abbaccd” Output: bb cc d Explanation: This is the maximum possible number of non-overlapping substrings such that each substring contains all occurrences of its characters from the entire string.
The substrings are {{d, bb, cc}, {d, abba, cc}}
Therefore, the substrings of smallest possible length are {d, bb, cc}.
Input: str = “adefaddaccc” Output: e f ccc
Approach: The problem can be solved using the Greedy technique. Follow the steps below to solve the problem:
Initialize an array, say res[], to store the required substrings.
Initialize two arrays, say L[] and R[], to store the leftmost and rightmost indices of all possible characters of the given string respectively.
Traverse the string and store the leftmost and rightmost index of all possible characters of the given string.
Traverse the string using a variable i. Whenever i is the leftmost index of str[i], check whether the substring that starts at position i and covers all occurrences of str[i] (and of every character it contains) overlaps with any of the substrings already chosen from the characters up to str[i - 1]. If it does not overlap, append the current substring to res[].
Finally, print the res[] array.
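As a quick illustration of the leftmost/rightmost index arrays described above, here is a minimal Python sketch (an informal trace, not part of the final solution) that computes them for the first example, str = “abbaccd”:

Python3

# Informal trace: leftmost (L) and rightmost (R) index of every character of "abbaccd"
s = "abbaccd"

L, R = {}, {}
for i, ch in enumerate(s):
    L.setdefault(ch, i)   # record only the first (leftmost) occurrence
    R[ch] = i             # keep overwriting, so the last (rightmost) occurrence remains

print(L)   # {'a': 0, 'b': 1, 'c': 4, 'd': 6}
print(R)   # {'a': 3, 'b': 2, 'c': 5, 'd': 6}

A substring started at index 1 ('b') only has to extend to R['b'] = 2, giving “bb”; similarly “cc” and “d” are self-contained, while a substring started at index 0 ('a') must stretch all the way to index 3. This is why the greedy step ends up choosing {bb, cc, d}.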
Below is the implementation of the above approach:
C++
Java
Python3
C#
Javascript
// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to check if substring contains all// occurrences of each character of str or notint checkValid(string str,int i, int L[], int R[]){ // Stores rightmost index of str[i] int right = R[str[i] - 'a']; // Traverse the current substring for (int j = i; j < right; j++){ // If leftmost index of str[j] // less than i if (L[str[j] - 'a'] < i) return -1; // Update right right = max(right, R[str[j] - 'a']); } return right; } // Function to find maximum number of substring// that satisfy the conditionvector<string> maxcntOfSubstrings(string str) { // Stores all substrings that // satisfy the condition vector<string> res; // Stores length of str int n = str.length(); // Stores leftmost index // of each character int L[26]; // Stores rightmost index // of each character int R[26]; // Initialize L[] and R[] for(int i = 0; i <26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the string for (int i = 0; i < n; i++) { // If str[i] not // already occurred if (L[str[i] - 'a'] == -1) { // Update leftmost index // of str[i] L[str[i] - 'a'] = i; } // Update rightmost index // of str[i] R[str[i]-'a'] = i; } // Stores rightmost index of last // substring inserted into res[] int right = -1; // Traverse the string for (int i = 0; i < n; i++) { // If i is leftmost index of str[i] if (i == L[str[i] - 'a']) { // Check if a new substring starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if(new_right != -1){ // Stores the substring starting from // i that satisfy the condition string sub = str.substr(i, new_right - i + 1); // If the substring overlaps // with another substring if(new_right < right){ // Stores sub to the last // of res res.back() = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.push_back(sub); } // Update right right = new_right; } } } return res;} // Driver Codeint main(){ string str = "abbaccd"; // Stores maximum number of substring // that satisfy the condition vector<string> res = maxcntOfSubstrings(str); // Print all substring for(auto sub : res) { cout<<sub<<" "; }}
// Java program to implement// the above approachimport java.util.*; class GFG{ // Function to check if subString contains all// occurrences of each character of str or notstatic int checkValid(String str, int i, int L[], int R[]){ // Stores rightmost index of str.charAt(i) int right = R[(int)(str.charAt(i)) - 97]; // Traverse the current subString for(int j = i; j < right; j++) { // If leftmost index of str[j] // less than i if (L[(int)(str.charAt(j)) - 97] < i) return -1; // Update right right = Math.max(right, R[(int)(str.charAt(j)) - 97]); } return right;} // Function to find maximum number of subString// that satisfy the conditionstatic Vector<String> maxcntOfSubStrings(String str){ // Stores all subStrings that // satisfy the condition Vector<String> res = new Vector<String>(); // Stores length of str int n = str.length(); // Stores leftmost index // of each character int []L = new int[26]; // Stores rightmost index // of each character int []R = new int[26]; // Initialize L[] and R[] for(int i = 0; i < 26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the String for(int i = 0; i < n; i++) { // If str.charAt(i) not // already occurred if (L[(int)(str.charAt(i)) - 97] == -1) { // Update leftmost index // of str.charAt(i) L[(int)(str.charAt(i)) - 97] = i; } // Update rightmost index // of str.charAt(i) R[(int)(str.charAt(i)) - 97] = i; } // Stores rightmost index of last // subString inserted into res[] int right = -1; // Traverse the String for(int i = 0; i < n; i++) { // If i is leftmost index of str.charAt(i) if (i == L[(int)(str.charAt(i)) - 97]) { // Check if a new subString starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the subString starting from i // satisfies the conditions if (new_right != -1) { // Stores the subString starting from // i that satisfy the condition String sub = str.substring(i, new_right + 1); // If the subString overlaps // with another subString if(new_right < right) { // Stores sub to the last // of res res.set(res.size() - 1, sub); } else { // If sub not overlaps to // other String then append // sub to the end of res res.add(sub); } // Update right right = new_right; } } } return res;} // Driver Codepublic static void main(String args[]){ String str = "abbaccd"; // Stores maximum number of subString // that satisfy the condition Vector<String> res = maxcntOfSubStrings(str); // Print all subString for(int i = 0; i < res.size(); i++) { System.out.print(res.get(i) + " "); }}} // This code is contributed by SURENDRA_GANGWAR
# Python3 program to implement# the above approach # Function to check if substring contains# all occurrences of each character# of str or notdef checkValid(str,i, L, R): # Stores rightmost index # of str[i] right = R[ord(str[i]) - ord('a')] # Traverse the current sub for j in range(i, right): # If leftmost index of str[j] # less than i if (L[ord(str[j]) - ord('a')] < i): return -1 # Update right right = max(right, R[ord(str[j]) - ord('a')]) return right # Function to find maximum# number of substring that satisfy# the conditiondef maxcntOfSubstrings(str): # Stores all substrings that # satisfy the condition res = [] # Stores length of str n = len(str) # Stores leftmost index # of each character L = [-1] * 26 # Stores rightmost index # of each character R = [-1] * 26 for j, i in enumerate(str): x = ord(i) - ord('a') # If str[i] not # already occurred if L[x] == -1: # Update leftmost index # of str[i] L[x] = j # Update rightmost index # of str[i] R[x] = j # Stores rightmost index of # last substring inserted # into res[] right = -1 for j, i in enumerate(str): x = ord(i) - ord('a') # If i is leftmost index # of str[i] if j == L[x]: # Check if a new substring # starting from i satisfies # the conditions or not new_right = checkValid(str, j, L, R) # If the substring starting # from i satisfies the conditions if new_right != -1: # Stores the substring starting # from i that satisfy the condition sub = str[j : new_right + 1] # If the substring overlaps # with another substring if new_right < right: res[-1] = sub else: # If sub not overlaps to # other string then append # sub to the end of res res.append(sub) right = new_right return res # Driver Codeif __name__ == '__main__': str = "abbaccd" # Stores maximum number of sub # that satisfy the condition res = maxcntOfSubstrings(str) # Print sub for sub in res: print(sub, end = " ") # This code is contributed by Mohit Kumar 29
// C# program to implement// the above approachusing System;using System.Collections.Generic;class GFG { // Function to check if substring contains all // occurrences of each character of str or not static int checkValid(string str,int i, int[] L, int[] R) { // Stores rightmost index of str[i] int right = R[str[i] - 'a']; // Traverse the current substring for (int j = i; j < right; j++){ // If leftmost index of str[j] // less than i if (L[str[j] - 'a'] < i) return -1; // Update right right = Math.Max(right, R[str[j] - 'a']); } return right; } // Function to find maximum number of substring // that satisfy the condition static List<string> maxcntOfSubstrings(string str) { // Stores all substrings that // satisfy the condition List<string> res = new List<string>(); // Stores length of str int n = str.Length; // Stores leftmost index // of each character int[] L = new int[26]; // Stores rightmost index // of each character int[] R = new int[26]; // Initialize L[] and R[] for(int i = 0; i <26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the string for (int i = 0; i < n; i++) { // If str[i] not // already occurred if (L[str[i] - 'a'] == -1) { // Update leftmost index // of str[i] L[str[i] - 'a'] = i; } // Update rightmost index // of str[i] R[str[i]-'a'] = i; } // Stores rightmost index of last // substring inserted into res[] int right = -1; // Traverse the string for (int i = 0; i < n; i++) { // If i is leftmost index of str[i] if (i == L[str[i] - 'a']) { // Check if a new substring starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if(new_right != -1){ // Stores the substring starting from // i that satisfy the condition string sub = str.Substring(i, new_right - i + 1); // If the substring overlaps // with another substring if(new_right < right){ // Stores sub to the last // of res res[res.Count - 1] = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.Add(sub); } // Update right right = new_right; } } } return res; } // Driver code static void Main() { string str = "abbaccd"; // Stores maximum number of substring // that satisfy the condition List<string> res = maxcntOfSubstrings(str); // Print all substring foreach(string sub in res) { Console.Write(sub + " "); } }} // This code is contributed by divyeshrabadiya
<script> // JavaScript program to implement // the above approach // Function to check if substring contains all // occurrences of each character of str or not function checkValid(str, i, L, R) { // Stores rightmost index of str[i] var right = R[str[i].charCodeAt(0) - "a".charCodeAt(0)]; // Traverse the current substring for (var j = i; j < right; j++) { // If leftmost index of str[j] // less than i if (L[str[j].charCodeAt(0) - "a".charCodeAt(0)] < i) return -1; // Update right right = Math.max(right, R[str[j].charCodeAt(0) - "a".charCodeAt(0)]); } return right; } // Function to find maximum number of substring // that satisfy the condition function maxcntOfSubstrings(str) { // Stores all substrings that // satisfy the condition var res = []; // Stores length of str var n = str.length; // Stores leftmost index // of each character var L = new Array(26).fill(-1); // Stores rightmost index // of each character var R = new Array(26).fill(-1); // Traverse the string for (var i = 0; i < n; i++) { var x = str[i].charCodeAt(0) - "a".charCodeAt(0); // If str[i] not // already occurred if (L[x] === -1) { // Update leftmost index // of str[i] L[x] = i; } // Update rightmost index // of str[i] R[x] = i; } // Stores rightmost index of last // substring inserted into res[] var right = -1; // Traverse the string for (var i = 0; i < n; i++) { var x = str[i].charCodeAt(0) - "a".charCodeAt(0); // If i is leftmost index of str[i] if (i === L[x]) { // Check if a new substring starting // from i satisfies the conditions or not var new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if (new_right !== -1) { // Stores the substring starting from // i that satisfy the condition var sub = str.substring(i, new_right + 1); // If the substring overlaps // with another substring if (new_right < right) { // Stores sub to the last // of res res[res.length - 1] = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.push(sub); } // Update right right = new_right; } } } return res; } // Driver code var str = "abbaccd"; // Stores maximum number of substring // that satisfy the condition var res = maxcntOfSubstrings(str); // Print all substring for (const sub of res) { document.write(sub + " "); }</script>
bb cc d
Time Complexity: O(N * 26) Auxiliary Space: O(26)
mohit kumar 29
SURENDRA_GANGWAR
divyeshrabadiya07
rdtank
akshaysingh98088
simmytarika5
surinderdawra388
frequency-counting
substring
Strings
Strings
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n10 Mar, 2022"
},
{
"code": null,
"e": 353,
"s": 28,
"text": "Given string str consisting of lowercase letters, the task is to find the maximum number of non-overlapping substrings such that each substring contains all occurrences of its characters from the entire string. If multiple solutions with the same number of substrings exist, then print the one with the minimum total length."
},
{
"code": null,
"e": 363,
"s": 353,
"text": "Examples:"
},
{
"code": null,
"e": 523,
"s": 363,
"text": "Input: str = “abbaccd” Output: bb cc d Explanation: The maximum number of substrings is such that all occurrences of its characters in the string are present. "
},
{
"code": null,
"e": 572,
"s": 523,
"text": "The substrings are {{d, bb, cc}, {d, abba, cc}} "
},
{
"code": null,
"e": 644,
"s": 572,
"text": "Therefore, the substrings of smallest possible length are {d, bb, cc}. "
},
{
"code": null,
"e": 687,
"s": 644,
"text": "Input: str = “adefaddaccc” Output: e f ccc"
},
{
"code": null,
"e": 796,
"s": 687,
"text": "Approach: The problem can be solved using the Greedy technique. Follow the steps below to solve the problem:"
},
{
"code": null,
"e": 862,
"s": 796,
"text": "Initialize an array, say res[], to store the required substrings."
},
{
"code": null,
"e": 1007,
"s": 862,
"text": "Initialize two arrays, say L[] and R[], to store the leftmost and rightmost indices of all possible characters of the given string respectively."
},
{
"code": null,
"e": 1118,
"s": 1007,
"text": "Traverse the string and store the leftmost and rightmost index of all possible characters of the given string."
},
{
"code": null,
"e": 1461,
"s": 1118,
"text": "Traverse the string using the variable i and check if i is the leftmost index of str[i], check if the substring starting from the ith position consisting of all occurrences of str[i] does not overlap with any of the substrings consisting of characters up to str[i -1] or not. If found to be true, then append the current substring into res[]."
},
{
"code": null,
"e": 1493,
"s": 1461,
"text": "Finally, print the res[] array."
},
{
"code": null,
"e": 1544,
"s": 1493,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 1548,
"s": 1544,
"text": "C++"
},
{
"code": null,
"e": 1553,
"s": 1548,
"text": "Java"
},
{
"code": null,
"e": 1561,
"s": 1553,
"text": "Python3"
},
{
"code": null,
"e": 1564,
"s": 1561,
"text": "C#"
},
{
"code": null,
"e": 1575,
"s": 1564,
"text": "Javascript"
},
{
"code": "// C++ program to implement// the above approach #include <bits/stdc++.h>using namespace std; // Function to check if substring contains all// occurrences of each character of str or notint checkValid(string str,int i, int L[], int R[]){ // Stores rightmost index of str[i] int right = R[str[i] - 'a']; // Traverse the current substring for (int j = i; j < right; j++){ // If leftmost index of str[j] // less than i if (L[str[j] - 'a'] < i) return -1; // Update right right = max(right, R[str[j] - 'a']); } return right; } // Function to find maximum number of substring// that satisfy the conditionvector<string> maxcntOfSubstrings(string str) { // Stores all substrings that // satisfy the condition vector<string> res; // Stores length of str int n = str.length(); // Stores leftmost index // of each character int L[26]; // Stores rightmost index // of each character int R[26]; // Initialize L[] and R[] for(int i = 0; i <26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the string for (int i = 0; i < n; i++) { // If str[i] not // already occurred if (L[str[i] - 'a'] == -1) { // Update leftmost index // of str[i] L[str[i] - 'a'] = i; } // Update rightmost index // of str[i] R[str[i]-'a'] = i; } // Stores rightmost index of last // substring inserted into res[] int right = -1; // Traverse the string for (int i = 0; i < n; i++) { // If i is leftmost index of str[i] if (i == L[str[i] - 'a']) { // Check if a new substring starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if(new_right != -1){ // Stores the substring starting from // i that satisfy the condition string sub = str.substr(i, new_right - i + 1); // If the substring overlaps // with another substring if(new_right < right){ // Stores sub to the last // of res res.back() = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.push_back(sub); } // Update right right = new_right; } } } return res;} // Driver Codeint main(){ string str = \"abbaccd\"; // Stores maximum number of substring // that satisfy the condition vector<string> res = maxcntOfSubstrings(str); // Print all substring for(auto sub : res) { cout<<sub<<\" \"; }} ",
"e": 4877,
"s": 1575,
"text": null
},
{
"code": "// Java program to implement// the above approachimport java.util.*; class GFG{ // Function to check if subString contains all// occurrences of each character of str or notstatic int checkValid(String str, int i, int L[], int R[]){ // Stores rightmost index of str.charAt(i) int right = R[(int)(str.charAt(i)) - 97]; // Traverse the current subString for(int j = i; j < right; j++) { // If leftmost index of str[j] // less than i if (L[(int)(str.charAt(j)) - 97] < i) return -1; // Update right right = Math.max(right, R[(int)(str.charAt(j)) - 97]); } return right;} // Function to find maximum number of subString// that satisfy the conditionstatic Vector<String> maxcntOfSubStrings(String str){ // Stores all subStrings that // satisfy the condition Vector<String> res = new Vector<String>(); // Stores length of str int n = str.length(); // Stores leftmost index // of each character int []L = new int[26]; // Stores rightmost index // of each character int []R = new int[26]; // Initialize L[] and R[] for(int i = 0; i < 26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the String for(int i = 0; i < n; i++) { // If str.charAt(i) not // already occurred if (L[(int)(str.charAt(i)) - 97] == -1) { // Update leftmost index // of str.charAt(i) L[(int)(str.charAt(i)) - 97] = i; } // Update rightmost index // of str.charAt(i) R[(int)(str.charAt(i)) - 97] = i; } // Stores rightmost index of last // subString inserted into res[] int right = -1; // Traverse the String for(int i = 0; i < n; i++) { // If i is leftmost index of str.charAt(i) if (i == L[(int)(str.charAt(i)) - 97]) { // Check if a new subString starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the subString starting from i // satisfies the conditions if (new_right != -1) { // Stores the subString starting from // i that satisfy the condition String sub = str.substring(i, new_right + 1); // If the subString overlaps // with another subString if(new_right < right) { // Stores sub to the last // of res res.set(res.size() - 1, sub); } else { // If sub not overlaps to // other String then append // sub to the end of res res.add(sub); } // Update right right = new_right; } } } return res;} // Driver Codepublic static void main(String args[]){ String str = \"abbaccd\"; // Stores maximum number of subString // that satisfy the condition Vector<String> res = maxcntOfSubStrings(str); // Print all subString for(int i = 0; i < res.size(); i++) { System.out.print(res.get(i) + \" \"); }}} // This code is contributed by SURENDRA_GANGWAR",
"e": 8529,
"s": 4877,
"text": null
},
{
"code": "# Python3 program to implement# the above approach # Function to check if substring contains# all occurrences of each character# of str or notdef checkValid(str,i, L, R): # Stores rightmost index # of str[i] right = R[ord(str[i]) - ord('a')] # Traverse the current sub for j in range(i, right): # If leftmost index of str[j] # less than i if (L[ord(str[j]) - ord('a')] < i): return -1 # Update right right = max(right, R[ord(str[j]) - ord('a')]) return right # Function to find maximum# number of substring that satisfy# the conditiondef maxcntOfSubstrings(str): # Stores all substrings that # satisfy the condition res = [] # Stores length of str n = len(str) # Stores leftmost index # of each character L = [-1] * 26 # Stores rightmost index # of each character R = [-1] * 26 for j, i in enumerate(str): x = ord(i) - ord('a') # If str[i] not # already occurred if L[x] == -1: # Update leftmost index # of str[i] L[x] = j # Update rightmost index # of str[i] R[x] = j # Stores rightmost index of # last substring inserted # into res[] right = -1 for j, i in enumerate(str): x = ord(i) - ord('a') # If i is leftmost index # of str[i] if j == L[x]: # Check if a new substring # starting from i satisfies # the conditions or not new_right = checkValid(str, j, L, R) # If the substring starting # from i satisfies the conditions if new_right != -1: # Stores the substring starting # from i that satisfy the condition sub = str[j : new_right + 1] # If the substring overlaps # with another substring if new_right < right: res[-1] = sub else: # If sub not overlaps to # other string then append # sub to the end of res res.append(sub) right = new_right return res # Driver Codeif __name__ == '__main__': str = \"abbaccd\" # Stores maximum number of sub # that satisfy the condition res = maxcntOfSubstrings(str) # Print sub for sub in res: print(sub, end = \" \") # This code is contributed by Mohit Kumar 29",
"e": 11178,
"s": 8529,
"text": null
},
{
"code": "// C# program to implement// the above approachusing System;using System.Collections.Generic;class GFG { // Function to check if substring contains all // occurrences of each character of str or not static int checkValid(string str,int i, int[] L, int[] R) { // Stores rightmost index of str[i] int right = R[str[i] - 'a']; // Traverse the current substring for (int j = i; j < right; j++){ // If leftmost index of str[j] // less than i if (L[str[j] - 'a'] < i) return -1; // Update right right = Math.Max(right, R[str[j] - 'a']); } return right; } // Function to find maximum number of substring // that satisfy the condition static List<string> maxcntOfSubstrings(string str) { // Stores all substrings that // satisfy the condition List<string> res = new List<string>(); // Stores length of str int n = str.Length; // Stores leftmost index // of each character int[] L = new int[26]; // Stores rightmost index // of each character int[] R = new int[26]; // Initialize L[] and R[] for(int i = 0; i <26; i++) { // Initialize L[i] // and R[i] L[i] = R[i] = -1; } // Traverse the string for (int i = 0; i < n; i++) { // If str[i] not // already occurred if (L[str[i] - 'a'] == -1) { // Update leftmost index // of str[i] L[str[i] - 'a'] = i; } // Update rightmost index // of str[i] R[str[i]-'a'] = i; } // Stores rightmost index of last // substring inserted into res[] int right = -1; // Traverse the string for (int i = 0; i < n; i++) { // If i is leftmost index of str[i] if (i == L[str[i] - 'a']) { // Check if a new substring starting // from i satisfies the conditions or not int new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if(new_right != -1){ // Stores the substring starting from // i that satisfy the condition string sub = str.Substring(i, new_right - i + 1); // If the substring overlaps // with another substring if(new_right < right){ // Stores sub to the last // of res res[res.Count - 1] = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.Add(sub); } // Update right right = new_right; } } } return res; } // Driver code static void Main() { string str = \"abbaccd\"; // Stores maximum number of substring // that satisfy the condition List<string> res = maxcntOfSubstrings(str); // Print all substring foreach(string sub in res) { Console.Write(sub + \" \"); } }} // This code is contributed by divyeshrabadiya",
"e": 15072,
"s": 11178,
"text": null
},
{
"code": "<script> // JavaScript program to implement // the above approach // Function to check if substring contains all // occurrences of each character of str or not function checkValid(str, i, L, R) { // Stores rightmost index of str[i] var right = R[str[i].charCodeAt(0) - \"a\".charCodeAt(0)]; // Traverse the current substring for (var j = i; j < right; j++) { // If leftmost index of str[j] // less than i if (L[str[j].charCodeAt(0) - \"a\".charCodeAt(0)] < i) return -1; // Update right right = Math.max(right, R[str[j].charCodeAt(0) - \"a\".charCodeAt(0)]); } return right; } // Function to find maximum number of substring // that satisfy the condition function maxcntOfSubstrings(str) { // Stores all substrings that // satisfy the condition var res = []; // Stores length of str var n = str.length; // Stores leftmost index // of each character var L = new Array(26).fill(-1); // Stores rightmost index // of each character var R = new Array(26).fill(-1); // Traverse the string for (var i = 0; i < n; i++) { var x = str[i].charCodeAt(0) - \"a\".charCodeAt(0); // If str[i] not // already occurred if (L[x] === -1) { // Update leftmost index // of str[i] L[x] = i; } // Update rightmost index // of str[i] R[x] = i; } // Stores rightmost index of last // substring inserted into res[] var right = -1; // Traverse the string for (var i = 0; i < n; i++) { var x = str[i].charCodeAt(0) - \"a\".charCodeAt(0); // If i is leftmost index of str[i] if (i === L[x]) { // Check if a new substring starting // from i satisfies the conditions or not var new_right = checkValid(str, i, L, R); // If the substring starting from i // satisfies the conditions if (new_right !== -1) { // Stores the substring starting from // i that satisfy the condition var sub = str.substring(i, new_right + 1); // If the substring overlaps // with another substring if (new_right < right) { // Stores sub to the last // of res res[res.length - 1] = sub; } else { // If sub not overlaps to // other string then append // sub to the end of res res.push(sub); } // Update right right = new_right; } } } return res; } // Driver code var str = \"abbaccd\"; // Stores maximum number of substring // that satisfy the condition var res = maxcntOfSubstrings(str); // Print all substring for (const sub of res) { document.write(sub + \" \"); }</script>",
"e": 18204,
"s": 15072,
"text": null
},
{
"code": null,
"e": 18212,
"s": 18204,
"text": "bb cc d"
},
{
"code": null,
"e": 18264,
"s": 18214,
"text": "Time Complexity: O(N * 26) Auxiliary Space: O(26)"
},
{
"code": null,
"e": 18279,
"s": 18264,
"text": "mohit kumar 29"
},
{
"code": null,
"e": 18296,
"s": 18279,
"text": "SURENDRA_GANGWAR"
},
{
"code": null,
"e": 18314,
"s": 18296,
"text": "divyeshrabadiya07"
},
{
"code": null,
"e": 18321,
"s": 18314,
"text": "rdtank"
},
{
"code": null,
"e": 18338,
"s": 18321,
"text": "akshaysingh98088"
},
{
"code": null,
"e": 18351,
"s": 18338,
"text": "simmytarika5"
},
{
"code": null,
"e": 18368,
"s": 18351,
"text": "surinderdawra388"
},
{
"code": null,
"e": 18387,
"s": 18368,
"text": "frequency-counting"
},
{
"code": null,
"e": 18397,
"s": 18387,
"text": "substring"
},
{
"code": null,
"e": 18405,
"s": 18397,
"text": "Strings"
},
{
"code": null,
"e": 18413,
"s": 18405,
"text": "Strings"
},
{
"code": null,
"e": 18511,
"s": 18413,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 18586,
"s": 18511,
"text": "Check for Balanced Brackets in an expression (well-formedness) using Stack"
},
{
"code": null,
"e": 18631,
"s": 18586,
"text": "Different Methods to Reverse a String in C++"
},
{
"code": null,
"e": 18688,
"s": 18631,
"text": "Python program to check if a string is palindrome or not"
},
{
"code": null,
"e": 18724,
"s": 18688,
"text": "KMP Algorithm for Pattern Searching"
},
{
"code": null,
"e": 18762,
"s": 18724,
"text": "Longest Palindromic Substring | Set 1"
},
{
"code": null,
"e": 18807,
"s": 18762,
"text": "Top 50 String Coding Problems for Interviews"
},
{
"code": null,
"e": 18871,
"s": 18807,
"text": "What is Data Structure: Types, Classifications and Applications"
},
{
"code": null,
"e": 18932,
"s": 18871,
"text": "Length of the longest substring without repeating characters"
},
{
"code": null,
"e": 18968,
"s": 18932,
"text": "Convert string to char array in C++"
}
] |
Python – seaborn.boxenplot() method | 18 Aug, 2020
Prerequisite : Fundamentals of Seaborn
Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. There is just something extraordinary about a well-designed visualization. The colors stand out, the layers blend nicely together, the contours flow throughout, and the overall package not only has a nice aesthetic quality, but it provides meaningful insights to us as well.
Draw an enhanced box plot for larger datasets. This style of plot was originally named a “letter value” plot because it shows a large number of quantiles that are defined as “letter values”. It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails.
Syntax : seaborn.boxenplot(parameters)
Parameters :
x, y, hue : Inputs for plotting long-form data.
data : Dataset for plotting.
order, hue_order : Order to plot the categorical levels in, otherwise the levels are inferred from the data objects.
orient : Orientation of the plot (vertical or horizontal).
color : Color for all of the elements, or seed for a gradient palette.
palette : Colors to use for the different levels of the hue variable.
saturation : Proportion of the original saturation to draw colors at.
width : Width of a full element when not using hue nesting, or width of all the elements for one level of the major grouping variable.
dodge : When hue nesting is used, whether elements should be shifted along the categorical axis.
k_depth : The number of boxes, and by extension number of percentiles, to draw.
linewidth : Width of the gray lines that frame the plot elements.
scale : Method to use for the width of the letter value boxes.
outlier_prop : Proportion of data believed to be outliers.
showfliers : If False, suppress the plotting of outliers.
ax : Axes object to draw the plot onto, otherwise uses the current Axes.
kwargs : Other keyword arguments
Returns : Returns the Axes object with the plot drawn onto it.
Below is the implementation of above method with some examples :
Example 1:
# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset("tips") # plot the boxenplotsns.boxenplot(x = "day", y = "total_bill", data = data)plt.show()
Output :
Example 2:
# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset("tips") # plot the boxenplot# hue by sex# width of 0.8sns.boxenplot(x ="day", y = "total_bill", hue = "sex", data = data, width = 0.8)plt.show()
Output :
Example 3:
# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset("tips") # plot the boxenplot# orient to horizontalsns.boxenplot(x = "total_bill", y = "size", data = data, orient ="h")plt.show()
Output :
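The examples above only use the x, y, hue, data, width and orient parameters. The short sketch below (a hedged example, not from the original article) touches a few of the remaining parameters from the list above (k_depth, showfliers and palette) and also captures the returned Axes object to add a title. Note that passing an integer to k_depth assumes a reasonably recent seaborn release; older versions expect a keyword such as "tukey" or "trustworthy".

# importing packages
import seaborn as sns
import matplotlib.pyplot as plt

# loading dataset
data = sns.load_dataset("tips")

# plot the boxenplot
# fewer boxes (k_depth = 4), hide outlier points,
# use a different palette, and set a title on the returned Axes
ax = sns.boxenplot(x = "day", y = "total_bill", data = data,
                   k_depth = 4, showfliers = False, palette = "Set2")
ax.set_title("total_bill per day")
plt.show()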
Python-Seaborn
Python
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n18 Aug, 2020"
},
{
"code": null,
"e": 67,
"s": 28,
"text": "Prerequisite : Fundamentals of Seaborn"
},
{
"code": null,
"e": 506,
"s": 67,
"text": "Seaborn is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. There is just something extraordinary about a well-designed visualization. The colors stand out, the layers blend nicely together, the contours flow throughout, and the overall package not only has a nice aesthetic quality, but it provides meaningful insights to us as well."
},
{
"code": null,
"e": 965,
"s": 506,
"text": "Draw an enhanced box plot for larger datasets. This style of plot was originally named a “letter value” plot because it shows a large number of quantiles that are defined as “letter values”. It is similar to a box plot in plotting a nonparametric representation of a distribution in which all features correspond to actual observations. By plotting more quantiles, it provides more information about the shape of the distribution, particularly in the tails."
},
{
"code": null,
"e": 1004,
"s": 965,
"text": "Syntax : seaborn.boxenplot(parameters)"
},
{
"code": null,
"e": 1017,
"s": 1004,
"text": "Parameters :"
},
{
"code": null,
"e": 1065,
"s": 1017,
"text": "x, y, hue : Inputs for plotting long-form data."
},
{
"code": null,
"e": 1094,
"s": 1065,
"text": "data : Dataset for plotting."
},
{
"code": null,
"e": 1211,
"s": 1094,
"text": "order, hue_order : Order to plot the categorical levels in, otherwise the levels are inferred from the data objects."
},
{
"code": null,
"e": 1270,
"s": 1211,
"text": "orient : Orientation of the plot (vertical or horizontal)."
},
{
"code": null,
"e": 1341,
"s": 1270,
"text": "color : Color for all of the elements, or seed for a gradient palette."
},
{
"code": null,
"e": 1411,
"s": 1341,
"text": "palette : Colors to use for the different levels of the hue variable."
},
{
"code": null,
"e": 1481,
"s": 1411,
"text": "saturation : Proportion of the original saturation to draw colors at."
},
{
"code": null,
"e": 1616,
"s": 1481,
"text": "width : Width of a full element when not using hue nesting, or width of all the elements for one level of the major grouping variable."
},
{
"code": null,
"e": 1713,
"s": 1616,
"text": "dodge : When hue nesting is used, whether elements should be shifted along the categorical axis."
},
{
"code": null,
"e": 1793,
"s": 1713,
"text": "k_depth : The number of boxes, and by extension number of percentiles, to draw."
},
{
"code": null,
"e": 1859,
"s": 1793,
"text": "linewidth : Width of the gray lines that frame the plot elements."
},
{
"code": null,
"e": 1922,
"s": 1859,
"text": "scale : Method to use for the width of the letter value boxes."
},
{
"code": null,
"e": 1981,
"s": 1922,
"text": "outlier_prop : Proportion of data believed to be outliers."
},
{
"code": null,
"e": 2039,
"s": 1981,
"text": "showfliers : If False, suppress the plotting of outliers."
},
{
"code": null,
"e": 2112,
"s": 2039,
"text": "ax : Axes object to draw the plot onto, otherwise uses the current Axes."
},
{
"code": null,
"e": 2145,
"s": 2112,
"text": "kwargs : Other keyword arguments"
},
{
"code": null,
"e": 2208,
"s": 2145,
"text": "Returns : Returns the Axes object with the plot drawn onto it."
},
{
"code": null,
"e": 2273,
"s": 2208,
"text": "Below is the implementation of above method with some examples :"
},
{
"code": null,
"e": 2284,
"s": 2273,
"text": "Example 1:"
},
{
"code": "# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset(\"tips\") # plot the boxenplotsns.boxenplot(x = \"day\", y = \"total_bill\", data = data)plt.show()",
"e": 2508,
"s": 2284,
"text": null
},
{
"code": null,
"e": 2527,
"s": 2508,
"text": "Output :Example 2:"
},
{
"code": "# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset(\"tips\") # plot the boxenplot# hue by sex# width of 0.8sns.boxenplot(x =\"day\", y = \"total_bill\", hue = \"sex\", data = data, width = 0.8)plt.show()",
"e": 2802,
"s": 2527,
"text": null
},
{
"code": null,
"e": 2811,
"s": 2802,
"text": "Output :"
},
{
"code": null,
"e": 2822,
"s": 2811,
"text": "Example 3:"
},
{
"code": "# importing packagesimport seaborn as snsimport matplotlib.pyplot as plt # loading datasetdata = sns.load_dataset(\"tips\") # plot the boxenplot# orient to horizontalsns.boxenplot(x = \"total_bill\", y = \"size\", data = data, orient =\"h\")plt.show()",
"e": 3082,
"s": 2822,
"text": null
},
{
"code": null,
"e": 3091,
"s": 3082,
"text": "Output :"
},
{
"code": null,
"e": 3106,
"s": 3091,
"text": "Python-Seaborn"
},
{
"code": null,
"e": 3113,
"s": 3106,
"text": "Python"
}
] |
Java Program to Validate Phone Numbers using Google’s libphonenumber Library | 02 Feb, 2021
Validating phone numbers is a common requirement in today’s web, mobile, and desktop applications, but Java does not ship with a built-in way to carry out this kind of validation. So, we have to rely on an open-source library. One such library is Google’s libphonenumber. It can verify any phone number, whether international, specific to India, or specific to any other country.
We could also use regular expressions to validate phone numbers, but writing such complex expressions requires skill, and testing them against every possible number format quickly becomes an endless task.
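To see why, here is a tiny hedged illustration (the class name NaiveRegexCheck and the pattern are made up for this example): a seemingly reasonable pattern already rejects a perfectly valid number just because of its spacing.

Java

public class NaiveRegexCheck {
    public static void main(String[] args) {
        // A deliberately naive pattern, shown only to illustrate the point,
        // not a recommended way to validate phone numbers
        boolean looksValid = "+91 94483 76473".matches("\\+?[0-9]{10,13}");
        System.out.println(looksValid);   // false: the spaces alone defeat the pattern
    }
}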
The libphonenumber is an open-source library from Google for formatting, parsing, and validating international phone numbers. It contains many methods to achieve such functionality. Some of them are discussed below:
Return Type: String
Method: format(Phonenumber.PhoneNumber number, PhoneNumberUtil.PhoneNumberFormat numberFormat)
Description: Formats the given phone number in the requested numberFormat (E164, INTERNATIONAL, NATIONAL or RFC3966) and returns the result as a String.

This is a rich library with many more utility features and takes care of most of what a typical program needs.
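To make the format() method above concrete before the full program, here is a minimal hedged sketch; it assumes the same libphonenumber dependency that is added in Step 2 below, and the class name FormatDemo is only for illustration.

Java

import com.google.i18n.phonenumbers.NumberParseException;
import com.google.i18n.phonenumbers.PhoneNumberUtil;
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat;
import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber;

public class FormatDemo {
    public static void main(String[] args) throws NumberParseException {
        // get the singleton utility instance
        PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();

        // parse a sample Indian number with "IN" as the default region
        PhoneNumber number = phoneUtil.parse("0294 2424447", "IN");

        // reprint the same number in the standard formats
        System.out.println(phoneUtil.format(number, PhoneNumberFormat.E164));          // e.g. +912942424447
        System.out.println(phoneUtil.format(number, PhoneNumberFormat.INTERNATIONAL));
        System.out.println(phoneUtil.format(number, PhoneNumberFormat.NATIONAL));
    }
}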
Below is Java Implementation for Validating Phone Numbers using Google’s libphonenumber Library. Here we will be using Eclipse IDE.
Step 1: Creating a Maven Project
To begin with first create a Maven Project in Eclipse. The reason behind creating a Maven Project rather than a normal Java Project is that the libphonenumber library is present in the Maven Repository so we have to use it as a dependency in our project.
Leave everything at its default. The Artifact Id will be the name of your Maven project.
Step 2: Adding the Dependency
Once you have created the Maven Project, add the libphonenumber dependency in the pom.xml file. As soon as you save the file, the library will be downloaded for offline use.
XML
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.Demo</groupId> <artifactId>DemoProject</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>com.googlecode.libphonenumber</groupId> <artifactId>libphonenumber</artifactId> <version>8.12.16</version> </dependency> </dependencies></project>
Step 3: Creating the Driver Class
Now, simply create a Java class to use the functionalities of this library.
Java
import com.google.i18n.phonenumbers.NumberParseException;import com.google.i18n.phonenumbers.PhoneNumberUtil;import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber; public class GFG { public static void main(String args[]) { // creating an array of random phone numbers String[] phonenumbers = { "+91 94483 76473", "1800 425 3800", "+91 83944 7484", "0294 2424447" }; // iterating over each number to validate for (String phone : phonenumbers) { if (isPhoneNumberValid(phone)) { System.out.println(phone + " is valid."); } else { System.out.println(phone + " is not valid."); } } } // this method return true if the passed phone number is // valid as per the region specified public static boolean isPhoneNumberValid(String phone) { // creating an instance of PhoneNumber Utility class PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); // creating a variable of type PhoneNumber PhoneNumber phoneNumber = null; try { // the parse method parses the string and // returns a PhoneNumber in the format of // specified region phoneNumber = phoneUtil.parse(phone, "IN"); // this statement prints the type of the phone // number System.out.println( "\nType: " + phoneUtil.getNumberType(phoneNumber)); } catch (NumberParseException e) { // if the phoneUtil is unable to parse any phone // number an exception occurs and gets caught in // this block System.out.println( "Unable to parse the given phone number: " + phone); e.printStackTrace(); } // return the boolean value of the validation // performed return phoneUtil.isValidNumber(phoneNumber); }}
Output:
Type: MOBILE
+91 94483 76473 is valid.
Type: TOLL_FREE
1800 425 3800 is valid.
Type: UNKNOWN
+91 83944 7484 is not valid.
Type: FIXED_LINE
0294 2424447 is valid.
Picked
Technical Scripter 2020
Java
Java Programs
Technical Scripter
Java
| [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n02 Feb, 2021"
},
{
"code": null,
"e": 472,
"s": 54,
"text": "Validating phone numbers is a common prerequisite in today’s web, mobile, or desktop applications, but Java does not have an integrated method for carrying out this kind of common validation. So, we have to use some open source libraries to perform such validation. One such library is Google’s phone number library. It helps to verify any phone number, whether foreign, specific to India, or specific to any country."
},
{
"code": null,
"e": 648,
"s": 472,
"text": "We can also use regular expressions to validate the phone numbers, but it requires some skills to write such complex expressions and then the testing would be an endless task."
},
{
"code": null,
"e": 864,
"s": 648,
"text": "The libphonenumber is an open-source library from Google for formatting, parsing, and validating international phone numbers. It contains many methods to achieve such functionality. Some of them are discussed below:"
},
{
"code": null,
"e": 876,
"s": 864,
"text": "Return Type"
},
{
"code": null,
"e": 883,
"s": 876,
"text": "Method"
},
{
"code": null,
"e": 895,
"s": 883,
"text": "Description"
},
{
"code": null,
"e": 934,
"s": 895,
"text": "format(Phonenumber.PhoneNumber number,"
},
{
"code": null,
"e": 982,
"s": 934,
"text": "PhoneNumberUtil.PhoneNumberFormat numberFormat)"
},
{
"code": null,
"e": 1090,
"s": 982,
"text": " This is a rich library with even more utility features and takes care of much of the needs of our program."
},
{
"code": null,
"e": 1223,
"s": 1090,
"text": "Below is Java Implementation for Validating Phone Numbers using Google’s libphonenumber Library. Here we will be using Eclipse IDE."
},
{
"code": null,
"e": 1256,
"s": 1223,
"text": "Step 1: Creating a Maven Project"
},
{
"code": null,
"e": 1511,
"s": 1256,
"text": "To begin with first create a Maven Project in Eclipse. The reason behind creating a Maven Project rather than a normal Java Project is that the libphonenumber library is present in the Maven Repository so we have to use it as a dependency in our project."
},
{
"code": null,
"e": 1603,
"s": 1511,
"text": "Leave everything to be the default. The Artifact Id will be the name of your Maven project."
},
{
"code": null,
"e": 1633,
"s": 1603,
"text": "Step 2: Adding the Dependency"
},
{
"code": null,
"e": 1807,
"s": 1633,
"text": "Once you have created the Maven Project, add the libphonenumber dependency in the pom.xml file. As soon as you save the file, the library will be downloaded for offline use."
},
{
"code": null,
"e": 1811,
"s": 1807,
"text": "XML"
},
{
"code": "<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.Demo</groupId> <artifactId>DemoProject</artifactId> <version>0.0.1-SNAPSHOT</version> <dependencies> <dependency> <groupId>com.googlecode.libphonenumber</groupId> <artifactId>libphonenumber</artifactId> <version>8.12.16</version> </dependency> </dependencies></project>",
"e": 2389,
"s": 1811,
"text": null
},
{
"code": null,
"e": 2423,
"s": 2389,
"text": "Step 3: Creating the Driver Class"
},
{
"code": null,
"e": 2499,
"s": 2423,
"text": "Now, simply create a Java class to use the functionalities of this library."
},
{
"code": null,
"e": 2504,
"s": 2499,
"text": "Java"
},
{
"code": "import com.google.i18n.phonenumbers.NumberParseException;import com.google.i18n.phonenumbers.PhoneNumberUtil;import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber; public class GFG { public static void main(String args[]) { // creating an array of random phone numbers String[] phonenumbers = { \"+91 94483 76473\", \"1800 425 3800\", \"+91 83944 7484\", \"0294 2424447\" }; // iterating over each number to validate for (String phone : phonenumbers) { if (isPhoneNumberValid(phone)) { System.out.println(phone + \" is valid.\"); } else { System.out.println(phone + \" is not valid.\"); } } } // this method return true if the passed phone number is // valid as per the region specified public static boolean isPhoneNumberValid(String phone) { // creating an instance of PhoneNumber Utility class PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance(); // creating a variable of type PhoneNumber PhoneNumber phoneNumber = null; try { // the parse method parses the string and // returns a PhoneNumber in the format of // specified region phoneNumber = phoneUtil.parse(phone, \"IN\"); // this statement prints the type of the phone // number System.out.println( \"\\nType: \" + phoneUtil.getNumberType(phoneNumber)); } catch (NumberParseException e) { // if the phoneUtil is unable to parse any phone // number an exception occurs and gets caught in // this block System.out.println( \"Unable to parse the given phone number: \" + phone); e.printStackTrace(); } // return the boolean value of the validation // performed return phoneUtil.isValidNumber(phoneNumber); }}",
"e": 4578,
"s": 2504,
"text": null
},
{
"code": null,
"e": 4586,
"s": 4578,
"text": "Output:"
},
{
"code": null,
"e": 4748,
"s": 4586,
"text": "Type: MOBILE\n+91 94483 76473 is valid.\nType: TOLL_FREE\n1800 425 3800 is valid.\nType: UNKNOWN\n+91 83944 7484 is not valid.\nType: FIXED_LINE\n0294 2424447 is valid."
},
{
"code": null,
"e": 4755,
"s": 4748,
"text": "Picked"
},
{
"code": null,
"e": 4779,
"s": 4755,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 4784,
"s": 4779,
"text": "Java"
},
{
"code": null,
"e": 4798,
"s": 4784,
"text": "Java Programs"
},
{
"code": null,
"e": 4817,
"s": 4798,
"text": "Technical Scripter"
},
{
"code": null,
"e": 4822,
"s": 4817,
"text": "Java"
},
{
"code": null,
"e": 4920,
"s": 4822,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4935,
"s": 4920,
"text": "Stream In Java"
},
{
"code": null,
"e": 4956,
"s": 4935,
"text": "Introduction to Java"
},
{
"code": null,
"e": 4977,
"s": 4956,
"text": "Constructors in Java"
},
{
"code": null,
"e": 4996,
"s": 4977,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 5013,
"s": 4996,
"text": "Generics in Java"
},
{
"code": null,
"e": 5039,
"s": 5013,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 5073,
"s": 5039,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 5120,
"s": 5073,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 5158,
"s": 5120,
"text": "Factory method design pattern in Java"
}
] |
Change the data type of a column or a Pandas Series | 17 Aug, 2020
Series is a one-dimensional labeled array capable of holding data of type integer, string, float, Python objects, etc. The axis labels are collectively called the index.
Let’s see how to change the data type of a column or a Series in a Pandas DataFrame. Method 1: Using the DataFrame.astype() method.
We can pass any Python, NumPy, or Pandas datatype to change all columns of a DataFrame to that type, or we can pass a dictionary having column names as keys and datatypes as values to change the type of selected columns.
Syntax: DataFrame.astype(dtype, copy = True, errors = ’raise’, **kwargs)
Return: casted : type of caller
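As a quick, hedged illustration of the errors parameter listed in the syntax above (the single-column DataFrame below is made up for this sketch and is not from the article; recent pandas versions may warn about errors='ignore'):
Python3
import pandas as pd

# throwaway frame: the last value cannot be cast to int
df = pd.DataFrame({'id': ['1', '2', 'x']})

# errors='raise' (the default) fails on the bad value
try:
    df.astype({'id': int})
except ValueError as err:
    print("raise ->", err)

# errors='ignore' returns the data unchanged instead of failing
print(df.astype({'id': int}, errors='ignore').dtypes)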
Let’s see the examples. Example 1: The data type of the columns is changed to the “str” (object) type.
Python3
# importing the pandas libraryimport pandas as pd # creating a DataFramedf = pd.DataFrame({'srNo': [1, 2, 3], 'Name': ['Geeks', 'for', 'Geeks'], 'id': [111, 222, 333]})# show the dataframeprint(df) # show the datatypesprint(df.dtypes)
Output:
Now, changing the dataframe data types to string.
Python3
# changing the dataframe # data types to stringdf = df.astype(str) # show the data types # of dataframedf.dtypes
Output:
Example 2: Now, let us change the data type of the “id” column from “int” to “str”. We create a dictionary and specify the column name with the desired data type.
Python3
# importing the pandas libraryimport pandas as pd # creating a DataFramedf = pd.DataFrame({'No': [1, 2, 3], 'Name': ['Geeks', 'for', 'Geeks'], 'id': [111, 222, 333]})# show the dataframeprint(df) # show the datatypesprint(df.dtypes)
Output:
Now, change the data type of ‘id’ column to string.
Python3
# creating a dictionary # with column name and data typedata_types_dict = {'id': str} # we will change the data type # of id column to str by giving# the dict to the astype methoddf = df.astype(data_types_dict) # checking the data types# using df.dtypes methoddf.dtypes
Output:
Example 3: Convert the data type of “grade” column from “float” to “int”.
Python3
# import pandas libraryimport pandas as pd # dictionaryresult_data = {'name': ['Alia', 'Rima', 'Kate', 'John', 'Emma', 'Misa', 'Matt'], 'grade': [13.5, 7.1, 11.5, 3.77, 8.21, 21.22, 17.5], 'qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes']} # create a dataframedf = pd.DataFrame(result_data) # show the dataframeprint(df) #show the datatypesprint(df.dtypes)
Output:
Now, we convert the data type of “grade” column from “float” to “int”.
Python3
# convert data type of grade column # into integerdf.grade = df.grade.astype(int) # show the dataframeprint(df) # show the datatypesprint(df.dtypes)
Output:
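One caveat worth noting here, checked with a throwaway Series rather than the article's data: astype(int) truncates the decimal part toward zero, it does not round, so round first if rounding is what you want.
Python3
import pandas as pd

s = pd.Series([13.5, 3.77, 21.22])

# plain cast truncates toward zero
print(s.astype(int).tolist())           # [13, 3, 21]

# round first, then cast, if rounding is intended
print(s.round().astype(int).tolist())   # [14, 4, 21]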
Method 2: Using the DataFrame.apply() method.
We can pass pandas.to_numeric, pandas.to_datetime, or pandas.to_timedelta as an argument to the apply() function to change the datatype of one or more columns to numeric, datetime, or timedelta respectively.
Syntax: Dataframe/Series.apply(func, convert_dtype=True, args=())
Return: Dataframe/Series after applied function/operation.
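Before the numeric example that follows, here is a rough sketch of the other two converters mentioned above, pandas.to_datetime and pandas.to_timedelta, used through apply(); the column names are made up for illustration.
Python3
import pandas as pd

# illustrative frame with string dates and string durations
df = pd.DataFrame({'when': ['2020-08-17', '2020-08-18'],
                   'gap': ['1 days', '2 days']})

df[['when']] = df[['when']].apply(pd.to_datetime)   # object -> datetime64[ns]
df[['gap']] = df[['gap']].apply(pd.to_timedelta)    # object -> timedelta64[ns]

print(df.dtypes)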
Let’s see the example:
Example: Convert the data type of “B” column from “string” to “int”.
Python3
# importing pandas as pd import pandas as pd # sample dataframe df = pd.DataFrame({ 'A': ['a', 'b', 'c', 'd', 'e'], 'B': [12, 22, 35, '47', '55'], 'C': [1.1, '2.1', 3.0, '4.1', '5.1'] }) # show the dataframeprint(df) # show the data types# of all columnsdf.dtypes
Output:
Now, we convert the datatype of column “B” into an “int” type.
Python3
# using apply method df[['B']] = df[['B']].apply(pd.to_numeric) # show the data types# of all columnsdf.dtypes
Output:
Python pandas-series
Python-pandas
Python | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n17 Aug, 2020"
},
{
"code": null,
"e": 199,
"s": 28,
"text": "Series is a one-dimensional labeled array capable of holding data of the type integer, string, float, python objects, etc. The axis labels are collectively called index. "
},
{
"code": null,
"e": 330,
"s": 199,
"text": "Let’s see the program to change the data type of column or a Series in Pandas Dataframe.Method 1: Using DataFrame.astype() method."
},
{
"code": null,
"e": 545,
"s": 330,
"text": "We can pass any Python, Numpy or Pandas datatype to change all columns of a dataframe to that type, or we can pass a dictionary having column names as keys and datatype as values to change type of selected columns."
},
{
"code": null,
"e": 618,
"s": 545,
"text": "Syntax: DataFrame.astype(dtype, copy = True, errors = ’raise’, **kwargs)"
},
{
"code": null,
"e": 650,
"s": 618,
"text": "Return: casted : type of caller"
},
{
"code": null,
"e": 743,
"s": 650,
"text": "Let’s see the examples: Example 1: The Data type of the column is changed to “str” object. "
},
{
"code": null,
"e": 751,
"s": 743,
"text": "Python3"
},
{
"code": "# importing the pandas libraryimport pandas as pd # creating a DataFramedf = pd.DataFrame({'srNo': [1, 2, 3], 'Name': ['Geeks', 'for', 'Geeks'], 'id': [111, 222, 333]})# show the dataframeprint(df) # show the datatypesprint(df.dtypes)",
"e": 1079,
"s": 751,
"text": null
},
{
"code": null,
"e": 1087,
"s": 1079,
"text": "Output:"
},
{
"code": null,
"e": 1137,
"s": 1087,
"text": "Now, changing the dataframe data types to string."
},
{
"code": null,
"e": 1145,
"s": 1137,
"text": "Python3"
},
{
"code": "# changing the dataframe # data types to stringdf = df.astype(str) # show the data types # of dataframedf.dtypes",
"e": 1259,
"s": 1145,
"text": null
},
{
"code": null,
"e": 1267,
"s": 1259,
"text": "Output:"
},
{
"code": null,
"e": 1431,
"s": 1267,
"text": "Example 2: Now, let us change the data type of the “id” column from “int” to “str”. We create a dictionary and specify the column name with the desired data type. "
},
{
"code": null,
"e": 1439,
"s": 1431,
"text": "Python3"
},
{
"code": "# importing the pandas libraryimport pandas as pd # creating a DataFramedf = pd.DataFrame({'No': [1, 2, 3], 'Name': ['Geeks', 'for', 'Geeks'], 'id': [111, 222, 333]})# show the dataframeprint(df) # show the datatypesprint(df.dtypes)",
"e": 1765,
"s": 1439,
"text": null
},
{
"code": null,
"e": 1773,
"s": 1765,
"text": "Output:"
},
{
"code": null,
"e": 1826,
"s": 1773,
"text": "Now, change the data type of ‘id’ column to string. "
},
{
"code": null,
"e": 1834,
"s": 1826,
"text": "Python3"
},
{
"code": "# creating a dictionary # with column name and data typedata_types_dict = {'id': str} # we will change the data type # of id column to str by giving# the dict to the astype methoddf = df.astype(data_types_dict) # checking the data types# using df.dtypes methoddf.dtypes",
"e": 2106,
"s": 1834,
"text": null
},
{
"code": null,
"e": 2114,
"s": 2106,
"text": "Output:"
},
{
"code": null,
"e": 2188,
"s": 2114,
"text": "Example 3: Convert the data type of “grade” column from “float” to “int”."
},
{
"code": null,
"e": 2196,
"s": 2188,
"text": "Python3"
},
{
"code": "# import pandas libraryimport pandas as pd # dictionaryresult_data = {'name': ['Alia', 'Rima', 'Kate', 'John', 'Emma', 'Misa', 'Matt'], 'grade': [13.5, 7.1, 11.5, 3.77, 8.21, 21.22, 17.5], 'qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes']} # create a dataframedf = pd.DataFrame(result_data) # show the dataframeprint(df) #show the datatypesprint(df.dtypes)",
"e": 2699,
"s": 2196,
"text": null
},
{
"code": null,
"e": 2707,
"s": 2699,
"text": "Output:"
},
{
"code": null,
"e": 2778,
"s": 2707,
"text": "Now, we convert the data type of “grade” column from “float” to “int”."
},
{
"code": null,
"e": 2786,
"s": 2778,
"text": "Python3"
},
{
"code": "# convert data type of grade column # into integerdf.grade = df.grade.astype(int) # show the dataframeprint(df) # show the datatypesprint(df.dtypes)",
"e": 2937,
"s": 2786,
"text": null
},
{
"code": null,
"e": 2945,
"s": 2937,
"text": "Output:"
},
{
"code": null,
"e": 2989,
"s": 2947,
"text": "Method 2: Using Dataframe.apply() method."
},
{
"code": null,
"e": 3190,
"s": 2989,
"text": "We can pass pandas.to_numeric, pandas.to_datetime and pandas.to_timedelta as argument to apply() function to change the datatype of one or more columns to numeric, datetime and timedelta respectively."
},
{
"code": null,
"e": 3256,
"s": 3190,
"text": "Syntax: Dataframe/Series.apply(func, convert_dtype=True, args=())"
},
{
"code": null,
"e": 3316,
"s": 3256,
"text": "Return: Dataframe/Series after applied function/operation. "
},
{
"code": null,
"e": 3339,
"s": 3316,
"text": "Let’s see the example:"
},
{
"code": null,
"e": 3408,
"s": 3339,
"text": "Example: Convert the data type of “B” column from “string” to “int”."
},
{
"code": null,
"e": 3416,
"s": 3408,
"text": "Python3"
},
{
"code": "# importing pandas as pd import pandas as pd # sample dataframe df = pd.DataFrame({ 'A': ['a', 'b', 'c', 'd', 'e'], 'B': [12, 22, 35, '47', '55'], 'C': [1.1, '2.1', 3.0, '4.1', '5.1'] }) # show the dataframeprint(df) # show the data types# of all columnsdf.dtypes",
"e": 3727,
"s": 3416,
"text": null
},
{
"code": null,
"e": 3735,
"s": 3727,
"text": "Output:"
},
{
"code": null,
"e": 3798,
"s": 3735,
"text": "Now, we convert the datatype of column “B” into an “int” type."
},
{
"code": null,
"e": 3806,
"s": 3798,
"text": "Python3"
},
{
"code": "# using apply method df[['B']] = df[['B']].apply(pd.to_numeric) # show the data types# of all columnsdf.dtypes",
"e": 3919,
"s": 3806,
"text": null
},
{
"code": null,
"e": 3927,
"s": 3919,
"text": "Output:"
},
{
"code": null,
"e": 3948,
"s": 3927,
"text": "Python pandas-series"
},
{
"code": null,
"e": 3962,
"s": 3948,
"text": "Python-pandas"
},
{
"code": null,
"e": 3969,
"s": 3962,
"text": "Python"
},
{
"code": null,
"e": 4067,
"s": 3969,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4097,
"s": 4067,
"text": "Iterate over a list in Python"
},
{
"code": null,
"e": 4142,
"s": 4097,
"text": "How to iterate through Excel rows in Python?"
},
{
"code": null,
"e": 4164,
"s": 4142,
"text": "Enumerate() in Python"
},
{
"code": null,
"e": 4214,
"s": 4164,
"text": "Rotate axis tick labels in Seaborn and Matplotlib"
},
{
"code": null,
"e": 4232,
"s": 4214,
"text": "Python Dictionary"
},
{
"code": null,
"e": 4248,
"s": 4232,
"text": "Deque in Python"
},
{
"code": null,
"e": 4264,
"s": 4248,
"text": "Stack in Python"
},
{
"code": null,
"e": 4280,
"s": 4264,
"text": "Queue in Python"
},
{
"code": null,
"e": 4315,
"s": 4280,
"text": "Read a file line by line in Python"
}
] |
jQuery | height() and innerHeight() with Examples | 13 Feb, 2019
The height() method is an inbuilt jQuery method which is used to get the height of an element, but it does not include the padding, border, or margin of the element. Syntax:
$("param").height()
Parameters: This function does not accept any parameters. Return value: It returns the height of the selected element.
<html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <script> $(document).ready(function() { $("button").click(function() { var msg = ""; msg += "height of div: " + $("#demo").height(); $("#demo").html(msg); }); }); </script> <style> #demo { height: 150px; width: 350px; padding: 10px; margin: 3px; border: 1px solid blue; background-color: lightgreen; } </style></head> <body> <div id="demo"></div> <button>Click Me!!!</button> <p>Click on the button and check the height of the element (excluding padding).</p></body> </html>
Output: before clicking on the “Click Me” button and after clicking on the “Click Me” button.
jQuery also includes the innerHeight() method, which is used to get the inner height of the element, including padding. Syntax:
$("param").innerHeight()
Parameters: This function does not accept any parameters. Return value: It returns the inner height of the selected element. Code #2:
<html> <head> <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <script> $(document).ready(function() { $("button").click(function() { var msg = ""; msg += "Inner Height of div: " + $("#demo").innerHeight() + "<br>"; $("#demo").html(msg); }); }); </script></head><style> #demo { height: 150px; width: 350px; padding: 10px; margin: 3px; border: 1px solid blue; background-color: lightgreen; }</style> <body> <div id="demo"></div> <button>Click Me!!!</button> <p>Click on the button and check the innerHeight of an element (includes padding).</p></body> </html>
Output: before clicking on the “Click Me” button and after clicking on the “Click Me” button.
jQuery-HTML/CSS
JavaScript
JQuery | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n13 Feb, 2019"
},
{
"code": null,
"e": 196,
"s": 28,
"text": "The height() is an inbuilt method in jQuery which is used to check the height of an element but it will not check the padding, border and margin of the element.Syntax:"
},
{
"code": null,
"e": 217,
"s": 196,
"text": "$(\"param\").height()\n"
},
{
"code": null,
"e": 330,
"s": 217,
"text": "Parameters : This function do not accept any parameter.Return value : It returns height of the selected element."
},
{
"code": "<html> <head> <script src=\"https://ajax.googleapis.com/ajax/libs/ jquery/3.3.1/jquery.min.js\"></script> <script> $(document).ready(function() { $(\"button\").click(function() { var msg = \"\"; msg += \"height of div: \" + $(\"#demo\").height(); $(\"#demo\").html(msg); }); }); </script> <style> #demo { height: 150px; width: 350px; padding: 10px; margin: 3px; border: 1px solid blue; background-color: lightgreen; } </style></head> <body> <div id=\"demo\"></div> <button>Click Me!!!</button> <p>Click on the button and check the height of the element(excluding padding).</p></body> </html>",
"e": 1118,
"s": 330,
"text": null
},
{
"code": null,
"e": 1199,
"s": 1118,
"text": "Output:Before clicking on “Click Me” button-After clicking on “Click Me” button-"
},
{
"code": null,
"e": 1316,
"s": 1199,
"text": "jQuery also include innerHeight() method i.e, it used to check inner height of the element including padding.Syntax:"
},
{
"code": null,
"e": 1342,
"s": 1316,
"text": "$(\"param\").innerHeight()\n"
},
{
"code": null,
"e": 1471,
"s": 1342,
"text": "Parameters: This function do not accept any parameter.Return value: It returns the inner height of the selected element.Code #2:"
},
{
"code": "<html> <head> <script src=\"https://ajax.googleapis.com/ajax/libs/ jquery/3.3.1/jquery.min.js\"></script> <script> $(document).ready(function() { $(\"button\").click(function() { var msg = \"\"; msg += \"Inner Height of div: \" + $(\"#demo\"). innerHeight() + \"</br>\"; $(\"#demo\").html(msg); }); }); </script></head><style> #demo { height: 150px; width: 350px; padding: 10px; margin: 3px; border: 1px solid blue; background-color: lightgreen; }</style> <body> <div id=\"demo\"></div> <button>Click Me!!!</button> <p>Click on the button and check the innerHeight of an element(includes padding).</p></body> </html>",
"e": 2267,
"s": 1471,
"text": null
},
{
"code": null,
"e": 2348,
"s": 2267,
"text": "Output:Before clicking on “Click Me” button-After clicking on “Click Me” button-"
},
{
"code": null,
"e": 2364,
"s": 2348,
"text": "jQuery-HTML/CSS"
},
{
"code": null,
"e": 2375,
"s": 2364,
"text": "JavaScript"
},
{
"code": null,
"e": 2382,
"s": 2375,
"text": "JQuery"
}
] |
How to validate MAC address using Regular Expression | 27 Jan, 2021
Given string str, the task is to check whether the given string is a valid MAC address or not by using Regular Expression.
A valid MAC address must satisfy the following conditions:
It must contain 12 hexadecimal digits.
One way to represent them is to form six pairs of the characters separated with a hyphen (-) or colon(:). For example, 01-23-45-67-89-AB is a valid MAC address.
Another way to represent them is to form three groups of four hexadecimal digits separated by dots(.). For example, 0123.4567.89AB is a valid MAC address.
Examples:
Input: str = “01-23-45-67-89-AB”; Output: true Explanation: The given string satisfies all the above mentioned conditions. Therefore, it is a valid MAC address.
Input: str = “01-23-45-67-89-AH”; Output: false Explanation: The given string contains ‘H’, the valid hexadecimal digits should be followed by letter from a-f, A-F, and 0-9. Therefore, it is not a valid MAC address.
Input: str = “01-23-45-67-AH”; Output: false Explanation: The given string has five groups of two hexadecimal digits. Therefore, it is not a valid MAC address.
Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer.
Get the String.
Create a regular expression to check valid MAC address as mentioned below:
regex = “^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})|([0-9a-fA-F]{4}\\.[0-9a-fA-F]{4}\\.[0-9a-fA-F]{4})$”;
Where:
^ represents the starting of the string.
([0-9A-Fa-f]{2}[:-]){5} represents the five groups of two hexadecimal digits separated by hyphens (-) or colons (:).
([0-9A-Fa-f]{2}) represents one group of two hexadecimal digits.
| represents alternation (or).
( represents the starting of the group.
[0-9a-fA-F]{4}\\. represents the first part of four hexadecimal digits separated by dots (.).
[0-9a-fA-F]{4}\\. represents the second part of four hexadecimal digits separated by dots (.).
[0-9a-fA-F]{4} represents the third part of four hexadecimal digits.
) represents the ending of the group.
$ represents the ending of the string.
Match the given string with the Regular Expression. In Java, this can be done by using Pattern.matcher().
Return true if the string matches with the given regular expression, else return false.
Below is the implementation of the above approach:
Java
Python3
C++
// Java program to validate// MAC address using// regular expression import java.util.regex.*;class GFG { // Function to validate // MAC address // using regular expression public static boolean isValidMACAddress(String str) { // Regex to check valid // MAC address String regex = "^([0-9A-Fa-f]{2}[:-])" + "{5}([0-9A-Fa-f]{2})|" + "([0-9a-fA-F]{4}\\." + "[0-9a-fA-F]{4}\\." + "[0-9a-fA-F]{4})$"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (str == null) { return false; } // Find match between given string // and regular expression // uSing Pattern.matcher() Matcher m = p.matcher(str); // Return if the string // matched the ReGex return m.matches(); } // Driver code public static void main(String args[]) { // Test Case 1: String str1 = "01-23-45-67-89-AB"; System.out.println(isValidMACAddress(str1)); // Test Case 2: String str2 = "01:23:45:67:89:AB"; System.out.println(isValidMACAddress(str2)); // Test Case 3: String str3 = "0123.4567.89AB"; System.out.println(isValidMACAddress(str3)); // Test Case 4: String str4 = "01-23-45-67-89-AH"; System.out.println(isValidMACAddress(str4)); // Test Case 5: String str5 = "01-23-45-67-AH"; System.out.println(isValidMACAddress(str5)); }}
# Python3 program to validate# MAC address using# using regular expressionimport re # Function to validate MAC address. def isValidMACAddress(str): # Regex to check valid # MAC address regex = ("^([0-9A-Fa-f]{2}[:-])" + "{5}([0-9A-Fa-f]{2})|" + "([0-9a-fA-F]{4}\\." + "[0-9a-fA-F]{4}\\." + "[0-9a-fA-F]{4})$") # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if (str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1:str1 = "01-23-45-67-89-AB"print(isValidMACAddress(str1)) # Test Case 2:str2 = "01:23:45:67:89:AB"print(isValidMACAddress(str2)) # Test Case 3:str3 = "0123.4567.89AB"print(isValidMACAddress(str3)) # Test Case 4:str4 = "01-23-45-67-89-AH"print(isValidMACAddress(str4)) # Test Case 5:str5 = "01-23-45-67-AH"print(isValidMACAddress(str5)) # This code is contributed by avanitrachhadiya2155
// C++ program to validate the// MAC address// using Regular Expression#include <iostream>#include <regex>using namespace std; // Function to validate the MAC addressbool isValidMACAddress(string str){ // Regex to check valid MAC address const regex pattern( "^([0-9A-Fa-f]{2}[:-]){5}" "([0-9A-Fa-f]{2})|([0-9a-" "fA-F]{4}\\.[0-9a-fA-F]" "{4}\\.[0-9a-fA-F]{4})$"); // If the MAC address // is empty return false if (str.empty()) { return false; } // Return true if the MAC address // matched the ReGex if (regex_match(str, pattern)) { return true; } else { return false; }} // Driver Codeint main(){ // Test Case 1: string str1 = "01-23-45-67-89-AB"; cout << isValidMACAddress(str1) << endl; // Test Case 2: string str2 = "01:23:45:67:89:AB"; cout << isValidMACAddress(str2) << endl; // Test Case 3: string str3 = "0123.4567.89AB"; cout << isValidMACAddress(str3) << endl; // Test Case 4: string str4 = "01-23-45-67-89-AH"; cout << isValidMACAddress(str4) << endl; // Test Case 5: string str5 = "01-23-45-67-AH"; cout << isValidMACAddress(str5) << endl; return 0;} // This code is contributed by yuvraj_chandra
true
true
true
false
false
avanitrachhadiya2155
yuvraj_chandra
Computer Networks-IP Addressing
CPP-regex
java-regular-expression
regular-expression
Computer Networks
Pattern Searching
Strings
Strings
Pattern Searching
Computer Networks | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n27 Jan, 2021"
},
{
"code": null,
"e": 177,
"s": 54,
"text": "Given string str, the task is to check whether the given string is a valid MAC address or not by using Regular Expression."
},
{
"code": null,
"e": 237,
"s": 177,
"text": "A valid MAC address must satisfy the following conditions: "
},
{
"code": null,
"e": 590,
"s": 237,
"text": "It must contain 12 hexadecimal digits.One way to represent them is to form six pairs of the characters separated with a hyphen (-) or colon(:). For example, 01-23-45-67-89-AB is a valid MAC address.Another way to represent them is to form three groups of four hexadecimal digits separated by dots(.). For example, 0123.4567.89AB is a valid MAC address."
},
{
"code": null,
"e": 629,
"s": 590,
"text": "It must contain 12 hexadecimal digits."
},
{
"code": null,
"e": 790,
"s": 629,
"text": "One way to represent them is to form six pairs of the characters separated with a hyphen (-) or colon(:). For example, 01-23-45-67-89-AB is a valid MAC address."
},
{
"code": null,
"e": 945,
"s": 790,
"text": "Another way to represent them is to form three groups of four hexadecimal digits separated by dots(.). For example, 0123.4567.89AB is a valid MAC address."
},
{
"code": null,
"e": 956,
"s": 945,
"text": "Examples: "
},
{
"code": null,
"e": 1117,
"s": 956,
"text": "Input: str = “01-23-45-67-89-AB”; Output: true Explanation: The given string satisfies all the above mentioned conditions. Therefore, it is a valid MAC address."
},
{
"code": null,
"e": 1333,
"s": 1117,
"text": "Input: str = “01-23-45-67-89-AH”; Output: false Explanation: The given string contains ‘H’, the valid hexadecimal digits should be followed by letter from a-f, A-F, and 0-9. Therefore, it is not a valid MAC address."
},
{
"code": null,
"e": 1495,
"s": 1333,
"text": "Input: str = “01-23-45-67-AH”; Output: false Explanation: The given string has five groups of two hexadecimal digits. Therefore, it is not a valid MAC address. "
},
{
"code": null,
"e": 1626,
"s": 1495,
"text": "Approach: The idea is to use Regular Expression to solve this problem. The following steps can be followed to compute the answer. "
},
{
"code": null,
"e": 1642,
"s": 1626,
"text": "Get the String."
},
{
"code": null,
"e": 1717,
"s": 1642,
"text": "Create a regular expression to check valid MAC address as mentioned below:"
},
{
"code": null,
"e": 1822,
"s": 1717,
"text": "regex = “^([0-9A-Fa-f]{2}[:-]){5}([0-9A-Fa-f]{2})|([0-9a-fA-F]{4}\\\\.[0-9a-fA-F]{4}\\\\.[0-9a-fA-F]{4})$”; "
},
{
"code": null,
"e": 2443,
"s": 1822,
"text": "Where: ^ represents the starting of the string.([0-9A-Fa-f]{2}[:-]){5} represents the five groups of two hexadecimal digits separated by hyphens (-) or colons (:)([0-9A-Fa-f]{2}) represents the one groups of two hexadecimal digits.| represents the or.( represents the starting of the group.[0-9a-fA-F]{4}\\\\. represents the first part of four hexadecimal digits separated by dots (.).[0-9a-fA-F]{4}\\\\. represents the second part of four hexadecimal digits separated by dots (.).[0-9a-fA-F]{4} represents the third part of four hexadecimal digits.) represents the ending of the group.$ represents the ending of the string."
},
{
"code": null,
"e": 2484,
"s": 2443,
"text": "^ represents the starting of the string."
},
{
"code": null,
"e": 2600,
"s": 2484,
"text": "([0-9A-Fa-f]{2}[:-]){5} represents the five groups of two hexadecimal digits separated by hyphens (-) or colons (:)"
},
{
"code": null,
"e": 2670,
"s": 2600,
"text": "([0-9A-Fa-f]{2}) represents the one groups of two hexadecimal digits."
},
{
"code": null,
"e": 2691,
"s": 2670,
"text": "| represents the or."
},
{
"code": null,
"e": 2731,
"s": 2691,
"text": "( represents the starting of the group."
},
{
"code": null,
"e": 2825,
"s": 2731,
"text": "[0-9a-fA-F]{4}\\\\. represents the first part of four hexadecimal digits separated by dots (.)."
},
{
"code": null,
"e": 2920,
"s": 2825,
"text": "[0-9a-fA-F]{4}\\\\. represents the second part of four hexadecimal digits separated by dots (.)."
},
{
"code": null,
"e": 2989,
"s": 2920,
"text": "[0-9a-fA-F]{4} represents the third part of four hexadecimal digits."
},
{
"code": null,
"e": 3027,
"s": 2989,
"text": ") represents the ending of the group."
},
{
"code": null,
"e": 3066,
"s": 3027,
"text": "$ represents the ending of the string."
},
{
"code": null,
"e": 3172,
"s": 3066,
"text": "Match the given string with the Regular Expression. In Java, this can be done by using Pattern.matcher()."
},
{
"code": null,
"e": 3260,
"s": 3172,
"text": "Return true if the string matches with the given regular expression, else return false."
},
{
"code": null,
"e": 3312,
"s": 3260,
"text": "Below is the implementation of the above approach: "
},
{
"code": null,
"e": 3317,
"s": 3312,
"text": "Java"
},
{
"code": null,
"e": 3325,
"s": 3317,
"text": "Python3"
},
{
"code": null,
"e": 3329,
"s": 3325,
"text": "C++"
},
{
"code": "// Java program to validate// MAC address using// regular expression import java.util.regex.*;class GFG { // Function to validate // MAC address // using regular expression public static boolean isValidMACAddress(String str) { // Regex to check valid // MAC address String regex = \"^([0-9A-Fa-f]{2}[:-])\" + \"{5}([0-9A-Fa-f]{2})|\" + \"([0-9a-fA-F]{4}\\\\.\" + \"[0-9a-fA-F]{4}\\\\.\" + \"[0-9a-fA-F]{4})$\"; // Compile the ReGex Pattern p = Pattern.compile(regex); // If the string is empty // return false if (str == null) { return false; } // Find match between given string // and regular expression // uSing Pattern.matcher() Matcher m = p.matcher(str); // Return if the string // matched the ReGex return m.matches(); } // Driver code public static void main(String args[]) { // Test Case 1: String str1 = \"01-23-45-67-89-AB\"; System.out.println(isValidMACAddress(str1)); // Test Case 2: String str2 = \"01:23:45:67:89:AB\"; System.out.println(isValidMACAddress(str2)); // Test Case 3: String str3 = \"0123.4567.89AB\"; System.out.println(isValidMACAddress(str3)); // Test Case 4: String str4 = \"01-23-45-67-89-AH\"; System.out.println(isValidMACAddress(str4)); // Test Case 5: String str5 = \"01-23-45-67-AH\"; System.out.println(isValidMACAddress(str5)); }}",
"e": 4941,
"s": 3329,
"text": null
},
{
"code": "# Python3 program to validate# MAC address using# using regular expressionimport re # Function to validate MAC address. def isValidMACAddress(str): # Regex to check valid # MAC address regex = (\"^([0-9A-Fa-f]{2}[:-])\" + \"{5}([0-9A-Fa-f]{2})|\" + \"([0-9a-fA-F]{4}\\\\.\" + \"[0-9a-fA-F]{4}\\\\.\" + \"[0-9a-fA-F]{4})$\") # Compile the ReGex p = re.compile(regex) # If the string is empty # return false if (str == None): return False # Return if the string # matched the ReGex if(re.search(p, str)): return True else: return False # Driver code # Test Case 1:str1 = \"01-23-45-67-89-AB\"print(isValidMACAddress(str1)) # Test Case 2:str2 = \"01:23:45:67:89:AB\"print(isValidMACAddress(str2)) # Test Case 3:str3 = \"0123.4567.89AB\"print(isValidMACAddress(str3)) # Test Case 4:str4 = \"01-23-45-67-89-AH\"print(isValidMACAddress(str4)) # Test Case 5:str5 = \"01-23-45-67-AH\"print(isValidMACAddress(str5)) # This code is contributed by avanitrachhadiya2155",
"e": 5986,
"s": 4941,
"text": null
},
{
"code": "// C++ program to validate the// MAC address// using Regular Expression#include <iostream>#include <regex>using namespace std; // Function to validate the MAC addressbool isValidMACAddress(string str){ // Regex to check valid MAC address const regex pattern( \"^([0-9A-Fa-f]{2}[:-]){5}\" \"([0-9A-Fa-f]{2})|([0-9a-\" \"fA-F]{4}\\\\.[0-9a-fA-F]\" \"{4}\\\\.[0-9a-fA-F]{4})$\"); // If the MAC address // is empty return false if (str.empty()) { return false; } // Return true if the MAC address // matched the ReGex if (regex_match(str, pattern)) { return true; } else { return false; }} // Driver Codeint main(){ // Test Case 1: string str1 = \"01-23-45-67-89-AB\"; cout << isValidMACAddress(str1) << endl; // Test Case 2: string str2 = \"01:23:45:67:89:AB\"; cout << isValidMACAddress(str2) << endl; // Test Case 3: string str3 = \"0123.4567.89AB\"; cout << isValidMACAddress(str3) << endl; // Test Case 4: string str4 = \"01-23-45-67-89-AH\"; cout << isValidMACAddress(str4) << endl; // Test Case 5: string str5 = \"01-23-45-67-AH\"; cout << isValidMACAddress(str5) << endl; return 0;} // This code is contributed by yuvraj_chandra",
"e": 7235,
"s": 5986,
"text": null
},
{
"code": null,
"e": 7262,
"s": 7235,
"text": "true\ntrue\ntrue\nfalse\nfalse"
},
{
"code": null,
"e": 7283,
"s": 7262,
"text": "avanitrachhadiya2155"
},
{
"code": null,
"e": 7298,
"s": 7283,
"text": "yuvraj_chandra"
},
{
"code": null,
"e": 7330,
"s": 7298,
"text": "Computer Networks-IP Addressing"
},
{
"code": null,
"e": 7340,
"s": 7330,
"text": "CPP-regex"
},
{
"code": null,
"e": 7364,
"s": 7340,
"text": "java-regular-expression"
},
{
"code": null,
"e": 7383,
"s": 7364,
"text": "regular-expression"
},
{
"code": null,
"e": 7401,
"s": 7383,
"text": "Computer Networks"
},
{
"code": null,
"e": 7419,
"s": 7401,
"text": "Pattern Searching"
},
{
"code": null,
"e": 7427,
"s": 7419,
"text": "Strings"
},
{
"code": null,
"e": 7435,
"s": 7427,
"text": "Strings"
},
{
"code": null,
"e": 7453,
"s": 7435,
"text": "Pattern Searching"
},
{
"code": null,
"e": 7471,
"s": 7453,
"text": "Computer Networks"
},
{
"code": null,
"e": 7569,
"s": 7471,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 7601,
"s": 7569,
"text": "Differences between TCP and UDP"
},
{
"code": null,
"e": 7627,
"s": 7601,
"text": "Types of Network Topology"
},
{
"code": null,
"e": 7657,
"s": 7627,
"text": "RSA Algorithm in Cryptography"
},
{
"code": null,
"e": 7687,
"s": 7657,
"text": "GSM in Wireless Communication"
},
{
"code": null,
"e": 7716,
"s": 7687,
"text": "Socket Programming in Python"
},
{
"code": null,
"e": 7752,
"s": 7716,
"text": "KMP Algorithm for Pattern Searching"
},
{
"code": null,
"e": 7795,
"s": 7752,
"text": "Rabin-Karp Algorithm for Pattern Searching"
},
{
"code": null,
"e": 7852,
"s": 7795,
"text": "Check if an URL is valid or not using Regular Expression"
},
{
"code": null,
"e": 7894,
"s": 7852,
"text": "Check if a string is substring of another"
}
] |
How to change CSS styles of elements in JavaScript? | JavaScript can change CSS styles such as color, font size, etc. of elements using methods such as getElementById(), getElementsByClassName(), etc.
In the following example font style and font size of the elements have changed using getElementById() method.
Live Demo
In the following example, using the style properties "style.fontSize" and "style.fontStyle", the provided texts are changed to a font size of "35px" and a font style of "italic".
<html>
<body>
<p id="size">JavaScript can change the style of an HTML element.</p>
<p id="style">JavaScript can change the style of an HTML element.</p>
<button type="button"
onclick="document.getElementById('size').style.fontSize='35px'">Size</button>
<button type="button" onclick="document.getElementById('style')
.style.fontStyle='italic'">Style</button>
</body>
</html>
On executing the above code we will get the following on the browser.
After clicking the above buttons, the first text will be changed to a different font size and the second text will be changed to a different font style, as shown in the output.
In the following example, the color of the text is changed to blue using the style property "style.color".
Live Demo
<html>
<body>
<p id="color">JavaScript can change the color of an HTML element.</p>
<button type="button" onclick="document.getElementById('color').
style.color='blue'">Color Me</button>
</body>
</html>
After executing above code we will get the following on the browser window.
On clicking on the "color me" button the provided text's color will be changed to 'blue' as shown in the output. | [
{
"code": null,
"e": 1336,
"s": 1187,
"text": "JavaScript can change Css styles such as color, font size etc. of elements using some methods such as getElementById(), getElementByClassName() etc."
},
{
"code": null,
"e": 1446,
"s": 1336,
"text": "In the following example font style and font size of the elements have changed using getElementById() method."
},
{
"code": null,
"e": 1456,
"s": 1446,
"text": "Live Demo"
},
{
"code": null,
"e": 1626,
"s": 1456,
"text": "In the following example, using style commands \"style.fontSize\" and \"style.fontStyle\", the provided texts are changed to a font size of \"35px\" and font style to \"italic\""
},
{
"code": null,
"e": 2016,
"s": 1626,
"text": "<html>\n<body>\n <p id=\"size\">JavaScript can change the style of an HTML element.</p>\n <p id=\"style\">JavaScript can change the style of an HTML element.</p>\n <button type=\"button\"\n onclick=\"document.getElementById('size').style.fontSize='35px'\">Size</button>\n <button type=\"button\" onclick=\"document.getElementById('style')\n.style.fontStyle='italic'\">Style</button>\n</body>\n</html>"
},
{
"code": null,
"e": 2086,
"s": 2016,
"text": "On executing the above code we will get the following on the browser."
},
{
"code": null,
"e": 2251,
"s": 2086,
"text": "After clicking the above buttons first text will be changed to different font size and second text will be changed to a different font style as shown in the output."
},
{
"code": null,
"e": 2353,
"s": 2251,
"text": "In the following example the color of the text has changed to blue using style command \"style.color\"."
},
{
"code": null,
"e": 2363,
"s": 2353,
"text": "Live Demo"
},
{
"code": null,
"e": 2575,
"s": 2363,
"text": "<html>\n<body>\n <p id=\"color\">JavaScript can change the color of an HTML element.</p>\n <button type=\"button\" onclick=\"document.getElementById('color').\n style.color='blue'\">Color Me</button>\n</body>\n</html>"
},
{
"code": null,
"e": 2652,
"s": 2575,
"text": "After executing above code we will get the following on the browser window. "
},
{
"code": null,
"e": 2765,
"s": 2652,
"text": "On clicking on the \"color me\" button the provided text's color will be changed to 'blue' as shown in the output."
}
] |
Gerrit - Configuring Git-Review | Gerrit is built on top of the Git version control system, which is used to fetch code from other hosts, push changes to the code, submit the code for review, and so on. The default remote name in Git is origin, and we tell git-review to use this name 'origin' by running the following command.
$ git config --global gitreview.remote origin | [
{
"code": null,
"e": 2655,
"s": 2372,
"text": "Gerrit is built on top of Git version control system, which extracts the code from other host, pushing changes to the code, submitting the code for review, etc. The default remote name of Git is origin and we tell git-review to use this name 'origin' by using the following command."
}
] |
Reverse Delete Algorithm for Minimum Spanning Tree | 07 Jul, 2022
Reverse Delete algorithm is closely related to Kruskal’s algorithm. In Kruskal’s algorithm, we sort the edges in increasing order of their weights and then pick them one by one. We include the currently picked edge if adding it to the spanning tree does not form a cycle, and we stop once the spanning tree has V-1 edges, where V is the number of vertices.
In the Reverse Delete algorithm, we sort all edges in decreasing order of their weights and then pick them one by one. We keep the currently picked edge only if removing it would disconnect the current graph. The main idea is to delete an edge whenever its deletion does not disconnect the graph.
The Algorithm:
Sort all edges of graph in non-increasing order of edge weights.
Initialize MST as original graph and remove extra edges using step 3.
Pick the highest-weight edge from the remaining edges and check whether deleting it disconnects the graph. If it does, we keep the edge; else we delete the edge and continue (a minimal sketch of this loop is given below).
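A minimal sketch of this loop (illustrative Python with simple data structures; function and variable names are made up, and the full implementations are given later in the article):
Python3
# edges are (weight, u, v) tuples; the graph is kept as adjacency sets
def reverse_delete_mst(n, edges):
    adj = [set() for _ in range(n)]
    for w, u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    # plain DFS connectivity check starting from vertex 0
    def connected():
        seen, stack = {0}, [0]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return len(seen) == n

    mst = []
    for w, u, v in sorted(edges, reverse=True):   # heaviest edge first
        adj[u].discard(v)                         # tentatively delete the edge
        adj[v].discard(u)
        if not connected():                       # deletion disconnects the graph,
            adj[u].add(v)                         # so the edge must stay in the MST
            adj[v].add(u)
            mst.append((u, v, w))
    return mst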
Illustration:
Let us understand with the following example:
If we delete highest weight edge of weight 14, graph doesn’t become disconnected, so we remove it.
Next we delete 11 as deleting it doesn’t disconnect the graph.
Next we delete 10 as deleting it doesn’t disconnect the graph.
Next is 9. We cannot delete 9 as deleting it causes disconnection.
We continue this way and following edges remain in final MST.
Edges in MST
(3, 4)
(0, 7)
(2, 3)
(2, 5)
(0, 1)
(5, 6)
(2, 8)
(6, 7)
Note : In case of same weight edges, we can pick any edge of the same weight edges.
Implementation:
C++
Python3
C#
// C++ program to find Minimum Spanning Tree// of a graph using Reverse Delete Algorithm#include<bits/stdc++.h>using namespace std; // Creating shortcut for an integer pairtypedef pair<int, int> iPair; // Graph class represents a directed graph// using adjacency list representationclass Graph{ int V; // No. of vertices list<int> *adj; vector< pair<int, iPair> > edges; void DFS(int v, bool visited[]); public: Graph(int V); // Constructor // function to add an edge to graph void addEdge(int u, int v, int w); // Returns true if graph is connected bool isConnected(); void reverseDeleteMST();}; Graph::Graph(int V){ this->V = V; adj = new list<int>[V];} void Graph::addEdge(int u, int v, int w){ adj[u].push_back(v); // Add w to v’s list. adj[v].push_back(u); // Add w to v’s list. edges.push_back({w, {u, v}});} void Graph::DFS(int v, bool visited[]){ // Mark the current node as visited and print it visited[v] = true; // Recur for all the vertices adjacent to // this vertex list<int>::iterator i; for (i = adj[v].begin(); i != adj[v].end(); ++i) if (!visited[*i]) DFS(*i, visited);} // Returns true if given graph is connected, else falsebool Graph::isConnected(){ bool visited[V]; memset(visited, false, sizeof(visited)); // Find all reachable vertices from first vertex DFS(0, visited); // If set of reachable vertices includes all, // return true. for (int i=1; i<V; i++) if (visited[i] == false) return false; return true;} // This function assumes that edge (u, v)// exists in graph or not,void Graph::reverseDeleteMST(){ // Sort edges in increasing order on basis of cost sort(edges.begin(), edges.end()); int mst_wt = 0; // Initialize weight of MST cout << "Edges in MST\n"; // Iterate through all sorted edges in // decreasing order of weights for (int i=edges.size()-1; i>=0; i--) { int u = edges[i].second.first; int v = edges[i].second.second; // Remove edge from undirected graph adj[u].remove(v); adj[v].remove(u); // Adding the edge back if removing it // causes disconnection. In this case this // edge becomes part of MST. if (isConnected() == false) { adj[u].push_back(v); adj[v].push_back(u); // This edge is part of MST cout << "(" << u << ", " << v << ") \n"; mst_wt += edges[i].first; } } cout << "Total weight of MST is " << mst_wt;} // Driver codeint main(){ // create the graph given in above figure int V = 9; Graph g(V); // making above shown graph g.addEdge(0, 1, 4); g.addEdge(0, 7, 8); g.addEdge(1, 2, 8); g.addEdge(1, 7, 11); g.addEdge(2, 3, 7); g.addEdge(2, 8, 2); g.addEdge(2, 5, 4); g.addEdge(3, 4, 9); g.addEdge(3, 5, 14); g.addEdge(4, 5, 10); g.addEdge(5, 6, 2); g.addEdge(6, 7, 1); g.addEdge(6, 8, 6); g.addEdge(7, 8, 7); g.reverseDeleteMST(); return 0;}
# Python3 program to find Minimum Spanning Tree# of a graph using Reverse Delete Algorithm # Graph class represents a directed graph# using adjacency list representationclass Graph: def __init__(self, v): # No. of vertices self.v = v self.adj = [0] * v self.edges = [] for i in range(v): self.adj[i] = [] # function to add an edge to graph def addEdge(self, u: int, v: int, w: int): self.adj[u].append(v) # Add w to v’s list. self.adj[v].append(u) # Add w to v’s list. self.edges.append((w, (u, v))) def dfs(self, v: int, visited: list): # Mark the current node as visited and print it visited[v] = True # Recur for all the vertices adjacent to # this vertex for i in self.adj[v]: if not visited[i]: self.dfs(i, visited) # Returns true if graph is connected # Returns true if given graph is connected, else false def connected(self): visited = [False] * self.v # Find all reachable vertices from first vertex self.dfs(0, visited) # If set of reachable vertices includes all, # return true. for i in range(1, self.v): if not visited[i]: return False return True # This function assumes that edge (u, v) # exists in graph or not, def reverseDeleteMST(self): # Sort edges in increasing order on basis of cost self.edges.sort(key = lambda a: a[0]) mst_wt = 0 # Initialize weight of MST print("Edges in MST") # Iterate through all sorted edges in # decreasing order of weights for i in range(len(self.edges) - 1, -1, -1): u = self.edges[i][1][0] v = self.edges[i][1][1] # Remove edge from undirected graph self.adj[u].remove(v) self.adj[v].remove(u) # Adding the edge back if removing it # causes disconnection. In this case this # edge becomes part of MST. if self.connected() == False: self.adj[u].append(v) self.adj[v].append(u) # This edge is part of MST print("( %d, %d )" % (u, v)) mst_wt += self.edges[i][0] print("Total weight of MST is", mst_wt) # Driver Codeif __name__ == "__main__": # create the graph given in above figure V = 9 g = Graph(V) # making above shown graph g.addEdge(0, 1, 4) g.addEdge(0, 7, 8) g.addEdge(1, 2, 8) g.addEdge(1, 7, 11) g.addEdge(2, 3, 7) g.addEdge(2, 8, 2) g.addEdge(2, 5, 4) g.addEdge(3, 4, 9) g.addEdge(3, 5, 14) g.addEdge(4, 5, 10) g.addEdge(5, 6, 2) g.addEdge(6, 7, 1) g.addEdge(6, 8, 6) g.addEdge(7, 8, 7) g.reverseDeleteMST() # This code is contributed by# sanjeev2552
// C# program to find Minimum Spanning Tree// of a graph using Reverse Delete Algorithm using System;using System.Collections.Generic; // class to represent an edgepublic class Edge : IComparable<Edge> { public int u, v, w; public Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; } public int CompareTo(Edge other) { return this.w.CompareTo(other.w); }} // Graph class represents a directed graph// using adjacency list representationpublic class Graph { private int V; // No. of vertices private List<int>[] adj; private List<Edge> edges; public Graph(int v) // Constructor { V = v; adj = new List<int>[ v ]; for (int i = 0; i < v; i++) adj[i] = new List<int>(); edges = new List<Edge>(); } // function to Add an edge public void AddEdge(int u, int v, int w) { adj[u].Add(v); // Add w to v’s list. adj[v].Add(u); // Add w to v’s list. edges.Add(new Edge(u, v, w)); } // function to perform dfs private void DFS(int v, bool[] visited) { // Mark the current node as visited and print it visited[v] = true; // Recur for all the vertices adjacent to // this vertex foreach(int i in adj[v]) { if (!visited[i]) DFS(i, visited); } } // Returns true if given graph is connected, else false private bool IsConnected() { bool[] visited = new bool[V]; // Find all reachable vertices from first vertex DFS(0, visited); // If set of reachable vertices includes all, // return true. for (int i = 1; i < V; i++) { if (visited[i] == false) return false; } return true; } // This function assumes that edge (u, v) // exists in graph or not, public void ReverseDeleteMST() { // Sort edges in increasing order on basis of cost edges.Sort(); int mst_wt = 0; // Initialize weight of MST Console.WriteLine("Edges in MST"); // Iterate through all sorted edges in // decreasing order of weights for (int i = edges.Count - 1; i >= 0; i--) { int u = edges[i].u; int v = edges[i].v; // Remove edge from undirected graph adj[u].Remove(v); adj[v].Remove(u); // Adding the edge back if removing it // causes disconnection. In this case this // edge becomes part of MST. if (IsConnected() == false) { adj[u].Add(v); adj[v].Add(u); // This edge is part of MST Console.WriteLine("({0}, {1})", u, v); mst_wt += edges[i].w; } } Console.WriteLine("Total weight of MST is {0}", mst_wt); }} class GFG { // Driver code static void Main(string[] args) { // create the graph given in above figure int V = 9; Graph g = new Graph(V); // making above shown graph g.AddEdge(0, 1, 4); g.AddEdge(0, 7, 8); g.AddEdge(1, 2, 8); g.AddEdge(1, 7, 11); g.AddEdge(2, 3, 7); g.AddEdge(2, 8, 2); g.AddEdge(2, 5, 4); g.AddEdge(3, 4, 9); g.AddEdge(3, 5, 14); g.AddEdge(4, 5, 10); g.AddEdge(5, 6, 2); g.AddEdge(6, 7, 1); g.AddEdge(6, 8, 6); g.AddEdge(7, 8, 7); g.ReverseDeleteMST(); }} // This code is contributed by cavi4762
Edges in MST
(3, 4)
(0, 7)
(2, 3)
(2, 5)
(0, 1)
(5, 6)
(2, 8)
(6, 7)
Total weight of MST is 37
Notes:
The above implementation is a simple/naive implementation of the Reverse Delete algorithm and can be optimized to O(E log V (log log V)^3) [Source: Wiki]. However, this optimized time complexity is still worse than that of the Prim and Kruskal algorithms for MST.
The above implementation modifies the original graph. We can create a copy of the graph if original graph must be retained.
This article is contributed by Antra Purohit.
sanjeev2552
simmytarika5
cavi4762
hardikkoriintern
MST
Graph
Greedy
Greedy
Graph | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n07 Jul, 2022"
},
{
"code": null,
"e": 433,
"s": 54,
"text": "Reverse Delete algorithm is closely related to Kruskal’s algorithm. In Kruskal’s algorithm what we do is : Sort edges by increasing order of their weights. After sorting, we one by one pick edges in increasing order. We include current picked edge if by including this in spanning tree not form any cycle until there are V-1 edges in spanning tree, where V = number of vertices."
},
{
"code": null,
"e": 761,
"s": 433,
"text": "In Reverse Delete algorithm, we sort all edges in decreasing order of their weights. After sorting, we one by one pick edges in decreasing order. We include current picked edge if excluding current edge causes disconnection in current graph. The main idea is delete edge if its deletion does not lead to disconnection of graph."
},
{
"code": null,
"e": 777,
"s": 761,
"text": "The Algorithm :"
},
{
"code": null,
"e": 1104,
"s": 777,
"text": "Sort all edges of graph in non-increasing order of edge weights.Initialize MST as original graph and remove extra edges using step 3.Pick highest weight edge from remaining edges and check if deleting the edge disconnects the graph or not. If disconnects, then we don’t delete the edge.Else we delete the edge and continue. "
},
{
"code": null,
"e": 1169,
"s": 1104,
"text": "Sort all edges of graph in non-increasing order of edge weights."
},
{
"code": null,
"e": 1239,
"s": 1169,
"text": "Initialize MST as original graph and remove extra edges using step 3."
},
{
"code": null,
"e": 1433,
"s": 1239,
"text": "Pick highest weight edge from remaining edges and check if deleting the edge disconnects the graph or not. If disconnects, then we don’t delete the edge.Else we delete the edge and continue. "
},
{
"code": null,
"e": 1448,
"s": 1433,
"text": "Illustration: "
},
{
"code": null,
"e": 1494,
"s": 1448,
"text": "Let us understand with the following example:"
},
{
"code": null,
"e": 1595,
"s": 1494,
"text": "If we delete highest weight edge of weight 14, graph doesn’t become disconnected, so we remove it. "
},
{
"code": null,
"e": 1660,
"s": 1595,
"text": "Next we delete 11 as deleting it doesn’t disconnect the graph. "
},
{
"code": null,
"e": 1725,
"s": 1660,
"text": "Next we delete 10 as deleting it doesn’t disconnect the graph. "
},
{
"code": null,
"e": 1794,
"s": 1725,
"text": "Next is 9. We cannot delete 9 as deleting it causes disconnection. "
},
{
"code": null,
"e": 1857,
"s": 1794,
"text": "We continue this way and following edges remain in final MST. "
},
{
"code": null,
"e": 1934,
"s": 1857,
"text": "Edges in MST\n(3, 4) \n(0, 7) \n(2, 3) \n(2, 5) \n(0, 1) \n(5, 6) \n(2, 8) \n(6, 7) "
},
{
"code": null,
"e": 2018,
"s": 1934,
"text": "Note : In case of same weight edges, we can pick any edge of the same weight edges."
},
{
"code": null,
"e": 2034,
"s": 2018,
"text": "Implementation:"
},
{
"code": null,
"e": 2038,
"s": 2034,
"text": "C++"
},
{
"code": null,
"e": 2046,
"s": 2038,
"text": "Python3"
},
{
"code": null,
"e": 2049,
"s": 2046,
"text": "C#"
},
{
"code": "// C++ program to find Minimum Spanning Tree// of a graph using Reverse Delete Algorithm#include<bits/stdc++.h>using namespace std; // Creating shortcut for an integer pairtypedef pair<int, int> iPair; // Graph class represents a directed graph// using adjacency list representationclass Graph{ int V; // No. of vertices list<int> *adj; vector< pair<int, iPair> > edges; void DFS(int v, bool visited[]); public: Graph(int V); // Constructor // function to add an edge to graph void addEdge(int u, int v, int w); // Returns true if graph is connected bool isConnected(); void reverseDeleteMST();}; Graph::Graph(int V){ this->V = V; adj = new list<int>[V];} void Graph::addEdge(int u, int v, int w){ adj[u].push_back(v); // Add w to v’s list. adj[v].push_back(u); // Add w to v’s list. edges.push_back({w, {u, v}});} void Graph::DFS(int v, bool visited[]){ // Mark the current node as visited and print it visited[v] = true; // Recur for all the vertices adjacent to // this vertex list<int>::iterator i; for (i = adj[v].begin(); i != adj[v].end(); ++i) if (!visited[*i]) DFS(*i, visited);} // Returns true if given graph is connected, else falsebool Graph::isConnected(){ bool visited[V]; memset(visited, false, sizeof(visited)); // Find all reachable vertices from first vertex DFS(0, visited); // If set of reachable vertices includes all, // return true. for (int i=1; i<V; i++) if (visited[i] == false) return false; return true;} // This function assumes that edge (u, v)// exists in graph or not,void Graph::reverseDeleteMST(){ // Sort edges in increasing order on basis of cost sort(edges.begin(), edges.end()); int mst_wt = 0; // Initialize weight of MST cout << \"Edges in MST\\n\"; // Iterate through all sorted edges in // decreasing order of weights for (int i=edges.size()-1; i>=0; i--) { int u = edges[i].second.first; int v = edges[i].second.second; // Remove edge from undirected graph adj[u].remove(v); adj[v].remove(u); // Adding the edge back if removing it // causes disconnection. In this case this // edge becomes part of MST. if (isConnected() == false) { adj[u].push_back(v); adj[v].push_back(u); // This edge is part of MST cout << \"(\" << u << \", \" << v << \") \\n\"; mst_wt += edges[i].first; } } cout << \"Total weight of MST is \" << mst_wt;} // Driver codeint main(){ // create the graph given in above figure int V = 9; Graph g(V); // making above shown graph g.addEdge(0, 1, 4); g.addEdge(0, 7, 8); g.addEdge(1, 2, 8); g.addEdge(1, 7, 11); g.addEdge(2, 3, 7); g.addEdge(2, 8, 2); g.addEdge(2, 5, 4); g.addEdge(3, 4, 9); g.addEdge(3, 5, 14); g.addEdge(4, 5, 10); g.addEdge(5, 6, 2); g.addEdge(6, 7, 1); g.addEdge(6, 8, 6); g.addEdge(7, 8, 7); g.reverseDeleteMST(); return 0;}",
"e": 5115,
"s": 2049,
"text": null
},
{
"code": "# Python3 program to find Minimum Spanning Tree# of a graph using Reverse Delete Algorithm # Graph class represents a directed graph# using adjacency list representationclass Graph: def __init__(self, v): # No. of vertices self.v = v self.adj = [0] * v self.edges = [] for i in range(v): self.adj[i] = [] # function to add an edge to graph def addEdge(self, u: int, v: int, w: int): self.adj[u].append(v) # Add w to v’s list. self.adj[v].append(u) # Add w to v’s list. self.edges.append((w, (u, v))) def dfs(self, v: int, visited: list): # Mark the current node as visited and print it visited[v] = True # Recur for all the vertices adjacent to # this vertex for i in self.adj[v]: if not visited[i]: self.dfs(i, visited) # Returns true if graph is connected # Returns true if given graph is connected, else false def connected(self): visited = [False] * self.v # Find all reachable vertices from first vertex self.dfs(0, visited) # If set of reachable vertices includes all, # return true. for i in range(1, self.v): if not visited[i]: return False return True # This function assumes that edge (u, v) # exists in graph or not, def reverseDeleteMST(self): # Sort edges in increasing order on basis of cost self.edges.sort(key = lambda a: a[0]) mst_wt = 0 # Initialize weight of MST print(\"Edges in MST\") # Iterate through all sorted edges in # decreasing order of weights for i in range(len(self.edges) - 1, -1, -1): u = self.edges[i][1][0] v = self.edges[i][1][1] # Remove edge from undirected graph self.adj[u].remove(v) self.adj[v].remove(u) # Adding the edge back if removing it # causes disconnection. In this case this # edge becomes part of MST. if self.connected() == False: self.adj[u].append(v) self.adj[v].append(u) # This edge is part of MST print(\"( %d, %d )\" % (u, v)) mst_wt += self.edges[i][0] print(\"Total weight of MST is\", mst_wt) # Driver Codeif __name__ == \"__main__\": # create the graph given in above figure V = 9 g = Graph(V) # making above shown graph g.addEdge(0, 1, 4) g.addEdge(0, 7, 8) g.addEdge(1, 2, 8) g.addEdge(1, 7, 11) g.addEdge(2, 3, 7) g.addEdge(2, 8, 2) g.addEdge(2, 5, 4) g.addEdge(3, 4, 9) g.addEdge(3, 5, 14) g.addEdge(4, 5, 10) g.addEdge(5, 6, 2) g.addEdge(6, 7, 1) g.addEdge(6, 8, 6) g.addEdge(7, 8, 7) g.reverseDeleteMST() # This code is contributed by# sanjeev2552",
"e": 7965,
"s": 5115,
"text": null
},
{
"code": "// C# program to find Minimum Spanning Tree// of a graph using Reverse Delete Algorithm using System;using System.Collections.Generic; // class to represent an edgepublic class Edge : IComparable<Edge> { public int u, v, w; public Edge(int u, int v, int w) { this.u = u; this.v = v; this.w = w; } public int CompareTo(Edge other) { return this.w.CompareTo(other.w); }} // Graph class represents a directed graph// using adjacency list representationpublic class Graph { private int V; // No. of vertices private List<int>[] adj; private List<Edge> edges; public Graph(int v) // Constructor { V = v; adj = new List<int>[ v ]; for (int i = 0; i < v; i++) adj[i] = new List<int>(); edges = new List<Edge>(); } // function to Add an edge public void AddEdge(int u, int v, int w) { adj[u].Add(v); // Add w to v’s list. adj[v].Add(u); // Add w to v’s list. edges.Add(new Edge(u, v, w)); } // function to perform dfs private void DFS(int v, bool[] visited) { // Mark the current node as visited and print it visited[v] = true; // Recur for all the vertices adjacent to // this vertex foreach(int i in adj[v]) { if (!visited[i]) DFS(i, visited); } } // Returns true if given graph is connected, else false private bool IsConnected() { bool[] visited = new bool[V]; // Find all reachable vertices from first vertex DFS(0, visited); // If set of reachable vertices includes all, // return true. for (int i = 1; i < V; i++) { if (visited[i] == false) return false; } return true; } // This function assumes that edge (u, v) // exists in graph or not, public void ReverseDeleteMST() { // Sort edges in increasing order on basis of cost edges.Sort(); int mst_wt = 0; // Initialize weight of MST Console.WriteLine(\"Edges in MST\"); // Iterate through all sorted edges in // decreasing order of weights for (int i = edges.Count - 1; i >= 0; i--) { int u = edges[i].u; int v = edges[i].v; // Remove edge from undirected graph adj[u].Remove(v); adj[v].Remove(u); // Adding the edge back if removing it // causes disconnection. In this case this // edge becomes part of MST. if (IsConnected() == false) { adj[u].Add(v); adj[v].Add(u); // This edge is part of MST Console.WriteLine(\"({0}, {1})\", u, v); mst_wt += edges[i].w; } } Console.WriteLine(\"Total weight of MST is {0}\", mst_wt); }} class GFG { // Driver code static void Main(string[] args) { // create the graph given in above figure int V = 9; Graph g = new Graph(V); // making above shown graph g.AddEdge(0, 1, 4); g.AddEdge(0, 7, 8); g.AddEdge(1, 2, 8); g.AddEdge(1, 7, 11); g.AddEdge(2, 3, 7); g.AddEdge(2, 8, 2); g.AddEdge(2, 5, 4); g.AddEdge(3, 4, 9); g.AddEdge(3, 5, 14); g.AddEdge(4, 5, 10); g.AddEdge(5, 6, 2); g.AddEdge(6, 7, 1); g.AddEdge(6, 8, 6); g.AddEdge(7, 8, 7); g.ReverseDeleteMST(); }} // This code is contributed by cavi4762",
"e": 11518,
"s": 7965,
"text": null
},
{
"code": null,
"e": 11621,
"s": 11518,
"text": "Edges in MST\n(3, 4) \n(0, 7) \n(2, 3) \n(2, 5) \n(0, 1) \n(5, 6) \n(2, 8) \n(6, 7) \nTotal weight of MST is 37"
},
{
"code": null,
"e": 11630,
"s": 11621,
"text": "Notes : "
},
{
"code": null,
"e": 11995,
"s": 11630,
"text": "The above implementation is a simple/naive implementation of Reverse Delete algorithm and can be optimized to O(E log V (log log V)3) [Source : Wiki]. But this optimized time complexity is still less than Prim and Kruskal Algorithms for MST.The above implementation modifies the original graph. We can create a copy of the graph if original graph must be retained."
},
{
"code": null,
"e": 12237,
"s": 11995,
"text": "The above implementation is a simple/naive implementation of Reverse Delete algorithm and can be optimized to O(E log V (log log V)3) [Source : Wiki]. But this optimized time complexity is still less than Prim and Kruskal Algorithms for MST."
},
{
"code": null,
"e": 12361,
"s": 12237,
"text": "The above implementation modifies the original graph. We can create a copy of the graph if original graph must be retained."
},
{
"code": null,
"e": 12659,
"s": 12361,
"text": "This article is contributed by Antra Purohit. If you like GeeksforGeeks and would like to contribute, you can also write an article using write.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. "
},
{
"code": null,
"e": 12671,
"s": 12659,
"text": "sanjeev2552"
},
{
"code": null,
"e": 12684,
"s": 12671,
"text": "simmytarika5"
},
{
"code": null,
"e": 12693,
"s": 12684,
"text": "cavi4762"
},
{
"code": null,
"e": 12710,
"s": 12693,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 12714,
"s": 12710,
"text": "MST"
},
{
"code": null,
"e": 12720,
"s": 12714,
"text": "Graph"
},
{
"code": null,
"e": 12727,
"s": 12720,
"text": "Greedy"
},
{
"code": null,
"e": 12734,
"s": 12727,
"text": "Greedy"
},
{
"code": null,
"e": 12740,
"s": 12734,
"text": "Graph"
},
{
"code": null,
"e": 12838,
"s": 12740,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 12878,
"s": 12838,
"text": "Breadth First Search or BFS for a Graph"
},
{
"code": null,
"e": 12916,
"s": 12878,
"text": "Depth First Search or DFS for a Graph"
},
{
"code": null,
"e": 12946,
"s": 12916,
"text": "Graph and its representations"
},
{
"code": null,
"e": 12966,
"s": 12946,
"text": "Topological Sorting"
},
{
"code": null,
"e": 12999,
"s": 12966,
"text": "Detect Cycle in a Directed Graph"
},
{
"code": null,
"e": 13026,
"s": 12999,
"text": "Program for array rotation"
},
{
"code": null,
"e": 13086,
"s": 13026,
"text": "Write a program to print all permutations of a given string"
},
{
"code": null,
"e": 13105,
"s": 13086,
"text": "Coin Change | DP-7"
},
{
"code": null,
"e": 13168,
"s": 13105,
"text": "Minimum Number of Platforms Required for a Railway/Bus Station"
}
] |
How to find the missing number in a given Array from number 1 to n in Java? | If a single number is missing from an integer array that contains a sequence of numbers, you can find it based on the sum of the numbers or based on the XOR of the numbers.
Based on the sum of the numbers −
The sum of the first n sequential numbers is [n*(n+1)]/2. Using this, get the sum of the first n numbers.
Add all the elements in the array.
Subtract the sum of the numbers in the array from the sum of the n numbers.
import java.util.Scanner;
public class MissingNumber {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter the n value: ");
int n = sc.nextInt();
int inpuArray[] = new int[n];
System.out.println("Enter (n-1) numbers: ");
for(int i=0; i<=n-2; i++) {
inpuArray[i] = sc.nextInt();
}
//Finding the missing number
int sumOfAll = (n*(n+1))/2;
int sumOfArray = 0;
for(int i=0; i<=n-2; i++) {
sumOfArray = sumOfArray+inpuArray[i];
}
int missingNumber = sumOfAll-sumOfArray;
System.out.println("Missing number is: "+missingNumber);
}
}
Enter the n value:
5
Enter (n-1) numbers:
1
2
4
5
Missing number is: 3
Using XOR operation − Another way to find the missing number is using XOR.
Find the XOR of all the numbers up to n.
Find the XOR of all the numbers in the array.
Then find the XOR of both results.
import java.util.Scanner;
public class MissingNumber {
public static void main(String[] args) {
Scanner sc = new Scanner(System.in);
System.out.println("Enter the n value: ");
int n = sc.nextInt();
int inpuArray[] = new int[n];
System.out.println("Enter (n-1) numbers: ");
for(int i=0; i<=n-2; i++) {
inpuArray[i] = sc.nextInt();
}
      //Finding the missing number
      //XOR of the (n-1) elements present in the array
      int xorArray = inpuArray[0];
      for(int i=1; i<=n-2; i++) {
         xorArray = xorArray ^ inpuArray[i];
      }
      //XOR of all the numbers from 1 to n
      int xorAll = 1;
      for(int i=2; i<=n; i++) {
         xorAll = xorAll ^ i;
      }
      //The two XOR results differ only in the missing number
      int missingNumber = xorArray ^ xorAll;
System.out.println("Missing number is: "+missingNumber);
}
}
Enter the n value:
5
Enter (n-1) numbers:
1
2
4
5
Missing number is: 3 | [
{
"code": null,
"e": 1239,
"s": 1062,
"text": "If a single number is missing in an integer array that contains a sequence of numbers values, you can find it basing of the sum of numbers or, basing on the xor of the numbers."
},
{
"code": null,
"e": 1273,
"s": 1239,
"text": "Based on the sum of the numbers −"
},
{
"code": null,
"e": 1379,
"s": 1273,
"text": "The sum of n sequential numbers will be [n*(n+1)]/2. Using this get the sum of the numbers the n numbers."
},
{
"code": null,
"e": 1414,
"s": 1379,
"text": "Add all the elements in the array."
},
{
"code": null,
"e": 1490,
"s": 1414,
"text": "Subtract the sum of the numbers in the array from the sum of the n numbers."
},
{
"code": null,
"e": 2177,
"s": 1490,
"text": "import java.util.Scanner;\npublic class MissingNumber {\n public static void main(String[] args) {\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter the n value: \");\n int n = sc.nextInt();\n int inpuArray[] = new int[n];\n System.out.println(\"Enter (n-1) numbers: \");\n for(int i=0; i<=n-2; i++) {\n inpuArray[i] = sc.nextInt();\n }\n //Finding the missing number\n int sumOfAll = (n*(n+1))/2;\n int sumOfArray = 0;\n for(int i=0; i<=n-2; i++) {\n sumOfArray = sumOfArray+inpuArray[i];\n }\n int missingNumber = sumOfAll-sumOfArray;\n System.out.println(\"Missing number is: \"+missingNumber);\n }\n}"
},
{
"code": null,
"e": 2248,
"s": 2177,
"text": "Enter the n value:\n5\nEnter (n-1) numbers:\n1\n2\n4\n5\nMissing number is: 3"
},
{
"code": null,
"e": 2323,
"s": 2248,
"text": "Using XOR operation − Another way to find the missing number is using XOR."
},
{
"code": null,
"e": 2363,
"s": 2323,
"text": "Find the XOR of all the numbers up ton."
},
{
"code": null,
"e": 2409,
"s": 2363,
"text": "Find the XOR of all the numbers in the array."
},
{
"code": null,
"e": 2444,
"s": 2409,
"text": "Then find the XOR of both results."
},
{
"code": null,
"e": 3270,
"s": 2444,
"text": "import java.util.Scanner;\npublic class MissingNumber {\n public static void main(String[] args) {\n Scanner sc = new Scanner(System.in);\n System.out.println(\"Enter the n value: \");\n int n = sc.nextInt();\n int inpuArray[] = new int[n];\n System.out.println(\"Enter (n-1) numbers: \");\n for(int i=0; i<=n-2; i++) {\n inpuArray[i] = sc.nextInt();\n }\n //Finding the missing number\n int xorArray = inpuArray[0];\n //XOR of elements of the array\n for(int i=1; i<=n-1; i++) {\n xorArray = xorArray ^ i;\n }\n int xorAll = inpuArray[0];\n //XOR of elements of the array\n for(int i=1; i<=n+1; i++) {\n xorAll = xorAll ^ i;\n }\n int missingNumber = xorArray ^ xorAll;\n System.out.println(\"Missing number is: \"+missingNumber);\n }\n}"
},
{
"code": null,
"e": 3341,
"s": 3270,
"text": "Enter the n value:\n5\nEnter (n-1) numbers:\n1\n2\n4\n5\nMissing number is: 3"
}
] |
Timing diagram of INR M - GeeksforGeeks | 26 Oct, 2018
Problem – Draw the timing diagram of the given instruction in 8085,
INR M
The content present in the designated register/memory location (M) is incremented by 1 and the result is stored in the same place. If the operand is a memory location, it is specified by the contents of the HL pair. Example:
INR M
Opcode: INR
Operand: M
M is the memory location (say 5000H), and suppose the data present at M (i.e., at 5000H) is 26H, which needs to be incremented by 1. Hex code: 34H
Algorithm – The instruction INR M is of 1 byte; therefore, the complete instruction will be stored in a single memory address. For example:
2000: INR M
The opcode fetch will be the same as for other instructions in the first 4 T states. Only the memory read and memory write need to be added in the successive T states. For the opcode fetch, IO/M (low active) = 0, S1 = 1 and S0 = 1. For the memory read, IO/M (low active) = 0, S1 = 1 and S0 = 0; only 3 T states will be required. For the memory write, IO/M (low active) = 0, S1 = 0 and S0 = 1, and 3 T states will be required.
The timing diagram of INR M instruction is shown below:
In Opcode fetch ( t1-t4 T states ) –
00: lower-order byte of the address where the opcode is stored, i.e., 00
20: higher-order byte of the address where the opcode is stored, i.e., 20.
ALE: provides signal for multiplexed address and data bus. Only in t1 it used as address bus to fetch lower bit of address otherwise it will be used as data bus.
RD (low active): signal is 1 in t1 & t4 as no data is read by microprocessor. Signal is 0 in t2 & t3 because here the data is read by microprocessor.
WR (low active): Signal is 1 throughout, no data is written by microprocessor.
IO/M (low active): Signal is 0 in throughout because the operation is performing on memory.
S0 and S1: both are 1 in case of opcode fetching.
In Memory read ( t5-t7 T states ) –
00: lower-order byte of the address of the memory operand M (5000H), i.e., 00
50: higher-order byte of the address of the memory operand M (5000H), i.e., 50.
ALE: provides signal for multiplexed address and data bus. Only in t5 it used as address bus to fetch lower bit of address otherwise it will be used as data bus.
RD (low active): signal is 1 in t5, no data is read by microprocessor. Signal is 0 in t6 & t7, data is read by microprocessor.
WR (low active): signal is 1 throughout, no data is written by microprocessor.
IO/M (low active): signal is 0 in throughout, operation is performing on memory.
S0 and S1 – S1=1 and S0=0 for Read operation.
In Memory write ( t8-t10 T states ) –
00: lower-order byte of the address of the memory operand M (5000H), i.e., 00
50: higher-order byte of the address of the memory operand M (5000H), i.e., 50.
ALE: provides signal for multiplexed address and data bus. Only in t8 it used as address bus to fetch lower bit of address otherwise it will be used as data bus.
RD (low active): signal is 1 throughout, no data is read by microprocessor.
WR (low active): signal is 1 in t8, no data is written by microprocessor. Signal is 0 in t9 & t10, data is written by microprocessor.
IO/M (low active): signal is 0 in throughout, operation is performing on memory.
S0 and S1 – S1=0 and S0=1 for write operation.
microprocessor
Computer Organization & Architecture
microprocessor
[
Computer Organization and Architecture | Pipelining | Set 2 (Dependencies and Data Hazard) | [
{
"code": null,
"e": 24926,
"s": 24898,
"text": "\n26 Oct, 2018"
},
{
"code": null,
"e": 24994,
"s": 24926,
"text": "Problem – Draw the timing diagram of the given instruction in 8085,"
},
{
"code": null,
"e": 25001,
"s": 24994,
"text": "INR M "
},
{
"code": null,
"e": 25221,
"s": 25001,
"text": "The content present in the designated register/memory location (M) is incremented by 1 and the result is stored in the same place. If the operand is a memory location, it is specified by the contents of HL pair.Example:"
},
{
"code": null,
"e": 25251,
"s": 25221,
"text": "INR M\nOpcode: INR\nOperand: M "
},
{
"code": null,
"e": 25393,
"s": 25251,
"text": "M is the memory location (say 5000H) and suppose the data present at M (or 5000H) is 26H, which is need to be incremented by 1. Hex code- 34H"
},
{
"code": null,
"e": 25530,
"s": 25393,
"text": "Algorithm –The instruction INR M is of 1 byte; therefore the complete instruction will be stored in a single memory address.For example:"
},
{
"code": null,
"e": 25543,
"s": 25530,
"text": "2000: INR M "
},
{
"code": null,
"e": 25971,
"s": 25543,
"text": "The opcode fetch will be same as for other instructions in first 4 T states.Only the Memory read and Memory Write need to be added in the successive T states.For the opcode fetch the IO/M (low active) = 0, S1 = 1 and S0 = 1.For the memory read the IO/M (low active) = 0, S1 = 1 and S0 = 0. Also, only 3 T states will be required.For the memory write the IO/M (low active) = 0, S1 = 0 and S0 = 1 and 3 T states will be required."
},
{
"code": null,
"e": 26027,
"s": 25971,
"text": "The timing diagram of INR M instruction is shown below:"
},
{
"code": null,
"e": 26064,
"s": 26027,
"text": "In Opcode fetch ( t1-t4 T states ) –"
},
{
"code": null,
"e": 26122,
"s": 26064,
"text": "00: lower bit of address where opcode is stored, i.e., 00"
},
{
"code": null,
"e": 26182,
"s": 26122,
"text": "20: higher bit of address where opcode is stored, i.e., 20."
},
{
"code": null,
"e": 26344,
"s": 26182,
"text": "ALE: provides signal for multiplexed address and data bus. Only in t1 it used as address bus to fetch lower bit of address otherwise it will be used as data bus."
},
{
"code": null,
"e": 26494,
"s": 26344,
"text": "RD (low active): signal is 1 in t1 & t4 as no data is read by microprocessor. Signal is 0 in t2 & t3 because here the data is read by microprocessor."
},
{
"code": null,
"e": 26573,
"s": 26494,
"text": "WR (low active): Signal is 1 throughout, no data is written by microprocessor."
},
{
"code": null,
"e": 26665,
"s": 26573,
"text": "IO/M (low active): Signal is 0 in throughout because the operation is performing on memory."
},
{
"code": null,
"e": 26715,
"s": 26665,
"text": "S0 and S1: both are 1 in case of opcode fetching."
},
{
"code": null,
"e": 26751,
"s": 26715,
"text": "In Memory read ( t5-t7 T states ) –"
},
{
"code": null,
"e": 26808,
"s": 26751,
"text": "00: lower bit of address where opcode is stored, i.e, 00"
},
{
"code": null,
"e": 26867,
"s": 26808,
"text": "50: higher bit of address where opcode is stored, i.e, 50."
},
{
"code": null,
"e": 27029,
"s": 26867,
"text": "ALE: provides signal for multiplexed address and data bus. Only in t5 it used as address bus to fetch lower bit of address otherwise it will be used as data bus."
},
{
"code": null,
"e": 27156,
"s": 27029,
"text": "RD (low active): signal is 1 in t5, no data is read by microprocessor. Signal is 0 in t6 & t7, data is read by microprocessor."
},
{
"code": null,
"e": 27235,
"s": 27156,
"text": "WR (low active): signal is 1 throughout, no data is written by microprocessor."
},
{
"code": null,
"e": 27316,
"s": 27235,
"text": "IO/M (low active): signal is 0 in throughout, operation is performing on memory."
},
{
"code": null,
"e": 27362,
"s": 27316,
"text": "S0 and S1 – S1=1 and S0=0 for Read operation."
},
{
"code": null,
"e": 27400,
"s": 27362,
"text": "In Memory write ( t8-t10 T states ) –"
},
{
"code": null,
"e": 27457,
"s": 27400,
"text": "00: lower bit of address where opcode is stored, i.e, 00"
},
{
"code": null,
"e": 27516,
"s": 27457,
"text": "50: higher bit of address where opcode is stored, i.e, 50."
},
{
"code": null,
"e": 27678,
"s": 27516,
"text": "ALE: provides signal for multiplexed address and data bus. Only in t8 it used as address bus to fetch lower bit of address otherwise it will be used as data bus."
},
{
"code": null,
"e": 27754,
"s": 27678,
"text": "RD (low active): signal is 1 throughout, no data is read by microprocessor."
},
{
"code": null,
"e": 27888,
"s": 27754,
"text": "WR (low active): signal is 1 in t8, no data is written by microprocessor. Signal is 0 in t9 & t10, data is written by microprocessor."
},
{
"code": null,
"e": 27969,
"s": 27888,
"text": "IO/M (low active): signal is 0 in throughout, operation is performing on memory."
},
{
"code": null,
"e": 28016,
"s": 27969,
"text": "S0 and S1 – S1=0 and S0=1 for write operation."
},
{
"code": null,
"e": 28031,
"s": 28016,
"text": "microprocessor"
},
{
"code": null,
"e": 28068,
"s": 28031,
"text": "Computer Organization & Architecture"
},
{
"code": null,
"e": 28083,
"s": 28068,
"text": "microprocessor"
},
{
"code": null,
"e": 28181,
"s": 28083,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28190,
"s": 28181,
"text": "Comments"
},
{
"code": null,
"e": 28203,
"s": 28190,
"text": "Old Comments"
},
{
"code": null,
"e": 28243,
"s": 28203,
"text": "Addressing modes in 8085 microprocessor"
},
{
"code": null,
"e": 28292,
"s": 28243,
"text": "Logical and Physical Address in Operating System"
},
{
"code": null,
"e": 28340,
"s": 28292,
"text": "Memory Hierarchy Design and its Characteristics"
},
{
"code": null,
"e": 28376,
"s": 28340,
"text": "Architecture of 8085 microprocessor"
},
{
"code": null,
"e": 28411,
"s": 28376,
"text": "Pin diagram of 8086 microprocessor"
},
{
"code": null,
"e": 28506,
"s": 28411,
"text": "Computer Organization and Architecture | Pipelining | Set 1 (Execution, Stages and Throughput)"
},
{
"code": null,
"e": 28527,
"s": 28506,
"text": "Architecture of 8086"
},
{
"code": null,
"e": 28565,
"s": 28527,
"text": "Computer Organization | RISC and CISC"
},
{
"code": null,
"e": 28600,
"s": 28565,
"text": "Memory mapped I/O and Isolated I/O"
}
] |
after method in Python Tkinter | Tkinter is a Python library for making GUIs. It has many built-in methods to create and manipulate GUI windows and other widgets that show data and handle GUI events. In this article we will see how the after method is used in a Tkinter GUI.
.after(delay, FuncName)
This method calls the function FuncName once after the given delay in milliseconds.
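A minimal usage sketch (an illustration added here, not part of the original examples): schedule a callback once, one second after the call, on a Tk root window.

# Minimal sketch: run a callback once, 1000 ms after it is scheduled.
from tkinter import Tk

root = Tk()
root.after(1000, lambda: print("called after 1000 ms"))
root.mainloop()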
Here we make a frame to display a list of words randomly. We use the random library along with the after method to call a function displaying a given list of text in a random manner.
import random
from tkinter import *
base = Tk()
a = Label(base, text="After() Demo")
a.pack()
contrive = Frame(base, width=450, height=500)
contrive.pack()
words = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri','Sat','Sun']
#Display words randomly one after the other.
def display_weekday():
if not words:
return
rand = random.choice(words)
character_frame = Label(contrive, text=rand)
character_frame.pack()
contrive.after(500,display_weekday)
words.remove(rand)
base.after(0, display_weekday)
base.mainloop()
Running the above code gives us the following result:
On running the same program again we get the result showing different sequence of the words.
In the next example we will see how we can use the after method as a delay mechanism to wait for a process to run for a certain amount of time and then stop the process. We also use the destroy method to stop the processing.
from tkinter import Tk, mainloop, TOP
from tkinter.ttk import Button
from time import time
base = Tk()
stud = Button(base, text = 'After Demo()')
stud.pack(side = TOP, pady = 8)
print('processing Begins...')
begin = time()
base.after(3000, base.destroy)
mainloop()
conclusion = time()
print('process destroyed in % d seconds' % ( conclusion-begin))
Running the above code gives us the following result:
processing Begins...
process destroyed in 3 seconds | [
{
"code": null,
"e": 1296,
"s": 1062,
"text": "Tkinter is a python library to make GUIs. It has many built in methods to create and manipulate GUI windows and other widgets to show the data and GUI events. In this article we will see how the after method is used in a Tkinter GUI."
},
{
"code": null,
"e": 1405,
"s": 1296,
"text": ".after(delay, FuncName=FuncName)\nThis method calls the function FuncName after the given delay in milisecond"
},
{
"code": null,
"e": 1588,
"s": 1405,
"text": "Here we make a frame to display a list of words randomly. We use the random library along with the after method to call a function displaying a given list of text in a random manner."
},
{
"code": null,
"e": 2113,
"s": 1588,
"text": "import random\nfrom tkinter import *\n\nbase = Tk()\n\na = Label(base, text=\"After() Demo\")\na.pack()\n\ncontrive = Frame(base, width=450, height=500)\ncontrive.pack()\n\nwords = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri','Sat','Sun']\n#Display words randomly one after the other.\ndef display_weekday():\n if not words:\n return\n rand = random.choice(words)\n character_frame = Label(contrive, text=rand)\n character_frame.pack()\n contrive.after(500,display_weekday)\n words.remove(rand)\n\nbase.after(0, display_weekday)\nbase.mainloop()"
},
{
"code": null,
"e": 2167,
"s": 2113,
"text": "Running the above code gives us the following result:"
},
{
"code": null,
"e": 2260,
"s": 2167,
"text": "On running the same program again we get the result showing different sequence of the words."
},
{
"code": null,
"e": 2485,
"s": 2260,
"text": "In the next example we will see how we can use the after method as a delay mechanism to wait for a process to run for a certain amount of time and then stop the process. We also use the destroy method to stop the processing."
},
{
"code": null,
"e": 2842,
"s": 2485,
"text": "from tkinter import Tk, mainloop, TOP\nfrom tkinter.ttk import Button\n\nfrom time import time\n\nbase = Tk()\n\nstud = Button(base, text = 'After Demo()')\nstud.pack(side = TOP, pady = 8)\n\nprint('processing Begins...')\n\nbegin = time()\n\nbase.after(3000, base.destroy)\n\nmainloop()\n\nconclusion = time()\nprint('process destroyed in % d seconds' % ( conclusion-begin))"
},
{
"code": null,
"e": 2896,
"s": 2842,
"text": "Running the above code gives us the following result:"
},
{
"code": null,
"e": 2948,
"s": 2896,
"text": "processing Begins...\nprocess destroyed in 3 seconds"
}
] |
DFA of alternate 0's and 1's - GeeksforGeeks | 09 Sep, 2020
A Regular Expression can be anything from a terminal symbol or ∅ to the union of two regular expressions (R1 + R2), their concatenation (R1R2), or the closure R1*.
Examples of Regular Expression :
Regular expression of set of all strings of 0’s and 1’s starting with two zeros :00(0+1)*
00(0+1)*
Regular expression of set of all strings of 0’s and 1’s having even number of 0’s followed by odd numbers of 1’s :(00)*1(11)*
(00)*1(11)*
Regular expression of set of all strings of 0’s and 1’s containing at least one 0 and at least two 1’s :00*11(0+1)* + 0111*(0+1)*
00*11(0+1)* + 0111*(0+1)*
Strings that will be acceptable by regular expression with alternate 0’s and 1’s –
ε (no input), 0, and 1
010101..... (string that starts with 0 followed by 1 and so on).
101010..... (string that starts with 1 followed by 0 and so on).
Now, a regular expression for the set of all strings consisting of alternate 0's and 1's would be (01)*, which accepts ε, 01, 0101, 010101, ... etc., but this is restrictive, as such a string can only begin with 0.
Again, the expression (10)* will accept ε, 10, 1010, 101010, ... etc., but this too is restrictive, as such a string can only begin with 1.
So, we introduce 1(01)* and 0(10)* to fill the gap in the respective cases.
While 1(01)* removes the restriction of having to start with 0 (it covers the odd-length alternating strings that start with 1), 0(10)* does the same for the odd-length alternating strings that start with 0.
So, the final expression is –
(01)* + (10)* + 0(10)* + 1(01)*
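As a quick sanity check (an illustration added here, not part of the original derivation), the final expression can be tested with Python's re module, writing the union '+' as '|':

import re

# (01)* + (10)* + 0(10)* + 1(01)*  written as a Python regular expression
pattern = re.compile(r"(01)*|(10)*|0(10)*|1(01)*")

def accepts(s):
    # fullmatch forces the whole string to match one of the alternatives
    return pattern.fullmatch(s) is not None

for s in ["", "0", "1", "0101", "1010", "00", "011"]:
    print(repr(s), accepts(s))   # alternating strings print True, the rest False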
Theory of Computation & Automata
[
Construct Pushdown Automata for all length palindrome | [
{
"code": null,
"e": 24495,
"s": 24467,
"text": "\n09 Sep, 2020"
},
{
"code": null,
"e": 24660,
"s": 24495,
"text": "Regular Expression can be anything, from a terminal symbol, ∅, to union of two regular expressions (R1 + R2), their concatenation (R1R2) or its closure R1* as well."
},
{
"code": null,
"e": 24693,
"s": 24660,
"text": "Examples of Regular Expression :"
},
{
"code": null,
"e": 25037,
"s": 24693,
"text": "Regular expression of set of all strings of 0’s and 1’s starting with two zeros :00(0+1)*Regular expression of set of all strings of 0’s and 1’s having even number of 0’s followed by odd numbers of 1’s :(00)*1(11)*Regular expression of set of all strings of 0’s and 1’s containing at least one 0 and at least two 1’s :00*11(0+1)* + 0111*(0+1)*"
},
{
"code": null,
"e": 25127,
"s": 25037,
"text": "Regular expression of set of all strings of 0’s and 1’s starting with two zeros :00(0+1)*"
},
{
"code": null,
"e": 25136,
"s": 25127,
"text": "00(0+1)*"
},
{
"code": null,
"e": 25262,
"s": 25136,
"text": "Regular expression of set of all strings of 0’s and 1’s having even number of 0’s followed by odd numbers of 1’s :(00)*1(11)*"
},
{
"code": null,
"e": 25274,
"s": 25262,
"text": "(00)*1(11)*"
},
{
"code": null,
"e": 25404,
"s": 25274,
"text": "Regular expression of set of all strings of 0’s and 1’s containing at least one 0 and at least two 1’s :00*11(0+1)* + 0111*(0+1)*"
},
{
"code": null,
"e": 25430,
"s": 25404,
"text": "00*11(0+1)* + 0111*(0+1)*"
},
{
"code": null,
"e": 25513,
"s": 25430,
"text": "Strings that will be acceptable by regular expression with alternate 0’s and 1’s –"
},
{
"code": null,
"e": 25663,
"s": 25513,
"text": "∈ (no input, 0 and 1)010101..... (string that starts with 0 followed by 1 and so on).101010..... (string that starts with 1 followed by 0 and so on)."
},
{
"code": null,
"e": 25685,
"s": 25663,
"text": "∈ (no input, 0 and 1)"
},
{
"code": null,
"e": 25750,
"s": 25685,
"text": "010101..... (string that starts with 0 followed by 1 and so on)."
},
{
"code": null,
"e": 25815,
"s": 25750,
"text": "101010..... (string that starts with 1 followed by 0 and so on)."
},
{
"code": null,
"e": 26030,
"s": 25815,
"text": "Now, a regular expression for set of all strings consisting of alternate 0’s and 1’s would be (01)*, where it can accept ∈, 01, 0101, 010101.....etc but this restricts the string as it can always begin with 0 only."
},
{
"code": null,
"e": 26163,
"s": 26030,
"text": "Again, the expression (10)* will accept ∈, 10, 1010, 101010....etc but this too restricts string as it can always begin with 1 only."
},
{
"code": null,
"e": 26231,
"s": 26163,
"text": "So, we introduce 1(01)* and 0(10)* to meet gap in respective cases."
},
{
"code": null,
"e": 26341,
"s": 26231,
"text": "While 1(01)* breaks the restriction of string starting with 0, 0(10)* breaks same for string starting with 1."
},
{
"code": null,
"e": 26371,
"s": 26341,
"text": "So, the final expression is –"
},
{
"code": null,
"e": 26403,
"s": 26371,
"text": "(01)* + (10)* + 0(10)* + 1(01)*"
},
{
"code": null,
"e": 26436,
"s": 26403,
"text": "Theory of Computation & Automata"
},
{
"code": null,
"e": 26534,
"s": 26436,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26543,
"s": 26534,
"text": "Comments"
},
{
"code": null,
"e": 26556,
"s": 26543,
"text": "Old Comments"
},
{
"code": null,
"e": 26617,
"s": 26556,
"text": "Construct a Turing Machine for language L = {ww | w ∈ {0,1}}"
},
{
"code": null,
"e": 26656,
"s": 26617,
"text": "Proof that vertex cover is NP complete"
},
{
"code": null,
"e": 26694,
"s": 26656,
"text": "Boyer-Moore Majority Voting Algorithm"
},
{
"code": null,
"e": 26733,
"s": 26694,
"text": "CYK Algorithm for Context Free Grammar"
},
{
"code": null,
"e": 26813,
"s": 26733,
"text": "Removal of ambiguity (Converting an Ambiguous grammar into Unambiguous grammar)"
},
{
"code": null,
"e": 26862,
"s": 26813,
"text": "Introduction To Grammar in Theory of Computation"
},
{
"code": null,
"e": 26919,
"s": 26862,
"text": "Removing Direct and Indirect Left Recursion in a Grammar"
},
{
"code": null,
"e": 26952,
"s": 26919,
"text": "Applications of various Automata"
},
{
"code": null,
"e": 26989,
"s": 26952,
"text": "Practice problems on finite automata"
}
] |
Subset Sums | Practice | GeeksforGeeks | Given a list arr of N integers, print sums of all subsets in it.
Example 1:
Input:
N = 2
arr[] = {2, 3}
Output:
0 2 3 5
Explanation:
When no element is taken then Sum = 0.
When only 2 is taken then Sum = 2.
When only 3 is taken then Sum = 3.
When elements 2 and 3 are taken then
Sum = 2+3 = 5.
Example 2:
Input:
N = 3
arr = {5, 2, 1}
Output:
0 1 2 3 5 6 7 8
Your Task:
You don't need to read input or print anything. Your task is to complete the function subsetSums() which takes a list/vector and an integer N as input parameters and returns the list/vector of all the subset sums.
Expected Time Complexity: O(2^N)
Expected Auxiliary Space: O(2^N)
Constraints:
1 <= N <= 15
0 <= arr[i] <= 10^4
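For reference, a minimal recursive sketch in Python of the idea behind subsetSums() (every element is either excluded or included, giving 2^N sums). This is only an illustration, not an official solution; it returns the sums in sorted order to match the sample output.

def subsetSums(arr, N):
    res = []

    def solve(i, running):
        if i == N:                         # a complete include/exclude choice
            res.append(running)
            return
        solve(i + 1, running)              # exclude arr[i]
        solve(i + 1, running + arr[i])     # include arr[i]

    solve(0, 0)
    return sorted(res)

print(*subsetSums([2, 3], 2))       # 0 2 3 5
print(*subsetSums([5, 2, 1], 3))    # 0 1 2 3 5 6 7 8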
+1
premranjan880461 week ago
c++ recursive solution
vector<int>v; void solve(vector<int>arr,int sum, int i) { if(i==arr.size()) { v.push_back(sum); return; } solve(arr,sum,i+1); solve(arr,sum+arr[i],i+1); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,0,0); sort(v.begin(),v.end()); return v; }
0
mamoonakhterrock20021 week ago
class Solution{
ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){
// code here
ArrayList < Integer > sumSubset = new ArrayList < > ();
int n = N;
func(0,0,arr,N,sumSubset);
return sumSubset;
}
static void func(int ind, int sum, ArrayList < Integer > arr, int N, ArrayList < Integer > sumSubset){
if(ind==N){
sumSubset.add(sum);
return;
}
func(ind+1,sum+arr.get(ind),arr,N,sumSubset);
func(ind+1,sum,arr,N,sumSubset);
}
}
0
manabghosh9241 week ago
vector<int> subsetSums(vector<int> arr, int N) { vector<int> res; // Write Your Code here res.push_back(0); int M; int i, j, sum=0; for(j=0;j<N;j++) { M = res.size(); for(i=0;i<M;i++) { sum=arr[j]+res[i]; res.push_back(sum); sum=0; } } return res; }
0
sakesai302 weeks ago
class Solution{ ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){ ArrayList<Integer> mama = new ArrayList<>(); sub(0,arr,N,0,mama); return mama; } static void sub(int index,ArrayList<Integer> arr,int N,int sum,ArrayList<Integer> mama){ if(index == N){ mama.add(sum); return; } sum+=arr.get(index); sub(index+1,arr,N,sum,mama); sum-=arr.get(index); sub(index+1,arr,N,sum,mama); }
0
auntylover10823 weeks ago
Simple CPP Solution using recursion && backtracking
class Solution
{
public:
void sum(int i,vector<int>v,int N,vector<int>arr,vector<int>&ans){
if(i>=N){
int a= accumulate(v.begin(),v.end(),0);
ans.push_back(a);
return;
}
//take it
v.push_back(arr[i]);
sum(i+1,v,N,arr,ans);
//do not takeit
v.pop_back();
sum(i+1,v,N,arr,ans);
}
vector<int> subsetSums(vector<int> arr, int N)
{
vector<int>ans,v;
sum(0,v,N,arr,ans);
return ans;
// Write Your Code here
}
};
0
dipanshusharma93133 weeks ago
// java solution
class Solution{ ArrayList<Integer> sub(ArrayList<Integer> arr, int idx, int sum, ArrayList<Integer> di){ if(idx == arr.size()){ di.add(sum); return di; } int a = arr.get(idx); // to be sub(arr, idx+1, sum+a, di); // not to be sub(arr, idx+1, sum, di); return di; } ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){ // code here ArrayList<Integer> di = new ArrayList<Integer>(); sub(arr, 0, 0, di); return di; }}
+1
phantom3334 weeks ago
void solve(int id,vector<int> &arr,int sum,int n,vector<int> &ds){
if(id==n){
ds.push_back(sum);
return;
}
sum+=arr[id];
solve(id+1,arr,sum,n,ds);
sum-=arr[id];
solve(id+1,arr,sum,n,ds);
}
vector<int> subsetSums(vector<int> arr,int N){
vector<int> ds;
int sum=0;
sort(ds.begin(),ds.end());
solve(0,arr,sum,N,ds);
return ds;
}
+3
shehzad874501 month ago
//solving this problem by IBH //for more checkout [Aaditya Verma] youtube channel vector<int> res; void solve(vector<int>arr, int sum){ //base condition if(arr.size() == 0){ res.push_back(sum); return; } int op1 = sum; int op2 = sum; op2 = sum + arr[0]; arr.erase(arr.begin() + 0); solve(arr, op1); solve(arr,op2); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,0); return res; }
0
akkeshri140420011 month ago
vector<int>res;
void solve(vector<int>arr,int indx,int op,int N){
if(indx==N){
res.push_back(op);
return;
}
solve(arr,indx+1,op+arr[indx],N);
solve(arr,indx+1,op,N);
}
vector<int> subsetSums(vector<int> arr, int N)
{
// Write Your Code here
solve(arr,0,0,N);
return res;
}
0
ksbsbisht1371 month ago
vector<int> res; void solve(vector<int> arr,int N,int output) { if(N==0) { res.push_back(output); return; } solve(arr,N-1,output+arr[N-1]); solve(arr,N-1,output); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,N,0); return res; }
[
{
"code": null,
"e": 303,
"s": 238,
"text": "Given a list arr of N integers, print sums of all subsets in it."
},
{
"code": null,
"e": 316,
"s": 305,
"text": "Example 1:"
},
{
"code": null,
"e": 535,
"s": 316,
"text": "Input:\nN = 2\narr[] = {2, 3}\nOutput:\n0 2 3 5\nExplanation:\nWhen no elements is taken then Sum = 0.\nWhen only 2 is taken then Sum = 2.\nWhen only 3 is taken then Sum = 3.\nWhen element 2 and 3 are taken then \nSum = 2+3 = 5."
},
{
"code": null,
"e": 546,
"s": 535,
"text": "Example 2:"
},
{
"code": null,
"e": 600,
"s": 546,
"text": "Input:\nN = 3\narr = {5, 2, 1}\nOutput:\n0 1 2 3 5 6 7 8\n"
},
{
"code": null,
"e": 828,
"s": 600,
"text": "Your Task: \nYou don't need to read input or print anything. Your task is to complete the function subsetSums() which takes a list/vector and an integer N as an input parameter and return the list/vector of all the subset sums."
},
{
"code": null,
"e": 892,
"s": 828,
"text": "Expected Time Complexity: O(2N)\nExpected Auxiliary Space: O(2N)"
},
{
"code": null,
"e": 937,
"s": 892,
"text": "Constraints:\n1 <= N <= 15\n0 <= arr[i] <= 104"
},
{
"code": null,
"e": 940,
"s": 937,
"text": "+1"
},
{
"code": null,
"e": 966,
"s": 940,
"text": "premranjan880461 week ago"
},
{
"code": null,
"e": 989,
"s": 966,
"text": "c++ recursive solution"
},
{
"code": null,
"e": 1341,
"s": 991,
"text": " vector<int>v; void solve(vector<int>arr,int sum, int i) { if(i==arr.size()) { v.push_back(sum); return; } solve(arr,sum,i+1); solve(arr,sum+arr[i],i+1); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,0,0); sort(v.begin(),v.end()); return v; }"
},
{
"code": null,
"e": 1343,
"s": 1341,
"text": "0"
},
{
"code": null,
"e": 1374,
"s": 1343,
"text": "mamoonakhterrock20021 week ago"
},
{
"code": null,
"e": 1975,
"s": 1374,
"text": "class Solution{\n ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){\n \n // code here\n ArrayList < Integer > sumSubset = new ArrayList < > ();\n int n = N;\n func(0,0,arr,N,sumSubset);\n \n \n return sumSubset;\n }\n \n static void func(int ind, int sum, ArrayList < Integer > arr, int N, ArrayList < Integer > sumSubset){\n if(ind==N){\n sumSubset.add(sum);\n return;\n }\n \n func(ind+1,sum+arr.get(ind),arr,N,sumSubset);\n \n func(ind+1,sum,arr,N,sumSubset);\n \n }\n}"
},
{
"code": null,
"e": 1977,
"s": 1975,
"text": "0"
},
{
"code": null,
"e": 2001,
"s": 1977,
"text": "manabghosh9241 week ago"
},
{
"code": null,
"e": 2412,
"s": 2001,
"text": "vector<int> subsetSums(vector<int> arr, int N) { vector<int> res; // Write Your Code here res.push_back(0); int M; int i, j, sum=0; for(j=0;j<N;j++) { M = res.size(); for(i=0;i<M;i++) { sum=arr[j]+res[i]; res.push_back(sum); sum=0; } } return res; }"
},
{
"code": null,
"e": 2414,
"s": 2412,
"text": "0"
},
{
"code": null,
"e": 2435,
"s": 2414,
"text": "sakesai302 weeks ago"
},
{
"code": null,
"e": 2911,
"s": 2435,
"text": "class Solution{ ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){ ArrayList<Integer> mama = new ArrayList<>(); sub(0,arr,N,0,mama); return mama; } static void sub(int index,ArrayList<Integer> arr,int N,int sum,ArrayList<Integer> mama){ if(index == N){ mama.add(sum); return; } sum+=arr.get(index); sub(index+1,arr,N,sum,mama); sum-=arr.get(index); sub(index+1,arr,N,sum,mama); }"
},
{
"code": null,
"e": 2913,
"s": 2911,
"text": "0"
},
{
"code": null,
"e": 2939,
"s": 2913,
"text": "auntylover10823 weeks ago"
},
{
"code": null,
"e": 2991,
"s": 2939,
"text": "Simple CPP Solution using recursion && backtracking"
},
{
"code": null,
"e": 3607,
"s": 2993,
"text": "class Solution\n\n{\n\npublic:\n\n void sum(int i,vector<int>v,int N,vector<int>arr,vector<int>&ans){\n\n if(i>=N){\n\n int a= accumulate(v.begin(),v.end(),0);\n\n ans.push_back(a);\n\n return;\n\n }\n\n //take it\n\n v.push_back(arr[i]);\n\n sum(i+1,v,N,arr,ans);\n\n //do not takeit\n\n v.pop_back();\n\n sum(i+1,v,N,arr,ans);\n\n \n\n \n\n }\n\n vector<int> subsetSums(vector<int> arr, int N)\n\n {\n\n vector<int>ans,v;\n\n sum(0,v,N,arr,ans);\n\n return ans;\n\n // Write Your Code here\n\n \n\n }\n\n};"
},
{
"code": null,
"e": 3609,
"s": 3607,
"text": "0"
},
{
"code": null,
"e": 3639,
"s": 3609,
"text": "dipanshusharma93133 weeks ago"
},
{
"code": null,
"e": 3656,
"s": 3639,
"text": "// java solution"
},
{
"code": null,
"e": 4187,
"s": 3656,
"text": "class Solution{ ArrayList<Integer> sub(ArrayList<Integer> arr, int idx, int sum, ArrayList<Integer> di){ if(idx == arr.size()){ di.add(sum); return di; } int a = arr.get(idx); // to be sub(arr, idx+1, sum+a, di); // not to be sub(arr, idx+1, sum, di); return di; } ArrayList<Integer> subsetSums(ArrayList<Integer> arr, int N){ // code here ArrayList<Integer> di = new ArrayList<Integer>(); sub(arr, 0, 0, di); return di; }}"
},
{
"code": null,
"e": 4190,
"s": 4187,
"text": "+1"
},
{
"code": null,
"e": 4212,
"s": 4190,
"text": "phantom3334 weeks ago"
},
{
"code": null,
"e": 4279,
"s": 4212,
"text": "void solve(int id,vector<int> &arr,int sum,int n,vector<int> &ds){"
},
{
"code": null,
"e": 4294,
"s": 4279,
"text": " if(id==n){"
},
{
"code": null,
"e": 4321,
"s": 4294,
"text": " ds.push_back(sum);"
},
{
"code": null,
"e": 4337,
"s": 4321,
"text": " return;"
},
{
"code": null,
"e": 4343,
"s": 4337,
"text": " }"
},
{
"code": null,
"e": 4361,
"s": 4343,
"text": " sum+=arr[id];"
},
{
"code": null,
"e": 4391,
"s": 4361,
"text": " solve(id+1,arr,sum,n,ds);"
},
{
"code": null,
"e": 4409,
"s": 4391,
"text": " sum-=arr[id];"
},
{
"code": null,
"e": 4439,
"s": 4409,
"text": " solve(id+1,arr,sum,n,ds);"
},
{
"code": null,
"e": 4441,
"s": 4439,
"text": "}"
},
{
"code": null,
"e": 4490,
"s": 4443,
"text": "vector<int> subsetSums(vector<int> arr,int N){"
},
{
"code": null,
"e": 4510,
"s": 4490,
"text": " vector<int> ds;"
},
{
"code": null,
"e": 4525,
"s": 4510,
"text": " int sum=0;"
},
{
"code": null,
"e": 4556,
"s": 4525,
"text": " sort(ds.begin(),ds.end());"
},
{
"code": null,
"e": 4583,
"s": 4556,
"text": " solve(0,arr,sum,N,ds);"
},
{
"code": null,
"e": 4598,
"s": 4583,
"text": " return ds;"
},
{
"code": null,
"e": 4600,
"s": 4598,
"text": "}"
},
{
"code": null,
"e": 4605,
"s": 4602,
"text": "+3"
},
{
"code": null,
"e": 4629,
"s": 4605,
"text": "shehzad874501 month ago"
},
{
"code": null,
"e": 5149,
"s": 4629,
"text": "//solving this problem by IBH //for more checkout [Aaditya Verma] youtube channel vector<int> res; void solve(vector<int>arr, int sum){ //base condition if(arr.size() == 0){ res.push_back(sum); return; } int op1 = sum; int op2 = sum; op2 = sum + arr[0]; arr.erase(arr.begin() + 0); solve(arr, op1); solve(arr,op2); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,0); return res; }"
},
{
"code": null,
"e": 5151,
"s": 5149,
"text": "0"
},
{
"code": null,
"e": 5179,
"s": 5151,
"text": "akkeshri140420011 month ago"
},
{
"code": null,
"e": 5508,
"s": 5179,
"text": "vector<int>res;\nvoid solve(vector<int>arr,int indx,int op,int N){\n if(indx==N){\n res.push_back(op);\n return;\n }\n solve(arr,indx+1,op+arr[indx],N);\n solve(arr,indx+1,op,N);\n}\n vector<int> subsetSums(vector<int> arr, int N)\n {\n // Write Your Code here\n solve(arr,0,0,N);\n return res;\n }"
},
{
"code": null,
"e": 5510,
"s": 5508,
"text": "0"
},
{
"code": null,
"e": 5534,
"s": 5510,
"text": "ksbsbisht1371 month ago"
},
{
"code": null,
"e": 5842,
"s": 5534,
"text": "vector<int> res; void solve(vector<int> arr,int N,int output) { if(N==0) { res.push_back(output); return; } solve(arr,N-1,output+arr[N-1]); solve(arr,N-1,output); } vector<int> subsetSums(vector<int> arr, int N) { solve(arr,N,0); return res; }"
},
{
"code": null,
"e": 5988,
"s": 5842,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 6024,
"s": 5988,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 6034,
"s": 6024,
"text": "\nProblem\n"
},
{
"code": null,
"e": 6044,
"s": 6034,
"text": "\nContest\n"
},
{
"code": null,
"e": 6107,
"s": 6044,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 6255,
"s": 6107,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 6463,
"s": 6255,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 6569,
"s": 6463,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
C++ String Library - reserve | It requests a change in capacity.
Following is the declaration for std::string::reserve.
void reserve (size_t n = 0);
void reserve (size_t n = 0);
n − Planned length for the string.
none
if an exception is thrown, there are no changes in the string.
Below is an example of std::string::reserve.
#include <iostream>
#include <fstream>
#include <string>
int main () {
std::string str;
std::ifstream file ("test.txt",std::ios::in|std::ios::ate);
if (file) {
std::ifstream::streampos filesize = file.tellg();
str.reserve(filesize);
file.seekg(0);
      // read until get() fails, so no end-of-file character is appended
      char c;
      while (file.get(c)) {
         str += c;
      }
std::cout << str;
}
return 0;
}
[
{
"code": null,
"e": 2636,
"s": 2603,
"text": "It request a change in capacity."
},
{
"code": null,
"e": 2691,
"s": 2636,
"text": "Following is the declaration for std::string::reserve."
},
{
"code": null,
"e": 2720,
"s": 2691,
"text": "void reserve (size_t n = 0);"
},
{
"code": null,
"e": 2749,
"s": 2720,
"text": "void reserve (size_t n = 0);"
},
{
"code": null,
"e": 2784,
"s": 2749,
"text": "n − Planned length for the string."
},
{
"code": null,
"e": 2789,
"s": 2784,
"text": "none"
},
{
"code": null,
"e": 2852,
"s": 2789,
"text": "if an exception is thrown, there are no changes in the string."
},
{
"code": null,
"e": 2895,
"s": 2852,
"text": "In below example for std::string::reserve."
},
{
"code": null,
"e": 3278,
"s": 2895,
"text": "#include <iostream>\n#include <fstream>\n#include <string>\n\nint main () {\n std::string str;\n\n std::ifstream file (\"test.txt\",std::ios::in|std::ios::ate);\n if (file) {\n std::ifstream::streampos filesize = file.tellg();\n str.reserve(filesize);\n\n file.seekg(0);\n while (!file.eof()) {\n str += file.get();\n }\n std::cout << str;\n }\n return 0;\n}"
},
{
"code": null,
"e": 3285,
"s": 3278,
"text": " Print"
},
{
"code": null,
"e": 3296,
"s": 3285,
"text": " Add Notes"
}
] |
Area of a polygon with given n ordered vertices - GeeksforGeeks | 07 Nov, 2021
Given ordered coordinates of a polygon with n vertices, find the area of the polygon. Here ordered means that the coordinates are given either in a clockwise or an anticlockwise manner from the first vertex to the last. Examples :
Input : X[] = {0, 4, 4, 0}, Y[] = {0, 0, 4, 4};
Output : 16
Input : X[] = {0, 4, 2}, Y[] = {0, 0, 4}
Output : 8
We can compute the area of a polygon using the Shoelace formula.
Area
= | 1/2 [ (x1y2 + x2y3 + ... + xn-1yn + xny1) –
(x2y1 + x3y2 + ... + xnyn-1 + x1yn) ] |
The above formula is derived by following the cross product of the vertices to get the Area of triangles formed in the polygon. Below is an implementation of the above formula.
CPP
Java
Python3
C#
PHP
Javascript
// C++ program to evaluate area of a polygon using
// shoelace formula
#include <bits/stdc++.h>
using namespace std;

// (X[i], Y[i]) are coordinates of i'th point.
double polygonArea(double X[], double Y[], int n)
{
    // Initialize area
    double area = 0.0;

    // Calculate value of shoelace formula
    int j = n - 1;
    for (int i = 0; i < n; i++)
    {
        area += (X[j] + X[i]) * (Y[j] - Y[i]);
        j = i;  // j is previous vertex to i
    }

    // Return absolute value
    return abs(area / 2.0);
}

// Driver program to test above function
int main()
{
    double X[] = {0, 2, 4};
    double Y[] = {1, 3, 7};
    int n = sizeof(X)/sizeof(X[0]);
    cout << polygonArea(X, Y, n);
}
// Java program to evaluate area
// of a polygon using shoelace formula
import java.io.*;

class GFG {

    // (X[i], Y[i]) are coordinates of i'th point.
    public static double polygonArea(double X[], double Y[], int n)
    {
        // Initialize area
        double area = 0.0;

        // Calculate value of shoelace formula
        int j = n - 1;
        for (int i = 0; i < n; i++)
        {
            area += (X[j] + X[i]) * (Y[j] - Y[i]);

            // j is previous vertex to i
            j = i;
        }

        // Return absolute value
        return Math.abs(area / 2.0);
    }

    // Driver program
    public static void main (String[] args)
    {
        double X[] = {0, 2, 4};
        double Y[] = {1, 3, 7};
        int n = 3;
        System.out.println(polygonArea(X, Y, n));
    }
}
// This code is contributed by Sunnnysingh
# Python3 program to evaluate
# area of a polygon using
# shoelace formula

# (X[i], Y[i]) are coordinates of i'th point.
def polygonArea(X, Y, n):

    # Initialize area
    area = 0.0

    # Calculate value of shoelace formula
    j = n - 1
    for i in range(0, n):
        area += (X[j] + X[i]) * (Y[j] - Y[i])
        j = i  # j is previous vertex to i

    # Return absolute value
    return int(abs(area / 2.0))

# Driver program to test above function
X = [0, 2, 4]
Y = [1, 3, 7]
n = len(X)
print(polygonArea(X, Y, n))

# This code is contributed by
# Smitha Dinesh Semwal
// C# program to evaluate area
// of a polygon using shoelace formula
using System;

class GFG {

    // (X[i], Y[i]) are coordinates of i'th point.
    public static double polygonArea(double[] X, double[] Y, int n)
    {
        // Initialize area
        double area = 0.0;

        // Calculate value of shoelace formula
        int j = n - 1;
        for (int i = 0; i < n; i++)
        {
            area += (X[j] + X[i]) * (Y[j] - Y[i]);

            // j is previous vertex to i
            j = i;
        }

        // Return absolute value
        return Math.Abs(area / 2.0);
    }

    // Driver program
    public static void Main()
    {
        double[] X = { 0, 2, 4 };
        double[] Y = { 1, 3, 7 };
        int n = 3;
        Console.WriteLine(polygonArea(X, Y, n));
    }
}

// This code is contributed by vt_m.
<?php// PHP program to evaluate area of // a polygon using shoelace formula // (X[i], Y[i]) are // coordinates of i'th point.function polygonArea($X, $Y, $n){ // Initialize area $area = 0.0; // Calculate value of // shoelace formula $j = $n - 1; for ($i = 0; $i < $n; $i++) { $area += ($X[$j] + $X[$i]) * ($Y[$j] - $Y[$i]); // j is previous vertex to i $j = $i; } // Return absolute value return abs($area / 2.0);} // Driver Code$X = array(0, 2, 4);$Y = array(1, 3, 7); $n = sizeof($X); echo polygonArea($X, $Y, $n); // This code is contributed by ajit?>
<script> // JavaScript program to evaluate area// of a polygon using shoelace formula // (X[i], Y[i]) are coordinates of i'th point. function polygonArea(X, Y, n) { // Initialize area let area = 0.0; // Calculate value of shoelace formula let j = n - 1; for (let i = 0; i < n; i++) { area += (X[j] + X[i]) * (Y[j] - Y[i]); // j is previous vertex to i j = i; } // Return absolute value return Math.abs(area / 2.0); } // Driver Code let X = [0, 2, 4]; let Y = [1, 3, 7]; let n = 3; document.write(polygonArea(X, Y, n)); // This code is contributed by target_2. </script>
Output :
2
Why is it called Shoelace Formula? The formula is called so because of the way we evaluate it. Example :
Let the input vertices be
(0, 1), (2, 3), and (4, 7).
Evaluation procedure matches with process of tying
shoelaces.
We write vertices as below
0 1
2 3
4 7
0 1 [written twice]
we evaluate positive terms as below
0 \ 1
2 \ 3
4 \ 7
0 1
i.e., 0*3 + 2*7 + 4*1 = 18
we evaluate negative terms as below
0 1
2 / 3
4 / 7
0 / 1
i.e., 0*7 + 4*3 + 2*1 = 14
Area = 1/2 (18 - 14) = 2
See this for a clearer image.
How does this work? We can always divide a polygon into triangles. The area formula is derived by taking each edge AB and calculating the (signed) area of triangle ABO with a vertex at the origin O, by taking the cross-product (which gives the area of a parallelogram) and dividing by 2. As one wraps around the polygon, these triangles with positive and negative areas will overlap, and the areas between the origin and the polygon will be canceled out and sum to 0, while only the area inside the reference triangle remains. [Source: Wiki]
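As a small Python sketch of this idea (assuming nothing beyond the example vertices used earlier): summing the signed areas of the triangles formed by the origin and every edge gives the same result as the shoelace formula, 2 for the vertices (0, 1), (2, 3) and (4, 7).
# Sum the signed areas of the triangles (O, V[i], V[i+1]); the cross product
# X[i]*Y[j] - X[j]*Y[i] is twice the signed area of such a triangle.
def polygon_area_from_triangles(X, Y):
    n = len(X)
    total = 0.0
    for i in range(n):
        j = (i + 1) % n  # next vertex, wrapping around to the first one
        total += X[i] * Y[j] - X[j] * Y[i]
    return abs(total) / 2.0

print(polygon_area_from_triangles([0, 2, 4], [1, 3, 7]))  # prints 2.0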
For a better understanding look at the following diagrams:
Area Of Triangle Using Cross Product
Dividing Polygons into Smaller Triangles to compute Area
Similarly, for Irregular Polygons, we can form triangles to compute the Area
Related articles : Minimum Cost Polygon Triangulation, Find Simple Closed Path for a given set of points. This article is contributed by Utkarsh Trivedi. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
Circle and Lattice Points
Program for distance between two points on earth
Convex Hull | Set 1 (Jarvis's Algorithm or Wrapping)
Queries on count of points lie inside a circle
Convex Hull | Set 2 (Graham Scan)
Given n line segments, find if any two segments intersect
Line Clipping | Set 1 (Cohen–Sutherland Algorithm)
Closest Pair of Points | O(nlogn) Implementation
Check if a point lies inside a rectangle | Set-2
Window to Viewport Transformation in Computer Graphics with Implementation | [
{
"code": null,
"e": 24415,
"s": 24387,
"text": "\n07 Nov, 2021"
},
{
"code": null,
"e": 24639,
"s": 24415,
"text": "Given ordered coordinates of a polygon with n vertices. Find the area of the polygon. Here ordered means that the coordinates are given either in a clockwise manner or anticlockwise from the first vertex to last.Examples : "
},
{
"code": null,
"e": 24753,
"s": 24639,
"text": "Input : X[] = {0, 4, 4, 0}, Y[] = {0, 0, 4, 4};\nOutput : 16\n\nInput : X[] = {0, 4, 2}, Y[] = {0, 0, 4}\nOutput : 8"
},
{
"code": null,
"e": 24820,
"s": 24753,
"text": "We can compute the area of a polygon using the Shoelace formula. "
},
{
"code": null,
"e": 24825,
"s": 24820,
"text": "Area"
},
{
"code": null,
"e": 24873,
"s": 24825,
"text": "= | 1/2 [ (x1y2 + x2y3 + ... + xn-1yn + xny1) –"
},
{
"code": null,
"e": 24913,
"s": 24873,
"text": "(x2y1 + x3y2 + ... + xnyn-1 + x1yn) ] |"
},
{
"code": null,
"e": 25091,
"s": 24913,
"text": "The above formula is derived by following the cross product of the vertices to get the Area of triangles formed in the polygon. Below is an implementation of the above formula. "
},
{
"code": null,
"e": 25095,
"s": 25091,
"text": "CPP"
},
{
"code": null,
"e": 25100,
"s": 25095,
"text": "Java"
},
{
"code": null,
"e": 25108,
"s": 25100,
"text": "Python3"
},
{
"code": null,
"e": 25111,
"s": 25108,
"text": "C#"
},
{
"code": null,
"e": 25115,
"s": 25111,
"text": "PHP"
},
{
"code": null,
"e": 25126,
"s": 25115,
"text": "Javascript"
},
{
"code": "// C++ program to evaluate area of a polygon using// shoelace formula#include <bits/stdc++.h>using namespace std; // (X[i], Y[i]) are coordinates of i'th point.double polygonArea(double X[], double Y[], int n){ // Initialize area double area = 0.0; // Calculate value of shoelace formula int j = n - 1; for (int i = 0; i < n; i++) { area += (X[j] + X[i]) * (Y[j] - Y[i]); j = i; // j is previous vertex to i } // Return absolute value return abs(area / 2.0);} // Driver program to test above functionint main(){ double X[] = {0, 2, 4}; double Y[] = {1, 3, 7}; int n = sizeof(X)/sizeof(X[0]); cout << polygonArea(X, Y, n);}",
"e": 25813,
"s": 25126,
"text": null
},
{
"code": "// Java program to evaluate area // of a polygon using shoelace formulaimport java.io.*; class GFG { // (X[i], Y[i]) are coordinates of i'th point. public static double polygonArea(double X[], double Y[], int n) { // Initialize area double area = 0.0; // Calculate value of shoelace formula int j = n - 1; for (int i = 0; i < n; i++) { area += (X[j] + X[i]) * (Y[j] - Y[i]); // j is previous vertex to i j = i; } // Return absolute value return Math.abs(area / 2.0); } // Driver program public static void main (String[] args) { double X[] = {0, 2, 4}; double Y[] = {1, 3, 7}; int n = 3; System.out.println(polygonArea(X, Y, n)); } }// This code is contributed by Sunnnysingh",
"e": 26730,
"s": 25813,
"text": null
},
{
"code": "# python3 program to evaluate# area of a polygon using# shoelace formula # (X[i], Y[i]) are coordinates of i'th point.def polygonArea(X, Y, n): # Initialize area area = 0.0 # Calculate value of shoelace formula j = n - 1 for i in range(0,n): area += (X[j] + X[i]) * (Y[j] - Y[i]) j = i # j is previous vertex to i # Return absolute value return int(abs(area / 2.0)) # Driver program to test above functionX = [0, 2, 4]Y = [1, 3, 7]n = len(X)print(polygonArea(X, Y, n)) # This code is contributed by# Smitha Dinesh Semwal",
"e": 27303,
"s": 26730,
"text": null
},
{
"code": "// C# program to evaluate area// of a polygon using shoelace formulausing System; class GFG { // (X[i], Y[i]) are coordinates of i'th point. public static double polygonArea(double[] X, double[] Y, int n) { // Initialize area double area = 0.0; // Calculate value of shoelace formula int j = n - 1; for (int i = 0; i < n; i++) { area += (X[j] + X[i]) * (Y[j] - Y[i]); // j is previous vertex to i j = i; } // Return absolute value return Math.Abs(area / 2.0); } // Driver program public static void Main() { double[] X = { 0, 2, 4 }; double[] Y = { 1, 3, 7 }; int n = 3; Console.WriteLine(polygonArea(X, Y, n)); }} // This code is contributed by vt_m.",
"e": 28164,
"s": 27303,
"text": null
},
{
"code": "<?php// PHP program to evaluate area of // a polygon using shoelace formula // (X[i], Y[i]) are // coordinates of i'th point.function polygonArea($X, $Y, $n){ // Initialize area $area = 0.0; // Calculate value of // shoelace formula $j = $n - 1; for ($i = 0; $i < $n; $i++) { $area += ($X[$j] + $X[$i]) * ($Y[$j] - $Y[$i]); // j is previous vertex to i $j = $i; } // Return absolute value return abs($area / 2.0);} // Driver Code$X = array(0, 2, 4);$Y = array(1, 3, 7); $n = sizeof($X); echo polygonArea($X, $Y, $n); // This code is contributed by ajit?>",
"e": 28825,
"s": 28164,
"text": null
},
{
"code": "<script> // JavaScript program to evaluate area// of a polygon using shoelace formula // (X[i], Y[i]) are coordinates of i'th point. function polygonArea(X, Y, n) { // Initialize area let area = 0.0; // Calculate value of shoelace formula let j = n - 1; for (let i = 0; i < n; i++) { area += (X[j] + X[i]) * (Y[j] - Y[i]); // j is previous vertex to i j = i; } // Return absolute value return Math.abs(area / 2.0); } // Driver Code let X = [0, 2, 4]; let Y = [1, 3, 7]; let n = 3; document.write(polygonArea(X, Y, n)); // This code is contributed by target_2. </script>",
"e": 29583,
"s": 28825,
"text": null
},
{
"code": null,
"e": 29593,
"s": 29583,
"text": "Output : "
},
{
"code": null,
"e": 29595,
"s": 29593,
"text": "2"
},
{
"code": null,
"e": 29701,
"s": 29595,
"text": "Why is it called Shoelace Formula? The formula is called so because of the way we evaluate it. Example : "
},
{
"code": null,
"e": 30172,
"s": 29701,
"text": "Let the input vertices be\n (0, 1), (2, 3), and (4, 7). \n\nEvaluation procedure matches with process of tying\nshoelaces.\n\nWe write vertices as below\n 0 1\n 2 3\n 4 7\n 0 1 [written twice]\n\nwe evaluate positive terms as below\n 0 \\ 1\n 2 \\ 3\n 4 \\ 7\n 0 1 \ni.e., 0*3 + 2*7 + 4*1 = 18 \n\nwe evaluate negative terms as below\n 0 1\n 2 / 3\n 4 / 7\n 0 / 1 \ni.e., 0*7 + 4*3 + 2*1 = 14\n\nArea = 1/2 (18 - 14) = 2 \n\nSee this for a clearer image."
},
{
"code": null,
"e": 30715,
"s": 30172,
"text": "How does this work? We can always divide a polygon into triangles. The area formula is derived by taking each edge AB and calculating the (signed) area of triangle ABO with a vertex at the origin O, by taking the cross-product (which gives the area of a parallelogram) and dividing by 2. As one wraps around the polygon, these triangles with positive and negative areas will overlap, and the areas between the origin and the polygon will be canceled out and sum to 0, while only the area inside the reference triangle remains. [Source: Wiki] "
},
{
"code": null,
"e": 30774,
"s": 30715,
"text": "For a better understanding look at the following diagrams:"
},
{
"code": null,
"e": 30811,
"s": 30774,
"text": "Area Of Triangle Using Cross Product"
},
{
"code": null,
"e": 30868,
"s": 30811,
"text": "Dividing Polygons into Smaller Triangles to compute Area"
},
{
"code": null,
"e": 30945,
"s": 30868,
"text": "Similarly, for Irregular Polygons, we can form triangles to compute the Area"
},
{
"code": null,
"e": 31220,
"s": 30945,
"text": "Related articles : Minimum Cost Polygon Triangulation Find Simple Closed Path for a given set of pointsThis article is contributed by Utkarsh Trivedi. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above"
},
{
"code": null,
"e": 31226,
"s": 31220,
"text": "jit_t"
},
{
"code": null,
"e": 31233,
"s": 31226,
"text": "dybydx"
},
{
"code": null,
"e": 31242,
"s": 31233,
"text": "target_2"
},
{
"code": null,
"e": 31259,
"s": 31242,
"text": "surinderdawra388"
},
{
"code": null,
"e": 31267,
"s": 31259,
"text": "kova484"
},
{
"code": null,
"e": 31288,
"s": 31267,
"text": "area-volume-programs"
},
{
"code": null,
"e": 31297,
"s": 31288,
"text": "triangle"
},
{
"code": null,
"e": 31307,
"s": 31297,
"text": "Geometric"
},
{
"code": null,
"e": 31317,
"s": 31307,
"text": "Geometric"
},
{
"code": null,
"e": 31415,
"s": 31317,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 31424,
"s": 31415,
"text": "Comments"
},
{
"code": null,
"e": 31437,
"s": 31424,
"text": "Old Comments"
},
{
"code": null,
"e": 31463,
"s": 31437,
"text": "Circle and Lattice Points"
},
{
"code": null,
"e": 31512,
"s": 31463,
"text": "Program for distance between two points on earth"
},
{
"code": null,
"e": 31565,
"s": 31512,
"text": "Convex Hull | Set 1 (Jarvis's Algorithm or Wrapping)"
},
{
"code": null,
"e": 31612,
"s": 31565,
"text": "Queries on count of points lie inside a circle"
},
{
"code": null,
"e": 31646,
"s": 31612,
"text": "Convex Hull | Set 2 (Graham Scan)"
},
{
"code": null,
"e": 31704,
"s": 31646,
"text": "Given n line segments, find if any two segments intersect"
},
{
"code": null,
"e": 31755,
"s": 31704,
"text": "Line Clipping | Set 1 (Cohen–Sutherland Algorithm)"
},
{
"code": null,
"e": 31804,
"s": 31755,
"text": "Closest Pair of Points | O(nlogn) Implementation"
},
{
"code": null,
"e": 31853,
"s": 31804,
"text": "Check if a point lies inside a rectangle | Set-2"
}
] |
An Advanced Example of the Tensorflow Estimator Class | by Tijmen Verhulsdonck | Towards Data Science | Estimators were introduced in version 1.3 of the Tensorflow API, and are used to abstract and simplify training, evaluation and prediction. If you haven’t worked with Estimators before I suggest starting by reading this article to get some familiarity, as I won’t be covering all of the basics of using estimators. Instead I hope to demystify and clarify some of the more detailed aspects of using Estimators and of switching to Estimators from an existing code-base.
Anyone who has been working with Tensorflow for a long time will know that it used to be a lot more time consuming to set up and use Tensorflow than it is today. Today there are numerous libraries that simplify development such as slim, tflearn and more. These often reduced the code needed to define a network from multiple files and classes to a single function. They also simplified managing the training and evaluation, and to a small extent data preparation and loading. The Estimator class of Tensorflow does not change anything about the network definition but it simplifies and abstracts managing training, evaluation and prediction. It stands out from the other libraries due to its low level optimizations, useful abstractions and support from the core Tensorflow dev team.
That is the long story; in short, Estimators are faster to run and to implement, simpler (once you get used to them) and well supported.
This article will break down some of the features of the Estimators, using examples from a GitHub project of mine SqueezeNext-Tensorflow. The network implemented in the project is from a paper released in 2018 called “SqueezeNext”. This network is very lightweight and fast due to a novel approach to separable convolutions. The researchers released a caffe version but to facilitate experimentation using the available Tensorflow libraries I recreated the algorithm from the paper in Tensorflow.
The article is set up using the following structure:
Setting Up The Estimator
Data Loading with Estimators and Datasets
Defining Prediction, Training and Evaluation modes
Session hooks and scaffolds
Prediction
In order to set up the Estimator for a certain training and evaluation roster it’s good to first understand how the Estimator setup works. To construct an instance of the Estimator class you can use the following call:
classifier = tf.estimator.Estimator(model_dir=model_dir, model_fn=model_fn, params=params)
In this call “model_dir” is the path to the folder where the Estimator should store and load checkpoints and event files. The “model_fn” parameter is a function that consumes the features, labels, mode and params in the following order:
def model_fn(features, labels, mode, params):
The Estimator will always supply those parameters when it executes the model function for training, evaluation or prediction. The features parameter contains a dictionary of tensors with the features that you want to feed to your network, the labels parameter contains a dictionary of tensors with the labels you want to use for training. These two parameters are generated by the input fn which will be explained later. The third parameter mode describes whether the “model_fn” is being called for training, evaluation or prediction. The final parameter params is a simple dictionary that can contain python variables and such that can be used during network definition (think total steps for learning rate schedule etc.).
Now that the Estimator object is initialized it can be used to start training and evaluating the network. Below is an excerpt of train.py which implements the above instructions to create an Estimator object after which it starts training and evaluating.
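A minimal sketch of such a train.py excerpt, assuming placeholder values for the config keys and step counts and assuming that train_input_fn, eval_input_fn and the model_fn described in this article are defined elsewhere, could look like this:
import tensorflow as tf

# Assumed hyper parameters handed to model_fn through the params argument
config = {
    "total_steps": 100000,
    "model_dir": "./models/squeezenext",
}

classifier = tf.estimator.Estimator(model_dir=config["model_dir"],
                                    model_fn=model_fn,
                                    params=config)

# Train for max_steps, running a 100-step evaluation at most every 1800 seconds
train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
                                    max_steps=config["total_steps"])
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn,
                                  steps=100,
                                  throttle_secs=1800)

tf.estimator.train_and_evaluate(classifier, train_spec, eval_spec)

# Explicitly evaluate once more after training finishes
classifier.evaluate(input_fn=eval_input_fn, steps=100)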
This snippet of code first sets up the config dictionary for the params, and uses it together with the “model_fn” to construct an Estimator Object. It then creates a “TrainSpec” with an “input_fn” and the “max_steps” the model should train for. A similar thing is done to create the “EvalSpec” where the “steps” are the number of steps of each evaluation, and “throttle_secs” defines the interval in seconds between each evaluation. The “tf.estimator.train_and_evaluate” is used to start the training and evaluation roster using the Estimator object, “TrainSpec” and “EvalSpec”. Finally an evaluation is explicitly called once more after the training finishes.
In my opinion the best thing about Estimators is that you can combine them easily with the Dataset class. Before the Estimator and Dataset class combination it was hard to prefetch and process examples on the CPU asynchronously from the GPU. Prefetching and processing on the CPU would in theory make sure there would be a batch of examples ready in memory any time the GPU was done processing the previous batch; in practice this was easier said than done. The problem with just assigning the fetch and process step to the CPU is that unless it is done in parallel with the model processing on the GPU, the GPU still has to wait for the CPU to fetch the data from the storage disk and process it before it can start processing the next batch. For a long time queue runners, asynchronous prefetching using python threads and other solutions were suggested, and in my experience none of them ever worked flawlessly and efficiently.
But using the Estimator class and combining it with the Dataset class is very easy, clean and works in parallel with the GPU. It allows the CPU to fetch, preprocess and queue batches of examples such that there is always a new batch ready for the GPU. Using this method I have seen the utilization of a GPU stay close to 100% during training, and the global steps per second for small models (<10MB) increase 4-fold.
The Tensorflow Dataset class is designed as an E.T.L. process, which stands for Extract, Transform and Load. These steps will be defined soon, but this guide will only explain how to use tfrecords in combination with the Dataset class. For other formats (csv, numpy etc.) this page has a good write-up, however I suggest using tfrecords as they offer better performance and are easier to integrate with a Tensorflow development pipeline.
The whole E.T.L. process can be implemented using the Dataset class in only 7 lines of code, as shown below. It might look complicated at first, but read on for a detailed explanation of each line’s functionality.
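As a minimal sketch, the 7 lines simply chain together the Extract, Transform and Load calls explained in the next subsections (glob_pattern, threads, training and the self.* attributes come from those snippets):
# Extract: list tfrecords, read them in parallel, shuffle and repeat
files = tf.data.Dataset.list_files(glob_pattern, shuffle=True)
dataset = files.apply(tf.contrib.data.parallel_interleave(
    lambda filename: tf.data.TFRecordDataset(filename),
    cycle_length=threads * 2))
dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat(32 * self.batch_size))
# Transform: parse every example in parallel and batch the results
dataset = dataset.map(
    map_func=lambda example: _parse_function(example, self.image_size,
                                              self.num_classes, training=training),
    num_parallel_calls=threads)
dataset = dataset.batch(batch_size=self.batch_size)
# Load: prefetch batches so the GPU never waits on the CPU
dataset = dataset.prefetch(buffer_size=self.batch_size)
input_fn = dataset.make_one_shot_iterator().get_next()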
Extract
The first step in a Dataset input pipeline is to load the data from the tfrecords into memory. This starts with making a list of tfrecords available using a glob pattern e.g. “./Datasets/train-*.tfrecords” and the list_files function of the Dataset class. The parallel_interleave function is applied to the list of files, which ensures parallel extraction of the data as explained here. Finally a merged shuffle and repeat function is used to prefetch a certain number of examples from the tfrecords and shuffle them. The repeat ensures that there are always examples available by repeating from the start once the last example of every tfrecord is read.
files = tf.data.Dataset.list_files(glob_pattern, shuffle=True)dataset = files.apply(tf.contrib.data.parallel_interleave( lambda filename: tf.data.TFRecordDataset(filename), cycle_length=threads*2) )dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat (32*self.batch_size))
Transform
Now that the data is available in memory the next step is to transform it, preferably into something that does not need any further processing in order to be fed to the neural network input. A call to the dataset’s map function is required to do this as shown below, where “map_func” is the function applied to every individual example on the CPU and “num_parallel_calls” the number of parallel invocations of the “map_func” to use.
threads = multiprocessing.cpu_count()dataset = dataset.map(map_func=lambda example: _parse_function(example, self.image_size, self.num_classes,training=training), num_parallel_calls=threads)
In this case the “map_func” is the parse function shown below; this function processes an example from the tfrecords (created using this repo) and outputs a tuple of dictionaries containing tensors representing the features and labels respectively. Note the use of a lambda function to pass python variables separately from the example to the parse function, as the example is unparsed data from the tfrecord and provided by the Dataset class.
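A minimal sketch of such a parse function, assuming the tfrecords store a JPEG-encoded image under “image/encoded” and an integer label under “image/class/label” (these keys and the light augmentation are assumptions, not the repository’s exact code), could look like this:
def _parse_function(example, image_size, num_classes, training=True):
    # Layout of a single serialized example (assumed keys)
    keys_to_features = {
        "image/encoded": tf.FixedLenFeature([], tf.string),
        "image/class/label": tf.FixedLenFeature([], tf.int64),
    }
    parsed = tf.parse_single_example(example, keys_to_features)

    # Decode, scale to [0, 1] and resize the image
    image = tf.image.decode_jpeg(parsed["image/encoded"], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image, [image_size, image_size])
    if training:
        # Cheap augmentation, kept light so the CPU does not become a bottleneck
        image = tf.image.random_flip_left_right(image)

    label_idx = parsed["image/class/label"]
    label_vec = tf.one_hot(label_idx, num_classes)

    # Return a (features, labels) tuple of dictionaries for the Estimator
    return {"image": image}, {"class_idx": label_idx, "class_vec": label_vec}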
Keep in mind that this parse function only processes one example at a time, as shown by the tf.parse_single_example call, but does so a number of times in parallel. To prevent running into any CPU bottlenecks it is important to keep the parse function fast; some tips on how to do this can be found here. All the individually processed examples are then batched and ready for processing.
dataset = dataset.batch(batch_size=self.batch_size)
Load
The final step of the ETL process is loading the batched examples onto the accelerator (GPU) ready for processing. In the Dataset class this is achieved by prefetching, which is done by calling the prefetch function of the dataset.
dataset = dataset.prefetch(buffer_size=self.batch_size)
Prefetching uncouples the producer (Dataset object on CPU) from the consumer (GPU), this allows them to run in parallel for increased throughput.
Input Function
Once the whole E.T.L. process is fully defined and implemented, the “input_fn” can be created by initializing the iterator and grabbing the next example using the following line:
input_fn = dataset.make_one_shot_iterator().get_next()
This input function is used by the Estimator as an input for the model function.
A quick reminder: the model function the Estimator invokes during training, evaluation and prediction should accept the following arguments, as explained earlier:
def model_fn(features, labels, mode, params):
The features and labels are in this case supplied by the Dataset class, the params are mostly for hyper parameters used during network initialization, but the mode (of type “tf.estimator.ModeKeys”) dictates what action the model is going to be performing. Each mode is used to setup the model for that specific purpose, the available modes are prediction, training and evaluation.
The different modes can be used by calling the respective functions (shown above) of an Estimator object, namely “predict”, “train” and “evaluate”. The code-path of every mode has to return an “EstimatorSpec” with the required fields for that mode; e.g. when the mode is predict, it has to return an “EstimatorSpec” that includes the predictions field:
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
Predict
The most basic mode is the prediction mode “tf.estimator.ModeKeys.PREDICT”, which as the name suggests is used to do predictions on data using the Estimator object. In this mode the “EstimatorSpec” expects a dictionary of tensors which will be executed and the results of which will be made available as numpy values to python.
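A minimal sketch of such a predict code-path inside “model_fn”, assuming the network output is a logits tensor and using assumed dictionary keys, could look like this:
predictions = {
    # Index of the most likely class
    "class_idx": tf.argmax(logits, axis=-1),
    # Softmax probabilities for every class
    "probabilities": tf.nn.softmax(logits, name="softmax_tensor"),
}

if mode == tf.estimator.ModeKeys.PREDICT:
    # Only the predictions field is required in this mode
    return tf.estimator.EstimatorSpec(mode, predictions=predictions)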
In this excerpt you can see the predictions dictionary set up to generate classical image classification results. The if statement ensures that this code path is only executed when the predict function of an Estimator object is executed. The dictionary of tensors is passed to the “EstimatorSpec” as the predictions argument, together with the mode. It is smart to define the prediction code-path first as it is the simplest, and since most of the code is used for training and evaluation as well, it can show problems early on.
Train
To train a model in the “tf.estimator.ModeKeys.TRAIN” mode it is necessary to create a so-called “train_op”. This op is a tensor that, when executed, performs the back propagation to update the model. Simply put, it is the minimize function of an optimizer such as the AdamOptimizer. The “train_op” and the scalar loss tensor are the minimum required arguments to create an “EstimatorSpec” for training. Below you can see an example of this being done.
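A minimal sketch of such a training code-path, assuming a softmax cross entropy loss on the labels from the parse function sketch above and a learning_rate entry in params (stats_hook and scaffold are the optional arguments discussed next), could look like this:
if mode == tf.estimator.ModeKeys.TRAIN:
    # Scalar loss tensor (the exact loss function is an assumption)
    loss = tf.losses.softmax_cross_entropy(labels["class_vec"], logits)

    # train_op performs back propagation when executed; passing the global step
    # keeps the Estimator's checkpointing and logging in sync
    optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())

    return tf.estimator.EstimatorSpec(tf.estimator.ModeKeys.TRAIN,
                                      loss=loss,
                                      train_op=train_op,
                                      training_hooks=[stats_hook],
                                      scaffold=scaffold)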
Note the non-required arguments “training_hooks” and “scaffold”, these will be further explained later but in short they are used to add functionality to the setup and tear-down of a model and training session.
Evaluate
The final mode that needs a code-path in the model function is “tf.estimator.ModeKeys.EVAL”. The most important thing in order to perform an eval is the metrics dictionary. This should be structured as a dictionary of tuples, where the first element of the tuple is a tensor containing the actual metric value and the second element is the tensor that updates the metric value. The update operation is necessary to ensure a reliable metric calculation over the whole validation set. Since it will often be impossible to evaluate the whole validation set in one batch, multiple batches have to be used. To prevent noise in the metric value due to per-batch differences, the update operation is used to keep a running average (or gather all results) over all batches. This setup ensures the metric value is calculated over the whole validation set and not a single batch.
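A minimal sketch of such an evaluation code-path, reusing the logits and predictions from the earlier sketches and assuming an accuracy metric and the dictionary keys used before, could look like this:
if mode == tf.estimator.ModeKeys.EVAL:
    loss = tf.losses.softmax_cross_entropy(labels["class_vec"], logits)

    # tf.metrics functions already return the required (value, update_op) tuple
    metrics = {
        "accuracy": tf.metrics.accuracy(labels["class_idx"],
                                        predictions["class_idx"])
    }

    # Hook that executes the tf.summary.image op during evaluation and writes
    # the result to the same directory the Estimator uses
    eval_summary_hook = tf.train.SummarySaverHook(
        save_steps=1,
        output_dir=params["model_dir"],
        summary_op=tf.summary.image("validation_images", features["image"]))

    return tf.estimator.EstimatorSpec(tf.estimator.ModeKeys.EVAL,
                                      loss=loss,
                                      eval_metric_ops=metrics,
                                      evaluation_hooks=[eval_summary_hook])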
In the example above only the “loss” and “eval_metric_ops” are required arguments, the third argument “evaluation_hooks” is used to execute “tf.summary” operations as they are not automatically executed when running the Estimators evaluate function. In this example the “evaluation_hooks” are used to store images from the validation set to display using a Tensorboard. To achieve this a “SummarySaverHook” with the same output directory as the “model_dir” is initialized with a “tf.summary.image” operation and passed (encapsulated in an iterable) to the “EstimatorSpec”.
Now that I explained the advantages of using the Estimator class and how to use it, I hope you are excited to start using Estimators for your Tensorflow projects. However switching to Estimators from an existing code-base is not necessarily straightforward. As the Estimator class abstracts away most of the initialization and execution during training, any custom initialization and execution loops can no longer be implemented using “tf.Session” and “sess.run()”. This could be a reason for someone with a large code-base not to transition to the Estimator class, as there is no easy and straightforward transition path. While this article is not a transition guide, I will clarify some of the new procedures for initialization and execution. This will hopefully fill in some of the gaps left by the official tutorials.
The main tools to influence the initialization and execution loop are the “Scaffold” object and the “SessionRunHook” object. The “Scaffold” is used for custom first time initialization of a model and can only be used to construct an “EstimatorSpec” in training mode. The “SessionRunHook” on the other hand can be used to construct an “EstimatorSpec” for each execution mode and is used each time train, evaluate or predict is called. Both the “Scaffold” and “SessionRunHook” provide certain functions that the Estimator class calls during use. Below you can see a timeline showing which function is called and when during the initialization and training process. This also shows that the Estimator under the hood still uses “tf.Session” and “sess.run”.
Scaffolds
A scaffold can be passed as the scaffold argument when constructing the “EstimatorSpec” for training. In a scaffold you can specify a number of different operations to be called during various stages of initialization. However for now we will focus on the “init_fn” argument, as the others are more for distributed settings which this series will not cover. The “init_fn” is called after graph finalization but before “sess.run” is called for the first time; it is only used once and thus perfect for custom variable initialization. A good example of this is if you want to finetune using a pretrained network and want to be selective about which variables to restore.
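A minimal sketch of such a tools.fine_tune.init_weights helper (an assumption of what it looks like; the repository’s actual code may differ) could look like this:
import tensorflow as tf

slim = tf.contrib.slim

def init_weights(fine_tune_ckpt, scope_name="squeezenext"):
    # No checkpoint given or found: fall back to the default initialization
    if not fine_tune_ckpt or not tf.train.checkpoint_exists(fine_tune_ckpt):
        return None

    # Only restore the variables that belong to the network scope
    variables_to_restore = slim.get_variables_to_restore(include=[scope_name])
    initializer_fn = slim.assign_from_checkpoint_fn(fine_tune_ckpt,
                                                    variables_to_restore,
                                                    ignore_missing_vars=True)

    # Wrap it in the (scaffold, session) signature that tf.train.Scaffold expects
    def init_fn(scaffold, session):
        initializer_fn(session)

    return init_fn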
Above you can see an example of this implemented: the function checks for the existence of the fine-tune checkpoint and creates an initializer function using slim that filters on the “scope_name” variable. This function is then encapsulated in the function format that the Estimator expects, which consumes the “Scaffold” object itself and a “tf.Session” object. Inside the “init_fn” the session is used to run the “initializer_fn”.
# setup fine tune scaffoldscaffold = tf.train.Scaffold(init_op=None, init_fn=tools.fine_tune.init_weights(params["fine_tune_ckpt"]))# create estimator training specreturn tf.estimator.EstimatorSpec(tf.estimator.ModeKeys.TRAIN, loss=loss, train_op=train_op,scaffold=scaffold)
This “init_fn” can then be passed as an argument to construct a “Scaffold” object, as shown above. This “Scaffold” object is used to construct the “EstimatorSpec” for the train mode. While this is a relatively simple example (and can also be achieved with “WarmStartSettings”), it can easily be expanded for more advanced one-time initialization functionality.
SessionRunHooks
As shown in the timeline earlier, the “SessionRunHook” class can be used to alter certain parts of the training, evaluation or prediction loop. This is done using a concept known as hooking; I will not go into too much detail on what hooking is exactly, as in this case it is pretty self-explanatory. But using a “SessionRunHook” is slightly different from using a scaffold. Instead of initializing the “SessionRunHook” object with one or more functions as arguments, it is used as a super class. It is then possible to overwrite the methods “begin”, “after_create_session”, “before_run”, “after_run” and “end” to extend functionality. Below is an excerpt showing how to overwrite the “begin” function.
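A minimal sketch of a hook matching the ModelStats call used further down (the body of begin, which just logs the parameter count, is an assumption; only the overriding pattern matters here) could look like this:
import tensorflow as tf

class ModelStats(tf.train.SessionRunHook):
    def __init__(self, scope_name, model_dir, batch_size):
        self.scope_name = scope_name
        self.model_dir = model_dir
        self.batch_size = batch_size

    def begin(self):
        # Called once before the session is created; the graph can still be
        # modified here. As an example, log the number of trainable parameters.
        trainable = tf.trainable_variables(self.scope_name)
        param_count = sum(v.get_shape().num_elements() for v in trainable)
        tf.logging.info("%s: %d trainable parameters (batch size %d)",
                        self.scope_name, param_count, self.batch_size)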
Keep in mind that the “begin” function does not get any arguments, but each hook method is slightly different; please refer here to check what arguments are passed to which function. Now one can create a “SessionRunHook” with the extended begin method by calling:
stats_hook = tools.stats.ModelStats("squeezenext", params["model_dir"], self.batch_size)
This object can then be used when constructing an “EstimatorSpec” for training, evaluating and predicting by passing it to the respective arguments “training_hooks”, “evaluation_hooks” and “prediction_hooks”. Note that each of these argument names ends with hooks, and they expect an iterable of one or more hooks, so always encapsulate your hooks in an iterable.
Training and evaluation have been covered extensively, but prediction has not yet been fully explained. This is mostly because prediction is the easiest of the 3 (train, eval, predict) Estimator modes, but there are still some pitfalls one can run into. I explained earlier how to set up an “EstimatorSpec” for prediction, but not how to use it. Below is a small example showing how to use an Estimator for prediction.
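A minimal sketch, assuming the feature key “image”, a 227x227 input size and the classifier Estimator created earlier, could look like this:
import numpy as np
import tensorflow as tf

# Stand-in for a real, already preprocessed image (a batch of one example)
image = np.zeros((1, 227, 227, 3), dtype=np.float32)

predict_input_fn = tf.estimator.inputs.numpy_input_fn(x={"image": image},
                                                      num_epochs=1,
                                                      shuffle=False)

# predict() returns a generator; iterating over it runs the graph and yields
# one dictionary of numpy values per input example
predictions = classifier.predict(input_fn=predict_input_fn)
for pred in predictions:
    print(pred["class_idx"], pred["probabilities"].max())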
For prediction it does not make sense to make/use tfrecords, so instead the “numpy_input_fn” is used. To the “x” argument a dictionary of numpy arrays is passed containing the features, in this case an image.
Note: if you preprocess images in the “input_fn” during training, you would need to perform the same preprocessing here, either in numpy before passing them to the “numpy_input_fn” or in Tensorflow.
With the numpy input function set up, a simple call to predict is enough to create a prediction. The “predictions” variable in the example above will not actually contain the result yet; instead it is a generator object, and to get the actual result you need to iterate over it. This iteration will start the Tensorflow execution and produce the actual result.
I hope I was able to demystify some of the less documented features of Tensorflow Estimators and fill in the gaps left by the official tutorials. All the code in this post comes from this github repo, so please head over there for an example of Estimators used in combination with the Datasets class. If you have any more questions please leave them here and I’ll try my best to answer them. | [
{
"code": null,
"e": 640,
"s": 172,
"text": "Estimators were introduced in version 1.3 of the Tensorflow API, and are used to abstract and simplify training, evaluation and prediction. If you haven’t worked with Estimators before I suggest to start by reading this article and get some familiarity as I won’t be covering all of the basics when using estimators. Instead I hope to demystify and clarify some aspects more detailed aspects of using Estimators and switching to Estimators from an existing code-base."
},
{
"code": null,
"e": 1422,
"s": 640,
"text": "Anyone who has been working with Tensorflow for a long time will know that it used to be a lot more time consuming to setup and use Tensorflow as it is today. Today there are numerous libraries that simplify development such as slim, tflearn and more. These often reduced the code needed to define a network from multiple files and classes to a single function. They also simplified managing the training and evaluation, and to a small extend data preparation and loading. The Estimator class of Tensorflow does not change anything about the network definition but it simplifies and abstracts managing training, evaluation and prediction. It stands out from the other libraries due to it’s low level optimizations, useful abstractions and support from the core Tensorflow dev team."
},
{
"code": null,
"e": 1558,
"s": 1422,
"text": "That is the long story, in short Estimators are faster to run and to implement, simpler (once you get used to them) and well supported."
},
{
"code": null,
"e": 2055,
"s": 1558,
"text": "This article will break down some of the features of the Estimators, using examples from a GitHub project of mine SqueezeNext-Tensorflow. The network implemented in the project is from a paper released in 2018 called “SqueezeNext”. This network is very lightweight and fast due to a novel approach to separable convolutions. The researchers released a caffe version but to facilitate experimentation using the available Tensorflow libraries I recreated the algorithm from the paper in Tensorflow."
},
{
"code": null,
"e": 2107,
"s": 2055,
"text": "The article is setup using the following structure:"
},
{
"code": null,
"e": 2132,
"s": 2107,
"text": "Setting Up The Estimator"
},
{
"code": null,
"e": 2174,
"s": 2132,
"text": "Data Loading with Estimators and Datasets"
},
{
"code": null,
"e": 2224,
"s": 2174,
"text": "Defining Prediction,Training and Evaluation modes"
},
{
"code": null,
"e": 2252,
"s": 2224,
"text": "Session hooks and scaffolds"
},
{
"code": null,
"e": 2263,
"s": 2252,
"text": "Prediction"
},
{
"code": null,
"e": 2481,
"s": 2263,
"text": "In order to setup the Estimator for a certain training and evaluation roster it’s good to first understand how the Estimator setup works. To construct an instance of the Estimator class you can use the following call:"
},
{
"code": null,
"e": 2642,
"s": 2481,
"text": "classifier = tf.estimator.Estimator(model_dir=model_dir, model_fn=model_fn, params=params)"
},
{
"code": null,
"e": 2879,
"s": 2642,
"text": "In this call “model_dir” is the path to the folder where the Estimator should store and load checkpoints and event files. The “model_fn” parameter is a function that consumes the features, labels, mode and params in the following order:"
},
{
"code": null,
"e": 2925,
"s": 2879,
"text": "def model_fn(features, labels, mode, params):"
},
{
"code": null,
"e": 3649,
"s": 2925,
"text": "The Estimator will always supply those parameters when it executes the model function for training, evaluation or prediction. The features parameter contains a dictionary of tensors with the features that you want to feed to your network, the labels parameter contains a dictionary of tensors with the labels you want to use for training. These two parameters are generated by the input fn which will be explained later. The third parameter mode describes whether the “model_fn” is being called for training, evaluation or prediction. The final parameter params is a simple dictionary that can contain python variables and such that can be used during network definition (think total steps for learning rate schedule etc.)."
},
{
"code": null,
"e": 3904,
"s": 3649,
"text": "Now that the Estimator object is initialized it can be used to start training and evaluating the network. Below is an excerpt of train.py which implements the above instructions to create an Estimator object after which it starts training and evaluating."
},
{
"code": null,
"e": 4565,
"s": 3904,
"text": "This snippet of code first sets up the config dictionary for the params, and uses it together with the “model_fn” to construct an Estimator Object. It then creates a “TrainSpec” with an “input_fn” and the “max_steps” the model should train for. A similar thing is done to create the “EvalSpec” where the “steps” are the number of steps of each evaluation, and “throttle_secs” defines the interval in seconds between each evaluation. The “tf.estimator.train_and_evaluate” is used to start the training and evaluation roster using the Estimator object, “TrainSpec” and “EvalSpec”. Finally an evaluation is explicitly called once more after the training finishes."
},
{
"code": null,
"e": 5493,
"s": 4565,
"text": "In my opinion the best things about Estimators, is that you can combine them easily with the Dataset class. Before the Estimator and Dataset class combination it was hard to prefetch and process examples on the CPU asynchronously from the GPU. Prefetching and processing on the CPU would in theory make sure there would be a batch of examples ready in memory any time the GPU was done processing the previous batch, in practice this was easier said then done. The problem with just assigning the fetch and process step to CPU is that unless it is done in parallel with the model processing on the GPU, the GPU still has to wait for the CPU to fetch the data from the storage disk and process it before it can start processing the next batch. For a long time queuerunners, asynchronous prefetching using python threads and other solutions were suggested, and in my experience none of them ever worked flawlessly and efficiently."
},
{
"code": null,
"e": 5902,
"s": 5493,
"text": "But using the Estimator class and combining it with the Dataset class is very easy, clean and works in parallel with the GPU. It allows the CPU to fetch, preprocess and que batches of examples such that there is always a new batch ready for the GPU. Using this method I have seen the utilization of a GPU stay close to 100% during training and global steps per second for small models(<10MB) increase 4 fold."
},
{
"code": null,
"e": 6340,
"s": 5902,
"text": "The Tensorflow Dataset class is designed as an E.T.L. process, which stands for Extract, Transform and Load. These steps will be defined soon, but this guide will only explain how to use tfrecords in combination with the Dataset class. For other formats (csv, numpy etc.) this page has a good write-up, however I suggest using tfrecords as they offer better performance and are easier to integrate with a Tensorflow development pipeline."
},
{
"code": null,
"e": 6551,
"s": 6340,
"text": "The whole E.T.L. process can be implemented using the Dataset class in only 7 lines of code as shown below. It might look complicated at first but read on for a detailed explanation of each lines functionality."
},
{
"code": null,
"e": 6559,
"s": 6551,
"text": "Extract"
},
{
"code": null,
"e": 7214,
"s": 6559,
"text": "The first step in a Dataset input pipeline is to load the data from the tfrecords into memory. This starts with making a list of tfrecords available using a glob pattern e.g. “./Datasets/train-*.tfrecords” and the list_files function of the Dataset class. The parallel_interleave function is applied to the list of files, which ensures parallel extraction of the data as explained here. Finally a merged shuffle and repeat function is used to prefetch a certain number of examples from the tfrecords and shuffle them. The repeat ensures that there are always examples available by repeating from the start once the last example of every tfrecord is read."
},
{
"code": null,
"e": 7613,
"s": 7214,
"text": "files = tf.data.Dataset.list_files(glob_pattern, shuffle=True)dataset = files.apply(tf.contrib.data.parallel_interleave( lambda filename: tf.data.TFRecordDataset(filename), cycle_length=threads*2) )dataset = dataset.apply(tf.contrib.data.shuffle_and_repeat (32*self.batch_size))"
},
{
"code": null,
"e": 7623,
"s": 7613,
"text": "Transform"
},
{
"code": null,
"e": 8056,
"s": 7623,
"text": "Now that the data is available in memory the next step is to transform it, preferably into something that does not need any further processing in order to be fed to the neural network input. A call to the dataset’s map function is required to do this as shown below, where “map_func” is the function applied to every individual example on the CPU and “num_parallel_calls” the number of parallel invocations of the “map_func” to use."
},
{
"code": null,
"e": 8331,
"s": 8056,
"text": "threads = multiprocessing.cpu_count()dataset = dataset.map(map_func=lambda example: _parse_function(example, self.image_size, self.num_classes,training=training), num_parallel_calls=threads)"
},
{
"code": null,
"e": 8775,
"s": 8331,
"text": "In this case the “map_func” is the parse function shown below, this function processes an example from the tfrecords (created using this repo) and outputs a tuple of dictionaries containing tensors representing the features and labels respectively. Note the use of a lambda function to pass python variables separately from the example to the parse function, as the example is unparsed data from the tfrecord and provided by the Dataset class."
},
{
"code": null,
"e": 9163,
"s": 8775,
"text": "Keep in mind that this parse function only processes one example at the time as show by the tf.parse_single_example call, but does so a number of times in parallel. To prevent running into any CPU bottlenecks it is important to keep the parse function fast, some tips on how to do this can be found here. All the individually processed examples are then batched and ready for processing."
},
{
"code": null,
"e": 9215,
"s": 9163,
"text": "dataset = dataset.batch(batch_size=self.batch_size)"
},
{
"code": null,
"e": 9220,
"s": 9215,
"text": "Load"
},
{
"code": null,
"e": 9452,
"s": 9220,
"text": "The final step of the ETL process is loading the batched examples onto the accelerator (GPU) ready for processing. In the Dataset class this is achieved by prefetching, which is done by calling the prefetch function of the dataset."
},
{
"code": null,
"e": 9508,
"s": 9452,
"text": "dataset = dataset.prefetch(buffer_size=self.batch_size)"
},
{
"code": null,
"e": 9654,
"s": 9508,
"text": "Prefetching uncouples the producer (Dataset object on CPU) from the consumer (GPU), this allows them to run in parallel for increased throughput."
},
{
"code": null,
"e": 9669,
"s": 9654,
"text": "Input Function"
},
{
"code": null,
"e": 9848,
"s": 9669,
"text": "Once the whole E.T.L. process is fully defined and implemented, the “input_fn” can be created by initializing the iterator and grabbing the next example using the following line:"
},
{
"code": null,
"e": 9903,
"s": 9848,
"text": "input_fn = dataset.make_one_shot_iterator().get_next()"
},
{
"code": null,
"e": 9984,
"s": 9903,
"text": "This input function is used by the Estimator as an input for the model function."
},
{
"code": null,
"e": 10147,
"s": 9984,
"text": "A quick reminder, the model function the estimator invokes during training, evaluation and prediction, should accept the following arguments as explained earlier:"
},
{
"code": null,
"e": 10193,
"s": 10147,
"text": "def model_fn(features, labels, mode, params):"
},
{
"code": null,
"e": 10574,
"s": 10193,
"text": "The features and labels are in this case supplied by the Dataset class, the params are mostly for hyper parameters used during network initialization, but the mode (of type “tf.estimator.ModeKeys”) dictates what action the model is going to be performing. Each mode is used to setup the model for that specific purpose, the available modes are prediction, training and evaluation."
},
{
"code": null,
"e": 10927,
"s": 10574,
"text": "The different modes can be used by calling the respective functions (shown above) of an Estimator Object namely; “predict”, “train” and “evaluate”. The code-path of every mode has to return an “EstimatorSpec” with the required fields for that mode, e.g. when the mode is predict, it has to return an “EstimatorSpec” that includes the predictions field:"
},
{
"code": null,
"e": 10992,
"s": 10927,
"text": "return tf.estimator.EstimatorSpec(mode, predictions=predictions)"
},
{
"code": null,
"e": 11000,
"s": 10992,
"text": "Predict"
},
{
"code": null,
"e": 11328,
"s": 11000,
"text": "The most basic mode is the prediction mode “tf.estimator.ModeKeys.PREDICT”, which as the name suggests is used to do predictions on data using the Estimator object. In this mode the “EstimatorSpec” expects a dictionary of tensors which will be executed and the results of which will be made available as numpy values to python."
},
{
"code": null,
"e": 11854,
"s": 11328,
"text": "In this excerpt you can see the predictions dictionary setup to generate classical image classification results. The if statement ensures that this code path is only executed when the predict function of an Estimator object is executed. The dictionary of tensors is passed to the “EstimatorSpec” as the predictions argument together with the mode. It is smart to define the prediction code-path first as it is the simplest, and since most of the code is used for training and evaluation as-well it can show problems early on."
},
{
"code": null,
"e": 11860,
"s": 11854,
"text": "Train"
},
{
"code": null,
"e": 12310,
"s": 11860,
"text": "To train a model in the “tf.estimator.ModeKeys.TRAIN” mode it is necessary to create a so called “train_op”, this op is a tensor that when executed performs the back propagation to update the model. Simply put it is the minimize function of an optimizer such as the AdamOptimizer. The “train_op” and the scalar loss tensor are the minimum required arguments to create an “EstimatorSpec” for training. Below you can see an example of this being done."
},
{
"code": null,
"e": 12521,
"s": 12310,
"text": "Note the non-required arguments “training_hooks” and “scaffold”, these will be further explained later but in short they are used to add functionality to the setup and tear-down of a model and training session."
},
{
"code": null,
"e": 12530,
"s": 12521,
"text": "Evaluate"
},
{
"code": null,
"e": 13404,
"s": 12530,
"text": "The final mode that needs a code-path in the model function is “tf.estimator.ModeKeys.EVAL”. The most important thing in order to perform an eval is the the metrics dictionary, this should be structured as a dictionary of tuples, where the first element of the tuple is a tensor containing the actual metric value and the second element is the tensor that updates the metric value. The update operation is necessary to ensure a reliable metric calculation over the whole validation set. Since it will often be impossible to evaluate the whole validation set in one batch, multiple batches have to be used. To prevent noise in the metric value due to per batch differences, the update operation is used to keep a running average (or gather all results) over all batches. This setup ensures the metric value is calculated over the whole validation set and not a single batch."
},
{
"code": null,
"e": 13977,
"s": 13404,
"text": "In the example above only the “loss” and “eval_metric_ops” are required arguments, the third argument “evaluation_hooks” is used to execute “tf.summary” operations as they are not automatically executed when running the Estimators evaluate function. In this example the “evaluation_hooks” are used to store images from the validation set to display using a Tensorboard. To achieve this a “SummarySaverHook” with the same output directory as the “model_dir” is initialized with a “tf.summary.image” operation and passed (encapsulated in an iterable) to the “EstimatorSpec”."
},
{
"code": null,
"e": 14805,
"s": 13977,
"text": "Now that I explained the advantages of using the Estimator class and how to use it, I hope you are excited to start using Estimators for your Tensorflow projects. However switching to Estimators from an existing code-base is not necessarily straightforward. As the Estimator class abstracts away most of the initialization and execution during training, any custom initialization and execution loops can no longer be an implementation using “tf.Session” and “sess.run()”. This could be a reason for someone with a large code-base not to transition to the Estimator class, as there is no easy and straightforward transition path. While this article is not a transition guide, I will clarify some of the new procedures for initialization and execution. This will hopefully fill in some of the gaps left by the official tutorials."
},
{
"code": null,
"e": 15558,
"s": 14805,
"text": "The main tools to influence the initialization and execution loop are the “Scaffold” object and the “SessionRunHook” object. The “Scaffold” is used for custom first time initialization of a model and can only be used to construct an “EstimatorSpec” in training mode. The “SessionRunHook” on the other hand can be used to construct an “EstimatorSpec” for each execution mode and is used each time train, evaluate or predict is called. Both the “Scaffold” and “SessionRunHook” provide certain functions that the Estimator class calls during use. Below you can see a timeline showing which function is called and when during the initialization and training process. This also shows that the Estimator under the hood still uses “tf.Session” and “sess.run”."
},
{
"code": null,
"e": 15568,
"s": 15558,
"text": "Scaffolds"
},
{
"code": null,
"e": 16240,
"s": 15568,
"text": "A scaffold can be passed as the scaffold argument when constructing the “EstimatorSpec” for training. In a scaffold you can specify a number of different operations to be called during various stages of initialization. However for now we will focus on the “init_fn” argument as the others are more for distributed settings which this series will not cover. The “init_fn” is called after graph finalization but before the “sess.run” is called for the first time, it is only used once and thus perfect for custom variable initialization. A good example of this is if you want to finetune using a pretrained network and want to be selective about which variables to restore."
},
{
"code": null,
"e": 16672,
"s": 16240,
"text": "Above you can see an example of this implemented, the function checks for the existence of the fine-tune checkpoint and creates an initializer function using slim that filters on the “scope_name” variable. This function is then encapsulated in the function format that the Estimator expects which consumes the “Scaffold” object itself and a “tf.Session” object. Inside the “init_fn” the session is used to run the “initializer_fn”."
},
{
"code": null,
"e": 17013,
"s": 16672,
"text": "# setup fine tune scaffoldscaffold = tf.train.Scaffold(init_op=None, init_fn=tools.fine_tune.init_weights(params[\"fine_tune_ckpt\"]))# create estimator training specreturn tf.estimator.EstimatorSpec(tf.estimator.ModeKeys.TRAIN, loss=loss, train_op=train_op,scaffold=scaffold)"
},
{
"code": null,
"e": 17372,
"s": 17013,
"text": "This “init_fn” can then be passed as an argument to construct a “Scaffold” object, as show above. This “Scaffold” object is used to construct the “EstimatorSpec” for the train mode. While this is a relatively simple example (and can also be achieved with “WarmStartSettings”), it can be easily expanded for more advanced onetime initialization functionality."
},
{
"code": null,
"e": 17388,
"s": 17372,
"text": "SessionRunHooks"
},
{
"code": null,
"e": 18094,
"s": 17388,
"text": "As shown in the timeline earlier the “SessionRunHook” class can be used to alter certain parts of the training, evaluation or prediction loop. This is done using a concept known as hooking, I will not go into to much detail on what hooking is exactly as in this case it is pretty self explanatory. But using a “SessionRunHook” is slightly different from using a scaffold. Instead of initializing the “SessionRunHook” object with one or more functions as arguments it is used as a super class. It is then possible to overwrite the methods “begin”, “after_create_session”, “before_run”, “after_run” and “end”, to extend functionality. Below an excerpt showing how to overwrite the “begin” function is shown."
},
{
"code": null,
"e": 18350,
"s": 18094,
"text": "Keep in mind that the “begin” function does not get any arguments but each hook is slightly different, please refer here to check what arguments are passed to which function. Now one can create a “SessionRunHook” with the extended begin method by calling:"
},
{
"code": null,
"e": 18509,
"s": 18350,
"text": "stats_hook = tools.stats.ModelStats(\"squeezenext\", params[\"model_dir\"], self.batch_size)"
},
{
"code": null,
"e": 18862,
"s": 18509,
"text": "This object can then be used when constructing an “EstimatorSpec” for training, evaluating and predicting by passing it to the respective arguments “training_hooks”, “evaluation_hooks” and “prediction_hooks”. Note that each arguments ends with hooks, and they expect an iterable of one or multiple hooks so always encapsulate your hooks in an iterable."
},
{
"code": null,
"e": 19282,
"s": 18862,
"text": "Training and evaluation have extensively been covered, but prediction has not yet been fully explained. This is mostly because prediction is the easiest of the the 3 (train,eval,predict) Estimator modes, but there are still some pitfalls one can run into. I explained earlier how to setup an “EstimatorSpec” for prediction, but not how to use it. Below is a small example showing how to use an Estimator for prediction."
},
{
"code": null,
"e": 19491,
"s": 19282,
"text": "For prediction it does not make sense to make/use tfrecords, so instead the “numpy_input_fn” is used. To the “x” argument a dictionary of numpy arrays is passed containing the features, in this case an image."
},
{
"code": null,
"e": 19714,
"s": 19491,
"text": "Note: Depending on whether or not you are preprocess images in the “input_fn” during training, you would need to perform the same preprocessing here, either in numpy before passing to the “numpy_input_fn” or in Tensorflow."
},
{
"code": null,
"e": 20088,
"s": 19714,
"text": "With the numpy input function setup, a simple call to predict is enough to create a prediction. The “predictions” variable in the example above will not actually contain the result yet, instead it is an object of the generator class, to get the actual result you need to iterate over the it. This iteration will start the Tensorflow execution and produce the actual result."
}
] |
C library function - ftell() | The C library function long int ftell(FILE *stream) returns the current file position of the given stream.
Following is the declaration for ftell() function.
long int ftell(FILE *stream)
stream − This is the pointer to a FILE object that identifies the stream.
This function returns the current value of the position indicator. If an error occurs, -1L is returned, and the global variable errno is set to a positive value.
The following example shows the usage of ftell() function.
#include <stdio.h>
int main () {
FILE *fp;
int len;
fp = fopen("file.txt", "r");
if( fp == NULL ) {
perror ("Error opening file");
return(-1);
}
fseek(fp, 0, SEEK_END);
len = ftell(fp);
fclose(fp);
printf("Total size of file.txt = %d bytes\n", len);
return(0);
}
Let us assume we have a text file file.txt, which has the following content −
This is tutorialspoint.com
Now let us compile and run the above program that will produce the following result if file has above mentioned content otherwise it will give different result based on the file content −
Total size of file.txt = 26 bytes | [
{
"code": null,
"e": 2114,
"s": 2007,
"text": "The C library function long int ftell(FILE *stream) returns the current file position of the given stream."
},
{
"code": null,
"e": 2165,
"s": 2114,
"text": "Following is the declaration for ftell() function."
},
{
"code": null,
"e": 2194,
"s": 2165,
"text": "long int ftell(FILE *stream)"
},
{
"code": null,
"e": 2268,
"s": 2194,
"text": "stream − This is the pointer to a FILE object that identifies the stream."
},
{
"code": null,
"e": 2342,
"s": 2268,
"text": "stream − This is the pointer to a FILE object that identifies the stream."
},
{
"code": null,
"e": 2504,
"s": 2342,
"text": "This function returns the current value of the position indicator. If an error occurs, -1L is returned, and the global variable errno is set to a positive value."
},
{
"code": null,
"e": 2563,
"s": 2504,
"text": "The following example shows the usage of ftell() function."
},
{
"code": null,
"e": 2876,
"s": 2563,
"text": "#include <stdio.h>\n\nint main () {\n FILE *fp;\n int len;\n\n fp = fopen(\"file.txt\", \"r\");\n if( fp == NULL ) {\n perror (\"Error opening file\");\n return(-1);\n }\n fseek(fp, 0, SEEK_END);\n\n len = ftell(fp);\n fclose(fp);\n\n printf(\"Total size of file.txt = %d bytes\\n\", len);\n \n return(0);\n}"
},
{
"code": null,
"e": 2954,
"s": 2876,
"text": "Let us assume we have a text file file.txt, which has the following content −"
},
{
"code": null,
"e": 2981,
"s": 2954,
"text": "This is tutorialspoint.com"
},
{
"code": null,
"e": 3169,
"s": 2981,
"text": "Now let us compile and run the above program that will produce the following result if file has above mentioned content otherwise it will give different result based on the file content −"
},
{
"code": null,
"e": 3204,
"s": 3169,
"text": "Total size of file.txt = 26 bytes\n"
}
] |
How to Check if PHP session has already started? | In PHP, we use the built-in session_start() function to start a session. The problem we face in a PHP script is that if we call it more than once, it throws an error. So here we will learn how to check whether the session has already started, without calling the session_start() function twice.
There are two ways to follow to resolve this problem.
For PHP versions below 5.4.0:
<?php
if(session_id() == ''){
session_start();
}
?>
If the session has not been started yet, the code above will start it (session_id() returns an empty string when no session is active).
In the second method, we can utilize the function session_status(), which returns the status of the present session. This function can return three integer values, which all are predefined constants. These are:
0 – PHP_SESSION_DISABLED: Sessions are currently disabled.
1 – PHP_SESSION_NONE: Sessions are enabled, but no session has been started.
2 – PHP_SESSION_ACTIVE: Sessions are enabled and a session has been started.
<?php
if (session_status() == PHP_SESSION_NONE) {
session_start();
}
?>
The above code checks whether the session has started; if it has not, it starts the session in the PHP script.
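Because session_status() is only available from PHP 5.4.0 onwards (as noted below), a script that has to run on both old and new versions can combine the two checks into one helper. This is only an illustration; the function name safe_session_start is my own, not part of PHP:
<?php
   function safe_session_start() {
      if (version_compare(PHP_VERSION, '5.4.0', '>=')) {
         // Newer PHP: ask for the session status directly
         if (session_status() == PHP_SESSION_NONE) {
            session_start();
         }
      } else {
         // Older PHP: an empty session id means no session yet
         if (session_id() == '') {
            session_start();
         }
      }
   }
   safe_session_start();
?>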
The session_status() function is only available in PHP 5.4.0 and above. | [
{
"code": null,
"e": 1343,
"s": 1062,
"text": "In PHP,we utilize session_start() an inbuilt function to start the session .But the problem we face in a PHP script is if we execute it more than once it throws an error. So here we will learn how to check the session started or not without calling session_start() function twice."
},
{
"code": null,
"e": 1397,
"s": 1343,
"text": "There are two ways to follow to resolve this problem."
},
{
"code": null,
"e": 1426,
"s": 1397,
"text": "For below PHP 5.4.0 version."
},
{
"code": null,
"e": 1490,
"s": 1426,
"text": "<?php\n if(session_id() == ''){\n session_start();\n }\n?>"
},
{
"code": null,
"e": 1582,
"s": 1490,
"text": "If the session not started this code above will always start the session in the PHP script."
},
{
"code": null,
"e": 1793,
"s": 1582,
"text": "In the second method, we can utilize the function session_status(), which returns the status of the present session. This function can return three integer values, which all are predefined constants. These are:"
},
{
"code": null,
"e": 1852,
"s": 1793,
"text": "0 – PHP_SESSION_DISABLED: Sessions are currently disabled."
},
{
"code": null,
"e": 1929,
"s": 1852,
"text": "1 – PHP_SESSION_NONE: Sessions are enabled, but no session has been started."
},
{
"code": null,
"e": 2006,
"s": 1929,
"text": "2 – PHP_SESSION_ACTIVE: Sessions are enabled and a session has been started."
},
{
"code": null,
"e": 2090,
"s": 2006,
"text": "<?php\n if (session_status() == PHP_SESSION_NONE) {\n session_start();\n }\n?>"
},
{
"code": null,
"e": 2210,
"s": 2090,
"text": "The above code checks whether the session started or not, if not started this will start the session in the PHP script."
},
{
"code": null,
"e": 2277,
"s": 2210,
"text": "session_status() function only runs in PHP 5.4.0 version or above."
}
] |
Find the sequence number of a triangular number - GeeksforGeeks | 17 Mar, 2021
Given an integer N print the sequence number of the given Triangular Number. If the number is not a triangular number then print -1.
A number is termed a triangular number if we can represent it in the form of a triangular grid of points such that the points form an equilateral triangle and each row contains as many points as the row number, i.e., the first row has one point, the second row has two points, the third row has three points and so on. The first 10 triangular numbers are: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55.
Examples:
Input: N = 21 Output: 6 Explanation: 21 is the 6th triangular number.
Input: N = 12 Output: -1 Explanation: 12 is not a triangular number.
Approach:
Triangular numbers are sums of consecutive natural numbers, so the n-th triangular number satisfies n(n + 1)/2 = N. Solving this quadratic for n gives n = sqrt(2N + 0.25) − 0.5; N is triangular exactly when this value is a whole number, and that whole number is the required sequence number.
C++
Java
Python3
C#
Javascript
// C++ code to print the sequence
// number of a triangular number
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int N = 21;
    double A = sqrt(2 * N + 0.25) - 0.5;
    int B = A;

    // If N is not a triangular number
    if (B != A)
        cout << "-1";
    else
        cout << B;
}

// This code is contributed by yatinagg
// Java code to print the sequence
// number of a triangular number
import java.util.*;
class GFG{

public static void main(String args[]){
    int N = 21;
    double A = Math.sqrt(2 * N + 0.25) - 0.5;
    int B = (int)A;

    // If N is not a triangular number
    if (B != A)
        System.out.print("-1");
    else
        System.out.print(B);
}}

// This code is contributed by Akanksha_Rai
# Python3 code to print the sequence
# number of a triangular number

import math

N = 21
A = math.sqrt(2 * N + 0.25) - 0.5
B = int(A)

# if N is not a triangular number
if B != A:
    print(-1)
else:
    print(B)
// C# code to print the sequence
// number of a triangular number
using System;
class GFG{

public static void Main(){
    int N = 21;
    double A = Math.Sqrt(2 * N + 0.25) - 0.5;
    int B = (int)A;

    // If N is not a triangular number
    if (B != A)
        Console.Write("-1");
    else
        Console.Write(B);
}}

// This code is contributed by Code_Mech
<script>
// Javascript code to print the sequence
// number of a triangular number

let N = 21;
let A = Math.sqrt(2 * N + 0.25) - 0.5;
let B = Math.trunc(A);

// If N is not a triangular number
if (B != A)
    document.write("-1");
else
    document.write(B);

// This code is contributed by Rajput-Ji
</script>
6
Time Complexity: O(1) Auxiliary Space: O(1) | [
{
"code": null,
"e": 24711,
"s": 24683,
"text": "\n17 Mar, 2021"
},
{
"code": null,
"e": 24845,
"s": 24711,
"text": "Given an integer N print the sequence number of the given Triangular Number. If the number is not a triangular number then print -1. "
},
{
"code": null,
"e": 25237,
"s": 24845,
"text": "A number is termed as a triangular number if we can represent it in the form of a triangular grid of points such that the points form an equilateral triangle and each row contains as many points as the row number, i.e., the first row has one point, the second row has two points, the third row has three points and so on. First 10 tringular number are: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55. "
},
{
"code": null,
"e": 25251,
"s": 25239,
"text": "Examples: "
},
{
"code": null,
"e": 25396,
"s": 25251,
"text": "Input: N = 21 Output:6 Explanation: Since 15 is a 6th Tringular Number.Input: N = 12 Output:-1 Explanation: Since 12 is not a tringular Number "
},
{
"code": null,
"e": 25410,
"s": 25398,
"text": "Approach: "
},
{
"code": null,
"e": 25505,
"s": 25410,
"text": "Since tringular numbers are sum of natural numbers so can be generalise as quadratic equation."
},
{
"code": null,
"e": 25600,
"s": 25505,
"text": "Since tringular numbers are sum of natural numbers so can be generalise as quadratic equation."
},
{
"code": null,
"e": 25606,
"s": 25602,
"text": "C++"
},
{
"code": null,
"e": 25611,
"s": 25606,
"text": "Java"
},
{
"code": null,
"e": 25619,
"s": 25611,
"text": "Python3"
},
{
"code": null,
"e": 25622,
"s": 25619,
"text": "C#"
},
{
"code": null,
"e": 25633,
"s": 25622,
"text": "Javascript"
},
{
"code": "// C++ code to print sequence// number of a triangular number#include<bits/stdc++.h>using namespace std; int main(){ int N = 21; int A = sqrt(2 * N + 0.25) - 0.5; int B = A; // If N is not tringular number if (B != A) cout << \"-1\"; else cout << B;} // This code is contributed by yatinagg",
"e": 25955,
"s": 25633,
"text": null
},
{
"code": "// Java code to print sequence// number of a triangular numberimport java.util.*;class GFG{ public static void main(String args[]){ int N = 21; int A = (int)(Math.sqrt(2 * N + 0.25) - 0.5); int B = A; // If N is not tringular number if (B != A) System.out.print(\"-1\"); else System.out.print(B);}} // This code is contributed by Akanksha_Rai",
"e": 26333,
"s": 25955,
"text": null
},
{
"code": "# Python3 code to print sequence# number of a triangular number import math N = 21A = math.sqrt(2 * N + 0.25)-0.5B = int(A) # if N is not tringular numberif B != A: print(-1)else: print(B) ",
"e": 26536,
"s": 26333,
"text": null
},
{
"code": "// C# code to print sequence// number of a triangular numberusing System;class GFG{ public static void Main(){ int N = 21; int A = (int)(Math.Sqrt(2 * N + 0.25) - 0.5); int B = A; // If N is not tringular number if (B != A) Console.Write(\"-1\"); else Console.Write(B);}} // This code is contributed by Code_Mech",
"e": 26884,
"s": 26536,
"text": null
},
{
"code": "<script>// javascript code to print sequence// number of a triangular number let N = 21; let A = Math.sqrt(2 * N + 0.25) - 0.5; let B = A; // If N is not tringular number if (B != A) document.write(\"-1\"); else document.write(B); // This code is contributed by Rajput-Ji </script>",
"e": 27208,
"s": 26884,
"text": null
},
{
"code": null,
"e": 27210,
"s": 27208,
"text": "6"
},
{
"code": null,
"e": 27257,
"s": 27212,
"text": "Time Complexity: O(1) Auxiliary Space: O(1) "
}
] |
How do we use radio buttons in HTML forms? | Using HTML forms, you can easily take user input. The <form> tag is used to get user input, by adding the form elements. Different types of form elements include text input, radio button input, submit button, etc.
Let’s learn how to use radio buttons in HTML forms to get user input. Radio buttons are used when out of many options; just one option is required to be selected. They are also created using HTML <input> tag and the type attribute is set to radio.
Here are the main attributes for a radio button − type (set to radio), name (shared by all buttons in a group so that only one of them can be selected), value (the data sent to the server when that button is selected) and checked (pre-selects a button).
You can try to run the following code to learn how to work with radio buttons in HTML −
Live Demo
<!DOCTYPE html>
<html>
<body>
<head>
<title>HTML Radio Button</title>
</head>
<p>Gender</p>
<form>
<input type="radio" name="gender" value="male">Male
<br>
<input type="radio" name="gender" value="female">Female
</form>
</body>
</html> | [
{
"code": null,
"e": 1276,
"s": 1062,
"text": "Using HTML forms, you can easily take user input. The <form> tag is used to get user input, by adding the form elements. Different types of form elements include text input, radio button input, submit button, etc."
},
{
"code": null,
"e": 1524,
"s": 1276,
"text": "Let’s learn how to use radio buttons in HTML forms to get user input. Radio buttons are used when out of many options; just one option is required to be selected. They are also created using HTML <input> tag and the type attribute is set to radio."
},
{
"code": null,
"e": 1567,
"s": 1524,
"text": "Here are the attributes for radio button −"
},
{
"code": null,
"e": 1656,
"s": 1567,
"text": " You can try to run the following code to learn how to work with radio buttons in HTML −"
},
{
"code": null,
"e": 1666,
"s": 1656,
"text": "Live Demo"
},
{
"code": null,
"e": 1974,
"s": 1666,
"text": "<!DOCTYPE html>\n<html>\n <body>\n <head>\n <title>HTML Radio Button</title>\n </head>\n <p>Gender</p>\n <form>\n <input type=\"radio\" name=\"gender\" value=\"male\">Male\n <br>\n <input type=\"radio\" name=\"gender\" value=\"female\">Female\n </form>\n </body>\n</html>"
}
] |
Matplotlib.axes.Axes.set_transform() in Python - GeeksforGeeks | 30 Apr, 2020
Matplotlib is a library in Python and it is a numerical – mathematical extension for the NumPy library. The Axes class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. Instances of Axes support callbacks through a callbacks attribute.
The Axes.set_transform() function in axes module of matplotlib library is used to set the artist transform.
Syntax: Axes.set_transform(self, t)
Parameters: This method accepts only one parameters.
t : This parameter is the Transform.
Returns: This method does not return any value.
Below examples illustrate the matplotlib.axes.Axes.set_transform() function in matplotlib.axes:
Example 1:
# Implementation of matplotlib function
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms

delta = 0.25
x = y = np.arange(-3.0, 3.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2)

transform = mtransforms.Affine2D().rotate_deg(30)
fig, ax = plt.subplots()

im = ax.imshow(Z, interpolation ='none',
               origin ='lower',
               extent =[-2, 4, -3, 2],
               clip_on = True)

trans_data = transform + ax.transData
im.set_transform(trans_data)

x1, x2, y1, y2 = im.get_extent()
ax.plot([x1, x2, x2, x1, x1],
        [y1, y1, y2, y2, y1],
        "ro-",
        transform = trans_data)

ax.set_xlim(-5, 5)
ax.set_ylim(-4, 4)

fig.suptitle('matplotlib.axes.Axes.set_transform() function Example\n\n',
             fontweight ="bold")

plt.show()
Output:
Example 2:
# Implementation of matplotlib function
import matplotlib.pyplot as plt
from matplotlib import collections, colors, transforms
import numpy as np

nverts = 50
npts = 100

r = np.arange(nverts)
theta = np.linspace(0, 2 * np.pi, nverts)
xx = r * np.sin(theta)
yy = r * np.cos(theta)
spiral = np.column_stack([xx, yy])

rs = np.random.RandomState(19680801)

xyo = rs.randn(npts, 2)

colors = [colors.to_rgba(c) for c in
          plt.rcParams['axes.prop_cycle'].by_key()['color']]

fig, ax1 = plt.subplots()

col = collections.RegularPolyCollection(
    7, sizes = np.abs(xx) * 10.0,
    offsets = xyo,
    transOffset = ax1.transData)

trans = transforms.Affine2D().scale(fig.dpi / 72.0)
col.set_transform(trans)

ax1.add_collection(col, autolim = True)
col.set_color(colors)

fig.suptitle('matplotlib.axes.Axes.set_transform() function Example\n',
             fontweight ="bold")

fig.canvas.draw()
plt.show()
Output:
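Beyond the two gallery-style examples above, the core usage can be distilled into a shorter sketch; the line data and the 45-degree rotation here are arbitrary choices for illustration, not part of the API:

import matplotlib.pyplot as plt
import matplotlib.transforms as mtransforms

fig, ax = plt.subplots()

# Draw a simple line, then rotate it by 45 degrees in data space
# before it is mapped to screen space
line, = ax.plot([0, 1, 2], [0, 1, 0], "go-")
trans = mtransforms.Affine2D().rotate_deg(45) + ax.transData
line.set_transform(trans)

ax.set_xlim(-3, 3)
ax.set_ylim(-1, 3)
plt.show()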
| [
{
"code": null,
"e": 23901,
"s": 23873,
"text": "\n30 Apr, 2020"
},
{
"code": null,
"e": 24201,
"s": 23901,
"text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute."
},
{
"code": null,
"e": 24309,
"s": 24201,
"text": "The Axes.set_transform() function in axes module of matplotlib library is used to set the artist transform."
},
{
"code": null,
"e": 24345,
"s": 24309,
"text": "Syntax: Axes.set_transform(self, t)"
},
{
"code": null,
"e": 24398,
"s": 24345,
"text": "Parameters: This method accepts only one parameters."
},
{
"code": null,
"e": 24435,
"s": 24398,
"text": "t : This parameter is the Transform."
},
{
"code": null,
"e": 24483,
"s": 24435,
"text": "Returns: This method does not return any value."
},
{
"code": null,
"e": 24579,
"s": 24483,
"text": "Below examples illustrate the matplotlib.axes.Axes.set_transform() function in matplotlib.axes:"
},
{
"code": null,
"e": 24590,
"s": 24579,
"text": "Example 1:"
},
{
"code": "# Implementation of matplotlib functionimport numpy as npimport matplotlib.pyplot as pltimport matplotlib.transforms as mtransforms delta = 0.25x = y = np.arange(-3.0, 3.0, delta)X, Y = np.meshgrid(x, y)Z1 = np.exp(-X**2 - Y**2)Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)Z = (Z1 - Z2) transform = mtransforms.Affine2D().rotate_deg(30)fig, ax = plt.subplots() im = ax.imshow(Z, interpolation ='none', origin ='lower', extent =[-2, 4, -3, 2], clip_on = True) trans_data = transform + ax.transDataim.set_transform(trans_data) x1, x2, y1, y2 = im.get_extent()ax.plot([x1, x2, x2, x1, x1], [y1, y1, y2, y2, y1], \"ro-\", transform = trans_data) ax.set_xlim(-5, 5)ax.set_ylim(-4, 4) fig.suptitle('matplotlib.axes.Axes.set_transform() \\function Example\\n\\n', fontweight =\"bold\") plt.show()",
"e": 25442,
"s": 24590,
"text": null
},
{
"code": null,
"e": 25450,
"s": 25442,
"text": "Output:"
},
{
"code": null,
"e": 25461,
"s": 25450,
"text": "Example 2:"
},
{
"code": "# Implementation of matplotlib function import matplotlib.pyplot as pltfrom matplotlib import collections, colors, transformsimport numpy as np nverts = 50npts = 100 r = np.arange(nverts)theta = np.linspace(0, 2 * np.pi, nverts)xx = r * np.sin(theta)yy = r * np.cos(theta)spiral = np.column_stack([xx, yy]) rs = np.random.RandomState(19680801) xyo = rs.randn(npts, 2) colors = [colors.to_rgba(c) for c in plt.rcParams['axes.prop_cycle'].by_key()['color']] fig, ax1 = plt.subplots() col = collections.RegularPolyCollection( 7, sizes = np.abs(xx) * 10.0, offsets = xyo, transOffset = ax1.transData) trans = transforms.Affine2D().scale(fig.dpi / 72.0)col.set_transform(trans) ax1.add_collection(col, autolim = True)col.set_color(colors) fig.suptitle('matplotlib.axes.Axes.set_transform() function\\ Example\\n', fontweight =\"bold\") fig.canvas.draw() plt.show()",
"e": 26359,
"s": 25461,
"text": null
},
{
"code": null,
"e": 26367,
"s": 26359,
"text": "Output:"
}
] |
There are two very different ways to deploy ML models, here’s both | by Tom Grek | Towards Data Science | If an ML model makes a prediction in Jupyter, is anyone around to hear it?
Probably not. Deploying models is the key to making them useful.
It’s not only if you’re building product, in which case deployment is a necessity — it also applies if you’re generating reports for management. Ten years ago it was unthinkable that execs wouldn’t question assumptions and plug their own numbers into an Excel sheet to see what changed. Today, a PDF of impenetrable matplotlib figures might impress junior VPs, but could well fuel ML skepticism in the eyes of experienced C-suite execs.
Don’t help bring about the end of the AI hype cycle!
And so, deployment of ML models became the hot topic, simply because there aren’t that many people who know how to do it; seeing that you need both data science and engineering skills. As I recently discovered, there are two really divergent ways to deploy models: the traditional way, and a more recent option that, I will be honest, blew my mind.
In this article, I’ll provide you with a straightforward yet best-practices template for both kinds of deployment. As always, for the kinaesthetic learner, feel free to skip straight to the code here, which I actually deployed here if you want to test it out. I know not everybody likes to jump around when reading; it looks like this:
If you come from an analyst background, you may not grok web-app architecture, so let me illustrate that first. Apologies if this is oversimplification and man-splaining! But I have seen enough “ML model deployments” that are really just XGBoost wrapped in Flask, that I know it’s a real problem.
The user (on the left here) is using a browser that runs only Javascript, HTML, and CSS. That’s the frontend. It can make calls to a backend server to get results, which it then maybe processes and displays. The backend server should respond ASAP to the frontend’s requests; but the backend may need to talk to databases, third party APIs, and microservices. The backend may also produce slow jobs — such as ML jobs — at the request of the user, which it should put into a queue. (Bear in mind that the user usually has to authenticate itself somehow.)
Now, let’s talk distributed web app architecture.
In general, we want to run as many backend instances as possible, for scalability. That’s why there are bubbles coming out of ‘server’ in the diagram above; they represent ‘more of these’. So, each instance has to remain stateless: finish handling the HTTP request and exit. Don’t keep anything in memory between requests, because a client’s first request might go to one server, and a subsequent request to another.
It’s bad if we have a long running endpoint: it would tie up one of our servers (say... doing some ML task), leaving it unable to handle other users’ requests. We need to keep the web server responsive and have it hand off long running tasks, with some kind of shared persistence so that when the user checks progress or requests the result, any server can report. Also, jobs, and parts-of-jobs, should be able to be done in parallel by as many workers as there are resources for.
The answer is a first-in, first-out (FIFO) queue. The backend simply enqueues jobs. Workers pick and process jobs out of the queue, performing training or inference, and storing models or predictions to the database when done.
With the library MLQ, the following is literally all you need for a backend web server — an endpoint to enqueue a job, an endpoint to check the progress of a job, and an endpoint to serve up a job’s result if the job has finished.
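The gist that originally accompanied that sentence is not reproduced here, so below is a rough, generic stand-in for the same idea using Flask and a plain Redis list rather than MLQ itself; the endpoint paths, key names and queue name are my own choices, not MLQ's API:

import json
import uuid

import redis
from flask import Flask, jsonify, request

app = Flask(__name__)
r = redis.Redis()

@app.route('/jobs', methods=['POST'])
def enqueue_job():
    # Give the job an id, push it onto a FIFO queue and return immediately
    job_id = str(uuid.uuid4())
    r.lpush('job_queue', json.dumps({'id': job_id, 'params': request.get_json()}))
    return jsonify({'job_id': job_id, 'status': 'queued'})

@app.route('/jobs/<job_id>', methods=['GET'])
def job_status(job_id):
    # Any backend instance can answer: the result lives in Redis, not in memory
    result = r.get('result:' + job_id)
    if result is None:
        return jsonify({'job_id': job_id, 'status': 'pending'})
    return jsonify({'job_id': job_id, 'status': 'done', 'result': json.loads(result)})

if __name__ == '__main__':
    app.run()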
The architecture for truly deploying an ML model is this:
Backend server receives a request from user’s web browser. It’s wrapped up in JSON but semantically would be something like: “Tomorrow is Wednesday and we sold 10 units today. How many customer support calls should we expect tomorrow?”
Backend pushes the job {Wednesday, 10} into a queue (some place decoupled from the backend itself, such as Redis in the case of MLQ). The queue replies with “Thanks, let’s refer to that as Job ID 562”.
Backend replies to the user: “I’ll do that calculation. It has ID 562. Please wait”. Backend is then free to serve other users.
The user’s web browser starts displaying a ‘please wait’ spinner.
Workers — at least, ones that are not currently processing another job — are constantly polling the queue for jobs. Probably, the workers exist on another server/computer, but they can also be different threads/processes on the same computer. Workers might have GPUs, whereas the backend server probably does not need to.
Eventually, a worker will pick up the job, removing it from the queue, and process it (e.g. run {Wednesday, 10} through some XGBoost model). It’ll save the prediction to a database. Imagine this step takes 5 minutes.
Meanwhile, the user’s web browser is polling the backend every 30 seconds to ask if job 562 is done yet. The backend checks if the database has a result stored at id=562 and replies accordingly. Any of our multiple horizontal backends is able to serve the user’s request. You might imagine that the shared database is a single point of failure, and you’d be right! But separately, we provisioned replicas and some failover mechanism, maybe sharding/load balancing, so it’s all good.
After five minutes plus a bit, the user polls for a result, and we are able to serve it up.
There is a little bit more involved, mostly to handle resiliency and persistency (what if a worker goes offline midway through a job? What if the user’s input was garbage and caused the job to fail?) But that’s the basics. Here’s a very simple worker template for MLQ. It just waits until it receives a job, then runs a function on the job parameters and stores the result. You can run as many of these things in parallel as you want, on the same server or distributed servers. If you look in the repo, you’ll find the complete code for doing this with the Nietzche/Tensorflow RNN model.
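Again, as a stand-in rather than MLQ's actual worker API, a bare-bones equivalent built on the same Redis list might look like this, with the real model call stubbed out:

import json

import redis

r = redis.Redis()

def predict(params):
    # Placeholder for the real work, e.g. running the Tensorflow RNN model
    return {'echo': params}

while True:
    # BRPOP blocks until a job is available, so idle workers simply wait here
    _, raw = r.brpop('job_queue')
    job = json.loads(raw)
    result = predict(job['params'])
    # Store the result where the web backend's status endpoint can find it
    r.set('result:' + job['id'], json.dumps(result))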
There are several good queueing frameworks available, or things that make suitable queues, including Celery, Dask, ZeroMQ, native Redis, and a library I recently made to be an easy-to-use version of all this for deploying side projects without complexity: MLQ. Kafka is also a thing but regular readers will know I’m not a fan of over-architected, Java-based projects. MLQ is immature; I’m not trying to sell it here. Use Celery instead for serious projects.
This week, I spent some time with NVIDIA and asked about their canonical solution for job queueing (specifically, in my case, so that I can make a GPU farm available to everyone at work with a Jupyter notebook, without them all trying to submit jobs at the same time). There isn’t one, yet, but I was assured they’re working on it. Until then, hand-rolling a solution with a queuing system is the only way.
(Also possibly of interest from that meeting: everyone agreed that MXNet is a really good framework, perhaps the best — but sadly may be on its way out).
You might be wondering, how does ML queuing work with real time applications? The answer is: the same way, but it’s not ideal due to latency (for example, industrial IoT). Queue entrypoints can be distributed, so the real tricks are in how the database handles that. Additionally, general opinion is that people won’t accept private data being sent to the backend, another reason why “ML at the edge” is a hot topic. If all the data needed for inference is available in one place, let’s do the inference there. And so, without further ado:
And so, enter the poor maligned front end engineer, whom everyone assumed thought linear algebra meant doing calculations one after the other, but who is repeatedly the most in-demand person on your team. Who, it turns out, probably wasn’t a nitwit after all, but maybe just biding their time until Javascript’s AI tools caught up to Python’s. And recently, they did.
The nutshell is, you can use Tensorflow from Javascript now. Initially I dismissed Google’s announcement of this thinking it probably meant running inference with hobbled, crippled models which have to fit into some pattern (a single convolutional layer with a maximum 12 filters, FP-8, etc). And only inference! Definitely not training. How could that even be possible from Javascript.
I was very wrong!
I don’t want to focus on training a model in Javascript in this article — that’s super cool, but not always super practical — but rather, provide an alternative deployment pattern for a trained model. Bear in mind that your trained model will be available to the world. Anyone can copy it and see what the layers look like, and steal all the parameters. I guess I’d say that’s inevitable and your model probably is less special than you imagine: any competitive advantage is in the data and speed that you can deploy model revisions. And of course, how great of a product you build atop the model. Anyway, be wary of that.
TensorflowJS can execute, in a user’s web browser, any Keras model. And, via Web GL, they are hardware accelerated! I don’t have hard numbers but anecdotally, it’s worked well for me. Definitely not as fast as Python, but I am sure that over time JS will catch up.
For this article, I copied the code from the official Tensorflow Keras text generation LSTM example and ran it to build a model. My complete Jupyter notebook is here.
Then, export the model to TFJS. You’ll probably need to pip install tensorflowjs. Then:
import tensorflowjs as tfjstfjs.converters.save_keras_model(model, '.')
In the directory you’ll now find model.json and group1-shard1of1.
All the Tensorflow JS examples right now use Yarn, which I understand is a bit outdated (back when I was doing more visualization work, yarn became the new hotness versus npm's old dog; now it’s gone back the other way). Let’s for now put aside front-end’s capriciousness.
Anyway, hopefully you have yarn and a working node installation (version 9 at least). For a minimal example of a website that serves a frontend model, you can clone my repo.
The actual Javascript code is not that interesting. There’s a bit of boilerplate around creating a tensor from a buffer, but all you need to do to actually use the model:
model = await tf.loadModel('https://mydomain.com/model.json');output = model.predict(input);
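For completeness, the tensor boilerplate mentioned above might look roughly like this for a character-level model; maxLen, charCount, charIndex and seedText are placeholders for whatever your particular model was trained with:

const input = tf.buffer([1, maxLen, charCount]);
for (let t = 0; t < seedText.length; t++) {
  input.set(1, 0, t, charIndex[seedText[t]]);  // one-hot encode each character
}
const output = model.predict(input.toTensor());
const probabilities = output.dataSync();       // copy the result out of WebGL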
The complete, end-to-end frontend (and backend) deployment example is in my repo here.
Voilà! Hardware accelerated Keras models where you don’t even need a backend.
The biggest drawback I can think of, other than your network architecture being available for all to see, is that in many practical applications not all of the data is available at the front end. Where I work, when a user enters a query, we fetch a massive amount of data from Elasticsearch, and run a model (actually several) on that data. Sending so much data to the frontend is not feasible.
Even if it was, you might want to charge per-prediction, which would not be possible once you have entered Javascript land.
It’s simple:
Use a queue
Don’t tie up your backend webserver; separate any ML processes from the act of serving up assets and endpoints
Ensure everything is stateless and able to operate in parallel
Consider frontend deployment
I hope you enjoyed and gained something from this article! Please like and follow if you did, and/or provide feedback here or at @tomgrek. I build AI in SF, and always enjoy talking to fellow nerds/AI enthusiasts, so feel free to get in touch via whatever means. | [
{
"code": null,
"e": 247,
"s": 172,
"text": "If an ML model makes a prediction in Jupyter, is anyone around to hear it?"
},
{
"code": null,
"e": 312,
"s": 247,
"text": "Probably not. Deploying models is the key to making them useful."
},
{
"code": null,
"e": 749,
"s": 312,
"text": "It’s not only if you’re building product, in which case deployment is a necessity — it also applies if you’re generating reports for management. Ten years ago it was unthinkable that execs wouldn’t question assumptions and plug their own numbers into an Excel sheet to see what changed. Today, a PDF of impenetrable matplotlib figures might impress junior VPs, but could well fuel ML skepticism in the eyes of experienced C-suite execs."
},
{
"code": null,
"e": 802,
"s": 749,
"text": "Don’t help bring about the end of the AI hype cycle!"
},
{
"code": null,
"e": 1151,
"s": 802,
"text": "And so, deployment of ML models became the hot topic, simply because there aren’t that many people who know how to do it; seeing that you need both data science and engineering skills. As I recently discovered, there are two really divergent ways to deploy models: the traditional way, and a more recent option that, I will be honest, blew my mind."
},
{
"code": null,
"e": 1487,
"s": 1151,
"text": "In this article, I’ll provide you with a straightforward yet best-practices template for both kinds of deployment. As always, for the kinaesthetic learner, feel free to skip straight to the code here, which I actually deployed here if you want to test it out. I know not everybody likes to jump around when reading; it looks like this:"
},
{
"code": null,
"e": 1784,
"s": 1487,
"text": "If you come from an analyst background, you may not grok web-app architecture, so let me illustrate that first. Apologies if this is oversimplification and man-splaining! But I have seen enough “ML model deployments” that are really just XGBoost wrapped in Flask, that I know it’s a real problem."
},
{
"code": null,
"e": 2350,
"s": 1784,
"text": "The user (on the left here) is using a browser that runs only Javascript, HTML, and CSS. That’s the frontend. It can make calls to a backend server to get results, which it then maybe processes and displays. The backend server should respond ASAP to the frontend’s requests; but the backend may need to talk to databases, third party APIs, and microservices. The backend may also produce slow jobs — such as ML jobs — at the request of the user, which it should put into a queue. (Bear in mind that usually that the user usually has to authenticate itself somehow)."
},
{
"code": null,
"e": 2400,
"s": 2350,
"text": "Now, let’s talk distributed web app architecture."
},
{
"code": null,
"e": 2817,
"s": 2400,
"text": "In general, we want to run as many backend instances as possible, for scalability. That’s why there are bubbles coming out of ‘server’ in the diagram above; they represent ‘more of these’. So, each instance has to remain stateless: finish handling the HTTP request and exit. Don’t keep anything in memory between requests, because a client’s first request might go to one server, and a subsequent request to another."
},
{
"code": null,
"e": 3298,
"s": 2817,
"text": "It’s bad if we have a long running endpoint: it would tie up one of our servers (say... doing some ML task), leaving it unable to handle other users’ requests. We need to keep the web server responsive and have it hand off long running tasks, with some kind of shared persistence so that when the user checks progress or requests the result, any server can report. Also, jobs, and parts-of-jobs, should be able to be done in parallel by as many workers as there are resources for."
},
{
"code": null,
"e": 3525,
"s": 3298,
"text": "The answer is a first-in, first-out (FIFO) queue. The backend simply enqueues jobs. Workers pick and process jobs out of the queue, performing training or inference, and storing models or predictions to the database when done."
},
{
"code": null,
"e": 3756,
"s": 3525,
"text": "With the library MLQ, the following is literally all you need for a backend web server — an endpoint to enqueue a job, an endpoint to check the progress of a job, and an endpoint to serve up a job’s result if the job has finished."
},
{
"code": null,
"e": 3814,
"s": 3756,
"text": "The architecture for truly deploying an ML model is this:"
},
{
"code": null,
"e": 5553,
"s": 3814,
"text": "Backend server receives a request from user’s web browser. It’s wrapped up in JSON but semantically would be something like: “Tomorrow is Wednesday and we sold 10 units today. How many customer support calls should we expect tomorrow?”Backend pushes the job {Wednesday, 10} into a queue (some place decoupled from the backend itself, such as Redis in the case of MLQ). The queue replies with “Thanks, let’s refer to that as Job ID 562”.Backend replies to the user: “I’ll do that calculation. It has ID 562. Please wait”. Backend is then free to serve other users.The user’s web browser starts displaying a ‘please wait’ spinner.Workers — at least, ones that are not currently processing another job — are constantly polling the queue for jobs. Probably, the workers exist on another server/computer, but they can also be different threads/processes on the same computer. Workers might have GPUs, whereas the backend server probably does not need to.Eventually, a worker will pick up the job, removing it from the queue, and process it (e.g. run {Wednesday, 10} through some XGBoost model). It’ll save the prediction to a database. Imagine this step takes 5 minutes.Meanwhile, the user’s web browser is polling the backend every 30 seconds to ask if job 562 is done yet. The backend checks if the database has a result stored at id=562 and replies accordingly. Any of our multiple horizontal backends is able to serve the user’s request. You might imagine that the shared database is a single point of failure, and you’d be right! But separately, we provisioned replicas and some failover mechanism, maybe sharding/load balancing, so it’s all good.After five minutes plus a bit, the user polls for a result, and we are able to serve it up."
},
{
"code": null,
"e": 5789,
"s": 5553,
"text": "Backend server receives a request from user’s web browser. It’s wrapped up in JSON but semantically would be something like: “Tomorrow is Wednesday and we sold 10 units today. How many customer support calls should we expect tomorrow?”"
},
{
"code": null,
"e": 5991,
"s": 5789,
"text": "Backend pushes the job {Wednesday, 10} into a queue (some place decoupled from the backend itself, such as Redis in the case of MLQ). The queue replies with “Thanks, let’s refer to that as Job ID 562”."
},
{
"code": null,
"e": 6119,
"s": 5991,
"text": "Backend replies to the user: “I’ll do that calculation. It has ID 562. Please wait”. Backend is then free to serve other users."
},
{
"code": null,
"e": 6185,
"s": 6119,
"text": "The user’s web browser starts displaying a ‘please wait’ spinner."
},
{
"code": null,
"e": 6507,
"s": 6185,
"text": "Workers — at least, ones that are not currently processing another job — are constantly polling the queue for jobs. Probably, the workers exist on another server/computer, but they can also be different threads/processes on the same computer. Workers might have GPUs, whereas the backend server probably does not need to."
},
{
"code": null,
"e": 6724,
"s": 6507,
"text": "Eventually, a worker will pick up the job, removing it from the queue, and process it (e.g. run {Wednesday, 10} through some XGBoost model). It’ll save the prediction to a database. Imagine this step takes 5 minutes."
},
{
"code": null,
"e": 7207,
"s": 6724,
"text": "Meanwhile, the user’s web browser is polling the backend every 30 seconds to ask if job 562 is done yet. The backend checks if the database has a result stored at id=562 and replies accordingly. Any of our multiple horizontal backends is able to serve the user’s request. You might imagine that the shared database is a single point of failure, and you’d be right! But separately, we provisioned replicas and some failover mechanism, maybe sharding/load balancing, so it’s all good."
},
{
"code": null,
"e": 7299,
"s": 7207,
"text": "After five minutes plus a bit, the user polls for a result, and we are able to serve it up."
},
{
"code": null,
"e": 7887,
"s": 7299,
"text": "There is a little bit more involved, mostly to handle resiliency and persistency (what if a worker goes offline midway through a job? What if the user’s input was garbage and caused the job to fail?) But that’s the basics. Here’s a very simple worker template for MLQ. It just waits until it receives a job, then runs a function on the job parameters and stores the result. You can run as many of these things in parallel as you want, on the same server or distributed servers. If you look in the repo, you’ll find the complete code for doing this with the Nietzche/Tensorflow RNN model."
},
{
"code": null,
"e": 8346,
"s": 7887,
"text": "There are several good queueing frameworks available, or things that make suitable queues, including Celery, Dask, ZeroMQ, native Redis, and a library I recently made to be an easy-to-use version of all this for deploying side projects without complexity: MLQ. Kafka is also a thing but regular readers will know I’m not a fan of over-architected, Java-based projects. MLQ is immature; I’m not trying to sell it here. Use Celery instead for serious projects."
},
{
"code": null,
"e": 8753,
"s": 8346,
"text": "This week, I spent some time with NVIDIA and asked about their canonical solution for job queueing (specifically, in my case, so that I can make a GPU farm available to everyone at work with a Jupyter notebook, without them all trying to submit jobs at the same time). There isn’t one, yet, but I was assured they’re working on it. Until then, hand-rolling a solution with a queuing system is the only way."
},
{
"code": null,
"e": 8907,
"s": 8753,
"text": "(Also possibly of interest from that meeting: everyone agreed that MXNet is a really good framework, perhaps the best — but sadly may be on its way out)."
},
{
"code": null,
"e": 9449,
"s": 8907,
"text": "You might be wondering, how does ML queuing work with real time applications? The answer is: the same way, but it’s not ideal due to latency (for example, industrial IoT). Queue entrypoints can be distributed, so the real tricks are in how the database handles that. Additionally, general opinion is that people won’t accept private data being sent to the backend, another reason why “ML at the edge” is a hot topic. If all the data needed for inference is available in one place, let’s do the inference there. And so, without further adieu:"
},
{
"code": null,
"e": 9817,
"s": 9449,
"text": "And so, enter the poor maligned front end engineer, whom everyone assumed thought linear algebra meant doing calculations one after the other, but who is repeatedly the most in-demand person on your team. Who, it turns out, probably wasn’t a nitwit after all, but maybe just biding their time until Javascript’s AI tools caught up to Python’s. And recently, they did."
},
{
"code": null,
"e": 10204,
"s": 9817,
"text": "The nutshell is, you can use Tensorflow from Javascript now. Initially I dismissed Google’s announcement of this thinking it probably meant running inference with hobbled, crippled models which have to fit into some pattern (a single convolutional layer with a maximum 12 filters, FP-8, etc). And only inference! Definitely not training. How could that even be possible from Javascript."
},
{
"code": null,
"e": 10222,
"s": 10204,
"text": "I was very wrong!"
},
{
"code": null,
"e": 10845,
"s": 10222,
"text": "I don’t want to focus on training a model in Javascript in this article — that’s super cool, but not always super practical — but rather, provide an alternative deployment pattern for a trained model. Bear in mind that your trained model will be available to the world. Anyone can copy it and see what the layers look like, and steal all the parameters. I guess I’d say that’s inevitable and your model probably is less special than you imagine: any competitive advantage is in the data and speed that you can deploy model revisions. And of course, how great of a product you build atop the model. Anyway, be wary of that."
},
{
"code": null,
"e": 11110,
"s": 10845,
"text": "TensorflowJS can execute, in a user’s web browser, any Keras model. And, via Web GL, they are hardware accelerated! I don’t have hard numbers but anecdotally, it’s worked well for me. Definitely not as fast as Python, but I am sure that over time JS will catch up."
},
{
"code": null,
"e": 11277,
"s": 11110,
"text": "For this article, I copied the code from the official Tensorflow Keras text generation LSTM example and ran it to build a model. My complete Jupyter notebook is here."
},
{
"code": null,
"e": 11365,
"s": 11277,
"text": "Then, export the model to TFJS. You’ll probably need to pip install tensorflowjs. Then:"
},
{
"code": null,
"e": 11437,
"s": 11365,
"text": "import tensorflowjs as tfjstfjs.converters.save_keras_model(model, '.')"
},
{
"code": null,
"e": 11503,
"s": 11437,
"text": "In the directory you’ll now find model.json and group1-shard1of1."
},
{
"code": null,
"e": 11776,
"s": 11503,
"text": "All the Tensorflow JS examples right now use Yarn, which I understand is a bit outdated (back when I was doing more visualization work, yarn became the new hotness versus npm's old dog; now it’s gone back the other way). Let’s for now put aside front-end’s capriciousness."
},
{
"code": null,
"e": 11950,
"s": 11776,
"text": "Anyway, hopefully you have yarn and a working node installation (version 9 at least). For a minimal example of a website that serves a frontend model, you can clone my repo."
},
{
"code": null,
"e": 12121,
"s": 11950,
"text": "The actual Javascript code is not that interesting. There’s a bit of boilerplate around creating a tensor from a buffer, but all you need to do to actually use the model:"
},
{
"code": null,
"e": 12214,
"s": 12121,
"text": "model = await tf.loadModel('https://mydomain.com/model.json');output = model.predict(input);"
},
{
"code": null,
"e": 12305,
"s": 12214,
"text": "The complete, end-to-end, frontend (and backend) deployment is example is in my repo here."
},
{
"code": null,
"e": 12383,
"s": 12305,
"text": "Viola! Hardware accelerated Keras models where you don’t even need a backend."
},
{
"code": null,
"e": 12778,
"s": 12383,
"text": "The biggest drawback I can think of, other than your network architecture being available for all to see, is that in many practical applications not all of the data is available at the front end. Where I work, when a user enters a query, we fetch a massive amount of data from Elasticsearch, and run a model (actually several) on that data. Sending so much data to the frontend is not feasible."
},
{
"code": null,
"e": 12902,
"s": 12778,
"text": "Even if it was, you might want to charge per-prediction, which would not be possible once you have entered Javascript land."
},
{
"code": null,
"e": 12915,
"s": 12902,
"text": "It’s simple:"
},
{
"code": null,
"e": 12927,
"s": 12915,
"text": "Use a queue"
},
{
"code": null,
"e": 13038,
"s": 12927,
"text": "Don’t tie up your backend webserver; separate any ML processes from the act of serving up assets and endpoints"
},
{
"code": null,
"e": 13101,
"s": 13038,
"text": "Ensure everything is stateless and able to operate in parallel"
},
{
"code": null,
"e": 13130,
"s": 13101,
"text": "Consider frontend deployment"
}
] |
Installing Hadoop 3.1.0 multi-node cluster on Ubuntu 16.04 Step by Step | by Hadi Fadlallah | Towards Data Science | There are many links on the web about installing Hadoop 3. Many of them are not working well or need improvements. This article is based on the official documentation and other articles, in addition to many answers from Stackoverflow.com.
Note: All prerequisites must be applied on name node and data nodes
First, we need to install SSH and few software installation utilities for Java 8:
sudo apt install \
openssh-server \
software-properties-common \
python-software-properties
Then we need to install Oracle’s Java 8 distribution and update the current OS.
sudo add-apt-repository ppa:webupd8team/java
sudo apt update
sudo apt install oracle-java8-installer
To verify the java version you can use the following command:
java -version
We will use a dedicated Hadoop user account for running Hadoop applications. While that’s not required, it is recommended because it helps to separate the Hadoop installation from other software applications and user accounts running on the same machine (security, permissions, backups, etc).
sudo addgroup hadoopgroup
sudo adduser --ingroup hadoopgroup hadoopuser
sudo adduser hadoopuser sudo
You can check the groups and users using the following commands:
compgen -g
compgen -u
Hadoop requires SSH access to manage its different nodes, i.e. remote machines plus your local machine.
First you need to log in as hadoopuser:
sudo su -- hadoopuser
The following commands are used for generating a key value pair using SSH
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
Copy the public keys form id_rsa.pub to authorized_keys.
cat ~/.ssh/id_rsa.pub>> ~/.ssh/authorized_keyschmod 0600 ~/.ssh/authorized_keys
Make sure hadoopuser can ssh to its own account without password. ssh to localhost from hadoopuser account to make sure it is working fine.
ssh localhost
Note: If you get error: ssh: connect to host localhost port 22: Connection refused, then, please try to install ssh-server using below command.
sudo apt-get install ssh
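For a multi-node setup, the name node will also need password-less SSH into every data node. Assuming the host names introduced later in this article, that is typically done with something like:

ssh-copy-id hadoopuser@hadoop-datanode-2
ssh-copy-id hadoopuser@hadoop-datanode-3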
In this article, we will install Hadoop on three machines:
The first machine will act as the name node (master) and a data node (slave); the other machines are data nodes (slaves).
On each machine we have to edit the /etc/hosts files using the following command
sudo gedit /etc/hosts
Each file must contain these rows:
127.0.0.1 localhost
10.0.1.1 hadoop-namenode
10.0.1.2 hadoop-datanode-2
10.0.1.3 hadoop-datanode-3
Note: if the /etc/hosts file contains the following line
127.0.1.1 <Hostname>
Then you have to delete this line.
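A quick sanity check, run from each machine, is to make sure every host name resolves and is reachable, for example:

ping -c 1 hadoop-namenode
ping -c 1 hadoop-datanode-2
ping -c 1 hadoop-datanode-3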
We are going to install all the software under the /opt directory and store HDFS’ underlying data there as well. Below we will create the folders with a single command.
sudo mkdir -p /opt/hadoop/logs /opt/hdfs/{datanode,namenode} /opt/yarn/logs
The layout of the folders will look like this:
/opt/
├── hadoop
│   ├── logs
├── hdfs
│   ├── datanode
│   └── namenode
├── yarn
│   ├── logs
You can download the hadoop-3.1.0.tar.gz using the following command:
wget -c -O hadoop.tar.gz http://www-eu.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz
To decompress the Hadoop package you can use the following command:
sudo tar -xvf hadoop.tar.gz \
--directory=/opt/hadoop \
--strip 1
The binary release of Hadoop 3 is 293 MB compressed. Its decompressed size is 733 MB with 400 MB of small documentation files that may take a long time to decompress. You can skip these files by adding the following line to the command above:
--exclude=hadoop-3.1.0/share/doc
Note: These Steps must be done on the Name node and Data nodes as well.
There are environment settings that will be used by Hadoop, Hive and Spark and are shared by both root and the regular user accounts. To centralize these settings I’ve stored them in /etc/profile and created a symbolic link from /root/.bashrc to this file as well. That way all users will have centrally-managed settings.
sudo gedit /etc/profile
The /etc/profile file should look like this:
if [ "$PS1" ]; then
  if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
    # The file bash.bashrc already sets the default PS1.
    # PS1='\h:\w\$ '
    if [ -f /etc/bash.bashrc ]; then
      . /etc/bash.bashrc
    fi
  else
    if [ "`id -u`" -eq 0 ]; then
      PS1='# '
    else
      PS1='$ '
    fi
  fi
fi

if [ -d /etc/profile.d ]; then
  for i in /etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
  unset i
fi

export HADOOP_HOME=/opt/hadoop
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_MAPRED_HOME=/opt/hadoop
export HADOOP_COMMON_HOME=/opt/hadoop
export HADOOP_HDFS_HOME=/opt/hadoop
export YARN_HOME=/opt/hadoop
The following command will create a symbolic link between ~/.bashrc and /etc/profile and apply the changes made to /etc/profile
sudo ln -sf /etc/profile /root/.bashrc
source /etc/profile
Update the /opt/hadoop/etc/hadoop/hadoop-env.sh file and set the JAVA_HOME, HADOOP_HOME, HADOOP_CONF_DIR and HADOOP_LOG_DIR variables:
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export HADOOP_LOG_DIR=/opt/hadoop/logs
Log out and log back in to the hadoopuser account, then check the Hadoop installation using the command below.
hadoop version
First, we have to update the hdfs-site.xml file located at /opt/hadoop/etc/hadoop/ to define the name node and data node directories on this machine and to set the replication factor and other settings:
sudo gedit /opt/hadoop/etc/hadoop/hdfs-site.xml
The file should look like this:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///opt/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/hdfs/datanode</value>
    <description>DataNode directory</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
</configuration>
Then we have to update the core-site.xml file located at /opt/hadoop/etc/hadoop to let the Hadoop distribution know where the name node is located:
sudo gedit /opt/hadoop/etc/hadoop/core-site.xml
The file should look like this:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-namenode:9820/</value>
    <description>NameNode URI</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
    <description>Buffer size</description>
  </property>
</configuration>
Then we have to update the yarn-site.xml file located at /opt/hadoop/etc/hadoop/:
sudo gedit /opt/hadoop/etc/hadoop/yarn-site.xml
The file should look like this:
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Yarn Node Manager Aux Service</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:///opt/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:///opt/yarn/logs</value>
  </property>
</configuration>
Then we have to update the mapred-site.xml file located at /opt/hadoop/etc/hadoop/; it should look like this:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>MapReduce framework name</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop-namenode:10020</value>
    <description>Default port is 10020.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop-namenode:19888</value>
    <description>Default port is 19888.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
    <description>Directory where history files are written by MapReduce jobs.</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
    <description>Directory where history files are managed by the MR JobHistory Server.</description>
  </property>
</configuration>
Now we have to format the name node
hdfs namenode -format
Finally, we have to add the data nodes (slaves) to the workers file located in /opt/hadoop/etc/hadoop.
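The guide does not show the command used to open the workers file; following its convention of using gedit for the other configuration files, it would presumably be:

sudo gedit /opt/hadoop/etc/hadoop/workers

The file should simply list the data node IPs: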
10.0.1.1
10.0.1.2
10.0.1.3
After configuring the data nodes, you have to make sure that the name node has passwordless access to them:
ssh-copy-id -i /home/hadoopuser/.ssh/id_rsa.pub hadoopuser@hadoop-datanode-2
ssh-copy-id -i /home/hadoopuser/.ssh/id_rsa.pub hadoopuser@hadoop-datanode-3
Note: Instead of downloading Hadoop on each data node, you can copy the hadoop.tar.gz file from the name node to the data nodes and extract it there. You can use the following commands:
scp hadoop.tar.gz hadoop-datanode-2:/home/hadoopuser
scp hadoop.tar.gz hadoop-datanode-3:/home/hadoopuser
On each data node you must do the following steps:
We have to update the hdfs-site.xml, core-site.xml, yarn-site.xml and mapred-site.xml files located in the /opt/hadoop/etc/hadoop directory as follows:
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///opt/hdfs/datanode</value>
    <description>DataNode directory</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
  </property>
</configuration>
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-namenode:9820/</value>
    <description>NameNode URI</description>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Yarn Node Manager Aux Service</description>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>MapReduce framework name</description>
  </property>
</configuration>
After finishing the steps above, from the name node we have to execute the following command to start the Name node, data nodes and secondary name node:
start-dfs.sh
It will give the following output:
Starting namenodes on [hadoop-namenode]
Starting datanodes
Starting secondary namenodes [hadoop-namenode]
Also, to start the resource manager and the node managers, we have to execute the following command:
start-yarn.sh
It will give the following output:
Starting resourcemanager
Starting nodemanagers
After that, to ensure that Hadoop started successfully, we must run the jps command on the name node and the data nodes; it must give the following output:
On Name node (ignore process ids):
16488 NameNode
16622 DataNode
17215 NodeManager
17087 ResourceManager
17530 Jps
16829 SecondaryNameNode
On Data nodes (ignore process ids):
2306 DataNode
2479 NodeManager
2581 Jps
If you get similar output then all Hadoop daemons started successfully.
Note: You may check the logs at /opt/hadoop/logs, and verify that everything is fine by running the hdfs dfsadmin -report command (it must report Live datanodes (3)).
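As an extra sanity check (not part of the original guide), you can also exercise HDFS directly with a few standard HDFS shell commands:

hdfs dfsadmin -report                 # should report Live datanodes (3)
hdfs dfs -mkdir -p /user/hadoopuser   # create a user directory in HDFS
hdfs dfs -ls /user                    # verify that the directory exists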
Namenode
Access the following URL: http://hadoop-namenode:9870/
ResourceManager
Access the following URL: http://hadoop-namenode:8088/
How to multiply each value in a column by a constant in R?
To multiply each value in a column by a constant, we can use the multiplication sign *.
For example, if we have a data frame called df that contains a column, say x, and we want to multiply each value in x by 10, then we can use the command below −
df$x<-10*(df$x)
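Equivalently (an alternative not used in the original answer), the same update can be written with base R's transform() function −

df <- transform(df, x = 10 * x)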
Following snippet creates a sample data frame −
x<-rpois(20,1)
df1<-data.frame(x)
df1
The following dataframe is created −
x
1 0
2 2
3 1
4 1
5 0
6 1
7 1
8 0
9 3
10 1
11 0
12 0
13 1
14 1
15 0
16 3
17 2
18 0
19 2
20 0
To multiply each value in x by 5, add the following code to the above snippet −
x<-rpois(20,1)
df1<-data.frame(x)
df1$x<-5*(df1$x)
df1
If you execute all the above given snippets as a single program, it generates the following Output −
x
1 0
2 10
3 5
4 5
5 0
6 5
7 5
8 0
9 15
10 5
11 0
12 0
13 5
14 5
15 0
16 15
17 10
18 0
19 10
20 0
Following snippet creates a sample data frame −
y<-round(rnorm(20),1)
df2<-data.frame(y)
df2
The following dataframe is created −
y
1 1.0
2 -1.8
3 0.0
4 0.2
5 -2.9
6 1.0
7 -0.6
8 -1.3
9 -0.2
10 -0.3
11 0.3
12 0.8
13 -0.9
14 0.4
15 -2.2
16 1.1
17 0.5
18 3.0
19 1.6
20 1.0
To multiply each value in y by 2, add the following code to the above snippet −
y<-round(rnorm(20),1)
df2<-data.frame(y)
df2$y<-2*(df2$y)
df2
If you execute all the above given snippets as a single program, it generates the following Output −
y
1 2.0
2 -3.6
3 0.0
4 0.4
5 -5.8
6 2.0
7 -1.2
8 -2.6
9 -0.4
10 -0.6
11 0.6
12 1.6
13 -1.8
14 0.8
15 -4.4
16 2.2
17 1.0
18 6.0
19 3.2
20 2.0
NeuralProphet: A Time-Series Modeling Library based on Neural-Networks | by Esmaeil Alizadeh | Towards Data Science
NeuralProphet is a python library for modeling time-series data based on neural networks. It's built on top of PyTorch and is heavily inspired by Facebook Prophet and AR-Net libraries.
From the library name, you may ask what is the main difference between Facebook’s Prophet library and NeuralProphet. According to NeuralProphet’s documentation, the added features are[1]:
Using PyTorch’s Gradient Descent optimization engine making the modeling process much faster than Prophet
Using AR-Net for modeling time-series autocorrelation (aka serial correlation)
Custom losses and metrics
Having configurable non-linear layers of feed-forward neural networks,
etc.
Based on the project’s GitHub page, the main maintainer of this project is Oskar Triebe from Stanford University with collaboration from Facebook and Monash University.
The project is in the beta phase, so I would advise you to be cautious if you want to use this library in a production environment.
Unfortunately, there is no pip or conda package for this library at the time of writing this article. You can install it by cloning the repository and running pip install . from the repository root. However, if you are going to use the package in a Jupyter Notebook environment, you should install the live version with pip install .[live]. This will provide more features such as a live plot of train and validation loss using plot_live_loss().
git clone https://github.com/ourownstory/neural_prophet
cd neural_prophet
pip install .[live]
I would recommend creating a fresh environment (a conda or venv) and installing the NeuralProphet package from the new environment letting the installer take care of all dependencies (it has Pandas, Jupyter Notebook, PyTorch as dependencies).
Now that we have the package installed, let’s play!
Here, I’m using the daily climate data in Delhi from 2013 to 2017 that I found on Kaggle. First, let’s import the main packages.
import pandas as pd
from neuralprophet import NeuralProphet
Then, we can read the data into a Pandas DataFrame. The NeuralProphet object expects the time-series data to have a date column named ds and the time-series column we want to predict named y.
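The code for this step is embedded as a gist in the original post and is not reproduced here; a minimal sketch, assuming the Kaggle training file is named DailyDelhiClimateTrain.csv and contains date and meantemp columns (the file and column names are assumptions), could look like this:

df = pd.read_csv("DailyDelhiClimateTrain.csv")   # assumed file name
df = df[["date", "meantemp"]]                    # assumed column names
df.columns = ["ds", "y"]                         # NeuralProphet expects 'ds' and 'y'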
Now let's initialize the model. Below, I've listed all the default arguments defined for the NeuralProphet object, including additional information about some of them. These are the hyperparameters you can configure in the model. Of course, if you are planning to use the default values, you can just do model = NeuralProphet().
After configuring the model and its hyperparameters, we need to train the model and make predictions. Let’s make up to a one-year prediction of the temperature.
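This step is also embedded as a gist in the original post; a sketch using NeuralProphet's fit, make_future_dataframe and predict methods (with default hyperparameters assumed) might be:

model = NeuralProphet()                                  # default hyperparameters
metrics = model.fit(df, freq="D")                        # daily data
future = model.make_future_dataframe(df, periods=365)    # one year ahead
forecast = model.predict(future)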
You can simply plot the forecast by calling model.plot(forecast) as follows:
The one-year forecast plot is shown above, where the time period between 2017–01–01 to 2018–01–01 is the prediction. As can be seen, the forecast plot resembles the historical time-series. It both captured the seasonality as well as the slow-growing linear trend.
You can plot the parameters by calling model.plot_parameters().
The model loss using Mean Absolute Error (MAE) is plotted below. You can also use the Smoothed L1-Loss function.
In this post, we talked about NeuralProphet, a python library that models time-series based on Neural Networks. The library uses PyTorch as a backend. As a case study, we created a prediction model for daily Delhi climate time-series data and made a one-year prediction. An advantage of using this library is its similar syntax to Facebook’s Prophet library.
You can find the Jupyter notebook for this blog post on GitHub.
Thanks for reading!
If you liked this post, you can join my mailing list here to receive more posts about Data Science, Machine Learning, Statistics, and interesting Python libraries and tips & tricks. You can also follow me on my website, Medium, or LinkedIn.
You can now install the package directly through pip install neuralprophet or pip install neuralprophet[live]. You don't need to clone the repository as suggested in the article.
[1] NeuralProphet, Documentation
[2] O. J. Triebe et al, AR-Net: A Simple Auto-Regressive Neural Network For Time-Series, (2019)
[3] https://facebook.github.io/prophet/
[4] https://github.com/ourownstory/AR-Net
Originally published at https://www.ealizadeh.com.
Tree Data Structure (Case Study)
So far we have seen different concepts of logic programming in Prolog. Now we will see one case study on Prolog. We will see how to implement a tree data structure using Prolog, and we will create our own operators. So let us start the planning.
Suppose we have a tree as shown below −
We have to implement this tree using prolog. We have some operations as follows −
op(500, xfx, 'is_parent').
op(500, xfx, 'is_sibling_of').
op(500, xfx, 'is_at_same_level').
And another predicate, namely leaf_node(Node).
In these operator definitions, you have seen parameters of the form (500, xfx, <operator_name>). The first argument (here 500) is the priority of the operator, 'xfx' indicates that it is a binary infix operator, and <operator_name> is the name of the operator.
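As a small illustration (not part of the tree program), a custom operator can be declared with op/3 and then used to write facts in infix form:

:- op(500, xfx, likes).   % declare 'likes' as a binary infix operator
alice likes prolog.       % the same term as likes(alice, prolog).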
These operators can be used to define the tree database. We can use these operators as follows −
a is_parent b, or is_parent(a, b). This indicates that node a is the parent of node b.
X is_sibling_of Y, or is_sibling_of(X,Y). This indicates that X is the sibling of node Y. The rule is: if another node Z is the parent of X, Z is also the parent of Y, and X and Y are different, then X and Y are siblings.
leaf_node(Node). A node (Node) is said to be a leaf node when it has no children.
X is_at_same_level Y, or is_at_same_level(X,Y). This checks whether X and Y are at the same level. The condition is: when X and Y are the same node, it returns true; otherwise W is the parent of X, Z is the parent of Y, and W and Z must be at the same level.
As shown above, other rules are defined in the code. So let us see the program to get better view.
/* The tree database */
:- op(500,xfx,'is_parent').
a is_parent b. c is_parent g. f is_parent l. j is_parent q.
a is_parent c. c is_parent h. f is_parent m. j is_parent r.
a is_parent d. c is_parent i. h is_parent n. j is_parent s.
b is_parent e. d is_parent j. i is_parent o. m is_parent t.
b is_parent f. e is_parent k. i is_parent p. n is_parent u.
n is_parent v.
/* X and Y are siblings i.e. child from the same parent */
:- op(500,xfx,'is_sibling_of').
X is_sibling_of Y :- Z is_parent X,
Z is_parent Y,
X \== Y.
leaf_node(Node) :- \+ is_parent(Node,Child). % Node grounded
/* X and Y are on the same level in the tree. */
:-op(500,xfx,'is_at_same_level').
X is_at_same_level X .
X is_at_same_level Y :- W is_parent X,
Z is_parent Y,
W is_at_same_level Z.
| ?- [case_tree].
compiling D:/TP Prolog/Sample_Codes/case_tree.pl for byte code...
D:/TP Prolog/Sample_Codes/case_tree.pl:20: warning: singleton variables [Child] for leaf_node/1
D:/TP Prolog/Sample_Codes/case_tree.pl compiled, 28 lines read - 3244 bytes written, 7 ms
yes
| ?- i is_parent p.
yes
| ?- i is_parent s.
no
| ?- is_parent(i,p).
yes
| ?- e is_sibling_of f.
true ?
yes
| ?- is_sibling_of(e,g).
no
| ?- leaf_node(v).
yes
| ?- leaf_node(a).
no
| ?- is_at_same_level(l,s).
true ?
yes
| ?- l is_at_same_level v.
no
| ?-
Here, we will see some more operations that will be performed on the above given tree data structure.
Let us consider the same tree here −
We will define other operations −
path(Node)
locate(Node)
As we have already created the tree database, we will create a new program that holds these operations, and then consult the new file to use them with our pre-existing program.
So let us see what is the purpose of these operators −
path(Node) − This will display the path from the root node to the given node. To solve this, suppose X is parent of Node, then find path(X), then write X. When root node ‘a’ is reached, it will stop.
locate(Node) − This will locate a node (Node) from the root of the tree. In this case, we will call the path(Node) and write the Node.
Let us see the program in execution −
path(a). /* Can start at a. */
path(Node) :- Mother is_parent Node, /* Choose parent, */
path(Mother), /* find path and then */
write(Mother),
write(' --> ').
/* Locate node by finding a path from root down to the node */
locate(Node) :- path(Node),
write(Node),
nl.
| ?- consult('case_tree_more.pl').
compiling D:/TP Prolog/Sample_Codes/case_tree_more.pl for byte code...
D:/TP Prolog/Sample_Codes/case_tree_more.pl compiled, 9 lines read - 866 bytes written, 6 ms
yes
| ?- path(n).
a --> c --> h -->
true ?
yes
| ?- path(s).
a --> d --> j -->
true ?
yes
| ?- path(w).
no
| ?- locate(n).
a --> c --> h --> n
true ?
yes
| ?- locate(s).
a --> d --> j --> s
true ?
yes
| ?- locate(w).
no
| ?-
Now let us define some advanced operations on the same tree data structure.
Here we will see how to find the height of a node, that is, the length of the longest path from that node, using the Prolog built-in predicate setof/3. This predicate takes (Template, Goal, Set). This binds Set to the list of all instances of Template satisfying the goal Goal.
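For instance, an illustrative query (not taken from the tutorial) against the tree database defined earlier could collect all children of node a:

| ?- setof(C, a is_parent C, Children).

Children = [b,c,d]

yes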
We have already defined the tree before, so we will consult the current code to execute these set of operations without redefining the tree database again.
We will create some predicates as follows −
ht(Node,H). This finds the height. It checks whether the node is a leaf; if so, the height H is 0, otherwise it recursively finds the height of a child of Node and adds 1 to it.
max([X|R],M,A). This computes the maximum of the list elements and the value M. If the list is empty, M is returned as the maximum; otherwise, if the head X is greater than M, max() is called on the tail with X, else it is called on the tail with M.
height(N,H). This uses the setof/3 predicate. It finds the set of results of the goal ht(N,Z) for the template Z and stores them in the list variable Set. It then finds the maximum of Set (starting from 0) and stores the result in H.
Now let us see the program in execution −
height(N,H) :- setof(Z,ht(N,Z),Set),
max(Set,0,H).
ht(Node,0) :- leaf_node(Node),!.
ht(Node,H) :- Node is_parent Child,
ht(Child,H1),
H is H1 + 1.
max([],M,M).
max([X|R],M,A) :- (X > M -> max(R,X,A) ; max(R,M,A)).
| ?- consult('case_tree_adv.pl').
compiling D:/TP Prolog/Sample_Codes/case_tree_adv.pl for byte code...
D:/TP Prolog/Sample_Codes/case_tree_adv.pl compiled, 9 lines read - 2060 bytes written, 9 ms
yes
| ?- ht(c,H).
H = 1 ? a
H = 3
H = 3
H = 2
H = 2
yes
| ?- max([1,5,3,4,2],10,Max).
Max = 10
yes
| ?- max([1,5,3,40,2],10,Max).
Max = 40
yes
| ?- setof(H, ht(c,H),Set).
Set = [1,2,3]
yes
| ?- max([1,2,3],0,H).
H = 3
yes
| ?- height(c,H).
H = 3
yes
| ?- height(a,H).
H = 4
yes
| ?-
All Unique Permutations of an array | Practice | GeeksforGeeks | Given an array arr[] of length n. Find all possible unique permutations of the array.
Example 1:
Input:
n = 3
arr[] = {1, 2, 1}
Output:
1 1 2
1 2 1
2 1 1
Explanation:
These are the only possible unique permutations
for the given array.
Example 2:
Input:
n = 2
arr[] = {4, 5}
Output:
4 5
5 4
Your Task:
You don't need to read input or print anything. You only need to complete the function uniquePerms() that takes an integer n, and an array arr of size n as input and returns a sorted list of lists containing all unique permutations of the array.
Expected Time Complexity: O(n*n!)
Expected Auxiliary Space: O(n*n!)
Constraints:
1 ≤ n ≤ 9
1 ≤ arr[i] ≤ 10
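For reference, one common way to generate the unique permutations (a minimal sketch, not the official editorial) is to sort the array and backtrack, skipping a value at a given depth when its identical left neighbour has not been used yet; the class and helper names below are illustrative.
import java.util.*;
class UniquePermsSketch {
    // Returns all unique permutations in sorted (lexicographic) order.
    static List<List<Integer>> uniquePerms(int[] arr, int n) {
        Arrays.sort(arr); // sorting groups duplicates together so they can be skipped
        List<List<Integer>> res = new ArrayList<>();
        backtrack(arr, n, new ArrayList<Integer>(), new boolean[n], res);
        return res;
    }
    static void backtrack(int[] arr, int n, List<Integer> cur,
                          boolean[] used, List<List<Integer>> res) {
        if (cur.size() == n) {
            res.add(new ArrayList<>(cur));
            return;
        }
        for (int i = 0; i < n; i++) {
            if (used[i]) continue;
            // skip a duplicate value while its identical left neighbour is unused at this depth
            if (i > 0 && arr[i] == arr[i - 1] && !used[i - 1]) continue;
            used[i] = true;
            cur.add(arr[i]);
            backtrack(arr, n, cur, used, res);
            cur.remove(cur.size() - 1);
            used[i] = false;
        }
    }
    public static void main(String[] args) {
        System.out.println(uniquePerms(new int[]{1, 2, 1}, 3)); // [[1, 1, 2], [1, 2, 1], [2, 1, 1]]
    }
}
This runs in O(n*n!) time, which matches the expected complexity stated above.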
0
nikil19b1011793 weeks ago
set<vector<int>>st; void recurPermute(vector<vector<int>>&res,vector<int>&ds,int freq[],vector<int>&arr){ if(ds.size()==arr.size()){ if(st.find(ds)==st.end()){ res.push_back(ds); st.insert(ds); } return; } for(int i=0;i<arr.size();i++){ if(freq[i]==0){ ds.push_back(arr[i]); freq[i]=1; recurPermute(res,ds,freq,arr); freq[i]=0; ds.pop_back(); } } } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here vector<vector<int>>res; vector<int>ds; int freq[20]={0}; recurPermute(res,ds,freq,arr); sort(res.begin(),res.end()); return res; }
+2
lindan1232 months ago
set<vector<int>>s;
vector<vector<int>> ans;
void help(vector<int>arr,int i,int n)
{
if(i==n)
{
s.insert(arr);
return;
}
for(int x=i;x<=n;x++)
{
swap(arr[i],arr[x]);
help(arr,i+1,n);
swap(arr[i],arr[x]);
}
}
vector<vector<int>> uniquePerms(vector<int> arr ,int n) {
sort(arr.begin(),arr.end());
help(arr,0,n-1);
for(auto x: s)
{
ans.push_back(x);
}
sort(ans.begin(),ans.end());
return ans;
}
Time Taken : 1.2sec
Cpp
0
cs21m0593 months ago
vector<vector<int>> uniquePerms(vector<int> arr, int n) {
    // code here
    map<vector<int>, int> m;
    vector<vector<int>> ans;
    sort(arr.begin(), arr.end());
    do {
        if (m.find(arr) == m.end()) {
            ans.push_back(arr);
        }
        m[arr]++;
    } while (next_permutation(arr.begin(), arr.end()));
    return ans;
}
0
saket727403 months ago
My correct java solution which passed 100% test cases
static void permutation(ArrayList<Integer> arr, int l, int r, Set<ArrayList<Integer>> a){ if(l == r){ //System.out.println(arr); // ArrayList<Integer> b = new ArrayList<>(); // b.addAll(arr); a.add(new ArrayList<Integer>(arr)); return ; } for(int i=l;i<=r;i++){ arr = interchangeInt(arr, l, i); permutation(arr, l+1, r, a); arr = interchangeInt(arr, l, i); } } static ArrayList<Integer> interchangeInt(ArrayList<Integer> arr, int a, int b){ int temp = arr.get(a); int l = arr.get(b); arr.set(a, l); arr.set(b, temp); return arr; } static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { // code here Set<ArrayList<Integer>> set = new HashSet<>(); permutation(arr, 0, n-1, set); ArrayList<ArrayList<Integer>> A = new ArrayList<ArrayList<Integer>>(set); Collections.sort(A, new Comparator<ArrayList<Integer>>(){ @Override public int compare(ArrayList<Integer> a, ArrayList<Integer> b){ int comp = 0; for(int i=0;i<Math.min(a.size(), b.size());i++){ comp = Integer.compare(a.get(i), b.get(i)); if(comp != 0) return comp; } return Integer.compare(a.size(), b.size()); } }); return A; }
+1
amruthack11013 months ago
@gfg please fix this issue, the code is giving correct output in other text editors but not here
static ArrayList<ArrayList<Integer>> m1 = new ArrayList<>();
static HashMap<ArrayList,Integer> m= new HashMap<>();
static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) {
//ArrayList<ArrayList<Integer>> res=new ArrayList<>();
Collections.sort(arr);
int[] nums=new int[n];
for(int i=0;i<n;i++){
nums[i]=arr.get(i);
}
boolean[] visited = new boolean[nums.length];
findPermutations(nums, new ArrayList<>(), visited);
return m1;
}
static void findPermutations(int[] nums, ArrayList<Integer> cl, boolean[] visited) {
if (cl.size() == nums.length) {
if(!m.containsKey(cl)){
m.put( (cl) ,1 );
m1.add(new ArrayList<>(cl));
}
return;
}
for(int i = 0; i < nums.length; i++) {
if (!visited[i]) {
visited[i] = true;
cl.add(nums[i]);
findPermutations(nums, cl, visited);
cl.remove(cl.size() - 1);
visited[i] = false;
}
}
}
0
krishnagautam58993 months ago
set<vector<int>>ans; //use set for unique elements void help(vector<int>&nums,int idx,int n){ if(idx==n){ ans.insert(nums); return ; } for(int i=idx;i<n;i++){ swap(nums[idx],nums[i]); help(nums,idx+1,n); swap(nums[idx],nums[i]); } } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here vector<vector<int>>v; help(arr,0,n); for(auto &it:ans){ v.push_back(it); } return v; }
+4
ghazanferwahab23 months ago
Dear @gfg, please fix your compiler. My code is giving the correct answer, yet it is shown as not accepted for the test case
6
2 1 2 3 4 5
This is very frustrating, and I am getting AC on LeetCode for the same question. My code:
static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { // code Collections.sort(arr); int a1[]=new int [arr.size()]; for(int i=0;i<arr.size();i++) a1[i]=arr.get(i); ArrayList<ArrayList<Integer>> ans=new ArrayList<>(); solve(0,a1.length,a1,ans); return ans; } static void solve(int ind,int l,int nums[],ArrayList<ArrayList<Integer>>a){ if(ind==l){ List<Integer>ds=new ArrayList<>(); for(int i=0;i<l;i++) ds.add(nums[i]); if(!a.contains(ds)) a.add(new ArrayList<Integer>(ds)); } for(int i=ind;i<l;i++){ swap(i,ind,nums); solve(ind+1,l,nums,a); swap(i,ind,nums); } return; } static void swap(int i,int j,int nums[]){ int t=nums[i]; nums[i]=nums[j]; nums[j]=t; }
+1
karunakarmutthi3 months ago
What is wrong with my code? It was showing TLE on test case 102, but it executed successfully on LeetCode and InterviewBit.
class Solution { static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { Collections.sort(arr); ArrayList<Integer> cur=new ArrayList<>(); ArrayList<ArrayList<Integer>> res=new ArrayList<>(); boolean[] map=new boolean[n]; recur(cur,map,arr,res,n); return res; } public static void recur(ArrayList<Integer> curr,boolean[] map,ArrayList<Integer> arr,ArrayList<ArrayList<Integer>> res,int n){ if(n==curr.size()){ if(!res.contains(curr)) res.add(new ArrayList<Integer>(curr)); } for(int i=0;i<n;i++){ if(map[i]==false){ map[i]=true; curr.add(arr.get(i)); recur(curr,map,arr,res,n); curr.remove(curr.size()-1); map[i]=false; } } }};
0
keshavkumarshivanshu34 months ago
best of best time- 0.03********
void solve(map<int,int> &mp,vector<int> tmp,int n,vector<vector<int>> &ans){
if(tmp.size()==n){
ans.push_back(tmp);
return;
}
for(auto i=mp.begin();i!=mp.end();i++){
if(i->second > 0){
tmp.push_back(i->first);
i->second--;
solve(mp,tmp,n,ans);
tmp.pop_back();
i->second++;
}
}
}
vector<vector<int>> uniquePerms(vector<int> arr ,int n) {
// code here
map<int,int> mp;
for(auto i:arr) mp[i]++;
vector<vector<int>> ans;
vector<int> tmp;
solve(mp,tmp,n,ans);
return ans;
}
+1
rayalravi20014 months ago
c++ solution
void solve(vector<int> arr, int l , int r,set<vector<int>> &ans ){ if(l==r){ ans.insert(arr); return; } for(int i=l; i<r; i++){ swap(arr[i],arr[l]); solve(arr,l+1,r,ans); swap(arr[i],arr[l]); } return; } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here int l = 0; int r = arr.size(); set<vector<int>> ans; solve(arr,l,r,ans); vector<vector<int>> v(ans.size()); copy(ans.begin(), ans.end(), v.begin()); // for(int i=0; i<ans.size();i++){ // v.insert(ans[i]); // } return v; }
| [
{
"code": null,
"e": 312,
"s": 226,
"text": "Given an array arr[] of length n. Find all possible unique permutations of the array."
},
{
"code": null,
"e": 324,
"s": 312,
"text": "\nExample 1:"
},
{
"code": null,
"e": 466,
"s": 324,
"text": "Input: \nn = 3\narr[] = {1, 2, 1}\nOutput: \n1 1 2\n1 2 1\n2 1 1\nExplanation:\nThese are the only possible unique permutations\nfor the given array.\n"
},
{
"code": null,
"e": 477,
"s": 466,
"text": "Example 2:"
},
{
"code": null,
"e": 524,
"s": 477,
"text": "Input: \nn = 2\narr[] = {4, 5}\nOutput: \n4 5\n5 4\n"
},
{
"code": null,
"e": 782,
"s": 524,
"text": "\nYour Task:\nYou don't need to read input or print anything. You only need to complete the function uniquePerms() that takes an integer n, and an array arr of size n as input and returns a sorted list of lists containing all unique permutations of the array."
},
{
"code": null,
"e": 855,
"s": 782,
"text": "\nExpected Time Complexity: O(n*n!)\nExpected Auxilliary Space: O(n*n!)\n "
},
{
"code": null,
"e": 892,
"s": 855,
"text": "Constraints:\n1 ≤ n ≤ 9\n1 ≤ arri ≤ 10"
},
{
"code": null,
"e": 894,
"s": 892,
"text": "0"
},
{
"code": null,
"e": 920,
"s": 894,
"text": "nikil19b1011793 weeks ago"
},
{
"code": null,
"e": 1673,
"s": 920,
"text": "set<vector<int>>st; void recurPermute(vector<vector<int>>&res,vector<int>&ds,int freq[],vector<int>&arr){ if(ds.size()==arr.size()){ if(st.find(ds)==st.end()){ res.push_back(ds); st.insert(ds); } return; } for(int i=0;i<arr.size();i++){ if(freq[i]==0){ ds.push_back(arr[i]); freq[i]=1; recurPermute(res,ds,freq,arr); freq[i]=0; ds.pop_back(); } } } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here vector<vector<int>>res; vector<int>ds; int freq[20]={0}; recurPermute(res,ds,freq,arr); sort(res.begin(),res.end()); return res; }"
},
{
"code": null,
"e": 1676,
"s": 1673,
"text": "+2"
},
{
"code": null,
"e": 1698,
"s": 1676,
"text": "lindan1232 months ago"
},
{
"code": null,
"e": 2311,
"s": 1698,
"text": " set<vector<int>>s;\n vector<vector<int>> ans;\n void help(vector<int>arr,int i,int n)\n {\n if(i==n)\n {\n s.insert(arr);\n return;\n }\n for(int x=i;x<=n;x++)\n {\n swap(arr[i],arr[x]);\n help(arr,i+1,n);\n swap(arr[i],arr[x]);\n }\n }\n vector<vector<int>> uniquePerms(vector<int> arr ,int n) {\n \n sort(arr.begin(),arr.end());\n help(arr,0,n-1);\n \n for(auto x: s)\n {\n ans.push_back(x);\n }\n sort(ans.begin(),ans.end());\n \n return ans;\n }"
},
{
"code": null,
"e": 2331,
"s": 2311,
"text": "Time Taken : 1.2sec"
},
{
"code": null,
"e": 2335,
"s": 2331,
"text": "Cpp"
},
{
"code": null,
"e": 2337,
"s": 2335,
"text": "0"
},
{
"code": null,
"e": 2358,
"s": 2337,
"text": "cs21m0593 months ago"
},
{
"code": null,
"e": 2795,
"s": 2358,
"text": "vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here map<vector<int> , int> m; vector<vector<int>> ans; sort(arr.begin() , arr.end()); do { if(m.find(arr) == m.end()) { ans.push_back(arr); } m[arr]++; }while(next_permutation(arr.begin() , arr.end())); return ans; "
},
{
"code": null,
"e": 2799,
"s": 2797,
"text": "0"
},
{
"code": null,
"e": 2822,
"s": 2799,
"text": "saket727403 months ago"
},
{
"code": null,
"e": 2876,
"s": 2822,
"text": "My correct java solution which passed 100% test cases"
},
{
"code": null,
"e": 4323,
"s": 2878,
"text": "static void permutation(ArrayList<Integer> arr, int l, int r, Set<ArrayList<Integer>> a){ if(l == r){ //System.out.println(arr); // ArrayList<Integer> b = new ArrayList<>(); // b.addAll(arr); a.add(new ArrayList<Integer>(arr)); return ; } for(int i=l;i<=r;i++){ arr = interchangeInt(arr, l, i); permutation(arr, l+1, r, a); arr = interchangeInt(arr, l, i); } } static ArrayList<Integer> interchangeInt(ArrayList<Integer> arr, int a, int b){ int temp = arr.get(a); int l = arr.get(b); arr.set(a, l); arr.set(b, temp); return arr; } static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { // code here Set<ArrayList<Integer>> set = new HashSet<>(); permutation(arr, 0, n-1, set); ArrayList<ArrayList<Integer>> A = new ArrayList<ArrayList<Integer>>(set); Collections.sort(A, new Comparator<ArrayList<Integer>>(){ @Override public int compare(ArrayList<Integer> a, ArrayList<Integer> b){ int comp = 0; for(int i=0;i<Math.min(a.size(), b.size());i++){ comp = Integer.compare(a.get(i), b.get(i)); if(comp != 0) return comp; } return Integer.compare(a.size(), b.size()); } }); return A; }"
},
{
"code": null,
"e": 4326,
"s": 4323,
"text": "+1"
},
{
"code": null,
"e": 4352,
"s": 4326,
"text": "amruthack11013 months ago"
},
{
"code": null,
"e": 4449,
"s": 4352,
"text": "@gfg please fix this issue, the code is giving correct output in other text editors but not here"
},
{
"code": null,
"e": 5589,
"s": 4449,
"text": "static ArrayList<ArrayList<Integer>> m1 = new ArrayList<>();\nstatic HashMap<ArrayList,Integer> m= new HashMap<>();\nstatic ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) {\n \n //ArrayList<ArrayList<Integer>> res=new ArrayList<>();\n Collections.sort(arr);\n int[] nums=new int[n];\n for(int i=0;i<n;i++){\n nums[i]=arr.get(i);\n }\n boolean[] visited = new boolean[nums.length];\n findPermutations(nums, new ArrayList<>(), visited);\n \n return m1;\n }\n static void findPermutations(int[] nums, ArrayList<Integer> cl, boolean[] visited) {\n if (cl.size() == nums.length) {\n \tif(!m.containsKey(cl)){\n m.put( (cl) ,1 );\n m1.add(new ArrayList<>(cl));\n }\n return;\n }\n for(int i = 0; i < nums.length; i++) {\n if (!visited[i]) {\n visited[i] = true;\n cl.add(nums[i]);\n findPermutations(nums, cl, visited);\n cl.remove(cl.size() - 1);\n visited[i] = false;\n }\n }\n }"
},
{
"code": null,
"e": 5591,
"s": 5589,
"text": "0"
},
{
"code": null,
"e": 5621,
"s": 5591,
"text": "krishnagautam58993 months ago"
},
{
"code": null,
"e": 6153,
"s": 5621,
"text": " set<vector<int>>ans; //use set for unique elements void help(vector<int>&nums,int idx,int n){ if(idx==n){ ans.insert(nums); return ; } for(int i=idx;i<n;i++){ swap(nums[idx],nums[i]); help(nums,idx+1,n); swap(nums[idx],nums[i]); } } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here vector<vector<int>>v; help(arr,0,n); for(auto &it:ans){ v.push_back(it); } return v; }"
},
{
"code": null,
"e": 6156,
"s": 6153,
"text": "+4"
},
{
"code": null,
"e": 6184,
"s": 6156,
"text": "ghazanferwahab23 months ago"
},
{
"code": null,
"e": 6305,
"s": 6184,
"text": "Dear @gfg fix ur compiler my code is fiving perfect answer yet it is showing that it is not acceptiong for the test case"
},
{
"code": null,
"e": 6308,
"s": 6305,
"text": " 6"
},
{
"code": null,
"e": 6321,
"s": 6308,
"text": " 2 1 2 3 4 5"
},
{
"code": null,
"e": 6408,
"s": 6321,
"text": "This is very frustating and i am getting AC in leetcode for the same question My code:"
},
{
"code": null,
"e": 7291,
"s": 6408,
"text": " static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { // code Collections.sort(arr); int a1[]=new int [arr.size()]; for(int i=0;i<arr.size();i++) a1[i]=arr.get(i); ArrayList<ArrayList<Integer>> ans=new ArrayList<>(); solve(0,a1.length,a1,ans); return ans; } static void solve(int ind,int l,int nums[],ArrayList<ArrayList<Integer>>a){ if(ind==l){ List<Integer>ds=new ArrayList<>(); for(int i=0;i<l;i++) ds.add(nums[i]); if(!a.contains(ds)) a.add(new ArrayList<Integer>(ds)); } for(int i=ind;i<l;i++){ swap(i,ind,nums); solve(ind+1,l,nums,a); swap(i,ind,nums); } return; } static void swap(int i,int j,int nums[]){ int t=nums[i]; nums[i]=nums[j]; nums[j]=t; }"
},
{
"code": null,
"e": 7294,
"s": 7291,
"text": "+1"
},
{
"code": null,
"e": 7322,
"s": 7294,
"text": "karunakarmutthi3 months ago"
},
{
"code": null,
"e": 7438,
"s": 7322,
"text": "what is wrong with my code it was showing TLE at 102 test case but Succesfuly Executed at leetcode and interviewBit"
},
{
"code": null,
"e": 8278,
"s": 7438,
"text": "class Solution { static ArrayList<ArrayList<Integer>> uniquePerms(ArrayList<Integer> arr , int n) { Collections.sort(arr); ArrayList<Integer> cur=new ArrayList<>(); ArrayList<ArrayList<Integer>> res=new ArrayList<>(); boolean[] map=new boolean[n]; recur(cur,map,arr,res,n); return res; } public static void recur(ArrayList<Integer> curr,boolean[] map,ArrayList<Integer> arr,ArrayList<ArrayList<Integer>> res,int n){ if(n==curr.size()){ if(!res.contains(curr)) res.add(new ArrayList<Integer>(curr)); } for(int i=0;i<n;i++){ if(map[i]==false){ map[i]=true; curr.add(arr.get(i)); recur(curr,map,arr,res,n); curr.remove(curr.size()-1); map[i]=false; } } }};"
},
{
"code": null,
"e": 8284,
"s": 8282,
"text": "0"
},
{
"code": null,
"e": 8318,
"s": 8284,
"text": "keshavkumarshivanshu34 months ago"
},
{
"code": null,
"e": 9019,
"s": 8318,
"text": "best of best time- 0.03********\nvoid solve(map<int,int> &mp,vector<int> tmp,int n,vector<vector<int>> &ans){\n if(tmp.size()==n){\n ans.push_back(tmp);\n return;\n }\n for(auto i=mp.begin();i!=mp.end();i++){\n if(i->second > 0){\n tmp.push_back(i->first);\n i->second--;\n solve(mp,tmp,n,ans);\n tmp.pop_back();\n i->second++;\n }\n }\n }\n vector<vector<int>> uniquePerms(vector<int> arr ,int n) {\n // code here\n map<int,int> mp;\n for(auto i:arr) mp[i]++;\n vector<vector<int>> ans;\n vector<int> tmp;\n solve(mp,tmp,n,ans);\n return ans;\n }"
},
{
"code": null,
"e": 9022,
"s": 9019,
"text": "+1"
},
{
"code": null,
"e": 9048,
"s": 9022,
"text": "rayalravi20014 months ago"
},
{
"code": null,
"e": 9062,
"s": 9048,
"text": "c++ solution"
},
{
"code": null,
"e": 9725,
"s": 9064,
"text": "void solve(vector<int> arr, int l , int r,set<vector<int>> &ans ){ if(l==r){ ans.insert(arr); return; } for(int i=l; i<r; i++){ swap(arr[i],arr[l]); solve(arr,l+1,r,ans); swap(arr[i],arr[l]); } return; } vector<vector<int>> uniquePerms(vector<int> arr ,int n) { // code here int l = 0; int r = arr.size(); set<vector<int>> ans; solve(arr,l,r,ans); vector<vector<int>> v(ans.size()); copy(ans.begin(), ans.end(), v.begin()); // for(int i=0; i<ans.size();i++){ // v.insert(ans[i]); // } return v; }"
},
{
"code": null,
"e": 9871,
"s": 9725,
"text": "We strongly recommend solving this problem on your own before viewing its editorial. Do you still\n want to view the editorial?"
},
{
"code": null,
"e": 9907,
"s": 9871,
"text": " Login to access your submissions. "
},
{
"code": null,
"e": 9917,
"s": 9907,
"text": "\nProblem\n"
},
{
"code": null,
"e": 9927,
"s": 9917,
"text": "\nContest\n"
},
{
"code": null,
"e": 9990,
"s": 9927,
"text": "Reset the IDE using the second button on the top right corner."
},
{
"code": null,
"e": 10138,
"s": 9990,
"text": "Avoid using static/global variables in your code as your code is tested against multiple test cases and these tend to retain their previous values."
},
{
"code": null,
"e": 10346,
"s": 10138,
"text": "Passing the Sample/Custom Test cases does not guarantee the correctness of code. On submission, your code is tested against multiple test cases consisting of all possible corner cases and stress constraints."
},
{
"code": null,
"e": 10452,
"s": 10346,
"text": "You can access the hints to get an idea about what is expected of you as well as the final solution code."
}
] |
java.time.LocalDateTime.getLong() Method Example | The java.time.LocalDateTime.getLong(TemporalField field) method gets the value of the specified field from this date-time as a long.
Following is the declaration for java.time.LocalDateTime.getLong(TemporalField field) method.
public long getLong(TemporalField field)
field − the field to get, not null.
the value for the field.
DateTimeException − if a value for the field cannot be obtained or the value is outside the range of valid values for the field.
UnsupportedTemporalTypeException − if the field is not supported or the range of values exceeds a long.
ArithmeticException − if numeric overflow occurs
The following example shows the usage of java.time.LocalDateTime.getLong(TemporalField field) method.
package com.tutorialspoint;
import java.time.LocalDateTime;
import java.time.temporal.ChronoField;
public class LocalDateTimeDemo {
public static void main(String[] args) {
LocalDateTime date = LocalDateTime.parse("2017-02-03T12:30:30");
System.out.println(date.getLong(ChronoField.CLOCK_HOUR_OF_DAY));
}
}
Let us compile and run the above program, this will produce the following result −
12
| [
{
"code": null,
"e": 2049,
"s": 1915,
"text": "The java.time.LocalDateTime.getLong(TemporalField field) method gets the value of the specified field from this date-time as an long."
},
{
"code": null,
"e": 2143,
"s": 2049,
"text": "Following is the declaration for java.time.LocalDateTime.getLong(TemporalField field) method."
},
{
"code": null,
"e": 2185,
"s": 2143,
"text": "public long getLong(TemporalField field)\n"
},
{
"code": null,
"e": 2221,
"s": 2185,
"text": "field − the field to get, not null."
},
{
"code": null,
"e": 2246,
"s": 2221,
"text": "the value for the field."
},
{
"code": null,
"e": 2375,
"s": 2246,
"text": "DateTimeException − if a value for the field cannot be obtained or the value is outside the range of valid values for the field."
},
{
"code": null,
"e": 2504,
"s": 2375,
"text": "DateTimeException − if a value for the field cannot be obtained or the value is outside the range of valid values for the field."
},
{
"code": null,
"e": 2609,
"s": 2504,
"text": "UnsupportedTemporalTypeException − if the field is not supported or the range of values exceeds an long."
},
{
"code": null,
"e": 2714,
"s": 2609,
"text": "UnsupportedTemporalTypeException − if the field is not supported or the range of values exceeds an long."
},
{
"code": null,
"e": 2763,
"s": 2714,
"text": "ArithmeticException − if numeric overflow occurs"
},
{
"code": null,
"e": 2812,
"s": 2763,
"text": "ArithmeticException − if numeric overflow occurs"
},
{
"code": null,
"e": 2914,
"s": 2812,
"text": "The following example shows the usage of java.time.LocalDateTime.getLong(TemporalField field) method."
},
{
"code": null,
"e": 3245,
"s": 2914,
"text": "package com.tutorialspoint;\n\nimport java.time.LocalDateTime;\nimport java.time.temporal.ChronoField;\n\npublic class LocalDateTimeDemo {\n public static void main(String[] args) {\n \n LocalDateTime date = LocalDateTime.parse(\"2017-02-03T12:30:30\");\n System.out.println(date.getLong(ChronoField.CLOCK_HOUR_OF_DAY)); \n }\n}"
},
{
"code": null,
"e": 3328,
"s": 3245,
"text": "Let us compile and run the above program, this will produce the following result −"
},
{
"code": null,
"e": 3332,
"s": 3328,
"text": "12\n"
},
{
"code": null,
"e": 3339,
"s": 3332,
"text": " Print"
},
{
"code": null,
"e": 3350,
"s": 3339,
"text": " Add Notes"
}
] |
Selenium - Radio Button Interaction | In this section, we will understand how to interact with radio buttons. We can select a radio button option using the 'click' method; note that, unlike a checkbox, a selected radio button is normally deselected only by selecting another option in its group, not by clicking it again.
Let us understand how to interact with radio buttons using https://www.calculator.net/mortgage-payoff-calculator.html. We can also check if a radio button is selected or enabled.
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.*;
import org.openqa.selenium.firefox.FirefoxDriver;
public class webdriverdemo {
public static void main(String[] args) throws InterruptedException {
WebDriver driver = new FirefoxDriver();
//Puts an Implicit wait, Will wait for 10 seconds before throwing exception
driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);
//Launch website
driver.navigate().to("http://www.calculator.net/mortgage-payoff-calculator.html");
driver.manage().window().maximize();
// Click on Radio Button
driver.findElement(By.id("cpayoff1")).click();
System.out.println("The Output of the IsSelected " +
driver.findElement(By.id("cpayoff1")).isSelected());
System.out.println("The Output of the IsEnabled " +
driver.findElement(By.id("cpayoff1")).isEnabled());
System.out.println("The Output of the IsDisplayed " +
driver.findElement(By.id("cpayoff1")).isDisplayed());
//Close the Browser.
driver.close();
}
}
Upon execution, the radio button is selected and the output of the commands are displayed in the console.
| [
{
"code": null,
"e": 2052,
"s": 1875,
"text": "In this section, we will understand how to interact with Radio Buttons. We can select a radio button option using the 'click' method and unselect using the same 'click' method."
},
{
"code": null,
"e": 2231,
"s": 2052,
"text": "Let us understand how to interact with radio buttons using https://www.calculator.net/mortgage-payoff-calculator.html. We can also check if a radio button is selected or enabled."
},
{
"code": null,
"e": 3341,
"s": 2231,
"text": "import java.util.concurrent.TimeUnit;\n\nimport org.openqa.selenium.*;\nimport org.openqa.selenium.firefox.FirefoxDriver;\n\npublic class webdriverdemo {\n public static void main(String[] args) throws InterruptedException {\n \n WebDriver driver = new FirefoxDriver();\n \n //Puts an Implicit wait, Will wait for 10 seconds before throwing exception\n driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);\n \n //Launch website\n driver.navigate().to(\"http://www.calculator.net/mortgage-payoff-calculator.html\");\n driver.manage().window().maximize();\n \n // Click on Radio Button\n driver.findElement(By.id(\"cpayoff1\")).click();\n System.out.println(\"The Output of the IsSelected \" +\n driver.findElement(By.id(\"cpayoff1\")).isSelected());\n System.out.println(\"The Output of the IsEnabled \" +\n driver.findElement(By.id(\"cpayoff1\")).isEnabled());\n System.out.println(\"The Output of the IsDisplayed \" +\n driver.findElement(By.id(\"cpayoff1\")).isDisplayed());\n \n //Close the Browser.\n driver.close();\n }\n}"
},
{
"code": null,
"e": 3447,
"s": 3341,
"text": "Upon execution, the radio button is selected and the output of the commands are displayed in the console."
},
{
"code": null,
"e": 3482,
"s": 3447,
"text": "\n 46 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 3494,
"s": 3482,
"text": " Aditya Dua"
},
{
"code": null,
"e": 3530,
"s": 3494,
"text": "\n 296 Lectures \n 146 hours \n"
},
{
"code": null,
"e": 3544,
"s": 3530,
"text": " Arun Motoori"
},
{
"code": null,
"e": 3581,
"s": 3544,
"text": "\n 411 Lectures \n 38.5 hours \n"
},
{
"code": null,
"e": 3603,
"s": 3581,
"text": " In28Minutes Official"
},
{
"code": null,
"e": 3636,
"s": 3603,
"text": "\n 22 Lectures \n 7 hours \n"
},
{
"code": null,
"e": 3650,
"s": 3636,
"text": " Arun Motoori"
},
{
"code": null,
"e": 3685,
"s": 3650,
"text": "\n 118 Lectures \n 17 hours \n"
},
{
"code": null,
"e": 3699,
"s": 3685,
"text": " Arun Motoori"
},
{
"code": null,
"e": 3736,
"s": 3699,
"text": "\n 278 Lectures \n 38.5 hours \n"
},
{
"code": null,
"e": 3750,
"s": 3736,
"text": " Lets Kode It"
},
{
"code": null,
"e": 3757,
"s": 3750,
"text": " Print"
},
{
"code": null,
"e": 3768,
"s": 3757,
"text": " Add Notes"
}
] |
Walmart Interview Experience | On-Campus 2021 (Virtual) | 26 Aug, 2021
In the 2021-22 placement season, Walmart Labs visited our campus.
Round 1 (Coding Round): The platform was HackerEarth and the duration was 1 hour. [100 Marks]
30 MCQs (moderate-level core CS questions), 1 mark each
Coding question 1: We had to find the minimum length of matching substrings between two strings. [simple HashMap] 20 Marks
Coding question 2: A variation of the Minimum Number of Jumps question, with a maximum of k jumps allowed at a time. [50 Marks] (A sketch of one interpretation is given below.)
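The exact statement of the second question is not reproduced here, so the following is only a sketch of one plausible interpretation (reach the last index when a single jump may cover at most k positions); all names are illustrative.
import java.util.*;
class MinJumpsSketch {
    // dp[i] = minimum jumps to reach index i when one jump covers at most k positions.
    // Returns -1 if the last index cannot be reached.
    static int minJumps(int n, int k) {
        int[] dp = new int[n];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0;
        for (int i = 1; i < n; i++) {
            for (int j = Math.max(0, i - k); j < i; j++) {
                if (dp[j] != Integer.MAX_VALUE) {
                    dp[i] = Math.min(dp[i], dp[j] + 1);
                }
            }
        }
        return dp[n - 1] == Integer.MAX_VALUE ? -1 : dp[n - 1];
    }
    public static void main(String[] args) {
        System.out.println(minJumps(10, 3)); // 3 (for example 0 -> 3 -> 6 -> 9)
    }
}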
Round 2 (Technical Round 1): The platform was Zoom and the duration was 40 minutes. It started with an introduction. The round was based entirely on OOP; he asked about every major OOP concept along with some unconventional questions. Every question had follow-ups.
Questions asked:
Abstract class vs. interface. Why do we need an abstract class if we have an interface? (A small illustration follows after this list.)
Pure virtual functions vs. virtual functions
Which is better, i = i + 1 or i++, from the compiler's perspective?
Can a program execute without a main function?
What is the difference between inheritance and polymorphism? Follow-up question: can we achieve inheritance with polymorphism?
What is an interface in C++? [yes, C++]
What are return by value, return by reference, and return by address?
Does heap memory get de-allocated after program execution?
Static memory allocation and dynamic memory allocation (DMA) in detail. Does static allocation have anything to do with static variables? Static variables vs. global variables, and why do we need static variables if we have global variables?
Finally, some queries about types of joins in MySQL: inner join, outer join, full join, left join, right join.
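As a quick illustration of the first question (my own sketch, with illustrative names): an abstract class can carry state and partial implementation, while an interface is a pure contract.
abstract class Shape {
    protected String name;                  // state lives in the abstract class
    Shape(String name) { this.name = name; }
    abstract double area();                 // subclasses must implement this
    String describe() { return name + " with area " + area(); }
}
interface Drawable {
    void draw();                            // pure contract, no state
}
class Circle extends Shape implements Drawable {
    private final double r;
    Circle(double r) { super("circle"); this.r = r; }
    double area() { return Math.PI * r * r; }
    public void draw() { System.out.println(describe()); }
}
class AbstractVsInterfaceDemo {
    public static void main(String[] args) {
        new Circle(2).draw(); // circle with area 12.566370614359172
    }
}
That difference is usually the reason an abstract class is still needed even though interfaces exist: it can share fields and common code among related subclasses.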
Round 3 (Technical Round 2): The platform was Zoom and the duration was 1 hour. He was a serious manager and simply said that we were not going to waste any time: solve the first question completely, then we will see about the second question. I had to write the whole code from scratch for both questions and also had to pass all the given test cases.
Check whether the given Binary tree is a BST. He realized that I knew this question, so he told me that I had to solve it via some different approach.
You are given a matrix of 0s and 1s, and you have to find all the zeros which are completely covered by 1s.
Example 1 (the answer is 2):
[1,1,1,1]
[1,0,0,1]
[1,1,1,1]
Example 2 (the answer is 0):
[1,0,1,1]
[1,1,1,1]
[1,0,1,1]
[1,0,0,1]
[1,0,1,1]
There were some edge cases over which both codes were tested. (A sketch of one interpretation of the second question is given below.)
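The second question as I understood it (interpretation mine): a 0 counts as "covered" when it cannot reach the border of the matrix through other 0s. A minimal flood-fill sketch, with illustrative names, is below.
class EnclosedZerosSketch {
    // Counts 0-cells that are completely enclosed by 1s, i.e. 0s that cannot
    // reach the matrix border through a path of adjacent 0s.
    static int countEnclosedZeros(int[][] g) {
        int rows = g.length, cols = g[0].length;
        boolean[][] escapes = new boolean[rows][cols];
        // every 0 on the border, and every 0 connected to it, is not enclosed
        for (int r = 0; r < rows; r++) { mark(g, escapes, r, 0); mark(g, escapes, r, cols - 1); }
        for (int c = 0; c < cols; c++) { mark(g, escapes, 0, c); mark(g, escapes, rows - 1, c); }
        int count = 0;
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (g[r][c] == 0 && !escapes[r][c]) count++;
        return count;
    }
    // Depth-first flood fill over 0-cells starting from the border.
    static void mark(int[][] g, boolean[][] escapes, int r, int c) {
        if (r < 0 || c < 0 || r >= g.length || c >= g[0].length) return;
        if (g[r][c] != 0 || escapes[r][c]) return;
        escapes[r][c] = true;
        mark(g, escapes, r + 1, c); mark(g, escapes, r - 1, c);
        mark(g, escapes, r, c + 1); mark(g, escapes, r, c - 1);
    }
    public static void main(String[] args) {
        int[][] a = { {1,1,1,1}, {1,0,0,1}, {1,1,1,1} };
        int[][] b = { {1,0,1,1}, {1,1,1,1}, {1,0,1,1}, {1,0,0,1}, {1,0,1,1} };
        System.out.println(countEnclosedZeros(a)); // 2
        System.out.println(countEnclosedZeros(b)); // 0
    }
}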
Round 4 (Hiring Manager Round): The platform was Zoom and the duration was 30 minutes. It started with "Tell me something about yourself."
Questions Asked:
Why Walmart?
Where do you see yourself after 5 years?
Apart from academics, what are your hobbies?
A random question about what the cloud is.
Questions about projects.
Questions about previous internships.
Verdict: Selected
Marketing
On-Campus
Walmart
Interview Experiences
Walmart
| [
{
"code": null,
"e": 52,
"s": 24,
"text": "\n26 Aug, 2021"
},
{
"code": null,
"e": 112,
"s": 52,
"text": "In Placement season 2021-22 Walmart Labs Visited our campus"
},
{
"code": null,
"e": 200,
"s": 112,
"text": "Round 1(Coding Round): Platform was HackerEarth time duration was 1 hour. [100 Marks ]"
},
{
"code": null,
"e": 256,
"s": 200,
"text": "30 MCQs (Moderate Level Core CS questions) 1 marks each"
},
{
"code": null,
"e": 374,
"s": 256,
"text": "Coding question 1-We have to find minimum length of matching substrings among two strings.[simple HashMap] 20 Marks"
},
{
"code": null,
"e": 491,
"s": 374,
"text": "Coding question 2-Variation of this question Minimum number of jumps with max k jumps allowed at a time. [50 Marks]"
},
{
"code": null,
"e": 729,
"s": 491,
"text": "Round 2(Technical Round-1): Platform was Zoom time duration was 40 mins. Started with an Introduction. It was completely based on OOPs he asked every major concept of OOPs with some unconventional questions. Every question has follow-ups"
},
{
"code": null,
"e": 746,
"s": 729,
"text": "Questions asked:"
},
{
"code": null,
"e": 1615,
"s": 746,
"text": "Abstract class Vs Interface. Why do we need an Abstract class if we have an interface?Pure Virtual functions vs Virtual FunctionWhich is better i=i+1 or i++ from compilers perspectiveCan Program execute without Main functionWhat is the difference between Inheritance and Polymorphism? then follow up question can we achieve inheritance with polymorphismWhat is the interface in C++?[ yes C++ ]What is return via value, return via reference, return via addressDoes the heap memory get de-allocated after Program Execution?In detail Static Memory allocation and Dynamic memory Allocation(DMA). Does Static Allocation has anything to deal with static variables? Static variables vs global variables, then Why we need static variables if we have global variables.Finally some queries about Types of Joins in MYSQL. inner join, outer join, full join, left join, right join."
},
{
"code": null,
"e": 1702,
"s": 1615,
"text": "Abstract class Vs Interface. Why do we need an Abstract class if we have an interface?"
},
{
"code": null,
"e": 1745,
"s": 1702,
"text": "Pure Virtual functions vs Virtual Function"
},
{
"code": null,
"e": 1801,
"s": 1745,
"text": "Which is better i=i+1 or i++ from compilers perspective"
},
{
"code": null,
"e": 1843,
"s": 1801,
"text": "Can Program execute without Main function"
},
{
"code": null,
"e": 1973,
"s": 1843,
"text": "What is the difference between Inheritance and Polymorphism? then follow up question can we achieve inheritance with polymorphism"
},
{
"code": null,
"e": 2014,
"s": 1973,
"text": "What is the interface in C++?[ yes C++ ]"
},
{
"code": null,
"e": 2081,
"s": 2014,
"text": "What is return via value, return via reference, return via address"
},
{
"code": null,
"e": 2144,
"s": 2081,
"text": "Does the heap memory get de-allocated after Program Execution?"
},
{
"code": null,
"e": 2383,
"s": 2144,
"text": "In detail Static Memory allocation and Dynamic memory Allocation(DMA). Does Static Allocation has anything to deal with static variables? Static variables vs global variables, then Why we need static variables if we have global variables."
},
{
"code": null,
"e": 2493,
"s": 2383,
"text": "Finally some queries about Types of Joins in MYSQL. inner join, outer join, full join, left join, right join."
},
{
"code": null,
"e": 2833,
"s": 2493,
"text": "Round 3(Technical Round-2): Platform was Zoom time duration was 1 hour. He was a serious manager, simply said we are not going to waste any time solve the first question completely then we will see about the second question. I have to write the whole code from scratch for both the questions and also have to pass all the given test cases."
},
{
"code": null,
"e": 3286,
"s": 2833,
"text": "Check whether the given Binary tree is BST, he realized that I know this question so he told me that you have to solve this via some different approach.You are given a matrix of 0 and 1 and you have to find all the zeros which are completely covered by 1’sFor Ex:\n[1,1,1,1]\n[1,0,0,1] here the answer is 2\n[1,1,1,1] \n[1,0,1,1]\n[1,1,1,1]\n[1,0,1,1] here the answer is 0 \n[1,0,0,1]\n[1,0,1,1] there are some edge cases over which both codes were tested."
},
{
"code": null,
"e": 3439,
"s": 3286,
"text": "Check whether the given Binary tree is BST, he realized that I know this question so he told me that you have to solve this via some different approach."
},
{
"code": null,
"e": 3740,
"s": 3439,
"text": "You are given a matrix of 0 and 1 and you have to find all the zeros which are completely covered by 1’sFor Ex:\n[1,1,1,1]\n[1,0,0,1] here the answer is 2\n[1,1,1,1] \n[1,0,1,1]\n[1,1,1,1]\n[1,0,1,1] here the answer is 0 \n[1,0,0,1]\n[1,0,1,1] there are some edge cases over which both codes were tested."
},
{
"code": null,
"e": 3877,
"s": 3740,
"text": "For Ex:\n[1,1,1,1]\n[1,0,0,1] here the answer is 2\n[1,1,1,1] \n[1,0,1,1]\n[1,1,1,1]\n[1,0,1,1] here the answer is 0 \n[1,0,0,1]\n[1,0,1,1] "
},
{
"code": null,
"e": 3938,
"s": 3877,
"text": "there are some edge cases over which both codes were tested."
},
{
"code": null,
"e": 4061,
"s": 3938,
"text": "Round 4(Hiring Manager Round): Platform was Zoom time duration was 30 mins. Started with Tell me something about yourself."
},
{
"code": null,
"e": 4078,
"s": 4061,
"text": "Questions Asked:"
},
{
"code": null,
"e": 4091,
"s": 4078,
"text": "Why Walmart?"
},
{
"code": null,
"e": 4133,
"s": 4091,
"text": "Where did you see yourself after 5 years?"
},
{
"code": null,
"e": 4178,
"s": 4133,
"text": "Apart from academics, what are your hobbies?"
},
{
"code": null,
"e": 4215,
"s": 4178,
"text": "Random Question about what is cloud."
},
{
"code": null,
"e": 4241,
"s": 4215,
"text": "Questions about projects."
},
{
"code": null,
"e": 4279,
"s": 4241,
"text": "Questions about Previous internships."
},
{
"code": null,
"e": 4297,
"s": 4279,
"text": "Verdict: Selected"
},
{
"code": null,
"e": 4307,
"s": 4297,
"text": "Marketing"
},
{
"code": null,
"e": 4317,
"s": 4307,
"text": "On-Campus"
},
{
"code": null,
"e": 4325,
"s": 4317,
"text": "Walmart"
},
{
"code": null,
"e": 4347,
"s": 4325,
"text": "Interview Experiences"
},
{
"code": null,
"e": 4355,
"s": 4347,
"text": "Walmart"
},
{
"code": null,
"e": 4453,
"s": 4355,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 4485,
"s": 4453,
"text": "TCS Digital Interview Questions"
},
{
"code": null,
"e": 4555,
"s": 4485,
"text": "Google SWE Interview Experience (Google Online Coding Challenge) 2022"
},
{
"code": null,
"e": 4628,
"s": 4555,
"text": "Samsung Interview Experience Research & Institute SRIB (Off-Campus) 2022"
},
{
"code": null,
"e": 4666,
"s": 4628,
"text": "Amazon Interview Experience for SDE 1"
},
{
"code": null,
"e": 4693,
"s": 4666,
"text": "Google Interview Questions"
},
{
"code": null,
"e": 4749,
"s": 4693,
"text": "Amazon Interview Experience SDE-2 (3 Years Experienced)"
},
{
"code": null,
"e": 4793,
"s": 4749,
"text": "TCS Ninja Interview Experience (2020 batch)"
},
{
"code": null,
"e": 4839,
"s": 4793,
"text": "Write It Up: Share Your Interview Experiences"
},
{
"code": null,
"e": 4874,
"s": 4839,
"text": "Samsung RnD Coding Round Questions"
}
] |
JavaScript | How to add an element to a JSON object? | 27 May, 2019
In order to add a key/value pair to a JSON object, we use either dot notation or square bracket notation. Both methods are widely accepted.
Example 1: This example adds {“prop_4” : “val_4”} to the GFG_p object by using dot notation.
<!DOCTYPE html><html> <head> <title> JavaScript | Add a key/value pair to JSON object </title></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p id="GFG_up" style=" font-weight: bold"> </p> <button onclick="Geeks()"> Click to add </button> <p id="GFG_down" style="color:green; font-weight: bold" ;> </p> <script> var GFG_p = { prop_1: "val_1", prop_2: "val_2", prop_3: "val_3" }; var p_up = document.getElementById("GFG_up"); var p_down = document.getElementById("GFG_down"); p_up.innerHTML = JSON.stringify(GFG_p); function Geeks() { GFG_p.prop_4 = "val_4"; p_down.innerHTML = JSON.stringify(GFG_p); } </script></body> </html>
Output:
Before clicking on the button:
After clicking on the button:
Example 2: This example adds {“prop_4” : “val_4”} to the GFG_p object by using square bracket notation.
<!DOCTYPE html><html> <head> <title> JavaScript | Add a key/value pair to JSON object </title></head> <body style="text-align:center;"> <h1 style="color:green;"> GeeksForGeeks </h1> <p id="GFG_up" style=" font-weight: bold"> </p> <button onclick="Geeks()"> Click to add </button> <p id="GFG_down" style="color:green; font-weight: bold" ;> </p> <script> var GFG_p = { prop_1: "val_1", prop_2: "val_2", prop_3: "val_3" }; var p_up = document.getElementById("GFG_up"); var p_down = document.getElementById("GFG_down"); p_up.innerHTML = JSON.stringify(GFG_p); function Geeks() { GFG_p["prop_4"] = "val_4"; p_down.innerHTML = JSON.stringify(GFG_p); } </script></body> </html>
Output:
Before clicking on the button:
After clicking on the button:
JavaScript-Misc
JSON
JavaScript
Web Technologies
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n27 May, 2019"
},
{
"code": null,
"e": 166,
"s": 28,
"text": "In order to add Key/value pair to a JSON object, Either we use dot notation or square bracket notation. Both methods are widely accepted."
},
{
"code": null,
"e": 259,
"s": 166,
"text": "Example 1: This example adds {“prop_4” : “val_4”} to the GFG_p object by using dot notation."
},
{
"code": "<!DOCTYPE html><html> <head> <title> JavaScript | Add a key/value pair to JSON object </title></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_up\" style=\" font-weight: bold\"> </p> <button onclick=\"Geeks()\"> Click to add </button> <p id=\"GFG_down\" style=\"color:green; font-weight: bold\" ;> </p> <script> var GFG_p = { prop_1: \"val_1\", prop_2: \"val_2\", prop_3: \"val_3\" }; var p_up = document.getElementById(\"GFG_up\"); var p_down = document.getElementById(\"GFG_down\"); p_up.innerHTML = JSON.stringify(GFG_p); function Geeks() { GFG_p.prop_4 = \"val_4\"; p_down.innerHTML = JSON.stringify(GFG_p); } </script></body> </html>",
"e": 1180,
"s": 259,
"text": null
},
{
"code": null,
"e": 1188,
"s": 1180,
"text": "Output:"
},
{
"code": null,
"e": 1219,
"s": 1188,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 1249,
"s": 1219,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 1353,
"s": 1249,
"text": "Example 2: This example adds {“prop_4” : “val_4”} to the GFG_p object by using square bracket notation."
},
{
"code": "<!DOCTYPE html><html> <head> <title> JavaScript | Add a key/value pair to JSON object </title></head> <body style=\"text-align:center;\"> <h1 style=\"color:green;\"> GeeksForGeeks </h1> <p id=\"GFG_up\" style=\" font-weight: bold\"> </p> <button onclick=\"Geeks()\"> Click to add </button> <p id=\"GFG_down\" style=\"color:green; font-weight: bold\" ;> </p> <script> var GFG_p = { prop_1: \"val_1\", prop_2: \"val_2\", prop_3: \"val_3\" }; var p_up = document.getElementById(\"GFG_up\"); var p_down = document.getElementById(\"GFG_down\"); p_up.innerHTML = JSON.stringify(GFG_p); function Geeks() { GFG_p[\"prop_4\"] = \"val_4\"; p_down.innerHTML = JSON.stringify(GFG_p); } </script></body> </html>",
"e": 2279,
"s": 1353,
"text": null
},
{
"code": null,
"e": 2287,
"s": 2279,
"text": "Output:"
},
{
"code": null,
"e": 2318,
"s": 2287,
"text": "Before clicking on the button:"
},
{
"code": null,
"e": 2348,
"s": 2318,
"text": "After clicking on the button:"
},
{
"code": null,
"e": 2364,
"s": 2348,
"text": "JavaScript-Misc"
},
{
"code": null,
"e": 2369,
"s": 2364,
"text": "JSON"
},
{
"code": null,
"e": 2380,
"s": 2369,
"text": "JavaScript"
},
{
"code": null,
"e": 2397,
"s": 2380,
"text": "Web Technologies"
},
{
"code": null,
"e": 2495,
"s": 2397,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2556,
"s": 2495,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 2628,
"s": 2556,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 2668,
"s": 2628,
"text": "Remove elements from a JavaScript Array"
},
{
"code": null,
"e": 2721,
"s": 2668,
"text": "Hide or show elements in HTML using display property"
},
{
"code": null,
"e": 2773,
"s": 2721,
"text": "How to append HTML code to a div using JavaScript ?"
},
{
"code": null,
"e": 2835,
"s": 2773,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 2868,
"s": 2835,
"text": "Installation of Node.js on Linux"
},
{
"code": null,
"e": 2929,
"s": 2868,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 2979,
"s": 2929,
"text": "How to insert spaces/tabs in text using HTML/CSS?"
}
] |
How to Convert a String value to Float value in Java with Examples | 29 Jan, 2020
Given a String “str” in Java, the task is to convert this string to float type.
Examples:
Input: str = "1.0"
Output: 1.0
Input: str = "3.14"
Output: 3.14
Approach 1: (Naive Method) One method is to traverse the string character by character and build the float value digit by digit. This method is not an efficient approach. (A minimal sketch of this idea is shown below.)
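As a rough illustration only (not from the original article), a manual parse for simple inputs such as "3.14" might look like the following; the class and method names are illustrative, and it handles neither signs, exponents, nor invalid input, which is why the library methods below are preferred.
// A naive, hand-rolled parse: build the integer part, then the fractional part.
class NaiveStringToFloat {
    static float convert(String str) {
        float result = 0f;      // value built so far
        float divisor = 1f;     // place value for digits after the decimal point
        boolean afterPoint = false;
        for (char ch : str.toCharArray()) {
            if (ch == '.') {
                afterPoint = true;
            } else {
                int digit = ch - '0';
                if (!afterPoint) {
                    result = result * 10 + digit;
                } else {
                    divisor *= 10;
                    result += digit / divisor;
                }
            }
        }
        return result;
    }
    public static void main(String[] args) {
        System.out.println(convert("3.14")); // prints 3.14
    }
}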
Approach 2: (Using Float.parseFloat() method) The simplest way to do so is to use the parseFloat() method of the Float class in the java.lang package. This method takes the string to be parsed and returns the float value parsed from it. If the string cannot be parsed, this method throws an exception (for example, a NumberFormatException).
Syntax:
Float.parseFloat(str);
Below is the implementation of the above approach:
Example 1: To show successful conversion
// Java Program to convert string to float class GFG { // Function to convert String to Float public static float convertStringToFloat(String str) { // Convert string to float // using parseFloat() method return Float.parseFloat(str); } // Driver code public static void main(String[] args) { // The string value String stringValue = "1.0"; // The expected float value float floatValue; // Convert string to float floatValue = convertStringToFloat(stringValue); // Print the expected float value System.out.println( stringValue + " after converting into float = " + floatValue); }}
1.0 after converting into float = 1.0
Example 2: To show unsuccessful conversion
// Java Program to convert string to float class GFG { // Function to convert String to Float public static void convertStringToFloat(String str) { float floatValue; try { // Convert string to float // using parseFloat() method floatValue = Float.parseFloat(str); // Print the expected float value System.out.println( str + " after converting into float = " + floatValue); } catch (Exception e) { // Print the error System.out.println( str + " cannot be converted to float: " + e.getMessage()); } } // Driver code public static void main(String[] args) { // The string value String str1 = ""; String str2 = null; String str3 = "GFG"; // Convert string to float // using parseFloat() method convertStringToFloat(str1); convertStringToFloat(str2); convertStringToFloat(str3); }}
cannot be converted to float: empty String
null cannot be converted to float: null
GFG cannot be converted to float: For input string: "GFG"
Approach 3: (Using Float.valueOf() method) The valueOf() method of the Float class returns a Float object holding the float value represented by the given string.
Syntax:
Float.valueOf(str);
Below is the implementation of the above approach:
Example 1: To show successful conversion
// Java Program to convert string to float class GFG { // Function to convert String to Float public static float convertStringToFloat(String str) { // Convert string to float // using valueOf() method return Float.valueOf(str); } // Driver code public static void main(String[] args) { // The string value String stringValue = "1.0"; // The expected float value float floatValue; // Convert string to float floatValue = convertStringToFloat(stringValue); // Print the expected float value System.out.println( stringValue + " after converting into float = " + floatValue); }}
1.0 after converting into float = 1.0
Java-Data Types
Java-Float
Java-String-Programs
Java
Java Programs
Java
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Jan, 2020"
},
{
"code": null,
"e": 108,
"s": 28,
"text": "Given a String “str” in Java, the task is to convert this string to float type."
},
{
"code": null,
"e": 118,
"s": 108,
"text": "Examples:"
},
{
"code": null,
"e": 184,
"s": 118,
"text": "Input: str = \"1.0\"\nOutput: 1.0\n\nInput: str = \"3.14\"\nOutput: 3.14\n"
},
{
"code": null,
"e": 339,
"s": 184,
"text": "Approach 1: (Naive Method)One method is to traverse the string and add the numbers one by one to the float type. This method is not an efficient approach."
},
{
"code": null,
"e": 600,
"s": 339,
"text": "Approach 2: (Using Float.parseFloat() method)The simplest way to do so is using parseFloat() method of Float class in java.lang package. This method takes the string to be parsed and returns the float type from it. If not convertible, this method throws error."
},
{
"code": null,
"e": 608,
"s": 600,
"text": "Syntax:"
},
{
"code": null,
"e": 632,
"s": 608,
"text": "Float.parseFloat(str);\n"
},
{
"code": null,
"e": 683,
"s": 632,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 724,
"s": 683,
"text": "Example 1: To show successful conversion"
},
{
"code": "// Java Program to convert string to float class GFG { // Function to convert String to Float public static float convertStringToFloat(String str) { // Convert string to float // using parseFloat() method return Float.parseFloat(str); } // Driver code public static void main(String[] args) { // The string value String stringValue = \"1.0\"; // The expected float value float floatValue; // Convert string to float floatValue = convertStringToFloat(stringValue); // Print the expected float value System.out.println( stringValue + \" after converting into float = \" + floatValue); }}",
"e": 1455,
"s": 724,
"text": null
},
{
"code": null,
"e": 1494,
"s": 1455,
"text": "1.0 after converting into float = 1.0\n"
},
{
"code": null,
"e": 1537,
"s": 1494,
"text": "Example 2: To show unsuccessful conversion"
},
{
"code": "// Java Program to convert string to float class GFG { // Function to convert String to Float public static void convertStringToFloat(String str) { float floatValue; try { // Convert string to float // using parseFloat() method floatValue = Float.parseFloat(str); // Print the expected float value System.out.println( str + \" after converting into float = \" + floatValue); } catch (Exception e) { // Print the error System.out.println( str + \" cannot be converted to float: \" + e.getMessage()); } } // Driver code public static void main(String[] args) { // The string value String str1 = \"\"; String str2 = null; String str3 = \"GFG\"; // Convert string to float // using parseFloat() method convertStringToFloat(str1); convertStringToFloat(str2); convertStringToFloat(str3); }}",
"e": 2618,
"s": 1537,
"text": null
},
{
"code": null,
"e": 2760,
"s": 2618,
"text": "cannot be converted to float: empty String\nnull cannot be converted to float: null\nGFG cannot be converted to float: For input string: \"GFG\"\n"
},
{
"code": null,
"e": 2901,
"s": 2760,
"text": "Approach 3: (Using Float.valueOf() method)The valueOf() method of Float class converts data from its internal form into human-readable form."
},
{
"code": null,
"e": 2909,
"s": 2901,
"text": "Syntax:"
},
{
"code": null,
"e": 2930,
"s": 2909,
"text": "Float.valueOf(str);\n"
},
{
"code": null,
"e": 2981,
"s": 2930,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 3022,
"s": 2981,
"text": "Example 1: To show successful conversion"
},
{
"code": "// Java Program to convert string to float class GFG { // Function to convert String to Float public static float convertStringToFloat(String str) { // Convert string to float // using valueOf() method return Float.valueOf(str); } // Driver code public static void main(String[] args) { // The string value String stringValue = \"1.0\"; // The expected float value float floatValue; // Convert string to float floatValue = convertStringToFloat(stringValue); // Print the expected float value System.out.println( stringValue + \" after converting into float = \" + floatValue); }}",
"e": 3747,
"s": 3022,
"text": null
},
{
"code": null,
"e": 3786,
"s": 3747,
"text": "1.0 after converting into float = 1.0\n"
},
{
"code": null,
"e": 3802,
"s": 3786,
"text": "Java-Data Types"
},
{
"code": null,
"e": 3813,
"s": 3802,
"text": "Java-Float"
},
{
"code": null,
"e": 3834,
"s": 3813,
"text": "Java-String-Programs"
},
{
"code": null,
"e": 3839,
"s": 3834,
"text": "Java"
},
{
"code": null,
"e": 3853,
"s": 3839,
"text": "Java Programs"
},
{
"code": null,
"e": 3858,
"s": 3853,
"text": "Java"
},
{
"code": null,
"e": 3956,
"s": 3858,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 3971,
"s": 3956,
"text": "Stream In Java"
},
{
"code": null,
"e": 3992,
"s": 3971,
"text": "Introduction to Java"
},
{
"code": null,
"e": 4013,
"s": 3992,
"text": "Constructors in Java"
},
{
"code": null,
"e": 4032,
"s": 4013,
"text": "Exceptions in Java"
},
{
"code": null,
"e": 4049,
"s": 4032,
"text": "Generics in Java"
},
{
"code": null,
"e": 4075,
"s": 4049,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 4109,
"s": 4075,
"text": "Convert Double to Integer in Java"
},
{
"code": null,
"e": 4156,
"s": 4109,
"text": "Implementing a Linked List in Java using Class"
},
{
"code": null,
"e": 4194,
"s": 4156,
"text": "Factory method design pattern in Java"
}
] |
Maximum possible XOR of every element in an array with another array in C++ | In this problem, we are given two arrays A and B of n elements each. Our task is to create a program to find the maximum possible XOR of every element in an array with another array.
We have to compute the maximum XOR for each element of array A with array B i.e. for each element of array A we will select an element in array B which will have the maximum XOR value.
Let's take an example to understand the problem −
Input −
array A = {3, 6 ,11, 9}
array B = {8, 2, 4, 1}
Output −
11 14 15 13
Explanation −
Let’s see the XOR combination of each element of array A with all elements of array B and then select the maximum for each.
3 XOR 8 = 11 3 XOR 2 = 1 3 XOR 4 = 7 3 XOR 1 = 2
Maximum = 11.
6 XOR 8 = 14 6 XOR 2 = 4 6 XOR 4 = 2 6 XOR 1 = 7
Maximum = 14.
11 XOR 8 = 3 11 XOR 2 = 9 11 XOR 4 = 15 11 XOR 1 = 10
Maximum = 15.
9 XOR 8 = 1 9 XOR 2 = 11 9 XOR 4 = 13 9 XOR 1 = 8
Maximum = 13.
To solve this problem, a simple and naive approach is to calculate all the combinations and print the maximum XOR as shown in the above example.
But this is not efficient, as the nested loops give the solution a time complexity of the order O(n^2).
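A minimal sketch of this naive approach is given below (illustrative only; it is not part of the trie-based program that follows):
// Naive O(n^2) sketch: for each element of A, try every element of B
// and keep the maximum XOR value seen.
#include <iostream>
#include <algorithm>
using namespace std;
int main(){
   int A[] = {3, 6, 11, 9};
   int B[] = {8, 2, 4, 1};
   int n = sizeof(A)/sizeof(A[0]);
   for (int i = 0; i < n; i++){
      int best = 0;
      for (int j = 0; j < n; j++)
         best = max(best, A[i] ^ B[j]);
      cout << best << " ";
   }
   return 0;
}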
So, we will see a better solution to the problem.
The idea is to use a trie data structure which stores the binary representation of every element of array B, and is then queried with each element of array A to find the maximum XOR.
So, for an element of array A, we start from its most significant bit and try to make the corresponding bit of the XOR 1 by moving to the opposite bit in the trie whenever possible, and then go to the next MSB. Following this, we get the element of array B that gives the maximum XOR with that element of A.
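For example, take 6 (0110) from array A against B = {8, 2, 4, 1} (1000, 0010, 0100, 0001). At the most significant bit, 6 has 0, so we follow the trie branch with bit 1, which only 8 provides; continuing down that path gives 6 XOR 8 = 14, the maximum XOR for 6.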
Program to find the maximum possible XOR of every element in an array with another array
#include<iostream>
using namespace std;
struct trie{
int value;
trie *child[2];
};
trie * get(){
trie * root = new trie;
root -> value = 0;
root -> child[0] = NULL;
root -> child[1] = NULL;
return root;
}
void insert(trie * root, int key){
trie * temp = root;
for (int i = 31; i >= 0; i--){
bool current_bit = key & (1 << i);
if (temp -> child[current_bit] == NULL)
temp -> child[current_bit] = get();
temp = temp -> child[current_bit];
}
temp -> value = key;
}
int findMaxXor(trie * root, int element){
trie * temp = root;
for (int i = 31; i >= 0; i--){
bool bits = ( element & ( 1 << i) );
if (temp -> child[1 - bits] != NULL)
temp = temp -> child[1 - bits];
else
temp = temp -> child[bits];
}
return (element ^ temp -> value);
}
int main(){
int A[] = {3, 11, 6, 9};
int B[] = {8, 2, 4, 1};
int N = sizeof(A)/sizeof(A[0]);
trie * root = get();
for (int i = 0; i < N; i++)
insert(root, B[i]);
cout<<"The maximum possible XOR of every possible element in array A with Array B is\n";
for (int i = 0; i < N; i++)
cout <<findMaxXor(root, A[i])<<"\t";
return 0;
}
The maximum possible XOR of every possible element in array A with Array B is
11 15 14 13 | [
{
"code": null,
"e": 1370,
"s": 1187,
"text": "In this problem, we are given two arrays A and B of n elements each. Our task is to create a program to find the maximum possible XOR of every element in an array with another array."
},
{
"code": null,
"e": 1555,
"s": 1370,
"text": "We have to compute the maximum XOR for each element of array A with array B i.e. for each element of array A we will select an element in array B which will have the maximum XOR value."
},
{
"code": null,
"e": 1605,
"s": 1555,
"text": "Let's take an example to understand the problem −"
},
{
"code": null,
"e": 1613,
"s": 1605,
"text": "Input −"
},
{
"code": null,
"e": 1660,
"s": 1613,
"text": "array A = {3, 6 ,11, 9}\narray B = {8, 2, 4, 1}"
},
{
"code": null,
"e": 1669,
"s": 1660,
"text": "Output −"
},
{
"code": null,
"e": 1681,
"s": 1669,
"text": "11 14 15 13"
},
{
"code": null,
"e": 1694,
"s": 1681,
"text": "Explanation−"
},
{
"code": null,
"e": 1818,
"s": 1694,
"text": "Let’s see the XOR combination of each element of array A with all elements of array B and then select the maximum for each."
},
{
"code": null,
"e": 2076,
"s": 1818,
"text": "3 XOR 8 = 11 3 XOR 2 = 1 3 XOR 4 = 7 3 XOR 1 = 2\nMaximum = 11.\n6 XOR 8 = 14 6 XOR 2 = 4 6 XOR 4 = 2 6 XOR 1 = 1\nMaximum = 14.\n11 XOR 8 = 3 11 XOR 2 = 9 11 XOR 4 = 15 11 XOR 1 = 10\nMaximum = 15.\n9 XOR 8 = 1 9 XOR 2 = 11 9 XOR 4 = 13 9 XOR 1 = 8\nMaximum = 13."
},
{
"code": null,
"e": 2221,
"s": 2076,
"text": "To solve this problem, a simple and naive approach is to calculate all the combinations and print the maximum XOR as shown in the above example."
},
{
"code": null,
"e": 2331,
"s": 2221,
"text": "But this will not be effective as the code relies on two loops which make its complexity of the order O(n^2)."
},
{
"code": null,
"e": 2381,
"s": 2331,
"text": "So, we will see a better solution to the problem."
},
{
"code": null,
"e": 2524,
"s": 2381,
"text": "It is to use a trie data structure which will store the binary of all elements of array B for the match with array A, to find the maximum XOR."
},
{
"code": null,
"e": 2727,
"s": 2524,
"text": "So, for an element of array A, we will check its most significant bit and try to make it 1. And the go to the next MSB. Following this we will get our maximum XOR element for an element of A in array B."
},
{
"code": null,
"e": 2816,
"s": 2727,
"text": "Program to find the maximum possible XOR of every element in an array with another array"
},
{
"code": null,
"e": 2827,
"s": 2816,
"text": " Live Demo"
},
{
"code": null,
"e": 4025,
"s": 2827,
"text": "#include<iostream>\nusing namespace std;\nstruct trie{\n int value;\n trie *child[2];\n};\ntrie * get(){\n trie * root = new trie;\n root -> value = 0;\n root -> child[0] = NULL;\n root -> child[1] = NULL;\n return root;\n}\nvoid insert(trie * root, int key){\n trie * temp = root;\n for (int i = 31; i >= 0; i--){\n bool current_bit = key & (1 << i);\n if (temp -> child[current_bit] == NULL)\n temp -> child[current_bit] = get();\n temp = temp -> child[current_bit];\n }\n temp -> value = key;\n}\nint findMaxXor(trie * root, int element){\n trie * temp = root;\n for (int i = 31; i >= 0; i--){\n bool bits = ( element & ( 1 << i) );\n if (temp -> child[1 - bits] != NULL)\n temp = temp -> child[1 - bits];\n else\n temp = temp -> child[bits];\n }\n return (element ^ temp -> value);\n}\nint main(){\n int A[] = {3, 11, 6, 9};\n int B[] = {8, 2, 4, 1};\n int N = sizeof(A)/sizeof(A[0]);\n trie * root = get();\n for (int i = 0; i < N; i++)\n insert(root, B[i]);\n cout<<\"The maximum possible XOR of every possible element in array A with Array B is\\n\";\n for (int i = 0; i < N; i++)\n cout <<findMaxXor(root, A[i])<<\"\\t\";\n return 0;\n}"
},
{
"code": null,
"e": 4115,
"s": 4025,
"text": "The maximum possible XOR of every possible element in array A with Array B is\n11 15 14 13"
}
] |
matplotlib.axes.Axes.barh() in Python | 13 Apr, 2020
Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute.
The Axes.barh() function in axes module of matplotlib library is used to make a horizontal bar plot.
Syntax: Axes.barh(self, y, width, height=0.8, left=None, *, align=’center’, **kwargs)
Parameters: This method accept the following parameters that are described below:
y: This parameter is the sequence of y coordinates of the bar.
height: This parameter is an optional parameter. It is the height(s) of the bars, with default value 0.8.
width: This parameter is the width(s) (i.e. the horizontal lengths) of the bars.
left : This parameter is also an optional parameter. And it is the x coordinate(s) of the left sides of the bars.
align: This parameter is also an optional parameter. And it is used for alignment of the bars to the y coordinates.
Returns: This returns the following:
BarContainer: This returns the container with all the bars and optionally errorbars.
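For instance, a minimal call (an illustrative sketch, not one of the article's examples) needs only the y labels and the bar widths:
# Minimal usage sketch of Axes.barh()
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

# width = [3, 7, 5]; height keeps its default of 0.8
ax.barh(['A', 'B', 'C'], [3, 7, 5])

plt.show()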
Below examples illustrate the matplotlib.axes.Axes.barh() function in matplotlib.axes:
Example #1:
# Implementation of matplotlib function
import matplotlib.pyplot as plt
import numpy as np

data = ((1000, 30), (30, 10), (30, 100), (800, 500), (50, 10))

dim = len(data[0])
w = 0.3
dimw = w / dim

fig, ax = plt.subplots()
x = np.arange(len(data))

for i in range(len(data[0])):
    y = [d[i] for d in data]
    b = ax.barh(x + i * dimw, y, dimw, left = 0.001)

ax.set_yticks(x + dimw / 2)
ax.set_yticklabels(map(str, x))
ax.set_xscale('log')

ax.set_title('matplotlib.axes.Axes.barh Example')

plt.show()
Output:
Example #2:
# Implementation of matplotlib function
import numpy as np
import matplotlib.pyplot as plt

labels = ['Month1', 'Month2', 'Month3', 'Month4']

mine = [21, 52, 33, 54]
others = [54, 23, 32, 41]
Mine_std = [2, 3, 4, 1]
Others_std = [3, 5, 2, 3]
width = 0.3

fig, ax = plt.subplots()

ax.barh(labels, mine, width, xerr = Mine_std, label ='Mine')
ax.barh(labels, others, width, xerr = Others_std, left = mine, label ='Others')

ax.set_xlabel('Articles')
ax.legend()

ax.set_title('matplotlib.axes.Axes.barh Example')

plt.show()
Output:
Python-matplotlib
Python
How to iterate through Excel rows in Python?
Deque in Python
Defaultdict in Python
Queue in Python
Rotate axis tick labels in Seaborn and Matplotlib
Check if element exists in list in Python
Python Classes and Objects
Bar Plot in Matplotlib
Python OOPs Concepts
How To Convert Python Dictionary To JSON? | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n13 Apr, 2020"
},
{
"code": null,
"e": 328,
"s": 28,
"text": "Matplotlib is a library in Python and it is numerical – mathematical extension for NumPy library. The Axes Class contains most of the figure elements: Axis, Tick, Line2D, Text, Polygon, etc., and sets the coordinate system. And the instances of Axes supports callbacks through a callbacks attribute."
},
{
"code": null,
"e": 429,
"s": 328,
"text": "The Axes.barh() function in axes module of matplotlib library is used to make a horizontal bar plot."
},
{
"code": null,
"e": 515,
"s": 429,
"text": "Syntax: Axes.barh(self, y, width, height=0.8, left=None, *, align=’center’, **kwargs)"
},
{
"code": null,
"e": 597,
"s": 515,
"text": "Parameters: This method accept the following parameters that are described below:"
},
{
"code": null,
"e": 660,
"s": 597,
"text": "y: This parameter is the sequence of y coordinates of the bar."
},
{
"code": null,
"e": 713,
"s": 660,
"text": "height: This parameter is the height(s) of the bars."
},
{
"code": null,
"e": 820,
"s": 713,
"text": "width: This parameter is an optional parameter. And it is the width(s) of the bars with default value 0.8."
},
{
"code": null,
"e": 934,
"s": 820,
"text": "left : This parameter is also an optional parameter. And it is the x coordinate(s) of the left sides of the bars."
},
{
"code": null,
"e": 1050,
"s": 934,
"text": "align: This parameter is also an optional parameter. And it is used for alignment of the bars to the y coordinates."
},
{
"code": null,
"e": 1087,
"s": 1050,
"text": "Returns: This returns the following:"
},
{
"code": null,
"e": 1171,
"s": 1087,
"text": "BarContainer:This returns the container with all the bars and optionally errorbars."
},
{
"code": null,
"e": 1258,
"s": 1171,
"text": "Below examples illustrate the matplotlib.axes.Axes.barh() function in matplotlib.axes:"
},
{
"code": null,
"e": 1270,
"s": 1258,
"text": "Example #1:"
},
{
"code": "# Implementation of matplotlib functionimport matplotlib.pyplot as pltimport numpy as np data = ((1000, 30), (30, 10), (30, 100), (800, 500), (50, 10)) dim = len(data[0])w = 0.3dimw = w / dim fig, ax = plt.subplots()x = np.arange(len(data)) for i in range(len(data[0])): y = [d[i] for d in data] b = ax.barh(x + i * dimw, y, dimw, left = 0.001) ax.set_yticks(x + dimw / 2)ax.set_yticklabels(map(str, x))ax.set_xscale('log') ax.set_title('matplotlib.axes.Axes.barh Example') plt.show()",
"e": 1798,
"s": 1270,
"text": null
},
{
"code": null,
"e": 1806,
"s": 1798,
"text": "Output:"
},
{
"code": null,
"e": 1818,
"s": 1806,
"text": "Example #2:"
},
{
"code": "# ImpleMinetation of matplotlib functionimport numpy as npimport matplotlib.pyplot as plt labels = ['Month1', 'Month2', 'Month3', 'Month4'] mine = [21, 52, 33, 54]others = [54, 23, 32, 41]Mine_std = [2, 3, 4, 1]Others_std = [3, 5, 2, 3]width = 0.3 fig, ax = plt.subplots() ax.barh(labels, mine, width, xerr = Mine_std, label ='Mine') ax.barh(labels, others, width, xerr = Others_std, left = mine, label ='Others') ax.set_xlabel('Articles')ax.legend() ax.set_title('matplotlib.axes.Axes.barh Example') plt.show()",
"e": 2387,
"s": 1818,
"text": null
},
{
"code": null,
"e": 2395,
"s": 2387,
"text": "Output:"
},
{
"code": null,
"e": 2413,
"s": 2395,
"text": "Python-matplotlib"
},
{
"code": null,
"e": 2420,
"s": 2413,
"text": "Python"
},
{
"code": null,
"e": 2518,
"s": 2420,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 2563,
"s": 2518,
"text": "How to iterate through Excel rows in Python?"
},
{
"code": null,
"e": 2579,
"s": 2563,
"text": "Deque in Python"
},
{
"code": null,
"e": 2601,
"s": 2579,
"text": "Defaultdict in Python"
},
{
"code": null,
"e": 2617,
"s": 2601,
"text": "Queue in Python"
},
{
"code": null,
"e": 2667,
"s": 2617,
"text": "Rotate axis tick labels in Seaborn and Matplotlib"
},
{
"code": null,
"e": 2709,
"s": 2667,
"text": "Check if element exists in list in Python"
},
{
"code": null,
"e": 2736,
"s": 2709,
"text": "Python Classes and Objects"
},
{
"code": null,
"e": 2759,
"s": 2736,
"text": "Bar Plot in Matplotlib"
},
{
"code": null,
"e": 2780,
"s": 2759,
"text": "Python OOPs Concepts"
}
] |
How to use Xpath with BeautifulSoup ? | 16 Mar, 2021
Prerequisites: Beautifulsoup
In this article, we will see how to use XPath with BeautifulSoup. Getting data from an element on the webpage using lxml requires the usage of XPaths. XPath works very much like a traditional file system.
First, we need to install all these modules on our computer.
BeautifulSoup: Our primary module contains a method to access a webpage over HTTP.
pip install bs4
lxml: Helper library to process webpages in python language.
pip install lxml
requests: Makes the process of sending HTTP requests flawless.
pip install requests
Getting data from an element on the webpage using lxml requires the usage of Xpaths.
XPath works very much like a traditional file system.
To access file 1,
C:/File1
Similarly, To access file 2,
C:/Documents/User1/File2
Right-click the element in the page and click on Inspect.
Right-click on the element in the Elements Tab.
Click on copy XPath.
Import module
Scrape content from a webpage
Now to use the Xpath we need to convert the soup object to an etree object because BeautifulSoup by default doesn’t support working with XPath.
However, lxml supports XPath 1.0. It has a BeautifulSoup compatible mode where it’ll try and parse broken HTML the way Soup does.
To copy the XPath of an element we need to inspect the element and then right-click on its HTML and find the XPath.
After this, you can use the .xpath method available in etree class of lxml module to parse the value inside the concerned element.
Note: If XPath is not giving you the desired result copy the full XPath instead of XPath and the rest other steps would be the same.
Given below is an example to show how Xpath can be used with Beautifulsoup
Program:
Python3
from bs4 import BeautifulSoup
from lxml import etree
import requests

URL = "https://en.wikipedia.org/wiki/Nike,_Inc."

HEADERS = ({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 \
    (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',
            'Accept-Language': 'en-US, en;q=0.5'})

webpage = requests.get(URL, headers=HEADERS)
soup = BeautifulSoup(webpage.content, "html.parser")
dom = etree.HTML(str(soup))
print(dom.xpath('//*[@id="firstHeading"]')[0].text)
Output:
Nike, Inc.
Picked
Python BeautifulSoup
Python | [
{
"code": null,
"e": 53,
"s": 25,
"text": "\n16 Mar, 2021"
},
{
"code": null,
"e": 82,
"s": 53,
"text": "Prerequisites: Beautifulsoup"
},
{
"code": null,
"e": 286,
"s": 82,
"text": "In this article, we will see how to use Xpath with BeautifulSoup. Getting data from an element on the webpage using lxml requires the usage of Xpaths. XPath works very much like a traditional file system"
},
{
"code": null,
"e": 347,
"s": 286,
"text": "First, we need to install all these modules on our computer."
},
{
"code": null,
"e": 430,
"s": 347,
"text": "BeautifulSoup: Our primary module contains a method to access a webpage over HTTP."
},
{
"code": null,
"e": 446,
"s": 430,
"text": "pip install bs4"
},
{
"code": null,
"e": 507,
"s": 446,
"text": "lxml: Helper library to process webpages in python language."
},
{
"code": null,
"e": 524,
"s": 507,
"text": "pip install lxml"
},
{
"code": null,
"e": 613,
"s": 524,
"text": "requests: Makes the process of sending HTTP requests flawless.the output of the function"
},
{
"code": null,
"e": 634,
"s": 613,
"text": "pip install requests"
},
{
"code": null,
"e": 719,
"s": 634,
"text": "Getting data from an element on the webpage using lxml requires the usage of Xpaths."
},
{
"code": null,
"e": 773,
"s": 719,
"text": "XPath works very much like a traditional file system."
},
{
"code": null,
"e": 791,
"s": 773,
"text": "To access file 1,"
},
{
"code": null,
"e": 800,
"s": 791,
"text": "C:/File1"
},
{
"code": null,
"e": 829,
"s": 800,
"text": "Similarly, To access file 2,"
},
{
"code": null,
"e": 854,
"s": 829,
"text": "C:/Documents/User1/File2"
},
{
"code": null,
"e": 912,
"s": 854,
"text": "Right-click the element in the page and click on Inspect."
},
{
"code": null,
"e": 960,
"s": 912,
"text": "Right-click on the element in the Elements Tab."
},
{
"code": null,
"e": 981,
"s": 960,
"text": "Click on copy XPath."
},
{
"code": null,
"e": 995,
"s": 981,
"text": "Import module"
},
{
"code": null,
"e": 1024,
"s": 995,
"text": "Scrap content from a webpage"
},
{
"code": null,
"e": 1168,
"s": 1024,
"text": "Now to use the Xpath we need to convert the soup object to an etree object because BeautifulSoup by default doesn’t support working with XPath."
},
{
"code": null,
"e": 1298,
"s": 1168,
"text": "However, lxml supports XPath 1.0. It has a BeautifulSoup compatible mode where it’ll try and parse broken HTML the way Soup does."
},
{
"code": null,
"e": 1415,
"s": 1298,
"text": "To copy the XPath of an element we need to inspect the element and then right-click on it’s HTML and find the XPath."
},
{
"code": null,
"e": 1546,
"s": 1415,
"text": "After this, you can use the .xpath method available in etree class of lxml module to parse the value inside the concerned element."
},
{
"code": null,
"e": 1679,
"s": 1546,
"text": "Note: If XPath is not giving you the desired result copy the full XPath instead of XPath and the rest other steps would be the same."
},
{
"code": null,
"e": 1754,
"s": 1679,
"text": "Given below is an example to show how Xpath can be used with Beautifulsoup"
},
{
"code": null,
"e": 1763,
"s": 1754,
"text": "Program:"
},
{
"code": null,
"e": 1771,
"s": 1763,
"text": "Python3"
},
{
"code": "from bs4 import BeautifulSoupfrom lxml import etreeimport requests URL = \"https://en.wikipedia.org/wiki/Nike,_Inc.\" HEADERS = ({'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 \\ (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36',\\ 'Accept-Language': 'en-US, en;q=0.5'}) webpage = requests.get(URL, headers=HEADERS)soup = BeautifulSoup(webpage.content, \"html.parser\")dom = etree.HTML(str(soup))print(dom.xpath('//*[@id=\"firstHeading\"]')[0].text)",
"e": 2277,
"s": 1771,
"text": null
},
{
"code": null,
"e": 2285,
"s": 2277,
"text": "Output:"
},
{
"code": null,
"e": 2296,
"s": 2285,
"text": "Nike, Inc."
},
{
"code": null,
"e": 2303,
"s": 2296,
"text": "Picked"
},
{
"code": null,
"e": 2324,
"s": 2303,
"text": "Python BeautifulSoup"
},
{
"code": null,
"e": 2331,
"s": 2324,
"text": "Python"
}
] |
Suffix Tree Application 3 – Longest Repeated Substring | 22 Nov, 2021
Given a text string, find Longest Repeated Substring in the text. If there are more than one Longest Repeated Substrings, get any one of them.
Longest Repeated Substring in GEEKSFORGEEKS is: GEEKS
Longest Repeated Substring in AAAAAAAAAA is: AAAAAAAAA
Longest Repeated Substring in ABCDEFG is: No repeated substring
Longest Repeated Substring in ABABABA is: ABABA
Longest Repeated Substring in ATCGATCGA is: ATCGA
Longest Repeated Substring in banana is: ana
Longest Repeated Substring in abcpqrabpqpq is: ab (pq is another LRS here)
This problem can be solved by different approaches with varying time and space complexities. Here we will discuss the Suffix Tree approach (3rd Suffix Tree Application). Other approaches will be discussed soon. As a prerequisite, we must know how to build a suffix tree in one or the other way. Here we will build the suffix tree using Ukkonen’s Algorithm, discussed already as below: Ukkonen’s Suffix Tree Construction – Part 1, Ukkonen’s Suffix Tree Construction – Part 2, Ukkonen’s Suffix Tree Construction – Part 3, Ukkonen’s Suffix Tree Construction – Part 4, Ukkonen’s Suffix Tree Construction – Part 5, Ukkonen’s Suffix Tree Construction – Part 6. Let’s look at the following figure:
This is the suffix tree for string “ABABABA$”. In this string, the following substrings are repeated: A, B, AB, BA, ABA, BAB, ABAB, BABA, ABABA, and the Longest Repeated Substring is ABABA. In a suffix tree, one node can’t have more than one outgoing edge starting with the same character, so if there are repeated substrings in the text, they will share the same path, and that path in the suffix tree will go through one or more internal node(s) down the tree (below the point where the substring ends on that path). In the above figure, we can see that
Path with Substring “A” has three internal nodes down the tree
Path with Substring “AB” has two internal nodes down the tree
Path with Substring “ABA” has two internal nodes down the tree
Path with Substring “ABAB” has one internal node down the tree
Path with Substring “ABABA” has one internal node down the tree
Path with Substring “B” has two internal nodes down the tree
Path with Substring “BA” has two internal nodes down the tree
Path with Substring “BAB” has one internal node down the tree
Path with Substring “BABA” has one internal node down the tree
All the above substrings are repeated. Substrings ABABAB, ABABABA, BABAB and BABABA have no internal node down the tree (after the point where the substring ends on the path), and so these are not repeated. Can you see how to find the longest repeated substring? We can see in the figure that the longest repeated substring will end at the internal node which is farthest from the root (i.e. the deepest internal node in the tree), because the length of the substring is the path label length from the root to that internal node. So finding the longest repeated substring boils down to finding the deepest internal node in the suffix tree and then getting the path label from the root to that deepest internal node.
C
// A C program to implement Ukkonen's Suffix Tree Construction// And then find Longest Repeated Substring#include <stdio.h>#include <string.h>#include <stdlib.h>#define MAX_CHAR 256 struct SuffixTreeNode { struct SuffixTreeNode *children[MAX_CHAR]; //pointer to other node via suffix link struct SuffixTreeNode *suffixLink; /*(start, end) interval specifies the edge, by which the node is connected to its parent node. Each edge will connect two nodes, one parent and one child, and (start, end) interval of a given edge will be stored in the child node. Lets say there are two nods A and B connected by an edge with indices (5, 8) then this indices (5, 8) will be stored in node B. */ int start; int *end; /*for leaf nodes, it stores the index of suffix for the path from root to leaf*/ int suffixIndex;}; typedef struct SuffixTreeNode Node; char text[100]; //Input stringNode *root = NULL; //Pointer to root node /*lastNewNode will point to newly created internal node, waiting for it's suffix link to be set, which might get a new suffix link (other than root) in next extension of same phase. lastNewNode will be set to NULL when last newly created internal node (if there is any) got it's suffix link reset to new internal node created in next extension of same phase. */Node *lastNewNode = NULL;Node *activeNode = NULL; /*activeEdge is represented as input string character index (not the character itself)*/int activeEdge = -1;int activeLength = 0; // remainingSuffixCount tells how many suffixes yet to// be added in treeint remainingSuffixCount = 0;int leafEnd = -1;int *rootEnd = NULL;int *splitEnd = NULL;int size = -1; //Length of input string Node *newNode(int start, int *end){ Node *node =(Node*) malloc(sizeof(Node)); int i; for (i = 0; i < MAX_CHAR; i++) node->children[i] = NULL; /*For root node, suffixLink will be set to NULL For internal nodes, suffixLink will be set to root by default in current extension and may change in next extension*/ node->suffixLink = root; node->start = start; node->end = end; /*suffixIndex will be set to -1 by default and actual suffix index will be set later for leaves at the end of all phases*/ node->suffixIndex = -1; return node;} int edgeLength(Node *n) { if(n == root) return 0; return *(n->end) - (n->start) + 1;} int walkDown(Node *currNode){ /*activePoint change for walk down (APCFWD) using Skip/Count Trick (Trick 1). 
If activeLength is greater than current edge length, set next internal node as activeNode and adjust activeEdge and activeLength accordingly to represent same activePoint*/ if (activeLength >= edgeLength(currNode)) { activeEdge += edgeLength(currNode); activeLength -= edgeLength(currNode); activeNode = currNode; return 1; } return 0;} void extendSuffixTree(int pos){ /*Extension Rule 1, this takes care of extending all leaves created so far in tree*/ leafEnd = pos; /*Increment remainingSuffixCount indicating that a new suffix added to the list of suffixes yet to be added in tree*/ remainingSuffixCount++; /*set lastNewNode to NULL while starting a new phase, indicating there is no internal node waiting for it's suffix link reset in current phase*/ lastNewNode = NULL; //Add all suffixes (yet to be added) one by one in tree while(remainingSuffixCount > 0) { if (activeLength == 0) activeEdge = pos; //APCFALZ // There is no outgoing edge starting with // activeEdge from activeNode if (activeNode->children[text[activeEdge]] == NULL) { //Extension Rule 2 (A new leaf edge gets created) activeNode->children[text[activeEdge]] = newNode(pos, &leafEnd); /*A new leaf edge is created in above line starting from an existing node (the current activeNode), and if there is any internal node waiting for it's suffix link get reset, point the suffix link from that last internal node to current activeNode. Then set lastNewNode to NULL indicating no more node waiting for suffix link reset.*/ if (lastNewNode != NULL) { lastNewNode->suffixLink = activeNode; lastNewNode = NULL; } } // There is an outgoing edge starting with activeEdge // from activeNode else { // Get the next node at the end of edge starting // with activeEdge Node *next = activeNode->children[text[activeEdge]]; if (walkDown(next))//Do walkdown { //Start from next node (the new activeNode) continue; } /*Extension Rule 3 (current character being processed is already on the edge)*/ if (text[next->start + activeLength] == text[pos]) { //If a newly created node waiting for it's //suffix link to be set, then set suffix link //of that waiting node to current active node if(lastNewNode != NULL && activeNode != root) { lastNewNode->suffixLink = activeNode; lastNewNode = NULL; } //APCFER3 activeLength++; /*STOP all further processing in this phase and move on to next phase*/ break; } /*We will be here when activePoint is in middle of the edge being traversed and current character being processed is not on the edge (we fall off the tree). In this case, we add a new internal node and a new leaf edge going out of that new node. This is Extension Rule 2, where a new leaf edge and a new internal node get created*/ splitEnd = (int*) malloc(sizeof(int)); *splitEnd = next->start + activeLength - 1; //New internal node Node *split = newNode(next->start, splitEnd); activeNode->children[text[activeEdge]] = split; //New leaf coming out of new internal node split->children[text[pos]] = newNode(pos, &leafEnd); next->start += activeLength; split->children[text[next->start]] = next; /*We got a new internal node here. If there is any internal node created in last extensions of same phase which is still waiting for it's suffix link reset, do it now.*/ if (lastNewNode != NULL) { /*suffixLink of lastNewNode points to current newly created internal node*/ lastNewNode->suffixLink = split; } /*Make the current newly created internal node waiting for it's suffix link reset (which is pointing to root at present). If we come across any other internal node (existing or newly created) in next extension of same phase, when a new leaf edge gets added (i.e.
when Extension Rule 2 applies is any of the next extension of same phase) at that point, suffixLink of this node will point to that internal node.*/ lastNewNode = split; } /* One suffix got added in tree, decrement the count of suffixes yet to be added.*/ remainingSuffixCount--; if (activeNode == root && activeLength > 0) //APCFER2C1 { activeLength--; activeEdge = pos - remainingSuffixCount + 1; } else if (activeNode != root) //APCFER2C2 { activeNode = activeNode->suffixLink; } }} void print(int i, int j){ int k; for (k=i; k<=j; k++) printf("%c", text[k]);} //Print the suffix tree as well along with setting suffix index//So tree will be printed in DFS manner//Each edge along with it's suffix index will be printedvoid setSuffixIndexByDFS(Node *n, int labelHeight){ if (n == NULL) return; if (n->start != -1) //A non-root node { //Print the label on edge from parent to current node //Uncomment below line to print suffix tree // print(n->start, *(n->end)); } int leaf = 1; int i; for (i = 0; i < MAX_CHAR; i++) { if (n->children[i] != NULL) { //Uncomment below two lines to print suffix index // if (leaf == 1 && n->start != -1) // printf(" [%d]\n", n->suffixIndex); //Current node is not a leaf as it has outgoing //edges from it. leaf = 0; setSuffixIndexByDFS(n->children[i], labelHeight + edgeLength(n->children[i])); } } if (leaf == 1) { n->suffixIndex = size - labelHeight; //Uncomment below line to print suffix index //printf(" [%d]\n", n->suffixIndex); }} void freeSuffixTreeByPostOrder(Node *n){ if (n == NULL) return; int i; for (i = 0; i < MAX_CHAR; i++) { if (n->children[i] != NULL) { freeSuffixTreeByPostOrder(n->children[i]); } } if (n->suffixIndex == -1) free(n->end); free(n);} /*Build the suffix tree and print the edge labels along withsuffixIndex. suffixIndex for leaf edges will be >= 0 andfor non-leaf edges will be -1*/void buildSuffixTree(){ size = strlen(text); int i; rootEnd = (int*) malloc(sizeof(int)); *rootEnd = - 1; /*Root is a special node with start and end indices as -1, as it has no parent from where an edge comes to root*/ root = newNode(-1, rootEnd); activeNode = root; //First activeNode will be root for (i=0; i<size; i++) extendSuffixTree(i); int labelHeight = 0; setSuffixIndexByDFS(root, labelHeight);} void doTraversal(Node *n, int labelHeight, int* maxHeight,int* substringStartIndex){ if(n == NULL) { return; } int i=0; if(n->suffixIndex == -1) //If it is internal node { for (i = 0; i < MAX_CHAR; i++) { if(n->children[i] != NULL) { doTraversal(n->children[i], labelHeight + edgeLength(n->children[i]), maxHeight, substringStartIndex); } } } else if(n->suffixIndex > -1 && (*maxHeight < labelHeight - edgeLength(n))) { *maxHeight = labelHeight - edgeLength(n); *substringStartIndex = n->suffixIndex; }} void getLongestRepeatedSubstring(){ int maxHeight = 0; int substringStartIndex = 0; doTraversal(root, 0, &maxHeight, &substringStartIndex);// printf("maxHeight %d, substringStartIndex %d\n", maxHeight,// substringStartIndex); printf("Longest Repeated Substring in %s is: ", text); int k; for (k=0; k<maxHeight; k++) printf("%c", text[k + substringStartIndex]); if(k == 0) printf("No repeated substring"); printf("\n");} // driver program to test above functionsint main(int argc, char *argv[]){ strcpy(text, "GEEKSFORGEEKS$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "AAAAAAAAAA$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory 
freeSuffixTreeByPostOrder(root); strcpy(text, "ABCDEFG$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "ABABABA$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "ATCGATCGA$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "banana$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "abcpqrabpqpq$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, "pqrpqpqabab$"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); return 0;}
Output:
Longest Repeated Substring in GEEKSFORGEEKS$ is: GEEKS
Longest Repeated Substring in AAAAAAAAAA$ is: AAAAAAAAA
Longest Repeated Substring in ABCDEFG$ is: No repeated substring
Longest Repeated Substring in ABABABA$ is: ABABA
Longest Repeated Substring in ATCGATCGA$ is: ATCGA
Longest Repeated Substring in banana$ is: ana
Longest Repeated Substring in abcpqrabpqpq$ is: ab
Longest Repeated Substring in pqrpqpqabab$ is: ab
In case of multiple LRS (as we see in the last two test cases), this implementation prints the LRS which comes 1st lexicographically. Ukkonen’s Suffix Tree Construction takes O(N) time and space to build the suffix tree for a string of length N, and after that finding the deepest node takes O(N). So it is linear in time and space. Followup questions:
Find all repeated substrings in given text
Find all unique substrings in given text
Find all repeated substrings of a given length
Find all unique substrings of a given length
In case of multiple LRS in text, find the one which occurs most number of times
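As a rough sketch toward the first follow-up (an illustration only, not part of the original program: it assumes the Node struct, MAX_CHAR, text[], root and edgeLength() defined in the program above, and must be called after buildSuffixTree()), we can do a DFS and print the path label of every internal node; every repeated substring of the text is a prefix of one of these labels:
//Print the path label of each internal node; each such label is a
//repeated substring, and all other repeated substrings are prefixes of these
void printInternalPathLabels(Node *n, char *buf, int depth)
{
    int i, k, len;
    if (n == NULL || n->suffixIndex != -1) //skip leaves
        return;
    if (n != root && depth > 0)
    {
        buf[depth] = '\0';
        printf("%s\n", buf); //path label from root to this internal node
    }
    for (i = 0; i < MAX_CHAR; i++)
    {
        if (n->children[i] != NULL)
        {
            len = edgeLength(n->children[i]);
            for (k = 0; k < len; k++)
                buf[depth + k] = text[n->children[i]->start + k];
            printInternalPathLabels(n->children[i], buf, depth + len);
        }
    }
}
A call like char buf[110]; printInternalPathLabels(root, buf, 0); after building the tree would, for “ABABABA$”, list the labels A, ABA, ABABA, BA and BABA.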
All these problems can be solved in linear time with a few changes in the above implementation. We have published the following more articles on suffix tree applications:
Suffix Tree Application 1 – Substring Check
Suffix Tree Application 2 – Searching All Patterns
Suffix Tree Application 4 – Build Linear Time Suffix Array
Generalized Suffix Tree 1
Suffix Tree Application 5 – Longest Common Substring
Suffix Tree Application 6 – Longest Palindromic Substring
This article is contributed by Anurag Singh. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above
nidhi_biet
rajeev0719singh
sagar0719kumar
surinderdawra388
Suffix-Tree
Advanced Data Structure
Pattern Searching
Pattern Searching | [
{
"code": null,
"e": 52,
"s": 24,
"text": "\n22 Nov, 2021"
},
{
"code": null,
"e": 197,
"s": 52,
"text": "Given a text string, find Longest Repeated Substring in the text. If there are more than one Longest Repeated Substrings, get any one of them. "
},
{
"code": null,
"e": 588,
"s": 197,
"text": "Longest Repeated Substring in GEEKSFORGEEKS is: GEEKS\nLongest Repeated Substring in AAAAAAAAAA is: AAAAAAAAA\nLongest Repeated Substring in ABCDEFG is: No repeated substring\nLongest Repeated Substring in ABABABA is: ABABA\nLongest Repeated Substring in ATCGATCGA is: ATCGA\nLongest Repeated Substring in banana is: ana\nLongest Repeated Substring in abcpqrabpqpq is: ab (pq is another LRS here)"
},
{
"code": null,
"e": 1260,
"s": 588,
"text": "This problem can be solved by different approaches with varying time and space complexities. Here we will discuss Suffix Tree approach (3rd Suffix Tree Application). Other approaches will be discussed soon.As a prerequisite, we must know how to build a suffix tree in one or the other way. Here we will build suffix tree using Ukkonen’s Algorithm, discussed already as below: Ukkonen’s Suffix Tree Construction – Part 1 Ukkonen’s Suffix Tree Construction – Part 2 Ukkonen’s Suffix Tree Construction – Part 3 Ukkonen’s Suffix Tree Construction – Part 4 Ukkonen’s Suffix Tree Construction – Part 5 Ukkonen’s Suffix Tree Construction – Part 6Lets look at following figure: "
},
{
"code": null,
"e": 1790,
"s": 1260,
"text": "This is suffix tree for string “ABABABA$”. In this string, following substrings are repeated: A, B, AB, BA, ABA, BAB, ABAB, BABA, ABABA And Longest Repeated Substring is ABABA. In a suffix tree, one node can’t have more than one outgoing edge starting with same character, and so if there are repeated substring in the text, they will share on same path and that path in suffix tree will go through one or more internal node(s) down the tree (below the point where substring ends on that path). In above figure, we can see that "
},
{
"code": null,
"e": 1853,
"s": 1790,
"text": "Path with Substring “A” has three internal nodes down the tree"
},
{
"code": null,
"e": 1915,
"s": 1853,
"text": "Path with Substring “AB” has two internal nodes down the tree"
},
{
"code": null,
"e": 1978,
"s": 1915,
"text": "Path with Substring “ABA” has two internal nodes down the tree"
},
{
"code": null,
"e": 2041,
"s": 1978,
"text": "Path with Substring “ABAB” has one internal node down the tree"
},
{
"code": null,
"e": 2105,
"s": 2041,
"text": "Path with Substring “ABABA” has one internal node down the tree"
},
{
"code": null,
"e": 2166,
"s": 2105,
"text": "Path with Substring “B” has two internal nodes down the tree"
},
{
"code": null,
"e": 2228,
"s": 2166,
"text": "Path with Substring “BA” has two internal nodes down the tree"
},
{
"code": null,
"e": 2290,
"s": 2228,
"text": "Path with Substring “BAB” has one internal node down the tree"
},
{
"code": null,
"e": 2353,
"s": 2290,
"text": "Path with Substring “BABA” has one internal node down the tree"
},
{
"code": null,
"e": 2996,
"s": 2353,
"text": "All above substrings are repeated.Substrings ABABAB, ABABABA, BABAB, BABABA have no internal node down the tree (after the point where substring end on the path), and so these are not repeated.Can you see how to find longest repeated substring ?? We can see in figure that, longest repeated substring will end at the internal node which is farthest from the root (i.e. deepest node in the tree), because length of substring is the path label length from root to that internal node.So finding longest repeated substring boils down to finding the deepest node in suffix tree and then get the path label from root to that deepest internal node. "
},
{
"code": null,
"e": 2998,
"s": 2996,
"text": "C"
},
{
"code": "// A C program to implement Ukkonen's Suffix Tree Construction// And then find Longest Repeated Substring#include <stdio.h>#include <string.h>#include <stdlib.h>#define MAX_CHAR 256 struct SuffixTreeNode { struct SuffixTreeNode *children[MAX_CHAR]; //pointer to other node via suffix link struct SuffixTreeNode *suffixLink; /*(start, end) interval specifies the edge, by which the node is connected to its parent node. Each edge will connect two nodes, one parent and one child, and (start, end) interval of a given edge will be stored in the child node. Lets say there are two nods A and B connected by an edge with indices (5, 8) then this indices (5, 8) will be stored in node B. */ int start; int *end; /*for leaf nodes, it stores the index of suffix for the path from root to leaf*/ int suffixIndex;}; typedef struct SuffixTreeNode Node; char text[100]; //Input stringNode *root = NULL; //Pointer to root node /*lastNewNode will point to newly created internal node, waiting for it's suffix link to be set, which might get a new suffix link (other than root) in next extension of same phase. lastNewNode will be set to NULL when last newly created internal node (if there is any) got it's suffix link reset to new internal node created in next extension of same phase. */Node *lastNewNode = NULL;Node *activeNode = NULL; /*activeEdge is represented as input string character index (not the character itself)*/int activeEdge = -1;int activeLength = 0; // remainingSuffixCount tells how many suffixes yet to// be added in treeint remainingSuffixCount = 0;int leafEnd = -1;int *rootEnd = NULL;int *splitEnd = NULL;int size = -1; //Length of input string Node *newNode(int start, int *end){ Node *node =(Node*) malloc(sizeof(Node)); int i; for (i = 0; i < MAX_CHAR; i++) node->children[i] = NULL; /*For root node, suffixLink will be set to NULL For internal nodes, suffixLink will be set to root by default in current extension and may change in next extension*/ node->suffixLink = root; node->start = start; node->end = end; /*suffixIndex will be set to -1 by default and actual suffix index will be set later for leaves at the end of all phases*/ node->suffixIndex = -1; return node;} int edgeLength(Node *n) { if(n == root) return 0; return *(n->end) - (n->start) + 1;} int walkDown(Node *currNode){ /*activePoint change for walk down (APCFWD) using Skip/Count Trick (Trick 1). 
If activeLength is greater than current edge length, set next internal node as activeNode and adjust activeEdge and activeLength accordingly to represent same activePoint*/ if (activeLength >= edgeLength(currNode)) { activeEdge += edgeLength(currNode); activeLength -= edgeLength(currNode); activeNode = currNode; return 1; } return 0;} void extendSuffixTree(int pos){ /*Extension Rule 1, this takes care of extending all leaves created so far in tree*/ leafEnd = pos; /*Increment remainingSuffixCount indicating that a new suffix added to the list of suffixes yet to be added in tree*/ remainingSuffixCount++; /*set lastNewNode to NULL while starting a new phase, indicating there is no internal node waiting for it's suffix link reset in current phase*/ lastNewNode = NULL; //Add all suffixes (yet to be added) one by one in tree while(remainingSuffixCount > 0) { if (activeLength == 0) activeEdge = pos; //APCFALZ // There is no outgoing edge starting with // activeEdge from activeNode if (activeNode->children] == NULL) { //Extension Rule 2 (A new leaf edge gets created) activeNode->children] = newNode(pos, &leafEnd); /*A new leaf edge is created in above line starting from an existing node (the current activeNode), and if there is any internal node waiting for it's suffix link get reset, point the suffix link from that last internal node to current activeNode. Then set lastNewNode to NULL indicating no more node waiting for suffix link reset.*/ if (lastNewNode != NULL) { lastNewNode->suffixLink = activeNode; lastNewNode = NULL; } } // There is an outgoing edge starting with activeEdge // from activeNode else { // Get the next node at the end of edge starting // with activeEdge Node *next = activeNode->children]; if (walkDown(next))//Do walkdown { //Start from next node (the new activeNode) continue; } /*Extension Rule 3 (current character being processed is already on the edge)*/ if (text[next->start + activeLength] == text[pos]) { //If a newly created node waiting for it's //suffix link to be set, then set suffix link //of that waiting node to current active node if(lastNewNode != NULL && activeNode != root) { lastNewNode->suffixLink = activeNode; lastNewNode = NULL; } //APCFER3 activeLength++; /*STOP all further processing in this phase and move on to next phase*/ break; } /*We will be here when activePoint is in middle of the edge being traversed and current character being processed is not on the edge (we fall off the tree). In this case, we add a new internal node and a new leaf edge going out of that new node. This is Extension Rule 2, where a new leaf edge and a new internal node get created*/ splitEnd = (int*) malloc(sizeof(int)); *splitEnd = next->start + activeLength - 1; //New internal node Node *split = newNode(next->start, splitEnd); activeNode->children] = split; //New leaf coming out of new internal node split->children] = newNode(pos, &leafEnd); next->start += activeLength; split->children] = next; /*We got a new internal node here. If there is any internal node created in last extensions of same phase which is still waiting for it's suffix link reset, do it now.*/ if (lastNewNode != NULL) { /*suffixLink of lastNewNode points to current newly created internal node*/ lastNewNode->suffixLink = split; } /*Make the current newly created internal node waiting for it's suffix link reset (which is pointing to root at present). If we come across any other internal node (existing or newly created) in next extension of same phase, when a new leaf edge gets added (i.e. 
when Extension Rule 2 applies is any of the next extension of same phase) at that point, suffixLink of this node will point to that internal node.*/ lastNewNode = split; } /* One suffix got added in tree, decrement the count of suffixes yet to be added.*/ remainingSuffixCount--; if (activeNode == root && activeLength > 0) //APCFER2C1 { activeLength--; activeEdge = pos - remainingSuffixCount + 1; } else if (activeNode != root) //APCFER2C2 { activeNode = activeNode->suffixLink; } }} void print(int i, int j){ int k; for (k=i; k<=j; k++) printf(\"%c\", text[k]);} //Print the suffix tree as well along with setting suffix index//So tree will be printed in DFS manner//Each edge along with it's suffix index will be printedvoid setSuffixIndexByDFS(Node *n, int labelHeight){ if (n == NULL) return; if (n->start != -1) //A non-root node { //Print the label on edge from parent to current node //Uncomment below line to print suffix tree // print(n->start, *(n->end)); } int leaf = 1; int i; for (i = 0; i < MAX_CHAR; i++) { if (n->children[i] != NULL) { //Uncomment below two lines to print suffix index // if (leaf == 1 && n->start != -1) // printf(\" [%d]\\n\", n->suffixIndex); //Current node is not a leaf as it has outgoing //edges from it. leaf = 0; setSuffixIndexByDFS(n->children[i], labelHeight + edgeLength(n->children[i])); } } if (leaf == 1) { n->suffixIndex = size - labelHeight; //Uncomment below line to print suffix index //printf(\" [%d]\\n\", n->suffixIndex); }} void freeSuffixTreeByPostOrder(Node *n){ if (n == NULL) return; int i; for (i = 0; i < MAX_CHAR; i++) { if (n->children[i] != NULL) { freeSuffixTreeByPostOrder(n->children[i]); } } if (n->suffixIndex == -1) free(n->end); free(n);} /*Build the suffix tree and print the edge labels along withsuffixIndex. 
suffixIndex for leaf edges will be >= 0 andfor non-leaf edges will be -1*/void buildSuffixTree(){ size = strlen(text); int i; rootEnd = (int*) malloc(sizeof(int)); *rootEnd = - 1; /*Root is a special node with start and end indices as -1, as it has no parent from where an edge comes to root*/ root = newNode(-1, rootEnd); activeNode = root; //First activeNode will be root for (i=0; i<size; i++) extendSuffixTree(i); int labelHeight = 0; setSuffixIndexByDFS(root, labelHeight);} void doTraversal(Node *n, int labelHeight, int* maxHeight,int* substringStartIndex){ if(n == NULL) { return; } int i=0; if(n->suffixIndex == -1) //If it is internal node { for (i = 0; i < MAX_CHAR; i++) { if(n->children[i] != NULL) { doTraversal(n->children[i], labelHeight + edgeLength(n->children[i]), maxHeight, substringStartIndex); } } } else if(n->suffixIndex > -1 && (*maxHeight < labelHeight - edgeLength(n))) { *maxHeight = labelHeight - edgeLength(n); *substringStartIndex = n->suffixIndex; }} void getLongestRepeatedSubstring(){ int maxHeight = 0; int substringStartIndex = 0; doTraversal(root, 0, &maxHeight, &substringStartIndex);// printf(\"maxHeight %d, substringStartIndex %d\\n\", maxHeight,// substringStartIndex); printf(\"Longest Repeated Substring in %s is: \", text); int k; for (k=0; k<maxHeight; k++) printf(\"%c\", text[k + substringStartIndex]); if(k == 0) printf(\"No repeated substring\"); printf(\"\\n\");} // driver program to test above functionsint main(int argc, char *argv[]){ strcpy(text, \"GEEKSFORGEEKS$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"AAAAAAAAAA$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"ABCDEFG$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"ABABABA$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"ATCGATCGA$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"banana$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"abcpqrabpqpq$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); strcpy(text, \"pqrpqpqabab$\"); buildSuffixTree(); getLongestRepeatedSubstring(); //Free the dynamically allocated memory freeSuffixTreeByPostOrder(root); return 0;}",
"e": 15595,
"s": 2998,
"text": null
},
{
"code": null,
"e": 15603,
"s": 15595,
"text": "Output:"
},
{
"code": null,
"e": 16026,
"s": 15603,
"text": "Longest Repeated Substring in GEEKSFORGEEKS$ is: GEEKS\nLongest Repeated Substring in AAAAAAAAAA$ is: AAAAAAAAA\nLongest Repeated Substring in ABCDEFG$ is: No repeated substring\nLongest Repeated Substring in ABABABA$ is: ABABA\nLongest Repeated Substring in ATCGATCGA$ is: ATCGA\nLongest Repeated Substring in banana$ is: ana\nLongest Repeated Substring in abcpqrabpqpq$ is: ab\nLongest Repeated Substring in pqrpqpqabab$ is: ab"
},
{
"code": null,
"e": 16370,
"s": 16026,
"text": "In case of multiple LRS (As we see in last two test cases), this implementation prints the LRS which comes 1st lexicographically.Ukkonen’s Suffix Tree Construction takes O(N) time and space to build suffix tree for a string of length N and after that finding deepest node will take O(N). So it is linear in time and space.Followup questions: "
},
{
"code": null,
"e": 16622,
"s": 16370,
"text": "Find all repeated substrings in given textFind all unique substrings in given textFind all repeated substrings of a given lengthFind all unique substrings of a given lengthIn case of multiple LRS in text, find the one which occurs most number of times"
},
{
"code": null,
"e": 16665,
"s": 16622,
"text": "Find all repeated substrings in given text"
},
{
"code": null,
"e": 16706,
"s": 16665,
"text": "Find all unique substrings in given text"
},
{
"code": null,
"e": 16753,
"s": 16706,
"text": "Find all repeated substrings of a given length"
},
{
"code": null,
"e": 16798,
"s": 16753,
"text": "Find all unique substrings of a given length"
},
{
"code": null,
"e": 16878,
"s": 16798,
"text": "In case of multiple LRS in text, find the one which occurs most number of times"
},
{
"code": null,
"e": 17040,
"s": 16878,
"text": "All these problems can be solved in linear time with few changes in above implementation.We have published following more articles on suffix tree applications: "
},
{
"code": null,
"e": 17086,
"s": 17040,
"text": "Suffix Tree Application 1 – Substring Check "
},
{
"code": null,
"e": 17139,
"s": 17086,
"text": "Suffix Tree Application 2 – Searching All Patterns "
},
{
"code": null,
"e": 17200,
"s": 17139,
"text": "Suffix Tree Application 4 – Build Linear Time Suffix Array "
},
{
"code": null,
"e": 17228,
"s": 17200,
"text": "Generalized Suffix Tree 1 "
},
{
"code": null,
"e": 17283,
"s": 17228,
"text": "Suffix Tree Application 5 – Longest Common Substring "
},
{
"code": null,
"e": 17343,
"s": 17283,
"text": "Suffix Tree Application 6 – Longest Palindromic Substring "
},
{
"code": null,
"e": 17513,
"s": 17343,
"text": "This article is contributed by Anurag Singh. Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above "
},
{
"code": null,
"e": 17524,
"s": 17513,
"text": "nidhi_biet"
},
{
"code": null,
"e": 17540,
"s": 17524,
"text": "rajeev0719singh"
},
{
"code": null,
"e": 17555,
"s": 17540,
"text": "sagar0719kumar"
},
{
"code": null,
"e": 17572,
"s": 17555,
"text": "surinderdawra388"
},
{
"code": null,
"e": 17584,
"s": 17572,
"text": "Suffix-Tree"
},
{
"code": null,
"e": 17608,
"s": 17584,
"text": "Advanced Data Structure"
},
{
"code": null,
"e": 17626,
"s": 17608,
"text": "Pattern Searching"
},
{
"code": null,
"e": 17644,
"s": 17626,
"text": "Pattern Searching"
}
] |
PHP | utf8_decode() Function | 07 Aug, 2021
The utf8_decode() function is an inbuilt function in PHP which is used to decode a UTF-8 string to the ISO-8859-1. This function decodes back to the encoded string which is encoded with the utf8_encode() function.
Syntax:
string utf8_decode( string $string )
Parameter: This function accepts single parameter $string which is required. It specifies the UTF-8 encoded string to be decoded.
Return Value: This function returns a string representing the decoded string on success and false on failure.
Note: This function is available for PHP 4.0.0 and newer versions.
Program 1:
PHP
<?php

// String to decode
$string_to_decode = "\x63";

// Encoding the string
echo utf8_encode($string_to_decode) . "<br>";

// Decoding the string
echo utf8_decode($string_to_decode);

?>
Output:
c
c
Program 2:
PHP
<?php

// Creating an array of characters
$text = array(
    "\x20", "\x21", "\x22", "\x23", "\x24", "\x25",
    "\x26", "\x27", "\x28", "\x29", "\x2a", "\x2b",
    "\x2c", "\x2d", "\x2e", "\x2f", "\x30", "\x31");

echo "Encoded elements:\n";

// Encoding all elements
foreach( $text as $index ) {
    echo utf8_encode($index) . " ";
}

echo "\nDecoded elements:\n";

// Decoding all elements
foreach( $text as $index ) {
    echo utf8_decode($index) . " ";
}

?>
Output:
Encoded elements:
! " # $ % & ' ( ) * + , - . / 0 1
Decoded elements:
! " # $ % & ' ( ) * + , - . / 0 1
Reference: https://www.php.net/manual/en/function.utf8-decode.php
ruhelaa48
PHP-function
PHP
Web Technologies
PHP | [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n07 Aug, 2021"
},
{
"code": null,
"e": 242,
"s": 28,
"text": "The utf8_decode() function is an inbuilt function in PHP which is used to decode a UTF-8 string to the ISO-8859-1. This function decodes back to the encoded string which is encoded with the utf8_encode() function."
},
{
"code": null,
"e": 251,
"s": 242,
"text": "Syntax: "
},
{
"code": null,
"e": 288,
"s": 251,
"text": "string utf8_decode( string $string )"
},
{
"code": null,
"e": 418,
"s": 288,
"text": "Parameter: This function accepts single parameter $string which is required. It specifies the UTF-8 encoded string to be decoded."
},
{
"code": null,
"e": 528,
"s": 418,
"text": "Return Value: This function returns a string representing the decoded string on success and false on failure."
},
{
"code": null,
"e": 594,
"s": 528,
"text": "Note: This function is available for PHP 4.0.0 and newer version."
},
{
"code": null,
"e": 607,
"s": 594,
"text": "Program 1: "
},
{
"code": null,
"e": 611,
"s": 607,
"text": "PHP"
},
{
"code": "<?php // String to decode$string_to_decode = \"\\x63\"; // Encoding stringecho utf8_encode($string_to_decode) .\"<br>\"; // Encoding stringecho utf8_decode($string_to_decode); ?>",
"e": 785,
"s": 611,
"text": null
},
{
"code": null,
"e": 794,
"s": 785,
"text": "Output: "
},
{
"code": null,
"e": 798,
"s": 794,
"text": "c\nc"
},
{
"code": null,
"e": 811,
"s": 798,
"text": "Program 2: "
},
{
"code": null,
"e": 815,
"s": 811,
"text": "PHP"
},
{
"code": "<?php // Creating an array of 256 elements$text = array( \"\\x20\", \"\\x21\", \"\\x22\", \"\\x23\", \"\\x24\", \"\\x25\", \"\\x26\", \"\\x27\", \"\\x28\", \"\\x29\", \"\\x2a\", \"\\x2b\", \"\\x2c\", \"\\x2d\", \"\\x2e\", \"\\x2f\", \"\\x30\", \"\\x31\"); echo \"Encoded elements:\\n\"; // Encoding and decoding all elementsforeach( $text as $index ) { echo utf8_encode($index) . \" \";} echo \"\\nDecoded elements:\\n\"; foreach( $text as $index ) { echo utf8_decode($index) . \" \";} ?>",
"e": 1254,
"s": 815,
"text": null
},
{
"code": null,
"e": 1263,
"s": 1254,
"text": "Output: "
},
{
"code": null,
"e": 1373,
"s": 1263,
"text": "Encoded elements:\n ! \" # $ % & ' ( ) * + , - . / 0 1 \nDecoded elements:\n ! \" # $ % & ' ( ) * + , - . / 0 1 "
},
{
"code": null,
"e": 1440,
"s": 1373,
"text": "Reference: https://www.php.net/manual/en/function.utf8-decode.php "
},
{
"code": null,
"e": 1450,
"s": 1440,
"text": "ruhelaa48"
},
{
"code": null,
"e": 1463,
"s": 1450,
"text": "PHP-function"
},
{
"code": null,
"e": 1467,
"s": 1463,
"text": "PHP"
},
{
"code": null,
"e": 1484,
"s": 1467,
"text": "Web Technologies"
},
{
"code": null,
"e": 1488,
"s": 1484,
"text": "PHP"
}
] |
Python IMDbPY – Searching a movie | 18 Jan, 2022
IMDbPY is a Python package which is used to retrieve and manage the data of IMDb. IMDb is an online database of information related to films, television programs, home videos, video games, and streaming content online – including cast, production crew and personal biographies, plot summaries, trivia, fan and critical reviews, and ratings. In this article we will see how we can install this module and use it to fetch a variety of information. Installation: In order to extract data from IMDb, we must first install the Python IMDbPY library. This can be done by entering the command below in your command prompt or terminal:
pip install IMDbPY
Searching a movie: We can search for a movie with the help of search_movie().
Syntax : imdb_object.search_movie(name)
Argument : It takes a string as argument, which is the movie name.
Return : It returns a list; the items in the list have the same or a similar title to the searched movie.
Below is the implementation
Python3
# importing the module
import imdb

# creating instance of IMDb
ia = imdb.IMDb()

# movie name
name = "3 idiots"

# searching the movie
search = ia.search_movie(name)

# printing the result
for i in search:
    print(i)
Output :
3 Idiots
3 idiotas
3 Idiots
3 Idiots w/ GUNS
3 Idiots on Wheels
3 Idiots Try Candy!
3 Idiots; How Cho Copes with Slump
The Idiots
Idiots
Vidiots
Idiotest
The Idiot
Idiotsitter
Idiots
Idioten
4 Idiots
Idiots
Idiots
Los 3 Idiotas
iDiots
Another example:
Python3
# importing the module
import imdb

# creating instance of IMDb
ia = imdb.IMDb()

# movie name
name = "Tarzan the wonder car"

# searching the movie
search = ia.search_movie(name)

# printing the result
print(search)
Output :
[<Movie id:0435437[http] title:_Taarzan: The Wonder Car (2004)_>]
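A search result only carries summary data; its movieID can be passed to get_movie() to pull the full record. The snippet below is a minimal sketch along those lines (the keys 'title' and 'year' are common fields, but which keys are present depends on the record):
# importing the module
import imdb

# creating instance of IMDb
ia = imdb.IMDb()

# searching the movie and taking the first match
results = ia.search_movie("3 idiots")
first = results[0]

# fetching the complete record by its IMDb id
movie = ia.get_movie(first.movieID)

# printing a couple of common fields
print(movie.get('title'), movie.get('year'))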
sagartomar9927
Python IMDbPY-module
Python
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n18 Jan, 2022"
},
{
"code": null,
"e": 664,
"s": 28,
"text": "IMDbPY is a Python package which is used to retrieve and manage the data of the IMDb.IMDb is an online database of information related to films, television programs, home videos, video games, and streaming content online – including cast, production crew and personal biographies, plot summaries, trivia, fan and critical reviews, and ratings.In this article we will see how we can install this module and use this module to fetch variety of information.Installation In order to extract data from IMDb, we must first install the Python IMDbP library. This can be done by entering the command below in your command prompt or terminal: "
},
{
"code": null,
"e": 683,
"s": 664,
"text": "pip install IMDbPY"
},
{
"code": null,
"e": 755,
"s": 683,
"text": "Searching a movie We can search a movie with the help of search_movie "
},
{
"code": null,
"e": 949,
"s": 755,
"text": "Syntax : imdb_object.search_movie(name)Argument : It takes string as argument, which is the movie name.Return : It return list, items in list have same or similar title to the searched movie. "
},
{
"code": null,
"e": 979,
"s": 949,
"text": "Below is the implementation "
},
{
"code": null,
"e": 987,
"s": 979,
"text": "Python3"
},
{
"code": "# importing the moduleimport imdb # creating instance of IMDbia = imdb.IMDb() # movie namename = \"3 idiots\" # searching the moviesearch = ia.search_movie(name) # printing the resultfor i in search: print(i)",
"e": 1197,
"s": 987,
"text": null
},
{
"code": null,
"e": 1208,
"s": 1197,
"text": "Output : "
},
{
"code": null,
"e": 1443,
"s": 1208,
"text": "3 Idiots\n3 idiotas\n3 Idiots\n3 Idiots w/ GUNS\n3 Idiots on Wheels\n3 Idiots Try Candy!\n3 Idiots; How Cho Copes with Slump\nThe Idiots\nIdiots\nVidiots\nIdiotest\nThe Idiot\nIdiotsitter\nIdiots\nIdioten\n4 Idiots\nIdiots\nIdiots\nLos 3 Idiotas\niDiots"
},
{
"code": null,
"e": 1462,
"s": 1443,
"text": "Another example: "
},
{
"code": null,
"e": 1470,
"s": 1462,
"text": "Python3"
},
{
"code": "# importing the moduleimport imdb # creating instance of IMDbia = imdb.IMDb() # movie namename = \"Tarzan the wonder car\" # searching the moviesearch = ia.search_movie(name) # printing the resultprint(search)",
"e": 1678,
"s": 1470,
"text": null
},
{
"code": null,
"e": 1689,
"s": 1678,
"text": "Output : "
},
{
"code": null,
"e": 1754,
"s": 1689,
"text": "[Movie id:0435437[http] title:_Taarzan: The Wonder Car (2004)_>]"
},
{
"code": null,
"e": 1771,
"s": 1756,
"text": "sagartomar9927"
},
{
"code": null,
"e": 1792,
"s": 1771,
"text": "Python IMDbPY-module"
},
{
"code": null,
"e": 1799,
"s": 1792,
"text": "Python"
}
] |
ToDo App in C Language | 08 Jun, 2021
A ToDo List App is an app that is generally used to maintain our day-to-day tasks, listing everything we have to do with the most important tasks at the top of the list and the least important tasks at the bottom. It is helpful in planning our daily schedules. We can add more tasks at any time and delete a task that is completed.
Features:
In this version of the ToDo list, the user will be getting four options:
Create (add) a new task or adding a new ToDo in the ToDo List App.
See all the tasks or View all the ToDos that were added to the app.
Delete any ToDo from the list of ToDos.
Exit from the app.
Approach:
This program involves the basic concepts like variables, data types, structure, string, loop, inserting a node into the linked list at any position, deleting a node from the linked list at any position, linked list traversal, etc. The approach followed for constructing the ToDo application is as follows:
The splash screen will display the name of the application and the developer: This is done using some statements inside the printf() function (the predefined function used to print character, string, float, integer, octal and hexadecimal values) and some other predefined functions.
The second screen will present the user with a list of four options i.e. Add, Delete, View, and Exit: This is achieved using Switch-cases.
Depending upon what the user selects, the corresponding function screen will be displayed: functions for each task are created. Since C is a function (procedure) based language, we should make functions for specific jobs.
All the ToDos will be written inside the data part of the node of the Linked List. The linked list should be declared globally so that data (our ToDos) will not get lost if a function’s execution is over. And by declaring it globally all functions can use the same data that is inside the linked list.
Below are the functionality of the above program:
The Splash screen: This consists of the name of the application and the developer. The code is written inside a function named interface():
interface() function contains some printf statements and a predefined function called system().
The system() function is a part of the C/C++ standard library. It is used to pass commands that can be executed in the command processor or the terminal of the operating system, and it returns once the command has been completed.
system(“Color 4F”) will change the color of the console i.e. background (4) and the text on the console i.e. foreground (F).
system(“pause”) will pause the screen so the user will get a message: Press any key to continue . . .
main() function: A simple switch-case is used inside an infinite while-loop so that the user gets to make a choice every time; the choices are shown with the printf() function and the user's input is read using the scanf() function. According to the input, the specific case will be executed and the required function will be called.
Linked List: A linked list named todo is made using the structure concept of C; with typedef, struct ToDo is renamed to todo. This linked list consists of three parts –
The data part is made as an array of characters i.e., char buffer[101]. The ToDos can be large so declaring the size of the array as 101.
The node part contains the address of the next node i.e. *next.
An integer types variable (int count) that will take account of the number of nodes and will help in the numbering of ToDos in further defined functions.
As in a singly linked list, a start pointer (In this case- todo *start) is used to get the address of the first node, it is declared and kept NULL inside it (Initially pointing to NULL).
seetodo() function: Four concepts are coded in this function. These are as follows:
system(“cls”): to clear the screen or the console. It can be avoided if anyone wants to see all the previous operations or inputs done by the user.
Creating an object of the structure variable i.e. *temp to access the linked list structure. This temp variable will point to start initially. We can output Empty ToDo if the start is equal to NULL. This means that our list is empty.
Using a simple linked list traversal concept i.e., print the data part, node by node until the last node we can print all the ToDos. The while loop will execute till the last node, printf() inside it will print the numbering of ToDos, and puts() function will print the data which is in the form of a string of characters. fflush() is a predefined function, its purpose is to clear (or flush) the output buffer and move the buffered data to the console.
Finally using the system(“pause”) to pause the screen until the user presses any key.
createtodo() function: It repeatedly asks the user whether he/she wants to add another ToDo, using a character variable (char c); printf() asks the user for the input and scanf() reads the choice. Now, using the concept of adding a node at the end of the linked list, the nodes are added. Here two cases are possible –
If there is no node present, in this case, the start will point to NULL.
If there are some nodes present, in that case, the start will point to the first node and use a pointer-to-node ( *add) to traverse till the last node (which contains NULL in the pointer part). Here, dynamic memory allocation (using calloc() is used, this is a predefined function to allocate memory dynamically) to allocate memory at run time.
In insertion, a new node is made, the data is taken from the user using gets() (a predefined function used to take input of characters), pointer part is made NULL as we are adding at the end and the newly created node is made to point by the previous node present in the linked list by using traversal concept explained above.
adjustcount() function: This function keeps track of the numbering of the nodes of the linked list. Using the traversal concept and the start pointer, it updates the count value of each node on every call.
deletetodo() function: Using the concept of deleting a node, we delete ToDos. We ask the user which ToDo he/she wants to delete (by its number). If start is NULL then there is nothing to delete, so we print: There is no ToDo for today. A stripped-down sketch of this deletion step is given right after this list.
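The following is a stripped-down sketch of that deletion step (the helper name delete_at() is hypothetical; the todo type, the start pointer and adjustcount() are the ones defined in the full program below):
#include <stdlib.h>

// Remove the node whose count equals x (its 1-based position).
void delete_at(int x)
{
    if (start == NULL)
        return;                    // nothing to delete

    todo* del = start;
    if (del->count == x) {         // first node: move start forward
        start = start->next;
        free(del);
    }
    else {                         // find the node just before x
        while (del->next != NULL && del->next->count != x)
            del = del->next;
        if (del->next != NULL) {
            todo* victim = del->next;
            del->next = victim->next;
            free(victim);
        }
    }
    adjustcount();                 // renumber the remaining nodes
}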
Below is the program for the above approach:
C
// C program for the above approach
#include <stdio.h>
#include <stdlib.h>

// Renaming structure to avoid the
// repetitive use of struct keyword
typedef struct ToDo todo;

// Declaration of structure
struct ToDo {

    // char array as data part
    char buffer[101];

    // Pointer part to access addresses
    todo* next;

    // Count variable for counting
    // the number of nodes
    int count;
};

// Declare start pointer as null in
// the beginning
todo* start = NULL;

// Prototypes so that main() can call the
// functions defined after it
void interface();
void seetodo();
void createtodo();
void deletetodo();
void adjustcount();

// Driver Code
int main()
{
    int choice;
    interface();

    while (1) {

        // Change console color and
        // text color
        system("Color 3F");

        // Clear the console
        system("cls");

        printf("1. To see your ToDo list\n");
        printf("2. To create new ToDo\n");
        printf("3. To delete your ToDo\n");
        printf("4. Exit");
        printf("\n\n\nEnter your choice\t:\t");

        // Choice from the user
        scanf("%d", &choice);

        switch (choice) {

        // Calling functions defined
        // below as per the user input
        case 1:
            seetodo();
            break;
        case 2:
            createtodo();
            break;
        case 3:
            deletetodo();
            break;
        case 4:
            exit(1);
            break;
        default:
            printf("\nInvalid Choice :-(\n");
            system("pause");
        }
    }
    return 0;
}

// Code for Splash screen
void interface()
{
    system("color 4F");
    printf("\n\n\n\n");
    printf("\t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
           "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n");
    printf("\t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
           "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n");
    printf("\t} : } : } : } : } : } : } : } : } : "
           "WELCOME TO the TODO APP"
           " : { : { : { : { : { : { : { : { : {\n\n");
    printf("\t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
           "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n");
    printf("\t~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
           "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n");
    printf("\n\n\n\t\t\t\t\t\t\t\t\t\t"
           " @Sushant_Gaurav\n\n\n\n"
           "\n\n\n\t");

    // Pausing screen until user
    // presses any key
    system("pause");
}

// To view all the todos
void seetodo()
{
    // Clearing the console
    system("cls");

    // Pointer to the node for traversal
    todo* temp;

    // temp is made to point the
    // start of linked list
    temp = start;

    // Condition for empty linked list
    if (start == NULL)
        printf("\n\nEmpty ToDo \n\n");

    // Traverse until last node
    while (temp != NULL) {

        // Print number of the node
        printf("%d.)", temp->count);

        // Print data of the node
        puts(temp->buffer);

        // Discard any pending input
        // (fflush(stdin) is non-standard)
        fflush(stdin);

        // Going to next node
        temp = temp->next;
    }
    printf("\n\n\n");
    system("pause");
}

// Function to insert a node todo
void createtodo()
{
    // Choice from user
    char c;

    // Pointers to node
    todo *add, *temp;
    system("cls");

    // Infinite loop which will
    // break if "n" is pressed
    while (1) {

        // Adjacent string literals are
        // concatenated by the compiler
        printf("\nWant to add new ToDo ??"
               " Press 'y' for Yes and 'n'"
               " for No :-)\n\t\t");
        fflush(stdin);

        // Input from user
        scanf("%c", &c);

        if (c == 'n')
            break;
        else {

            // If start node is NULL
            if (start == NULL) {

                // Dynamically allocating
                // memory to the newly
                // created node
                add = (todo*)calloc(1, sizeof(todo));

                // Using add pointer to
                // create linked list
                start = add;
                printf("\nType it.....\n");

                // Input from user
                // (gets() is unsafe and was removed
                // in C11; fgets() is preferable)
                fflush(stdin);
                gets(add->buffer);

                // As first input so
                // count is 1
                add->count = 1;

                // As first node so
                // start's next is NULL
                start->next = NULL;
            }
            else {
                temp = (todo*)calloc(1, sizeof(todo));
                printf("\nType it.....\n");
                fflush(stdin);
                gets(temp->buffer);

                // Insertion is at last
                // so pointer part is NULL
                temp->next = NULL;

                // add is now pointing
                // newly created node
                add->next = temp;
                add = add->next;
            }

            // Using the concept of
            // insertion at the end,
            // adding a todo

            // Calling function to adjust
            // the count variable
            adjustcount();
        }
    }
}

// Function to delete the todo
void deletetodo()
{
    system("cls");

    // To get the numbering of the
    // todo to be deleted
    int x;
    todo *del, *temp;
    printf("\nEnter the ToDo's number"
           " that you want to remove.\n\t\t");

    // Checking empty condition
    if (start == NULL)
        printf("\n\nThere is no ToDo"
               " for today :-)\n\n\n");
    else {
        scanf("%d", &x);

        // del will point to start
        del = start;

        // temp will point to start's
        // next so that traversal and
        // deletion is achieved easily
        temp = start->next;

        // Running infinite loop so
        // that user can delete and
        // asked again for choice
        while (1) {

            // When the values match,
            // delete the node
            if (del->count == x) {

                // When the node to be
                // deleted is first node
                start = start->next;

                // Deallocating the memory
                // of the deleted node
                free(del);

                // Adjusting the count when
                // node is deleted
                adjustcount();
                break;
            }
            if (temp->count == x) {
                del->next = temp->next;
                free(temp);
                adjustcount();
                break;
            }
            else {
                del = temp;
                temp = temp->next;
            }
        }
    }
    system("pause");
}

// Function to adjust the numbering
// of the nodes
void adjustcount()
{
    // For traversal, using
    // a node pointer
    todo* temp;
    int i = 1;
    temp = start;

    // Running loop until last node
    // and numbering it one by one
    while (temp != NULL) {
        temp->count = i;
        i++;
        temp = temp->next;
    }
}
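One caveat about the listing above: gets() was removed from the C standard in C11 because it cannot limit the input length. A small fgets()-based helper (hypothetical name read_line()) can be used in its place; this is an editorial suggestion, not part of the original program:
#include <stdio.h>
#include <string.h>

// Read one line into buf (at most size - 1 characters)
// and strip the trailing newline; a safer stand-in for gets().
void read_line(char* buf, int size)
{
    if (fgets(buf, size, stdin) != NULL) {
        size_t len = strlen(buf);
        if (len > 0 && buf[len - 1] == '\n')
            buf[len - 1] = '\0';
    }
}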
Output:
Splash Screen:
List of Available Functions
User Presses 2
User Presses 1
Displaying ToDos
Deleting a ToDo
Displaying ToDos after Deleting a ToDo
User Presses 4
akshaysingh98088
Technical Scripter 2020
C Language
C Programs
C++
Project
Technical Scripter
CPP
| [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n08 Jun, 2021"
},
{
"code": null,
"e": 395,
"s": 54,
"text": "ToDo List App is a kind of app that generally used to maintain our day-to-day tasks or list everything that we have to do, with the most important tasks at the top of the list, and the least important tasks at the bottom. It is helpful in planning our daily schedules. We can add more tasks at any time and delete a task that is completed. "
},
{
"code": null,
"e": 405,
"s": 395,
"text": "Features:"
},
{
"code": null,
"e": 478,
"s": 405,
"text": "In this version of the ToDo list, the user will be getting four options:"
},
{
"code": null,
"e": 545,
"s": 478,
"text": "Create (add) a new task or adding a new ToDo in the ToDo List App."
},
{
"code": null,
"e": 613,
"s": 545,
"text": "See all the tasks or View all the ToDos that were added to the app."
},
{
"code": null,
"e": 653,
"s": 613,
"text": "Delete any ToDo from the list of ToDos."
},
{
"code": null,
"e": 672,
"s": 653,
"text": "Exit from the app."
},
{
"code": null,
"e": 682,
"s": 672,
"text": "Approach:"
},
{
"code": null,
"e": 988,
"s": 682,
"text": "This program involves the basic concepts like variables, data types, structure, string, loop, inserting a node into the linked list at any position, deleting a node from the linked list at any position, linked list traversal, etc. The approach followed for constructing the ToDo application is as follows:"
},
{
"code": null,
"e": 1272,
"s": 988,
"text": "The splash screen will display the name of the application and the developer: This is done using some statements inside the printf() function (the predefined function used to print the (“character, string, float, integer, octal and hexadecimal values”) and some predefined-functions."
},
{
"code": null,
"e": 1411,
"s": 1272,
"text": "The second screen will present the user with a list of four options i.e. Add, Delete, View, and Exit: This is achieved using Switch-cases."
},
{
"code": null,
"e": 1638,
"s": 1411,
"text": "Depending upon the user selects a corresponding function screen will be displayed: Functions for each task are created. Since C Language is a function or procedure based language, so we should make functions for specific jobs."
},
{
"code": null,
"e": 1940,
"s": 1638,
"text": "All the ToDos will be written inside the data part of the node of the Linked List. The linked list should be declared globally so that data (our ToDos) will not get lost if a function’s execution is over. And by declaring it globally all functions can use the same data that is inside the linked list."
},
{
"code": null,
"e": 1990,
"s": 1940,
"text": "Below are the functionality of the above program:"
},
{
"code": null,
"e": 2691,
"s": 1990,
"text": "The Splash screen: This consists of the name of the application and the developer. The code is written inside a function named interface():interface() function contains some printf statements and a predefined function called system().The system() function is a part of the C/C++ standard library. It is used to pass the commands that can be executed in the command processor or the terminal of the operating system and finally returns the command after it has been completed.system(“Color 4F”) will change the color of the console i.e. background (4) and the text on the console i.e. foreground (F).system(“pause”) will pause the screen so the user will get a message: Press any key to continue . . ."
},
{
"code": null,
"e": 2787,
"s": 2691,
"text": "interface() function contains some printf statements and a predefined function called system()."
},
{
"code": null,
"e": 3254,
"s": 2787,
"text": "The system() function is a part of the C/C++ standard library. It is used to pass the commands that can be executed in the command processor or the terminal of the operating system and finally returns the command after it has been completed.system(“Color 4F”) will change the color of the console i.e. background (4) and the text on the console i.e. foreground (F).system(“pause”) will pause the screen so the user will get a message: Press any key to continue . . ."
},
{
"code": null,
"e": 3379,
"s": 3254,
"text": "system(“Color 4F”) will change the color of the console i.e. background (4) and the text on the console i.e. foreground (F)."
},
{
"code": null,
"e": 3481,
"s": 3379,
"text": "system(“pause”) will pause the screen so the user will get a message: Press any key to continue . . ."
},
{
"code": null,
"e": 3815,
"s": 3481,
"text": "main() function: Use a simple switch case inside an infinite while-loop so that users will get to make choices every time and provide choices with the help of the printf() function and taking user’s input using the scanf() function. According to the input, the specific case will be executed and the required function will be called."
},
{
"code": null,
"e": 4338,
"s": 3815,
"text": "Linked List: Linked list named Todo is made using the structure concept of C and using the typedef we are renaming it to Todo. This Linked list consists of three parts –The data part is made as an array of characters i.e., char buffer[101]. The ToDos can be large so declaring the size of the array as 101.The node part contains the address of the next node i.e. *next.An integer types variable (int count) that will take account of the number of nodes and will help in the numbering of ToDos in further defined functions."
},
{
"code": null,
"e": 4476,
"s": 4338,
"text": "The data part is made as an array of characters i.e., char buffer[101]. The ToDos can be large so declaring the size of the array as 101."
},
{
"code": null,
"e": 4540,
"s": 4476,
"text": "The node part contains the address of the next node i.e. *next."
},
{
"code": null,
"e": 4694,
"s": 4540,
"text": "An integer types variable (int count) that will take account of the number of nodes and will help in the numbering of ToDos in further defined functions."
},
{
"code": null,
"e": 4881,
"s": 4694,
"text": "As in a singly linked list, a start pointer (In this case- todo *start) is used to get the address of the first node, it is declared and kept NULL inside it (Initially pointing to NULL)."
},
{
"code": null,
"e": 5883,
"s": 4881,
"text": "seetodo() function: Four concepts are coded in this function. These are as follows:system(“cls”): to clear the screen or the console. It can be avoided if anyone wants to see all the previous operations or inputs done by the user.Creating an object of the structure variable i.e. *temp to access the linked list structure. This temp variable will point to start initially. We can output Empty ToDo if the start is equal to NULL. This means that our list is empty.Using a simple linked list traversal concept i.e., print the data part, node by node until the last node we can print all the ToDos. The while loop will execute till the last node, printf() inside it will print the numbering of ToDos, and puts() function will print the data which is in the form of a string of characters. fflush() is a predefined function, its purpose is to clear (or flush) the output buffer and move the buffered data to the console.Finally using the system(“pause”) to pause the screen until the user presses any key."
},
{
"code": null,
"e": 6031,
"s": 5883,
"text": "system(“cls”): to clear the screen or the console. It can be avoided if anyone wants to see all the previous operations or inputs done by the user."
},
{
"code": null,
"e": 6265,
"s": 6031,
"text": "Creating an object of the structure variable i.e. *temp to access the linked list structure. This temp variable will point to start initially. We can output Empty ToDo if the start is equal to NULL. This means that our list is empty."
},
{
"code": null,
"e": 6719,
"s": 6265,
"text": "Using a simple linked list traversal concept i.e., print the data part, node by node until the last node we can print all the ToDos. The while loop will execute till the last node, printf() inside it will print the numbering of ToDos, and puts() function will print the data which is in the form of a string of characters. fflush() is a predefined function, its purpose is to clear (or flush) the output buffer and move the buffered data to the console."
},
{
"code": null,
"e": 6805,
"s": 6719,
"text": "Finally using the system(“pause”) to pause the screen until the user presses any key."
},
{
"code": null,
"e": 7581,
"s": 6805,
"text": "createtodo() function: It contains a switch-case to ask the user if he/she wants to add ToDo or not using a character variable (char c;). Using printf() for asking the user about another input and scanf() to input the choice of the user.Now using the concept of adding a node at the end of the linked list the nodes are added. Here two cases can be possible –If there is no node present, in this case, the start will point to NULL.If there are some nodes present, in that case, the start will point to the first node and use a pointer-to-node ( *add) to traverse till the last node (which contains NULL in the pointer part). Here, dynamic memory allocation (using calloc() is used, this is a predefined function to allocate memory dynamically) to allocate memory at run time."
},
{
"code": null,
"e": 7654,
"s": 7581,
"text": "If there is no node present, in this case, the start will point to NULL."
},
{
"code": null,
"e": 7999,
"s": 7654,
"text": "If there are some nodes present, in that case, the start will point to the first node and use a pointer-to-node ( *add) to traverse till the last node (which contains NULL in the pointer part). Here, dynamic memory allocation (using calloc() is used, this is a predefined function to allocate memory dynamically) to allocate memory at run time."
},
{
"code": null,
"e": 8326,
"s": 7999,
"text": "In insertion, a new node is made, the data is taken from the user using gets() (a predefined function used to take input of characters), pointer part is made NULL as we are adding at the end and the newly created node is made to point by the previous node present in the linked list by using traversal concept explained above."
},
{
"code": null,
"e": 8556,
"s": 8326,
"text": "adjustcount() function: This function will take account of the numbering of the nodes of the linked list. Using the traversal concept and the help of start pointer it will update the value of the count of each node at every call."
},
{
"code": null,
"e": 8837,
"s": 8556,
"text": "deletetodo() function: Using the concept of deleting a node, we are deleting ToDos. We are asking the user the node that he/she wants to delete (by asking the numbering of the node). If the start is NULL then we cannot delete anything, so we can print: There is no TODO for today."
},
{
"code": null,
"e": 8882,
"s": 8837,
"text": "Below is the program for the above approach:"
},
{
"code": null,
"e": 8884,
"s": 8882,
"text": "C"
},
{
"code": "// C program for the above approach#include <stdio.h>#include <stdlib.h> // Renaming structure to avoid the// repetitive use of struct keywordtypedef struct ToDo todo; // Declaration of structurestruct ToDo { // char array as data part char buffer[101]; // Pointer part to access addresses todo* next; // Count variable for counting // the number of nodes int count;}; // Declare start pointer as null in// the beginningtodo* start = NULL; // Driver Codeint main(){ int choice; interface(); while (1) { // Change console color and // text color system(\"Color 3F\"); // Clear the console system(\"cls\"); printf(\"1. To see your ToDo list\\n\"); printf(\"2. To create new ToDo\\n\"); printf(\"3. To delete your ToDo\\n\"); printf(\"4. Exit\"); printf(\"\\n\\n\\nEnter your choice\\t:\\t\"); // Choice from the user scanf(\"%d\", &choice); switch (choice) { // Calling functions defined // below as per the user input case 1: seetodo(); break; case 2: createtodo(); break; case 3: deletetodo(); break; case 4: exit(1); break; default: printf(\"\\nInvalid Choice :-(\\n\"); system(\"pause\"); } } return 0;} // Code for Splash screenvoid interface(){ system(\"color 4F\"); printf(\"\\n\\n\\n\\n\"); printf(\"\\t~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~\\n\"); printf(\"\\t~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~\\n\\n\"); printf(\"\\t} : } : } : } : } : } \" \": } : } : } : \" \"WELCOME TO the TODO APP \" \" : { : { : { : { : { \" \": { : { : { : {\\n\\n\"); printf(\"\\t~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~\\n\"); printf(\"\\t~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~~~~~~~~~~~~~~~\" \"~~~~~~~~~~\\n\"); printf(\"\\n\\n\\n\\t\\t\\t\\t\\t\\t\\t\\\" \"t\\t\\t\\t \" \"@Sushant_Gaurav\\n\\n\\n\\n\" \"\\n\\n\\n\\t\"); // Pausing screen until user // presses any key system(\"pause\");} // To view all the todosvoid seetodo(){ // Clearing the console system(\"cls\"); // Pointer to the node for traversal todo* temp; // temp is made to point the // start of linked list temp = start; // Condition for empty linked list if (start == NULL) printf(\"\\n\\nEmpty ToDo \\n\\n\"); // Traverse until last node while (temp != NULL) { // Print number of the node printf(\"%d.)\", temp->count); // Print data of the node puts(temp->buffer); // Clear output console fflush(stdin); // Going to next node temp = temp->next; } printf(\"\\n\\n\\n\"); system(\"pause\");} // Function to insert a node todovoid createtodo(){ // Choose choice from user char c; // Pointers to node todo *add, *temp; system(\"cls\"); // Infinite loop which will // break if \"n\" is pressed while (1) { printf(\"\\nWant to add new ToDo ??\" + \" Press 'y' for Yes and 'n' \" + \" for No :-)\\n\\t\\t\"); fflush(stdin); // Input from user scanf(\"%c\", &c); if (c == 'n') break; else { // If start node is NULL if (start == NULL) { // Dynamically allocating // memory to the newly // created node add = (todo*)calloc(1, sizeof(todo)); // Using add pointer to // create linked list start = add; printf(\"\\nType it.....\\n\"); // Input from user fflush(stdin); gets(add->buffer); // As first input so // count is 1 add->count = 1; // As first node so // start's next is NULL start->next = NULL; } else { temp = (todo*)calloc(1, sizeof(todo)); printf(\"\\nType it.....\\n\"); 
fflush(stdin); gets(temp->buffer); // Insertion is at last // so pointer part is NULL temp->next = NULL; // add is now pointing // newly created node add->next = temp; add = add->next; } // Using the concept of // insertion at the end, // adding a todo // Calling function to adjust // the count variable adjustcount(); } }} // Function to delete the todovoid deletetodo(){ system(\"cls\"); // To get the numbering of the // todo to be deleted int x; todo *del, *temp; printf(\"\\nEnter the ToDo's number\" + \" that you want to remove.\\n\\t\\t\"); // Checking empty condition if (start == NULL) printf(\"\\n\\nThere is no ToDo\" + \" for today :-)\\n\\n\\n\"); else { scanf(\"%d\", &x); // del will point to start del = start; // temp will point to start's // next so that traversal and // deletion is achieved easily temp = start->next; // Running infinite loop so // that user can delete and // asked again for choice while (1) { // When the values matches, // delete the node if (del->count == x) { // When the node to be // deleted is first node start = start->next; // Deallocating the memory // of the deleted node free(del); // Adjusting the count when // node is deleted adjustcount(); break; } if (temp->count == x) { del->next = temp->next; free(temp); adjustcount(); break; } else { del = temp; temp = temp->next; } } } system(\"pause\");} // Function to adjust the numbering// of the nodesvoid adjustcount(){ // For traversal, using // a node pointer todo* temp; int i = 1; temp = start; // Running loop until last node // and numbering it one by one while (temp != NULL) { temp->count = i; i++; temp = temp->next; }}",
"e": 15756,
"s": 8884,
"text": null
},
{
"code": null,
"e": 15768,
"s": 15760,
"text": "Output:"
},
{
"code": null,
"e": 15785,
"s": 15770,
"text": "Splash Screen:"
},
{
"code": null,
"e": 15813,
"s": 15785,
"text": "List of Available Functions"
},
{
"code": null,
"e": 15828,
"s": 15813,
"text": "User Presses 2"
},
{
"code": null,
"e": 15843,
"s": 15828,
"text": "User Presses 1"
},
{
"code": null,
"e": 15860,
"s": 15843,
"text": "Displaying ToDos"
},
{
"code": null,
"e": 15876,
"s": 15860,
"text": "Deleting a ToDo"
},
{
"code": null,
"e": 15915,
"s": 15876,
"text": "Displaying ToDos after Deleting a ToDo"
},
{
"code": null,
"e": 15930,
"s": 15915,
"text": "User Presses 4"
},
{
"code": null,
"e": 15949,
"s": 15932,
"text": "akshaysingh98088"
},
{
"code": null,
"e": 15973,
"s": 15949,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 15984,
"s": 15973,
"text": "C Language"
},
{
"code": null,
"e": 15995,
"s": 15984,
"text": "C Programs"
},
{
"code": null,
"e": 15999,
"s": 15995,
"text": "C++"
},
{
"code": null,
"e": 16007,
"s": 15999,
"text": "Project"
},
{
"code": null,
"e": 16026,
"s": 16007,
"text": "Technical Scripter"
},
{
"code": null,
"e": 16030,
"s": 16026,
"text": "CPP"
},
{
"code": null,
"e": 16128,
"s": 16030,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 16176,
"s": 16128,
"text": "Unordered Sets in C++ Standard Template Library"
},
{
"code": null,
"e": 16221,
"s": 16176,
"text": "What is the purpose of a function prototype?"
},
{
"code": null,
"e": 16242,
"s": 16221,
"text": "Operators in C / C++"
},
{
"code": null,
"e": 16268,
"s": 16242,
"text": "Exception Handling in C++"
},
{
"code": null,
"e": 16306,
"s": 16268,
"text": "TCP Server-Client implementation in C"
},
{
"code": null,
"e": 16319,
"s": 16306,
"text": "Strings in C"
},
{
"code": null,
"e": 16360,
"s": 16319,
"text": "Arrow operator -> in C/C++ with Examples"
},
{
"code": null,
"e": 16389,
"s": 16360,
"text": "Basics of File Handling in C"
},
{
"code": null,
"e": 16427,
"s": 16389,
"text": "UDP Server-Client implementation in C"
}
] |
Divide large number represented as string | 13 Jul, 2022
Given a large number (represented as a string) which has to be divided by another number (represented as an int data type). The large number can be very large and may not even fit in a long long in C++. The task is to find the division of these numbers.
Examples:
Input : number = 1260257
divisor = 37
Output : 34061
(see the worked trace below)
Input : number = 12313413534672234
divisor = 754
Output : 16330787181262
Input : number = 1248163264128256512
divisor = 125
Output : 9985306113026052
We have already discussed Multiply Large Numbers represented as Strings. We use basic school mathematics as shown in the example below.
As the dividend and the result can be very large, we store them in strings. We first take the smallest prefix of the number that is greater than or equal to the divisor. After this, we take the remaining digits one at a time and append each quotient digit to the result string.
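For the first example, 1260257 divided by 37, the digits are consumed as follows; temp is the running value that gets divided at each step and ans is the result string, exactly as in the implementations below:
prefix 126   -> 126 / 37 = 3, remainder 15      ans = "3"
bring down 0 -> 150 / 37 = 4, remainder 2       ans = "34"
bring down 2 ->  22 / 37 = 0, remainder 22      ans = "340"
bring down 5 -> 225 / 37 = 6, remainder 3       ans = "3406"
bring down 7 ->  37 / 37 = 1, remainder 0       ans = "34061"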
Implementation:
C++
Java
Python3
C#
PHP
Javascript
// C++ program to implement division with large// number#include <bits/stdc++.h>using namespace std; // A function to perform division of large numbersstring longDivision(string number, int divisor){ // As result can be very large store it in string string ans; // Find prefix of number that is larger // than divisor. int idx = 0; int temp = number[idx] - '0'; while (temp < divisor) temp = temp * 10 + (number[++idx] - '0'); // Repeatedly divide divisor with temp. After // every division, update temp to include one // more digit. while (number.size() > idx) { // Store result in answer i.e. temp / divisor ans += (temp / divisor) + '0'; // Take next digit of number temp = (temp % divisor) * 10 + number[++idx] - '0'; } // If divisor is greater than number if (ans.length() == 0) return "0"; // else return ans return ans;} // Driver program to test longDivison()int main(){ string number = "1248163264128256512"; int divisor = 125; cout << longDivision(number, divisor); return 0;}
// Java program to implement division// with large numberclass GFG { public static String longDivision( String number, int divisor) { // As result can be very // large store it in string // but since we need to modify // it very often so using // string builder StringBuilder result = new StringBuilder(); // We will be iterating // the dividend so converting // it to char array char[] dividend = number.toCharArray(); // Initially the carry // would be zero int carry = 0; // Iterate the dividend for ( int i = 0; i < dividend.length; i++) { // Prepare the number to // be divided int x = carry * 10 + Character.getNumericValue( dividend[i]); // Append the result with // partial quotient result.append(x / divisor); // Prepare the carry for // the next Iteration carry = x % divisor; } // Remove any leading zeros for ( int i = 0; i < result.length(); i++) { if ( result.charAt(i) != '0') { // Return the result return result.substring(i); } } // Return empty string // if number is empty return ""; } // Driver code public static void main( String[] args) { String number = "1248163264128256512"; int divisor = 125; System.out.println( longDivision( number, divisor)); }} // This code is contributed by Saurabh321Gupta.
# Python3 program to implement division# with large numberimport math # A function to perform division of# large numbersdef longDivision(number, divisor): # As result can be very large # store it in string ans = ""; # Find prefix of number that # is larger than divisor. idx = 0; temp = ord(number[idx]) - ord('0'); while (temp < divisor): temp = (temp * 10 + ord(number[idx + 1]) - ord('0')); idx += 1; idx += 1; # Repeatedly divide divisor with temp. # After every division, update temp to # include one more digit. while ((len(number)) > idx): # Store result in answer i.e. temp / divisor ans += chr(math.floor(temp // divisor) + ord('0')); # Take next digit of number temp = ((temp % divisor) * 10 + ord(number[idx]) - ord('0')); idx += 1; ans += chr(math.floor(temp // divisor) + ord('0')); # If divisor is greater than number if (len(ans) == 0): return "0"; # else return ans return ans; # Driver Codenumber = "1248163264128256512";divisor = 125;print(longDivision(number, divisor)); # This code is contributed by mits
// C# program to implement division// with large numberusing System; class GFG { // A function to perform division of large numbers static string longDivision(string number, int divisor) { // As result can be very large store it in string string ans = ""; // Find prefix of number that is larger // than divisor. int idx = 0; int temp = (int)(number[idx] - '0'); while (temp < divisor) { temp = temp * 10 + (int)(number[idx + 1] - '0'); idx++; } ++idx; // Repeatedly divide divisor with temp. After // every division, update temp to include one // more digit. while (number.Length > idx) { // Store result in answer i.e. temp / divisor ans += (char)(temp / divisor + '0'); // Take next digit of number temp = (temp % divisor) * 10 + (int)(number[idx] - '0'); idx++; } ans += (char)(temp / divisor + '0'); // If divisor is greater than number if (ans.Length == 0) return "0"; // else return ans return ans; } // Driver code static void Main() { string number = "1248163264128256512"; int divisor = 125; Console.WriteLine(longDivision(number, divisor)); }} // This code is contributed by mits
<?php// PHP program to implement division// with large number // A function to perform division of// large numbersfunction longDivision($number, $divisor){ // As result can be very large // store it in string $ans = ""; // Find prefix of number that is // larger than divisor. $idx = 0; $temp = ord($number[$idx]) - 48; while ($temp < $divisor) $temp = $temp * 10 + ord($number[++$idx]) - 48; // Repeatedly divide divisor with temp. // After every division, update temp to // include one more digit. ++$idx; while (strlen($number) > $idx) { // Store result in answer i.e. temp / divisor $ans .= chr((int)($temp / $divisor) + 48); // Take next digit of number $temp = ($temp % $divisor) * 10 + ord($number[$idx]) - 48; ++$idx; } $ans .= chr((int)($temp / $divisor) + 48); // If divisor is greater than number if (strlen($ans) == 0) return "0"; // else return ans return $ans;} // Driver Code$number = "1248163264128256512";$divisor = 125;print(longDivision($number, $divisor)); // This code is contributed by mits?>
<script> // Javascript program to implement division// with large number function longDivision(number,divisor) { // As result can be very // large store it in string // but since we need to modify // it very often so using // string builder let ans=""; // We will be iterating // the dividend so converting // it to char array // Initially the carry // would be zero let idx = 0; let temp=number[idx]-'0'; while (temp < divisor) { temp = (temp * 10 + (number[idx + 1]).charCodeAt(0) - ('0').charCodeAt(0)); idx += 1; } idx += 1; while(number.length>idx) { // Store result in answer i.e. temp / divisor ans += String.fromCharCode (Math.floor(temp / divisor) + ('0').charCodeAt(0)); // Take next digit of number temp = ((temp % divisor) * 10 + (number[idx]).charCodeAt(0) - ('0').charCodeAt(0)); idx += 1; } ans += String.fromCharCode (Math.floor(temp / divisor) + ('0').charCodeAt(0)); //If divisor is greater than number if(ans.length==0) return "0"; //else return ans return ans; } // Driver Code let number = "1248163264128256512"; let divisor = 125; document.write(longDivision( number, divisor)); // This code is contributed // by avanitrachhadiya2155 </script>
9985306113026052
Time Complexity: O(n), where n denotes the length of the number string; each digit is processed a constant number of times. Auxiliary Space: O(n).
This article is contributed by nuclode.
Mithun Kumar
ukasp
saurabh321gupta
avanitrachhadiya2155
anandkumarshivam2266
hardikkoriintern
large-numbers
Samsung
Mathematical
Strings
Samsung
Strings
Mathematical
| [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n13 Jul, 2022"
},
{
"code": null,
"e": 302,
"s": 54,
"text": "Given a large number (represented as a string) which has to divide by another number (represented as int data type). The large number can be very large which does not even fit in long long in C++. The task is to find the division of these numbers."
},
{
"code": null,
"e": 313,
"s": 302,
"text": "Examples: "
},
{
"code": null,
"e": 565,
"s": 313,
"text": "Input : number = 1260257\n divisor = 37\nOutput : 34061\n(See below diagram)\n\nInput : number = 12313413534672234\n divisor = 754\nOutput : 16330787181262\n\nInput : number = 1248163264128256512\n divisor = 125\nOutput : 9985306113026052"
},
{
"code": null,
"e": 696,
"s": 565,
"text": "We have already discussed Multiply Large Numbers represented as Strings.We use basic school mathematics as shown in below example."
},
{
"code": null,
"e": 874,
"s": 696,
"text": "As the dividend and result can be very large we store them in string. We first take digits which are divisible by number. After this take each digit and store result in string. "
},
{
"code": null,
"e": 890,
"s": 874,
"text": "Implementation:"
},
{
"code": null,
"e": 894,
"s": 890,
"text": "C++"
},
{
"code": null,
"e": 899,
"s": 894,
"text": "Java"
},
{
"code": null,
"e": 907,
"s": 899,
"text": "Python3"
},
{
"code": null,
"e": 910,
"s": 907,
"text": "C#"
},
{
"code": null,
"e": 914,
"s": 910,
"text": "PHP"
},
{
"code": null,
"e": 925,
"s": 914,
"text": "Javascript"
},
{
"code": "// C++ program to implement division with large// number#include <bits/stdc++.h>using namespace std; // A function to perform division of large numbersstring longDivision(string number, int divisor){ // As result can be very large store it in string string ans; // Find prefix of number that is larger // than divisor. int idx = 0; int temp = number[idx] - '0'; while (temp < divisor) temp = temp * 10 + (number[++idx] - '0'); // Repeatedly divide divisor with temp. After // every division, update temp to include one // more digit. while (number.size() > idx) { // Store result in answer i.e. temp / divisor ans += (temp / divisor) + '0'; // Take next digit of number temp = (temp % divisor) * 10 + number[++idx] - '0'; } // If divisor is greater than number if (ans.length() == 0) return \"0\"; // else return ans return ans;} // Driver program to test longDivison()int main(){ string number = \"1248163264128256512\"; int divisor = 125; cout << longDivision(number, divisor); return 0;}",
"e": 2019,
"s": 925,
"text": null
},
{
"code": "// Java program to implement division// with large numberclass GFG { public static String longDivision( String number, int divisor) { // As result can be very // large store it in string // but since we need to modify // it very often so using // string builder StringBuilder result = new StringBuilder(); // We will be iterating // the dividend so converting // it to char array char[] dividend = number.toCharArray(); // Initially the carry // would be zero int carry = 0; // Iterate the dividend for ( int i = 0; i < dividend.length; i++) { // Prepare the number to // be divided int x = carry * 10 + Character.getNumericValue( dividend[i]); // Append the result with // partial quotient result.append(x / divisor); // Prepare the carry for // the next Iteration carry = x % divisor; } // Remove any leading zeros for ( int i = 0; i < result.length(); i++) { if ( result.charAt(i) != '0') { // Return the result return result.substring(i); } } // Return empty string // if number is empty return \"\"; } // Driver code public static void main( String[] args) { String number = \"1248163264128256512\"; int divisor = 125; System.out.println( longDivision( number, divisor)); }} // This code is contributed by Saurabh321Gupta.",
"e": 3792,
"s": 2019,
"text": null
},
{
"code": "# Python3 program to implement division# with large numberimport math # A function to perform division of# large numbersdef longDivision(number, divisor): # As result can be very large # store it in string ans = \"\"; # Find prefix of number that # is larger than divisor. idx = 0; temp = ord(number[idx]) - ord('0'); while (temp < divisor): temp = (temp * 10 + ord(number[idx + 1]) - ord('0')); idx += 1; idx += 1; # Repeatedly divide divisor with temp. # After every division, update temp to # include one more digit. while ((len(number)) > idx): # Store result in answer i.e. temp / divisor ans += chr(math.floor(temp // divisor) + ord('0')); # Take next digit of number temp = ((temp % divisor) * 10 + ord(number[idx]) - ord('0')); idx += 1; ans += chr(math.floor(temp // divisor) + ord('0')); # If divisor is greater than number if (len(ans) == 0): return \"0\"; # else return ans return ans; # Driver Codenumber = \"1248163264128256512\";divisor = 125;print(longDivision(number, divisor)); # This code is contributed by mits",
"e": 5038,
"s": 3792,
"text": null
},
{
"code": "// C# program to implement division// with large numberusing System; class GFG { // A function to perform division of large numbers static string longDivision(string number, int divisor) { // As result can be very large store it in string string ans = \"\"; // Find prefix of number that is larger // than divisor. int idx = 0; int temp = (int)(number[idx] - '0'); while (temp < divisor) { temp = temp * 10 + (int)(number[idx + 1] - '0'); idx++; } ++idx; // Repeatedly divide divisor with temp. After // every division, update temp to include one // more digit. while (number.Length > idx) { // Store result in answer i.e. temp / divisor ans += (char)(temp / divisor + '0'); // Take next digit of number temp = (temp % divisor) * 10 + (int)(number[idx] - '0'); idx++; } ans += (char)(temp / divisor + '0'); // If divisor is greater than number if (ans.Length == 0) return \"0\"; // else return ans return ans; } // Driver code static void Main() { string number = \"1248163264128256512\"; int divisor = 125; Console.WriteLine(longDivision(number, divisor)); }} // This code is contributed by mits",
"e": 6402,
"s": 5038,
"text": null
},
{
"code": "<?php// PHP program to implement division// with large number // A function to perform division of// large numbersfunction longDivision($number, $divisor){ // As result can be very large // store it in string $ans = \"\"; // Find prefix of number that is // larger than divisor. $idx = 0; $temp = ord($number[$idx]) - 48; while ($temp < $divisor) $temp = $temp * 10 + ord($number[++$idx]) - 48; // Repeatedly divide divisor with temp. // After every division, update temp to // include one more digit. ++$idx; while (strlen($number) > $idx) { // Store result in answer i.e. temp / divisor $ans .= chr((int)($temp / $divisor) + 48); // Take next digit of number $temp = ($temp % $divisor) * 10 + ord($number[$idx]) - 48; ++$idx; } $ans .= chr((int)($temp / $divisor) + 48); // If divisor is greater than number if (strlen($ans) == 0) return \"0\"; // else return ans return $ans;} // Driver Code$number = \"1248163264128256512\";$divisor = 125;print(longDivision($number, $divisor)); // This code is contributed by mits?>",
"e": 7578,
"s": 6402,
"text": null
},
{
"code": "<script> // Javascript program to implement division// with large number function longDivision(number,divisor) { // As result can be very // large store it in string // but since we need to modify // it very often so using // string builder let ans=\"\"; // We will be iterating // the dividend so converting // it to char array // Initially the carry // would be zero let idx = 0; let temp=number[idx]-'0'; while (temp < divisor) { temp = (temp * 10 + (number[idx + 1]).charCodeAt(0) - ('0').charCodeAt(0)); idx += 1; } idx += 1; while(number.length>idx) { // Store result in answer i.e. temp / divisor ans += String.fromCharCode (Math.floor(temp / divisor) + ('0').charCodeAt(0)); // Take next digit of number temp = ((temp % divisor) * 10 + (number[idx]).charCodeAt(0) - ('0').charCodeAt(0)); idx += 1; } ans += String.fromCharCode (Math.floor(temp / divisor) + ('0').charCodeAt(0)); //If divisor is greater than number if(ans.length==0) return \"0\"; //else return ans return ans; } // Driver Code let number = \"1248163264128256512\"; let divisor = 125; document.write(longDivision( number, divisor)); // This code is contributed // by avanitrachhadiya2155 </script>",
"e": 9212,
"s": 7578,
"text": null
},
{
"code": null,
"e": 9229,
"s": 9212,
"text": "9985306113026052"
},
{
"code": null,
"e": 9311,
"s": 9229,
"text": "Time Complexity: O(n2!), where n denoting length of string.Auxiliary Space: O(n)."
},
{
"code": null,
"e": 9608,
"s": 9311,
"text": "This article is contributed by nuclode. If you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to [email protected]. See your article appearing on the GeeksforGeeks main page and help other Geeks. "
},
{
"code": null,
"e": 9621,
"s": 9608,
"text": "Mithun Kumar"
},
{
"code": null,
"e": 9627,
"s": 9621,
"text": "ukasp"
},
{
"code": null,
"e": 9643,
"s": 9627,
"text": "saurabh321gupta"
},
{
"code": null,
"e": 9664,
"s": 9643,
"text": "avanitrachhadiya2155"
},
{
"code": null,
"e": 9685,
"s": 9664,
"text": "anandkumarshivam2266"
},
{
"code": null,
"e": 9702,
"s": 9685,
"text": "hardikkoriintern"
},
{
"code": null,
"e": 9716,
"s": 9702,
"text": "large-numbers"
},
{
"code": null,
"e": 9724,
"s": 9716,
"text": "Samsung"
},
{
"code": null,
"e": 9737,
"s": 9724,
"text": "Mathematical"
},
{
"code": null,
"e": 9745,
"s": 9737,
"text": "Strings"
},
{
"code": null,
"e": 9753,
"s": 9745,
"text": "Samsung"
},
{
"code": null,
"e": 9761,
"s": 9753,
"text": "Strings"
},
{
"code": null,
"e": 9774,
"s": 9761,
"text": "Mathematical"
}
] |
Print all even numbers from 1 to n in PL/SQL | 29 Jun, 2018
Prerequisite- PL/SQL Introduction
In PL/SQL, code is arranged within blocks. A block groups together related declarations and statements. In the declare part we declare variables, and between the begin and end parts we perform the operations.
Given a number N, the task is to display all the even numbers and their sum from 1 to N.
Examples:
Input: N = 3
Output: 2
Sum = 2
Input: N = 5
Output: 2, 4
Sum = 6
The approach is to initialize a number num with 2 and keep incrementing it by 2 as long as num <= N. Below is its implementation:
-- Display all even numbers from 1 to n
DECLARE
    -- Declare variables
    num  NUMBER(3) := 2;
    sum1 NUMBER(4) := 0;
BEGIN
    -- Here N is taken as 5
    WHILE num <= 5 LOOP

        -- Display even number
        dbms_output.Put_line(num);

        -- Sum of even numbers
        sum1 := sum1 + num;

        -- Next even number
        num := num + 2;

    -- End loop
    END LOOP;

    -- Display the sum of even numbers
    dbms_output.Put_line('Sum of even numbers is ' || sum1);
END;
Output:
2
4
Sum of even numbers is 6
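The block above fixes the limit at 5. A small sketch that keeps N in its own variable, so only the declaration changes for a different limit (the value 10 here is just an illustration):
-- Display all even numbers from 1 to N and their sum
DECLARE
    n    NUMBER(4) := 10;  -- the limit N (illustrative value)
    num  NUMBER(4) := 2;
    sum1 NUMBER(5) := 0;
BEGIN
    WHILE num <= n LOOP
        dbms_output.Put_line(num);
        sum1 := sum1 + num;
        num  := num + 2;
    END LOOP;

    dbms_output.Put_line('Sum of even numbers is ' || sum1);
END;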
SQL-PL/SQL
SQL
SQL
| [
{
"code": null,
"e": 28,
"s": 0,
"text": "\n29 Jun, 2018"
},
{
"code": null,
"e": 62,
"s": 28,
"text": "Prerequisite- PL/SQL Introduction"
},
{
"code": null,
"e": 277,
"s": 62,
"text": "In PL/SQL code groups of commands are arranged within a block. It groups together related declarations or statements. In declare part, we declare variables and between begin and end part, we perform the operations."
},
{
"code": null,
"e": 366,
"s": 277,
"text": "Given a number N, the task is to display all the even numbers and their sum from 1 to N."
},
{
"code": null,
"e": 376,
"s": 366,
"text": "Examples:"
},
{
"code": null,
"e": 443,
"s": 376,
"text": "Input: N = 3\nOutput: 2\nSum = 2\n\nInput: N = 5\nOutput: 2, 4\nSum = 6\n"
},
{
"code": null,
"e": 566,
"s": 443,
"text": "Approach is to initialize a number num with 2 and keep incrementing it by 2 until num is <= N.Below is its implementation:"
},
{
"code": "-- Display all even number from 1 to nDECLARE -- Declare variable num num NUMBER(3) := 2; sum1 NUMBER(4) := 0;BEGIN WHILE num <= 5 LOOP -- Display even number dbms_output.Put_line(num); -- Sum of even numbers sum1 := sum1 + num; -- Next even number num := num + 2; -- End loop END LOOP; -- Display even number dbms_output.Put_line('Sum of even numbers is ' || sum1);END;",
"e": 1028,
"s": 566,
"text": null
},
{
"code": null,
"e": 1036,
"s": 1028,
"text": "Output:"
},
{
"code": null,
"e": 1066,
"s": 1036,
"text": "2\n4\nSum of even numbers is 6\n"
},
{
"code": null,
"e": 1077,
"s": 1066,
"text": "SQL-PL/SQL"
},
{
"code": null,
"e": 1081,
"s": 1077,
"text": "SQL"
},
{
"code": null,
"e": 1085,
"s": 1081,
"text": "SQL"
},
{
"code": null,
"e": 1183,
"s": 1085,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 1194,
"s": 1183,
"text": "CTE in SQL"
},
{
"code": null,
"e": 1260,
"s": 1194,
"text": "How to Update Multiple Columns in Single Update Statement in SQL?"
},
{
"code": null,
"e": 1284,
"s": 1260,
"text": "SQL Interview Questions"
},
{
"code": null,
"e": 1296,
"s": 1284,
"text": "SQL | Views"
},
{
"code": null,
"e": 1341,
"s": 1296,
"text": "Difference between DELETE, DROP and TRUNCATE"
},
{
"code": null,
"e": 1373,
"s": 1341,
"text": "MySQL | Group_CONCAT() Function"
},
{
"code": null,
"e": 1412,
"s": 1373,
"text": "Difference between DDL and DML in DBMS"
},
{
"code": null,
"e": 1436,
"s": 1412,
"text": "Window functions in SQL"
},
{
"code": null,
"e": 1462,
"s": 1436,
"text": "SQL Correlated Subqueries"
}
] |
TCS Placement Paper | MCQ 7 | 21 May, 2019
This is a TCS model placement paper for aptitude preparation. This placement paper will cover aptitude questions that are asked in TCS recruitment drives and also strictly follows the pattern of questions asked in TCS interviews. It is recommended to solve each one of the following questions to increase your chances of clearing the TCS interview.
The sticks of the same length are used to form a triangle as shown below. If 87 such sticks are used then how many triangles can be formed? a) 42 b) 43 c) 44 d) 45
Answer: b) 43
Solution: The first triangle can be formed using 3 sticks, so we have 87 – 3 = 84 sticks left. Every next triangle can be formed using only 2 sticks, so we get 84/2 = 42 more triangles, i.e. 43 triangles in all.
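As a quick cross-check (added for this write-up, not part of the original paper), the count can be reproduced with a tiny Python sketch:

sticks = 87
# the first triangle uses 3 sticks, every following triangle adds only 2 more
triangles = 1 + (sticks - 3) // 2
print(triangles)   # 43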
Find the next number in the series of 3, 12, 7, 26, 15, ? a) 54 b) 55 c) 64 d) 74
Answer: a) 54
Solution: 3 * 2 + 1 = 7, 12 * 2 + 2 = 26, 7 * 2 + 1 = 15, 26 * 2 + 2 = 54.
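A short Python sketch (again an addition for illustration, not from the paper) reproduces the alternating rule used in the solution, where each term is twice the term two places before it, plus 1 or plus 2 in turn:

terms = [3, 12]
while len(terms) < 6:
    bonus = 1 if len(terms) % 2 == 0 else 2   # +1 for odd positions, +2 for even positions
    terms.append(terms[-2] * 2 + bonus)
print(terms)   # [3, 12, 7, 26, 15, 54]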
There is a toy gun that made 10 musical sounds. It makes 2 musical sounds after being defective. What is the probability that the same musical sound would be produced 5 times consecutively? a) 1/16 b) 1/32 c) 1/48 d) 1/2
Answer: b) 1/32
Solution: The probability of making the same sound every time = 1/2, so (1/2)^5 = 1/32.
In how many possible ways you can write 3240 as a product of 3 positive integers? a) 320 b) 420 c) 350 d) 450
Answer: d) 450 ways
Solution: First let’s prime factorize the number: 3240 = 2^3 × 3^4 × 5. Let the three positive numbers be x, y and z. We have to distribute three 2’s to x, y and z in (3+3-1)C(3-1) = 5C2 = 10 ways. We have to distribute four 3’s to x, y, z in (3+4-1)C(3-1) = 6C2 = 15 ways. We have to distribute one 5 to x, y, z in 3 ways. The total number of ways = 10 × 15 × 3 = 450 ways.
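For readers who want to double-check the 450, a brute-force Python sketch (added here as a sanity check) counts the ordered triples (x, y, z) with x * y * z = 3240 directly:

count = 0
for x in range(1, 3241):
    if 3240 % x:
        continue
    rest = 3240 // x
    for y in range(1, rest + 1):
        if rest % y == 0:
            count += 1       # z = rest // y is then fixed
print(count)                 # 450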
The marked price of a shirt was 40% less than the suggested retail price. Ram purchased the coat for half of the marked price at the 15th-anniversary sale. What per cent less than the suggested retail price did Ram pay? a) 70% b) 20% c) 60% d) 30%
Answer: a) 70%
Solution: Let the suggested retail price of the shirt be Rs. 100. According to the question, the marked price will be 100 * 0.6 = 60, and the price Ram paid = 60/2 = 30, which is 70% less than the retail price.
HCF of 2472, 1284 and a 3rd number is 12. If their LCM is 8*9*5*103*107, then what is the number? a) 2^2*3^2*7^1 b) 2^2*3^2*5^1 c) 2^2*3^2*8103 d) None of the above.
Answer: b) 2^2×3^2×5^1
Solution: 2472 = 2^3 × 3 × 103, 1284 = 2^2 × 3 × 107, HCF = 2^2 × 3 and LCM = 2^3 × 3^2 × 5 × 103 × 107. The HCF is the highest number which divides all the numbers, so N should be a multiple of 2^2 × 3. The LCM is the smallest number divisible by all the given numbers; as the LCM contains 3^2 × 5, these factors must come from N. So N = 2^2 × 3^2 × 5^1.
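Because the factorizations above had to be reconstructed, here is a small Python check (requires Python 3.9+ for math.lcm and multi-argument math.gcd) confirming that option (b), i.e. N = 180, gives the stated HCF and LCM:

from math import gcd, lcm

n = 2**2 * 3**2 * 5                                  # option (b), equals 180
print(gcd(2472, 1284, n))                            # 12
print(lcm(2472, 1284, n) == 8 * 9 * 5 * 103 * 107)   # True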
An old man takes 30 minutes and a young man takes 20 minutes to walk from apartment to office. If one day the old man started at 10.00 AM and the young man at 10:05 AM from the apartment to office, when will they meet? a) 10:00 b) 10:15 c) 10.30 d) 10:45
Answer: b) 10:15
Solution: Let the distance from the apartment to the office be 12 km. Then the speed of the old man = 12 / (1/2) = 24 km/hr and the speed of the young man = 12 / (1/3) = 36 km/hr. Since the old man started 5 minutes earlier, he covers 24 × (5/60) = 2 km in those 5 minutes. The time taken by the young man to catch him = 2/(36-24) * 60 = 10 minutes, so they meet at 10:05 + 10 minutes = 10:15.
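The same answer falls out of a few lines of Python (an illustration added here, with the walking speeds expressed as fractions of the route per minute):

from fractions import Fraction

old_speed = Fraction(1, 30)     # old man covers 1/30 of the route per minute, starts at 10:00
young_speed = Fraction(1, 20)   # young man covers 1/20 of the route per minute, starts at 10:05
t = 5 * young_speed / (young_speed - old_speed)   # solves t*old_speed == (t - 5)*young_speed
print(t)                        # 15 minutes after 10:00, i.e. 10:15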
In the range of 112 to 375, how many 2’s are there? a) 312 b) 156 c) 159 d) 160
Answer: b) 156
Solution: The total number of 2’s in the units place = (122, 132, ..., 192) + (202, 212, ..., 292) + (302, 312, ..., 372) = 8 + 10 + 8 = 26. The total number of 2’s in the tens place = (120, 121, ..., 129) + (220, 221, ..., 229) + (320, 321, ..., 329) = 30. The total number of 2’s in the hundreds place = (200, 201, ..., 299) = 100. So the total number of 2’s between 112 and 375 = 26 + 30 + 100 = 156.
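As a sanity check (not part of the original paper), a one-line Python count over the numbers strictly between 112 and 375, the endpoint convention that matches option (b), gives the same total:

print(sum(str(n).count('2') for n in range(113, 375)))   # 156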
Ram walks 36 km partly at a speed of 4 km/hr and partly at 3 km/hr. If he had walked at a speed of 3 km/hr when he had walked at 4 and 4 km/hr when he had walked at 3, he would have walked only 34 km. The time (in hours) spent by Ram in walking was a) 10 b) 5 c) 12 d) 8
Answer: a) 10
Solution: Let Ram walk x hours at 4 km/hr and y hours at 3 km/hr. Given 4x + 3y = 36 and 3x + 4y = 34, adding the two equations gives 7(x + y) = 70, so x + y = 10.
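The pair of equations can also be solved explicitly; the following Python sketch (added as an illustration) uses Cramer's rule with exact fractions:

from fractions import Fraction

# 4x + 3y = 36 and 3x + 4y = 34
det = 4 * 4 - 3 * 3
x = Fraction(36 * 4 - 3 * 34, det)
y = Fraction(4 * 34 - 3 * 36, det)
print(x, y, x + y)   # 6 4 10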
What will be the 55th word in the arrangement of the letters of the word PERFECT? a) CEPFRET b) CEPFERT c) CEPERFT d) CEPRFET
Answer: b) CEPFERT
Solution: Let’s arrange the letters of PERFECT in dictionary order: C, E, E, F, P, R, T. Words starting with CEE: 4! = 24; words starting with CEF: 4! = 24; words starting with CEPE: 3! = 6. That accounts for 54 words, so the 55th word is CEPFERT.
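A brief Python check (added here, not in the original paper) lists the distinct permutations in dictionary order and picks the 55th:

from itertools import permutations

words = sorted(set(permutations("PERFECT")))
print("".join(words[54]))   # CEPFERT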
interview-preparation
placement preparation
TCS
Placements
TCS
Top 20 Puzzles Commonly Asked During SDE Interviews
Amazon WOW Interview Experience for SDE 1 (2021)
Programming Language For Placement - C++, Java or Python?
Cognizant Placement Paper | Aptitude Set 1
Permutation and Combination
TCS Ninja Interview Experience and Interview Questions
Time Speed Distance
Progressions (AP, GP, HP)
Interview Preparation
Print the longest path from root to leaf in a Binary tree | [
{
"code": null,
"e": 54,
"s": 26,
"text": "\n21 May, 2019"
},
{
"code": null,
"e": 403,
"s": 54,
"text": "This is a TCS model placement paper for aptitude preparation. This placement paper will cover aptitude questions that are asked in TCS recruitment drives and also strictly follows the pattern of questions asked in TCS interviews. It is recommended to solve each one of the following questions to increase your chances of clearing the TCS interview."
},
{
"code": null,
"e": 5614,
"s": 403,
"text": "The sticks of the same length are used to form a triangle as shown below. If 87 such sticks are used then how many triangles can be formed?a) 42b) 43c) 44d) 45Answer: b) 43Solution:As we can see the first triangle can be formed using 3 sticks. So we have 87 – 3 = 84 sticks left.So every next triangle can be formed using 2 sticks.So we have 84/2 = 42 triangles and 43 triangles in all.Find the next number in the series of 3, 12, 7, 26, 15, ?a) 54b) 55c) 64d) 74\nVideo Player is loading.Play VideoPlayMuteCurrent Time 0:00/Duration 0:00Loaded: 0%0:00Stream Type LIVESeek to live, currently behind liveLIVERemaining Time -0:00 1xPlayback RateChaptersChaptersDescriptionsdescriptions off, selectedCaptionscaptions settings, opens captions settings dialogcaptions off, selectedEnglishAudio TrackPicture-in-PictureFullscreenThis is a modal window.The media could not be loaded, either because the server or network failed or because the format is not supported.Beginning of dialog window. Escape will cancel and close the window.TextColorWhiteBlackRedGreenBlueYellowMagentaCyanTransparencyOpaqueSemi-TransparentBackgroundColorBlackWhiteRedGreenBlueYellowMagentaCyanTransparencyOpaqueSemi-TransparentTransparentWindowColorBlackWhiteRedGreenBlueYellowMagentaCyanTransparencyTransparentSemi-TransparentOpaqueFont Size50%75%100%125%150%175%200%300%400%Text Edge StyleNoneRaisedDepressedUniformDropshadowFont FamilyProportional Sans-SerifMonospace Sans-SerifProportional SerifMonospace SerifCasualScriptSmall CapsReset restore all settings to the default valuesDoneClose Modal DialogEnd of dialog window.Answer: a) 54Solution:3 * 2 + 1 = 712 * 2 + 2 = 267 * 2 + 1 = 1526 * 2 + 2 = 54There is a toy gun that made 10 musical sounds. It makes 2 musical sounds after being defective. What is the probability that same musical sound would be produced 5 times consecutively?a) 1/16b) 1/32c) 1/48d) 1/2Answer: b) 1/32Solution:The probability of making the same sound every time = 1/2,So, 1/2^5 = 1/32 (answer)In how many possible ways you can write 3240 as a product of 3 positive integers?a) 320b) 420c) 350d) 450Answer: d) 450 waysSolution:First let’s prime factorize the number 3240 = Let the three positive numbers be x, y and zWe have to distribute three 2’s to x, y and z ways in (3+3-1)C(3-1) = 5C2 ways = 10 waysWe have to distribute four 3’s to x, y, z in (3+4-1)C(3-1) = 6C2 waysWe have to distribute one 5 to x, y, z in 3 ways.The total number of ways = 10×15×3=450 ways.The marked price of a shirt was 40% less than the suggested retail price. Ram purchased the coat for half of the marked price at the 15th-anniversary sale. What per cent less than the suggested retail price did Ram pay?a) 70%b) 20%c) 60%d) 30%Answer: a) 70%Solution:Let the retail price of the shirt be Rs. 100So according to the question, the market price will be = 100*0.6 = 60Purchased price of Ram = 60/2 = 30which is 70% less than retail price.HCF of 2472, 1284 and a 3rd number, is 12. If their LCM is 8*9*5*103*107, then what is the number?a) 2^2*3^2*7^1b) 2^2*3^2*5^1c) 2^2*3^2*8103d) None of the above.Answer: b) 2^2×3^2×5^1Solution:2472 = 1284 = HCF = LCM = HCF of the number is the highest number which divides all the numbers. So N should be a multiple of 22×3LCM is the largest number that is divided by the given numbers. As LCM contains 32×5 these two are from N.So N = [Tex]$2^2×3^2×5^1$[Tex]An old man takes 30 minutes and a young man takes 20 minutes to walk from apartment to office. 
If one day the old man started at 10.00 AM and the young man at 10:05 AM from the apartment to office, when will they meet?a) 10:00b) 10:15c) 10.30d) 10:45Answer: b) 10:15Solution:Let the distance of the apartment from the office be 12 kmSo the speed of the old man = 12 / (1/2) hr = 24 km/hrThe young man speed = 12 / (1/3) hr = 36 km/hrSince the old man started 5 minutes earlier, he covers 24 × (5/60) = 2 km in 5 minutes.Now the time taken to the young man to meets him = 2/(36-24) * 60 = 10 minutesSo the time of meet = 10:05 + 10 = 10 hr 15 min or 10:15In the range of 112 to 375, how many 2’s are there?a) 312b) 156c) 159d) 160Answer: b) 156Solution:The total number of 2’s in the units place = (122, 132, 142 ... 192), (201, 212, 222, ... 292), (302, 312, ... 372) = 8 + 10 + 8 = 26 2’sThe total number of 2’s in tenth’s place = (120, 121, 122, ..., 129) + (220, 221, ..., 229) + (320, 321, ..., 329) = 30The total number of 2’s in hundred’s place = (200, 201, ... 299) = 100.So the total number of 2’s between 112 and 375 = 26 + 30 + 100 = 156Ram walks 36 km partly at a speed of 4 km/hr and partly at 3 km/hr. If he had walked at a speed of 3km/hr when he had walked at 4 and 4 km/hr when he had walked at 3 he would have walked only 34 km. The time (in hours) spent by Ram in walking wasa) 10b) 5c) 12d) 8Answer: a) 10Solution:Let Ram walk ‘x’ hrs at 4 km/hr, and ‘y’ hrs at 3 km/hr.Given,4x + 3y = 363x + 4y = 34Solving these two equations we get x + y = 10What will be the 55th word in the arrangement of the letters of the word PERFECT?a) CEPFRETb) CEPFERTc) CEPERFTd) CEPRFETAnswer: b) CEPFERTSolution:Let’s arrange the word PERFECT in dictionary order = CEEFPRTHere,CEE(4!)=24CEF(4!)=24CEPF(3!)=6So the 55th word is CEPFERT."
},
{
"code": null,
"e": 6001,
"s": 5614,
"text": "The sticks of the same length are used to form a triangle as shown below. If 87 such sticks are used then how many triangles can be formed?a) 42b) 43c) 44d) 45Answer: b) 43Solution:As we can see the first triangle can be formed using 3 sticks. So we have 87 – 3 = 84 sticks left.So every next triangle can be formed using 2 sticks.So we have 84/2 = 42 triangles and 43 triangles in all."
},
{
"code": null,
"e": 6015,
"s": 6001,
"text": "Answer: b) 43"
},
{
"code": null,
"e": 6230,
"s": 6015,
"text": "Solution:As we can see the first triangle can be formed using 3 sticks. So we have 87 – 3 = 84 sticks left.So every next triangle can be formed using 2 sticks.So we have 84/2 = 42 triangles and 43 triangles in all."
},
{
"code": null,
"e": 7520,
"s": 6230,
"text": "Find the next number in the series of 3, 12, 7, 26, 15, ?a) 54b) 55c) 64d) 74\nVideo Player is loading.Play VideoPlayMuteCurrent Time 0:00/Duration 0:00Loaded: 0%0:00Stream Type LIVESeek to live, currently behind liveLIVERemaining Time -0:00 1xPlayback RateChaptersChaptersDescriptionsdescriptions off, selectedCaptionscaptions settings, opens captions settings dialogcaptions off, selectedEnglishAudio TrackPicture-in-PictureFullscreenThis is a modal window.The media could not be loaded, either because the server or network failed or because the format is not supported.Beginning of dialog window. Escape will cancel and close the window.TextColorWhiteBlackRedGreenBlueYellowMagentaCyanTransparencyOpaqueSemi-TransparentBackgroundColorBlackWhiteRedGreenBlueYellowMagentaCyanTransparencyOpaqueSemi-TransparentTransparentWindowColorBlackWhiteRedGreenBlueYellowMagentaCyanTransparencyTransparentSemi-TransparentOpaqueFont Size50%75%100%125%150%175%200%300%400%Text Edge StyleNoneRaisedDepressedUniformDropshadowFont FamilyProportional Sans-SerifMonospace Sans-SerifProportional SerifMonospace SerifCasualScriptSmall CapsReset restore all settings to the default valuesDoneClose Modal DialogEnd of dialog window.Answer: a) 54Solution:3 * 2 + 1 = 712 * 2 + 2 = 267 * 2 + 1 = 1526 * 2 + 2 = 54"
},
{
"code": null,
"e": 7529,
"s": 7520,
"text": "Chapters"
},
{
"code": null,
"e": 7556,
"s": 7529,
"text": "descriptions off, selected"
},
{
"code": null,
"e": 7606,
"s": 7556,
"text": "captions settings, opens captions settings dialog"
},
{
"code": null,
"e": 7629,
"s": 7606,
"text": "captions off, selected"
},
{
"code": null,
"e": 7637,
"s": 7629,
"text": "English"
},
{
"code": null,
"e": 7661,
"s": 7637,
"text": "This is a modal window."
},
{
"code": null,
"e": 7730,
"s": 7661,
"text": "Beginning of dialog window. Escape will cancel and close the window."
},
{
"code": null,
"e": 7752,
"s": 7730,
"text": "End of dialog window."
},
{
"code": null,
"e": 7766,
"s": 7752,
"text": "Answer: a) 54"
},
{
"code": null,
"e": 7833,
"s": 7766,
"text": "Solution:3 * 2 + 1 = 712 * 2 + 2 = 267 * 2 + 1 = 1526 * 2 + 2 = 54"
},
{
"code": null,
"e": 8153,
"s": 7833,
"text": "There is a toy gun that made 10 musical sounds. It makes 2 musical sounds after being defective. What is the probability that same musical sound would be produced 5 times consecutively?a) 1/16b) 1/32c) 1/48d) 1/2Answer: b) 1/32Solution:The probability of making the same sound every time = 1/2,So, 1/2^5 = 1/32 (answer)"
},
{
"code": null,
"e": 8169,
"s": 8153,
"text": "Answer: b) 1/32"
},
{
"code": null,
"e": 8262,
"s": 8169,
"text": "Solution:The probability of making the same sound every time = 1/2,So, 1/2^5 = 1/32 (answer)"
},
{
"code": null,
"e": 8736,
"s": 8262,
"text": "In how many possible ways you can write 3240 as a product of 3 positive integers?a) 320b) 420c) 350d) 450Answer: d) 450 waysSolution:First let’s prime factorize the number 3240 = Let the three positive numbers be x, y and zWe have to distribute three 2’s to x, y and z ways in (3+3-1)C(3-1) = 5C2 ways = 10 waysWe have to distribute four 3’s to x, y, z in (3+4-1)C(3-1) = 6C2 waysWe have to distribute one 5 to x, y, z in 3 ways.The total number of ways = 10×15×3=450 ways."
},
{
"code": null,
"e": 8756,
"s": 8736,
"text": "Answer: d) 450 ways"
},
{
"code": null,
"e": 9106,
"s": 8756,
"text": "Solution:First let’s prime factorize the number 3240 = Let the three positive numbers be x, y and zWe have to distribute three 2’s to x, y and z ways in (3+3-1)C(3-1) = 5C2 ways = 10 waysWe have to distribute four 3’s to x, y, z in (3+4-1)C(3-1) = 6C2 waysWe have to distribute one 5 to x, y, z in 3 ways.The total number of ways = 10×15×3=450 ways."
},
{
"code": null,
"e": 9556,
"s": 9106,
"text": "The marked price of a shirt was 40% less than the suggested retail price. Ram purchased the coat for half of the marked price at the 15th-anniversary sale. What per cent less than the suggested retail price did Ram pay?a) 70%b) 20%c) 60%d) 30%Answer: a) 70%Solution:Let the retail price of the shirt be Rs. 100So according to the question, the market price will be = 100*0.6 = 60Purchased price of Ram = 60/2 = 30which is 70% less than retail price."
},
{
"code": null,
"e": 9571,
"s": 9556,
"text": "Answer: a) 70%"
},
{
"code": null,
"e": 9764,
"s": 9571,
"text": "Solution:Let the retail price of the shirt be Rs. 100So according to the question, the market price will be = 100*0.6 = 60Purchased price of Ram = 60/2 = 30which is 70% less than retail price."
},
{
"code": null,
"e": 10224,
"s": 9764,
"text": "HCF of 2472, 1284 and a 3rd number, is 12. If their LCM is 8*9*5*103*107, then what is the number?a) 2^2*3^2*7^1b) 2^2*3^2*5^1c) 2^2*3^2*8103d) None of the above.Answer: b) 2^2×3^2×5^1Solution:2472 = 1284 = HCF = LCM = HCF of the number is the highest number which divides all the numbers. So N should be a multiple of 22×3LCM is the largest number that is divided by the given numbers. As LCM contains 32×5 these two are from N.So N = [Tex]$2^2×3^2×5^1$[Tex]"
},
{
"code": null,
"e": 10247,
"s": 10224,
"text": "Answer: b) 2^2×3^2×5^1"
},
{
"code": null,
"e": 10523,
"s": 10247,
"text": "Solution:2472 = 1284 = HCF = LCM = HCF of the number is the highest number which divides all the numbers. So N should be a multiple of 22×3LCM is the largest number that is divided by the given numbers. As LCM contains 32×5 these two are from N.So N = [Tex]$2^2×3^2×5^1$[Tex]"
},
{
"code": null,
"e": 11178,
"s": 10523,
"text": "An old man takes 30 minutes and a young man takes 20 minutes to walk from apartment to office. If one day the old man started at 10.00 AM and the young man at 10:05 AM from the apartment to office, when will they meet?a) 10:00b) 10:15c) 10.30d) 10:45Answer: b) 10:15Solution:Let the distance of the apartment from the office be 12 kmSo the speed of the old man = 12 / (1/2) hr = 24 km/hrThe young man speed = 12 / (1/3) hr = 36 km/hrSince the old man started 5 minutes earlier, he covers 24 × (5/60) = 2 km in 5 minutes.Now the time taken to the young man to meets him = 2/(36-24) * 60 = 10 minutesSo the time of meet = 10:05 + 10 = 10 hr 15 min or 10:15"
},
{
"code": null,
"e": 11195,
"s": 11178,
"text": "Answer: b) 10:15"
},
{
"code": null,
"e": 11584,
"s": 11195,
"text": "Solution:Let the distance of the apartment from the office be 12 kmSo the speed of the old man = 12 / (1/2) hr = 24 km/hrThe young man speed = 12 / (1/3) hr = 36 km/hrSince the old man started 5 minutes earlier, he covers 24 × (5/60) = 2 km in 5 minutes.Now the time taken to the young man to meets him = 2/(36-24) * 60 = 10 minutesSo the time of meet = 10:05 + 10 = 10 hr 15 min or 10:15"
},
{
"code": null,
"e": 12078,
"s": 11584,
"text": "In the range of 112 to 375, how many 2’s are there?a) 312b) 156c) 159d) 160Answer: b) 156Solution:The total number of 2’s in the units place = (122, 132, 142 ... 192), (201, 212, 222, ... 292), (302, 312, ... 372) = 8 + 10 + 8 = 26 2’sThe total number of 2’s in tenth’s place = (120, 121, 122, ..., 129) + (220, 221, ..., 229) + (320, 321, ..., 329) = 30The total number of 2’s in hundred’s place = (200, 201, ... 299) = 100.So the total number of 2’s between 112 and 375 = 26 + 30 + 100 = 156"
},
{
"code": null,
"e": 12093,
"s": 12078,
"text": "Answer: b) 156"
},
{
"code": null,
"e": 12498,
"s": 12093,
"text": "Solution:The total number of 2’s in the units place = (122, 132, 142 ... 192), (201, 212, 222, ... 292), (302, 312, ... 372) = 8 + 10 + 8 = 26 2’sThe total number of 2’s in tenth’s place = (120, 121, 122, ..., 129) + (220, 221, ..., 229) + (320, 321, ..., 329) = 30The total number of 2’s in hundred’s place = (200, 201, ... 299) = 100.So the total number of 2’s between 112 and 375 = 26 + 30 + 100 = 156"
},
{
"code": null,
"e": 12916,
"s": 12498,
"text": "Ram walks 36 km partly at a speed of 4 km/hr and partly at 3 km/hr. If he had walked at a speed of 3km/hr when he had walked at 4 and 4 km/hr when he had walked at 3 he would have walked only 34 km. The time (in hours) spent by Ram in walking wasa) 10b) 5c) 12d) 8Answer: a) 10Solution:Let Ram walk ‘x’ hrs at 4 km/hr, and ‘y’ hrs at 3 km/hr.Given,4x + 3y = 363x + 4y = 34Solving these two equations we get x + y = 10"
},
{
"code": null,
"e": 12930,
"s": 12916,
"text": "Answer: a) 10"
},
{
"code": null,
"e": 13071,
"s": 12930,
"text": "Solution:Let Ram walk ‘x’ hrs at 4 km/hr, and ‘y’ hrs at 3 km/hr.Given,4x + 3y = 363x + 4y = 34Solving these two equations we get x + y = 10"
},
{
"code": null,
"e": 13343,
"s": 13071,
"text": "What will be the 55th word in the arrangement of the letters of the word PERFECT?a) CEPFRETb) CEPFERTc) CEPERFTd) CEPRFETAnswer: b) CEPFERTSolution:Let’s arrange the word PERFECT in dictionary order = CEEFPRTHere,CEE(4!)=24CEF(4!)=24CEPF(3!)=6So the 55th word is CEPFERT."
},
{
"code": null,
"e": 13362,
"s": 13343,
"text": "Answer: b) CEPFERT"
},
{
"code": null,
"e": 13495,
"s": 13362,
"text": "Solution:Let’s arrange the word PERFECT in dictionary order = CEEFPRTHere,CEE(4!)=24CEF(4!)=24CEPF(3!)=6So the 55th word is CEPFERT."
},
{
"code": null,
"e": 13517,
"s": 13495,
"text": "interview-preparation"
},
{
"code": null,
"e": 13539,
"s": 13517,
"text": "placement preparation"
},
{
"code": null,
"e": 13543,
"s": 13539,
"text": "TCS"
},
{
"code": null,
"e": 13554,
"s": 13543,
"text": "Placements"
},
{
"code": null,
"e": 13558,
"s": 13554,
"text": "TCS"
},
{
"code": null,
"e": 13656,
"s": 13558,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 13708,
"s": 13656,
"text": "Top 20 Puzzles Commonly Asked During SDE Interviews"
},
{
"code": null,
"e": 13757,
"s": 13708,
"text": "Amazon WOW Interview Experience for SDE 1 (2021)"
},
{
"code": null,
"e": 13815,
"s": 13757,
"text": "Programming Language For Placement - C++, Java or Python?"
},
{
"code": null,
"e": 13858,
"s": 13815,
"text": "Cognizant Placement Paper | Aptitude Set 1"
},
{
"code": null,
"e": 13886,
"s": 13858,
"text": "Permutation and Combination"
},
{
"code": null,
"e": 13941,
"s": 13886,
"text": "TCS Ninja Interview Experience and Interview Questions"
},
{
"code": null,
"e": 13961,
"s": 13941,
"text": "Time Speed Distance"
},
{
"code": null,
"e": 13987,
"s": 13961,
"text": "Progressions (AP, GP, HP)"
},
{
"code": null,
"e": 14009,
"s": 13987,
"text": "Interview Preparation"
}
] |
Foundation - Pagination | Pagination is a type of navigation that divides the content into a series of related pages.
The following example demonstrates the use of pagination in Foundation −
<!doctype html>
<head>
<meta charset = "utf-8" />
<meta http-equiv = "x-ua-compatible" content = "ie = edge" />
<meta name = "viewport" content = "width = device-width, initial-scale = 1.0" />
<title>Pagination</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/foundation.min.css" integrity="sha256-1mcRjtAxlSjp6XJBgrBeeCORfBp/ppyX4tsvpQVCcpA= sha384-b5S5X654rX3Wo6z5/hnQ4GBmKuIJKMPwrJXn52ypjztlnDK2w9+9hSMBz/asy9Gw sha512-M1VveR2JGzpgWHb0elGqPTltHK3xbvu3Brgjfg4cg5ZNtyyApxw/45yHYsZ/rCVbfoO5MSZxB241wWq642jLtA==" crossorigin="anonymous">
<!-- Compressed JavaScript -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/foundation/6.0.1/js/vendor/jquery.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/[email protected]/dist/js/foundation.min.js" integrity="sha256-WUKHnLrIrx8dew//IpSEmPN/NT3DGAEmIePQYIEJLLs= sha384-53StQWuVbn6figscdDC3xV00aYCPEz3srBdV/QGSXw3f19og3Tq2wTRe0vJqRTEO sha512-X9O+2f1ty1rzBJOC8AXBnuNUdyJg0m8xMKmbt9I3Vu/UOWmSg5zG+dtnje4wAZrKtkopz/PEDClHZ1LXx5IeOw==" crossorigin="anonymous"></script>
</head>
<body>
<h2>Pagination Example</h2>
<ul class = "pagination">
<li class = "disabled">Previous</li>
<li class = "current">1</li>
<li><a href = "#">2</a></li>
<li><a href = "#">3</a></li>
<li><a href = "#">4</a></li>
<li class = "ellipsis"></li>
<li><a href = "#">12</a></li>
<li><a href = "#">13</a></li>
<li><a href = "#">Next</a></li>
</ul>
</body>
</html>
Let us carry out the following steps to see how the above given code works −
Save the above given HTML code in a file named pagination.html.
Open this HTML file in a browser, an output is displayed as shown below.
Previous
1
2
3
4
12
13
Next | [
{
"code": null,
"e": 2330,
"s": 2238,
"text": "Pagination is a type of navigation that divides the content into a series of related pages."
},
{
"code": null,
"e": 2403,
"s": 2330,
"text": "The following example demonstrates the use of pagination in Foundation −"
},
{
"code": null,
"e": 4011,
"s": 2403,
"text": "<!doctype html>\n <head>\n <meta charset = \"utf-8\" />\n <meta http-equiv = \"x-ua-compatible\" content = \"ie = edge\" />\n <meta name = \"viewport\" content = \"width = device-width, initial-scale = 1.0\" />\n\n <title>Pagination</title>\n\n <link rel=\"stylesheet\" href=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/css/foundation.min.css\" integrity=\"sha256-1mcRjtAxlSjp6XJBgrBeeCORfBp/ppyX4tsvpQVCcpA= sha384-b5S5X654rX3Wo6z5/hnQ4GBmKuIJKMPwrJXn52ypjztlnDK2w9+9hSMBz/asy9Gw sha512-M1VveR2JGzpgWHb0elGqPTltHK3xbvu3Brgjfg4cg5ZNtyyApxw/45yHYsZ/rCVbfoO5MSZxB241wWq642jLtA==\" crossorigin=\"anonymous\">\n\n <!-- Compressed JavaScript -->\n <script src=\"https://cdnjs.cloudflare.com/ajax/libs/foundation/6.0.1/js/vendor/jquery.min.js\"></script>\n <script src=\"https://cdn.jsdelivr.net/npm/[email protected]/dist/js/foundation.min.js\" integrity=\"sha256-WUKHnLrIrx8dew//IpSEmPN/NT3DGAEmIePQYIEJLLs= sha384-53StQWuVbn6figscdDC3xV00aYCPEz3srBdV/QGSXw3f19og3Tq2wTRe0vJqRTEO sha512-X9O+2f1ty1rzBJOC8AXBnuNUdyJg0m8xMKmbt9I3Vu/UOWmSg5zG+dtnje4wAZrKtkopz/PEDClHZ1LXx5IeOw==\" crossorigin=\"anonymous\"></script>\n </head>\n\n <body>\n <h2>Pagination Example</h2>\n\n <ul class = \"pagination\">\n <li class = \"disabled\">Previous</li>\n <li class = \"current\">1</li>\n <li><a href = \"#\">2</a></li>\n <li><a href = \"#\">3</a></li>\n <li><a href = \"#\">4</a></li>\n <li class = \"ellipsis\"></li>\n <li><a href = \"#\">12</a></li>\n <li><a href = \"#\">13</a></li>\n <li><a href = \"#\">Next</a></li>\n </ul>\n </body>\n</html>"
},
{
"code": null,
"e": 4088,
"s": 4011,
"text": "Let us carry out the following steps to see how the above given code works −"
},
{
"code": null,
"e": 4141,
"s": 4088,
"text": "Save the above given html code pagination.html file."
},
{
"code": null,
"e": 4194,
"s": 4141,
"text": "Save the above given html code pagination.html file."
},
{
"code": null,
"e": 4267,
"s": 4194,
"text": "Open this HTML file in a browser, an output is displayed as shown below."
},
{
"code": null,
"e": 4340,
"s": 4267,
"text": "Open this HTML file in a browser, an output is displayed as shown below."
},
{
"code": null,
"e": 4349,
"s": 4340,
"text": "Previous"
},
{
"code": null,
"e": 4351,
"s": 4349,
"text": "1"
},
{
"code": null,
"e": 4353,
"s": 4351,
"text": "2"
},
{
"code": null,
"e": 4355,
"s": 4353,
"text": "3"
},
{
"code": null,
"e": 4357,
"s": 4355,
"text": "4"
},
{
"code": null,
"e": 4360,
"s": 4357,
"text": "12"
},
{
"code": null,
"e": 4363,
"s": 4360,
"text": "13"
},
{
"code": null,
"e": 4368,
"s": 4363,
"text": "Next"
},
{
"code": null,
"e": 4404,
"s": 4368,
"text": "\n 117 Lectures \n 5.5 hours \n"
},
{
"code": null,
"e": 4421,
"s": 4404,
"text": " Shakthi Swaroop"
},
{
"code": null,
"e": 4456,
"s": 4421,
"text": "\n 61 Lectures \n 1.5 hours \n"
},
{
"code": null,
"e": 4470,
"s": 4456,
"text": " Hans Weemaes"
},
{
"code": null,
"e": 4503,
"s": 4470,
"text": "\n 17 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 4520,
"s": 4503,
"text": " Stephen Kahuria"
},
{
"code": null,
"e": 4551,
"s": 4520,
"text": "\n 8 Lectures \n 50 mins\n"
},
{
"code": null,
"e": 4558,
"s": 4551,
"text": " Zenva"
},
{
"code": null,
"e": 4591,
"s": 4558,
"text": "\n 28 Lectures \n 2 hours \n"
},
{
"code": null,
"e": 4601,
"s": 4591,
"text": " Sandra L"
},
{
"code": null,
"e": 4636,
"s": 4601,
"text": "\n 16 Lectures \n 2.5 hours \n"
},
{
"code": null,
"e": 4653,
"s": 4636,
"text": " GreyCampus Inc."
},
{
"code": null,
"e": 4660,
"s": 4653,
"text": " Print"
},
{
"code": null,
"e": 4671,
"s": 4660,
"text": " Add Notes"
}
] |
HTML canvas fillStyle Property | The fillStyle() property of the HTML canvas is used to set the color or gradient or pattern for the drawing. The default is #000000. The <canvas> element allows you to draw graphics on a web page using JavaScript. Every canvas has two elements that describes the height and width of the canvas i.e. height and width respectively.
Following is the syntax −
ctx.fillStyle=color|gradient|pattern;
Above, the values, include −
color: The drawing’s fill color, which is a CSS color.
gradient: Linear or radial gradient object to fill the drawing
pattern: The pattern object to fill the drawing.
Let us now see an example to implement the fillStyle() property of canvas −
Live Demo
<!DOCTYPE html>
<html>
<body>
<canvas id="newCanvas" width="500" height="350" style="border:2px solid orange;">
</canvas>
<script>
var c = document.getElementById("newCanvas");
var ctx = c.getContext("2d");
var newGrad =ctx.createLinearGradient(0, 0, 130, 0);
newGrad.addColorStop(0, "blue");
newGrad.addColorStop(0.8, "green");
ctx.fillStyle = newGrad;
ctx.fillRect(0, 0, 500, 350);
</script>
</body>
</html> | [
{
"code": null,
"e": 1392,
"s": 1062,
"text": "The fillStyle() property of the HTML canvas is used to set the color or gradient or pattern for the drawing. The default is #000000. The <canvas> element allows you to draw graphics on a web page using JavaScript. Every canvas has two elements that describes the height and width of the canvas i.e. height and width respectively."
},
{
"code": null,
"e": 1418,
"s": 1392,
"text": "Following is the syntax −"
},
{
"code": null,
"e": 1456,
"s": 1418,
"text": "ctx.fillStyle=color|gradient|pattern;"
},
{
"code": null,
"e": 1485,
"s": 1456,
"text": "Above, the values, include −"
},
{
"code": null,
"e": 1540,
"s": 1485,
"text": "color: The drawing’s fill color, which is a CSS color."
},
{
"code": null,
"e": 1603,
"s": 1540,
"text": "gradient: Linear or radial gradient object to fill the drawing"
},
{
"code": null,
"e": 1652,
"s": 1603,
"text": "pattern: The pattern object to fill the drawing."
},
{
"code": null,
"e": 1728,
"s": 1652,
"text": "Let us now see an example to implement the fillStyle() property of canvas −"
},
{
"code": null,
"e": 1739,
"s": 1728,
"text": " Live Demo"
},
{
"code": null,
"e": 2170,
"s": 1739,
"text": "<!DOCTYPE html>\n<html>\n<body>\n<canvas id=\"newCanvas\" width=\"500\" height=\"350\" style=\"border:2px solid orange;\">\n</canvas>\n<script>\n var c = document.getElementById(\"newCanvas\");\n var ctx = c.getContext(\"2d\");\n var newGrad =ctx.createLinearGradient(0, 0, 130, 0);\n newGrad.addColorStop(0, \"blue\");\n newGrad.addColorStop(0.8, \"green\");\n ctx.fillStyle = newGrad;\n ctx.fillRect(0, 0, 500, 350);\n</script>\n</body>\n</html>"
}
] |
Elasticsearch - Document APIs | Elasticsearch provides single document APIs and multi-document APIs, where the API call is targeting a single document and multiple documents respectively.
It helps to add or update the JSON document in an index when a request is made to that respective index with specific mapping. For example, the following request will add the JSON object to index schools and under school mapping −
PUT schools/_doc/5
{
name":"City School", "description":"ICSE", "street":"West End",
"city":"Meerut",
"state":"UP", "zip":"250002", "location":[28.9926174, 77.692485],
"fees":3500,
"tags":["fully computerized"], "rating":"4.5"
}
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "5",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 2,
"_primary_term" : 1
}
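The same call can also be scripted from Python. The following is only a minimal sketch using the requests library rather than the official client, and the URL http://localhost:9200 (a local, unsecured node) is an assumption for illustration:

import requests

doc = {
   "name": "City School", "description": "ICSE", "street": "West End",
   "city": "Meerut", "state": "UP", "zip": "250002",
   "location": [28.9926174, 77.692485],
   "fees": 3500, "tags": ["fully computerized"], "rating": "4.5"
}
resp = requests.put("http://localhost:9200/schools/_doc/5", json=doc)
print(resp.json())   # contains "_index", "_id", "result" and so on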
When a request is made to add JSON object to a particular index and if that index does not exist, then this API automatically creates that index and also the underlying mapping for that particular JSON object. This functionality can be disabled by changing the values of following parameters to false, which are present in elasticsearch.yml file.
action.auto_create_index:false
index.mapper.dynamic:false
You can also restrict the auto creation of index, where only index name with specific patterns are allowed by changing the value of the following parameter −
action.auto_create_index:+acc*,-bank*
Note − Here + indicates allowed and – indicates not allowed.
Elasticsearch also provides version control facility. We can use a version query parameter to specify the version of a particular document.
PUT schools/_doc/5?version=7&version_type=external
{
"name":"Central School", "description":"CBSE Affiliation", "street":"Nagan",
"city":"paprola", "state":"HP", "zip":"176115", "location":[31.8955385, 76.8380405],
"fees":2200, "tags":["Senior Secondary", "beautiful campus"], "rating":"3.3"
}
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "5",
"_version" : 7,
"result" : "updated",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 3,
"_primary_term" : 1
}
Versioning is a real-time process and it is not affected by the real time search operations.
The two most important types of versioning are −
Internal versioning − the default, which starts at 1 and increments with each update, deletes included.
External versioning − used when the versioning of the documents is stored in an external system like a third-party versioning system. To enable this functionality, we need to set version_type to external. Here Elasticsearch will store the version number as designated by the external system and will not increment it automatically.
The operation type is used to force a create operation. This helps to avoid the overwriting
of existing document.
PUT chapter/_doc/1?op_type=create
{
"Text":"this is chapter one"
}
On running the above code, we get the following result −
{
"_index" : "chapter",
"_type" : "_doc",
"_id" : "1",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 0,
"_primary_term" : 1
}
When ID is not specified in index operation, then Elasticsearch automatically generates id
for that document.
POST chapter/_doc/
{
"user" : "tpoint",
"post_date" : "2018-12-25T14:12:12",
"message" : "Elasticsearch Tutorial"
}
On running the above code, we get the following result −
{
"_index" : "chapter",
"_type" : "_doc",
"_id" : "PVghWGoB7LiDTeV6LSGu",
"_version" : 1,
"result" : "created",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 1,
"_primary_term" : 1
}
API helps to extract type JSON object by performing a get request for a particular document.
pre class="prettyprint notranslate" > GET schools/_doc/5
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "5",
"_version" : 7,
"_seq_no" : 3,
"_primary_term" : 1,
"found" : true,
"_source" : {
"name" : "Central School",
"description" : "CBSE Affiliation",
"street" : "Nagan",
"city" : "paprola",
"state" : "HP",
"zip" : "176115",
"location" : [
31.8955385,
76.8380405
],
"fees" : 2200,
"tags" : [
"Senior Secondary",
"beautiful campus"
],
"rating" : "3.3"
}
}
This operation is real time and does not get affected by the refresh rate of Index.
You can also specify the version, then Elasticsearch will fetch that version of document only.
You can also specify the _all in the request, so that the Elasticsearch can search for that document id in every type and it will return the first matched document.
You can also specify the fields you want in your result from that particular document.
GET schools/_doc/5?_source_includes=name,fees
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "5",
"_version" : 7,
"_seq_no" : 3,
"_primary_term" : 1,
"found" : true,
"_source" : {
"fees" : 2200,
"name" : "Central School"
}
}
You can also fetch the source part in your result by just adding _source part in your get request.
GET schools/_doc/5?_source
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "5",
"_version" : 7,
"_seq_no" : 3,
"_primary_term" : 1,
"found" : true,
"_source" : {
"name" : "Central School",
"description" : "CBSE Affiliation",
"street" : "Nagan",
"city" : "paprola",
"state" : "HP",
"zip" : "176115",
"location" : [
31.8955385,
76.8380405
],
"fees" : 2200,
"tags" : [
"Senior Secondary",
"beautiful campus"
],
"rating" : "3.3"
}
}
You can also refresh the shard before doing the get operation by setting the refresh parameter to true.
You can delete a particular index, mapping or a document by sending a HTTP DELETE request to Elasticsearch.
DELETE schools/_doc/4
On running the above code, we get the following result −
{
"found":true, "_index":"schools", "_type":"school", "_id":"4", "_version":2,
"_shards":{"total":2, "successful":1, "failed":0}
}
Version of the document can be specified to delete that particular version. A routing parameter can be specified to delete the document of a particular user, and the operation fails if the document does not belong to that particular user. In this operation, you can specify the refresh and timeout options in the same way as in the GET API.
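For completeness, here is a minimal Python sketch of such a delete via the requests library, again assuming a local unsecured node; the routing value user1 is purely illustrative:

import requests

resp = requests.delete(
   "http://localhost:9200/schools/_doc/4",
   params={"routing": "user1", "refresh": "true"}   # query parameters described above
)
print(resp.status_code, resp.json())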
A script is used for performing this operation, and versioning is used to make sure that no updates have happened during the get and re-index. For example, you can update the name of a school using a script −
POST schools/_update/4
{
"script" : {
"source": "ctx._source.name = params.sname",
"lang": "painless",
"params" : {
"sname" : "City Wise School"
}
}
}
On running the above code, we get the following result −
{
"_index" : "schools",
"_type" : "_doc",
"_id" : "4",
"_version" : 3,
"result" : "updated",
"_shards" : {
"total" : 2,
"successful" : 1,
"failed" : 0
},
"_seq_no" : 4,
"_primary_term" : 2
}
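The same scripted update can be issued from Python with the requests library (a sketch that assumes the local node used in the sketches above):

import requests

payload = {
   "script": {
      "source": "ctx._source.name = params.sname",
      "lang": "painless",
      "params": {"sname": "City Wise School"}
   }
}
resp = requests.post("http://localhost:9200/schools/_update/4", json=payload)
print(resp.json()["result"])   # expected to be "updated"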
You can check the update by sending a GET request to the updated document. | [
{
"code": null,
"e": 2737,
"s": 2581,
"text": "Elasticsearch provides single document APIs and multi-document APIs, where the API call is targeting a single document and multiple documents respectively."
},
{
"code": null,
"e": 2968,
"s": 2737,
"text": "It helps to add or update the JSON document in an index when a request is made to that respective index with specific mapping. For example, the following request will add the JSON object to index schools and under school mapping −"
},
{
"code": null,
"e": 3212,
"s": 2968,
"text": "PUT schools/_doc/5\n{\n name\":\"City School\", \"description\":\"ICSE\", \"street\":\"West End\",\n \"city\":\"Meerut\",\n \"state\":\"UP\", \"zip\":\"250002\", \"location\":[28.9926174, 77.692485],\n \"fees\":3500,\n \"tags\":[\"fully computerized\"], \"rating\":\"4.5\"\n}"
},
{
"code": null,
"e": 3269,
"s": 3212,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 3506,
"s": 3269,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"5\",\n \"_version\" : 1,\n \"result\" : \"created\",\n \"_shards\" : {\n \"total\" : 2,\n \"successful\" : 1,\n \"failed\" : 0\n },\n \"_seq_no\" : 2,\n \"_primary_term\" : 1\n}\n"
},
{
"code": null,
"e": 3853,
"s": 3506,
"text": "When a request is made to add JSON object to a particular index and if that index does not exist, then this API automatically creates that index and also the underlying mapping for that particular JSON object. This functionality can be disabled by changing the values of following parameters to false, which are present in elasticsearch.yml file."
},
{
"code": null,
"e": 3912,
"s": 3853,
"text": "action.auto_create_index:false\nindex.mapper.dynamic:false\n"
},
{
"code": null,
"e": 4070,
"s": 3912,
"text": "You can also restrict the auto creation of index, where only index name with specific patterns are allowed by changing the value of the following parameter −"
},
{
"code": null,
"e": 4109,
"s": 4070,
"text": "action.auto_create_index:+acc*,-bank*\n"
},
{
"code": null,
"e": 4170,
"s": 4109,
"text": "Note − Here + indicates allowed and – indicates not allowed."
},
{
"code": null,
"e": 4310,
"s": 4170,
"text": "Elasticsearch also provides version control facility. We can use a version query parameter to specify the version of a particular document."
},
{
"code": null,
"e": 4613,
"s": 4310,
"text": "PUT schools/_doc/5?version=7&version_type=external\n{\n \"name\":\"Central School\", \"description\":\"CBSE Affiliation\", \"street\":\"Nagan\",\n \"city\":\"paprola\", \"state\":\"HP\", \"zip\":\"176115\", \"location\":[31.8955385, 76.8380405],\n \"fees\":2200, \"tags\":[\"Senior Secondary\", \"beautiful campus\"], \"rating\":\"3.3\"\n}"
},
{
"code": null,
"e": 4670,
"s": 4613,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 4907,
"s": 4670,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"5\",\n \"_version\" : 7,\n \"result\" : \"updated\",\n \"_shards\" : {\n \"total\" : 2,\n \"successful\" : 1,\n \"failed\" : 0\n },\n \"_seq_no\" : 3,\n \"_primary_term\" : 1\n}\n"
},
{
"code": null,
"e": 5000,
"s": 4907,
"text": "Versioning is a real-time process and it is not affected by the real time search operations."
},
{
"code": null,
"e": 5051,
"s": 5000,
"text": "There are two most important types of versioning −"
},
{
"code": null,
"e": 5164,
"s": 5051,
"text": "Internal versioning is the default version that starts with 1 and increments with each update, deletes included."
},
{
"code": null,
"e": 5477,
"s": 5164,
"text": "It is used when the versioning of the documents is stored in an external system like third party versioning systems. To enable this functionality, we need to set version_type to external. Here Elasticsearch will store version number as designated by the external system and will not increment them automatically."
},
{
"code": null,
"e": 5591,
"s": 5477,
"text": "The operation type is used to force a create operation. This helps to avoid the overwriting\nof existing document."
},
{
"code": null,
"e": 5661,
"s": 5591,
"text": "PUT chapter/_doc/1?op_type=create\n{\n \"Text\":\"this is chapter one\"\n}"
},
{
"code": null,
"e": 5718,
"s": 5661,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 5955,
"s": 5718,
"text": "{\n \"_index\" : \"chapter\",\n \"_type\" : \"_doc\",\n \"_id\" : \"1\",\n \"_version\" : 1,\n \"result\" : \"created\",\n \"_shards\" : {\n \"total\" : 2,\n \"successful\" : 1,\n \"failed\" : 0\n },\n \"_seq_no\" : 0,\n \"_primary_term\" : 1\n}\n"
},
{
"code": null,
"e": 6065,
"s": 5955,
"text": "When ID is not specified in index operation, then Elasticsearch automatically generates id\nfor that document."
},
{
"code": null,
"e": 6190,
"s": 6065,
"text": "POST chapter/_doc/\n{\n \"user\" : \"tpoint\",\n \"post_date\" : \"2018-12-25T14:12:12\",\n \"message\" : \"Elasticsearch Tutorial\"\n}"
},
{
"code": null,
"e": 6247,
"s": 6190,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 6503,
"s": 6247,
"text": "{\n \"_index\" : \"chapter\",\n \"_type\" : \"_doc\",\n \"_id\" : \"PVghWGoB7LiDTeV6LSGu\",\n \"_version\" : 1,\n \"result\" : \"created\",\n \"_shards\" : {\n \"total\" : 2,\n \"successful\" : 1,\n \"failed\" : 0\n },\n \"_seq_no\" : 1,\n \"_primary_term\" : 1\n}\n"
},
{
"code": null,
"e": 6596,
"s": 6503,
"text": "API helps to extract type JSON object by performing a get request for a particular document."
},
{
"code": null,
"e": 6653,
"s": 6596,
"text": "pre class=\"prettyprint notranslate\" > GET schools/_doc/5"
},
{
"code": null,
"e": 6710,
"s": 6653,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 7250,
"s": 6710,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"5\",\n \"_version\" : 7,\n \"_seq_no\" : 3,\n \"_primary_term\" : 1,\n \"found\" : true,\n \"_source\" : {\n \"name\" : \"Central School\",\n \"description\" : \"CBSE Affiliation\",\n \"street\" : \"Nagan\",\n \"city\" : \"paprola\",\n \"state\" : \"HP\",\n \"zip\" : \"176115\",\n \"location\" : [\n 31.8955385,\n 76.8380405\n ],\n \"fees\" : 2200,\n \"tags\" : [\n \"Senior Secondary\",\n \"beautiful campus\"\n ],\n \"rating\" : \"3.3\"\n }\n}\n"
},
{
"code": null,
"e": 7334,
"s": 7250,
"text": "This operation is real time and does not get affected by the refresh rate of Index."
},
{
"code": null,
"e": 7418,
"s": 7334,
"text": "This operation is real time and does not get affected by the refresh rate of Index."
},
{
"code": null,
"e": 7513,
"s": 7418,
"text": "You can also specify the version, then Elasticsearch will fetch that version of document only."
},
{
"code": null,
"e": 7608,
"s": 7513,
"text": "You can also specify the version, then Elasticsearch will fetch that version of document only."
},
{
"code": null,
"e": 7773,
"s": 7608,
"text": "You can also specify the _all in the request, so that the Elasticsearch can search\nfor that document id in every type and it will return the first matched document."
},
{
"code": null,
"e": 7938,
"s": 7773,
"text": "You can also specify the _all in the request, so that the Elasticsearch can search\nfor that document id in every type and it will return the first matched document."
},
{
"code": null,
"e": 8025,
"s": 7938,
"text": "You can also specify the fields you want in your result from that particular document."
},
{
"code": null,
"e": 8112,
"s": 8025,
"text": "You can also specify the fields you want in your result from that particular document."
},
{
"code": null,
"e": 8159,
"s": 8112,
"text": "GET schools/_doc/5?_source_includes=name,fees "
},
{
"code": null,
"e": 8216,
"s": 8159,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 8439,
"s": 8216,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"5\",\n \"_version\" : 7,\n \"_seq_no\" : 3,\n \"_primary_term\" : 1,\n \"found\" : true,\n \"_source\" : {\n \"fees\" : 2200,\n \"name\" : \"Central School\"\n }\n} \n"
},
{
"code": null,
"e": 8538,
"s": 8439,
"text": "You can also fetch the source part in your result by just adding _source part in your get request."
},
{
"code": null,
"e": 8566,
"s": 8538,
"text": "GET schools/_doc/5?_source "
},
{
"code": null,
"e": 8623,
"s": 8566,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 9163,
"s": 8623,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"5\",\n \"_version\" : 7,\n \"_seq_no\" : 3,\n \"_primary_term\" : 1,\n \"found\" : true,\n \"_source\" : {\n \"name\" : \"Central School\",\n \"description\" : \"CBSE Affiliation\",\n \"street\" : \"Nagan\",\n \"city\" : \"paprola\",\n \"state\" : \"HP\",\n \"zip\" : \"176115\",\n \"location\" : [\n 31.8955385,\n 76.8380405\n ],\n \"fees\" : 2200,\n \"tags\" : [\n \"Senior Secondary\",\n \"beautiful campus\"\n ],\n \"rating\" : \"3.3\"\n }\n}\n"
},
{
"code": null,
"e": 9255,
"s": 9163,
"text": "You can also refresh the shard before doing get operation by set refresh parameter to true."
},
{
"code": null,
"e": 9363,
"s": 9255,
"text": "You can delete a particular index, mapping or a document by sending a HTTP DELETE request to Elasticsearch."
},
{
"code": null,
"e": 9387,
"s": 9363,
"text": "DELETE schools/_doc/4 "
},
{
"code": null,
"e": 9444,
"s": 9387,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 9582,
"s": 9444,
"text": "{\n \"found\":true, \"_index\":\"schools\", \"_type\":\"school\", \"_id\":\"4\", \"_version\":2,\n \"_shards\":{\"total\":2, \"successful\":1, \"failed\":0}\n}\n"
},
{
"code": null,
"e": 9901,
"s": 9582,
"text": "Version of the document can be specified to delete that particular version. Routing parameter can be specified to delete the document from a particular user and the operation fails if the document does not belong to that particular user. In this operation, you can specify refresh and timeout option same like GET API."
},
{
"code": null,
"e": 10103,
"s": 9901,
"text": "Script is used for performing this operation and versioning is used to make sure that no\nupdates have happened during the get and re-index. For example, you can update the fees of school using script −"
},
{
"code": null,
"e": 10294,
"s": 10103,
"text": "POST schools/_update/4\n{\n \"script\" : {\n \"source\": \"ctx._source.name = params.sname\",\n \"lang\": \"painless\",\n \"params\" : {\n \"sname\" : \"City Wise School\"\n }\n }\n }"
},
{
"code": null,
"e": 10351,
"s": 10294,
"text": "On running the above code, we get the following result −"
},
{
"code": null,
"e": 10588,
"s": 10351,
"text": "{\n \"_index\" : \"schools\",\n \"_type\" : \"_doc\",\n \"_id\" : \"4\",\n \"_version\" : 3,\n \"result\" : \"updated\",\n \"_shards\" : {\n \"total\" : 2,\n \"successful\" : 1,\n \"failed\" : 0\n },\n \"_seq_no\" : 4,\n \"_primary_term\" : 2\n}\n"
},
{
"code": null,
"e": 10661,
"s": 10588,
"text": "You can check the update by sending get request to the updated document."
},
{
"code": null,
"e": 10694,
"s": 10661,
"text": "\n 14 Lectures \n 5 hours \n"
},
{
"code": null,
"e": 10710,
"s": 10694,
"text": " Manuj Aggarwal"
},
{
"code": null,
"e": 10743,
"s": 10710,
"text": "\n 20 Lectures \n 1 hours \n"
},
{
"code": null,
"e": 10758,
"s": 10743,
"text": " Faizan Tayyab"
},
{
"code": null,
"e": 10765,
"s": 10758,
"text": " Print"
},
{
"code": null,
"e": 10776,
"s": 10765,
"text": " Add Notes"
}
] |
Slot machine project using R programming - GeeksforGeeks | 16 Mar, 2021
The slot machine is a game, and it comprises two steps: generating three symbols randomly from 7 symbols and, for the generated three symbols, computing a score.
The symbols used in the slot machine are:
Diamond
Seven
3Bars
2Bars
1Bar
cherry
Zero
We can take any symbol. It may depend on the choice of the individual.
From the above symbols, three symbols are generated randomly.
We can get random symbols using the sample() function in R programming.
According to the generated symbols, some prize money will be awarded.
Allocation of prize money to the generated symbols also depends on the individual.
So, for easy understanding let us consider the below amount
of money as prize for this project.
Diamond Diamond Diamond -------------------> 1000
Seven Seven Seven --------------------> 800
3Bars 3Bars 3Bars ------------------> 600
2Bars 2Bars 2Bars --------------------> 400
1Bar 1Bar 1Bar ---------------------> 250
Cherry Cherry Cherry ----------------------> 100
If three symbols are generated with the combination of all
types of bars, the prize money will be 50.
If three symbols are generated with the combination of two
Cherries and the remaining one empty or 0, the prize money will be 50.
0 Cherry Cherry------------> 50
Cherry 0 Cherry ------------> 50
Cherry Cherry 0 -------------> 50
If the 3 symbols are generated with the combination of one Cherry and the remaining two are empty or 0, the prize money will be 20.
If the three symbols are zeros prize money will be obviously 0.
If we get Seven-Seven-Diamond, Diamond will be considered as Seven. If a Diamond is present, the prize money will be doubled, so the prize money will be 800 * 2 = 1600.
If we get Seven-Diamond-Diamond, here Diamond occurred twice, so the prize money will be doubled twice. The prize money will be ( 800 * 2 ) * 2.
If the Diamond has occurred with any other characters, then it is called a wild card character. Like we call joker in cards.
If we get all diamonds, prize money will be doubled, and it will be considered as ” Jackpot “.
Symbols and prize money may be different, but the concept is the same.
Let us implement a slot machine project without wild card characters.
Program 1: This code passes the replace and prob (probabilities) parameters to sample()
R
gsy <- function() {
  s <- c("Diamond", "Seven", "3Bars", "2Bars", "1Bar", "Cherry", "Zero")
  sample(s, size = 3, replace = TRUE,
         prob = c(0.02, 0.04, 0.05, 0.12, 0.15, 0.04, 0.5))
}

gs <- function(s) {
  # checking for same symbols
  s1 <- s[1] == s[2] && s[2] == s[3]
  b <- s %in% c("1Bar", "2Bars", "3Bars")
  if (s1) {
    cash <- c("Diamond" = 1000, "Seven" = 800, "3Bars" = 600,
              "2Bars" = 400, "1Bar" = 250, "Cherry" = 100, "Zero" = 0)
    cash_p <- unname(cash[s[1]])
  } else if (all(b)) {
    # checking for all bars
    cash_p <- 5
  } else {
    # checking for cherries
    cherry1 <- sum(s == "Cherry")
    cash_p <- c(0, 2, 5)[cherry1 + 1]
  }
  # checking for diamonds
  d1 <- sum(s == "Diamond")
  cash_p * 2 ^ d1
}

# run function for calling gsy function
run <- function() {
  s <- gsy()
  print(s)
  gs(s)
}

run()
Output:
Test case 1:
Test case 2:
Test case 3:
Program 2: In this code, we are not passing the probabilities parameter, so every symbol is equally likely.
R
gsy <- function() {
  s <- c("Diamond", "Seven", "3Bars", "2Bars", "1Bar", "Cherry", "Zero")
  sample(s, size = 3, replace = TRUE)
}

gs <- function(s) {
  # checking for same symbols
  s1 <- s[1] == s[2] && s[2] == s[3]
  b <- s %in% c("1Bar", "2Bars", "3Bars")
  if (s1) {
    cash <- c("Diamond" = 1000, "Seven" = 800, "3Bars" = 600,
              "2Bars" = 400, "1Bar" = 250, "Cherry" = 100, "Zero" = 0)
    cash_p <- unname(cash[s[1]])
  } else if (all(b)) {
    # checking for all bars
    cash_p <- 5
  } else {
    # checking for cherries
    cherry1 <- sum(s == "Cherry")
    cash_p <- c(0, 2, 5)[cherry1 + 1]
  }
  # checking for diamonds
  d1 <- sum(s == "Diamond")
  cash_p * 2 ^ d1
}

run <- function() {
  s <- gsy()
  print(s)
  gs(s)
}

run()
Output:
Test case 1:
Test case 2:
Test case 3:
R Language
Change Color of Bars in Barchart using ggplot2 in R
How to Change Axis Scales in R Plots?
Group by function in R using Dplyr
How to Split Column Into Multiple Columns in R DataFrame?
Replace Specific Characters in String in R
How to filter R DataFrame by values in a column?
How to filter R dataframe by multiple conditions?
R - if statement
How to import an Excel File into R ?
How to change the order of bars in bar chart in R ? | [
{
"code": null,
"e": 24875,
"s": 24847,
"text": "\n16 Mar, 2021"
},
{
"code": null,
"e": 25034,
"s": 24875,
"text": "The slot machine is a game, and it comprises two steps: Generating three symbols randomly from 7 symbols and For the generated three symbols, computing score."
},
{
"code": null,
"e": 25112,
"s": 25034,
"text": "The symbols used in the slot machine are:DiamondSeven3Bars2Bars1BarcherryZero"
},
{
"code": null,
"e": 25120,
"s": 25112,
"text": "Diamond"
},
{
"code": null,
"e": 25126,
"s": 25120,
"text": "Seven"
},
{
"code": null,
"e": 25132,
"s": 25126,
"text": "3Bars"
},
{
"code": null,
"e": 25138,
"s": 25132,
"text": "2Bars"
},
{
"code": null,
"e": 25143,
"s": 25138,
"text": "1Bar"
},
{
"code": null,
"e": 25150,
"s": 25143,
"text": "cherry"
},
{
"code": null,
"e": 25155,
"s": 25150,
"text": "Zero"
},
{
"code": null,
"e": 25226,
"s": 25155,
"text": "We can take any symbol. It may depend on the choice of the individual."
},
{
"code": null,
"e": 25288,
"s": 25226,
"text": "From the above symbols, three symbols are generated randomly."
},
{
"code": null,
"e": 25360,
"s": 25288,
"text": "We can get random symbols using the sample() function in R programming."
},
{
"code": null,
"e": 25418,
"s": 25360,
"text": "According to the symbols, some prize money will be there."
},
{
"code": null,
"e": 25501,
"s": 25418,
"text": "Allocation of prize money to the generated symbols also depends on the individual."
},
{
"code": null,
"e": 26223,
"s": 25501,
"text": "So, for easy understanding let us consider the below amount\nof money as prize for this project.\n Diamond Diamond Diamond -------------------> 1000\n Seven Seven Seven --------------------> 800\n 3Bars 3Bars 3Bars ------------------> 600\n 2Bars 2Bars 2Bars --------------------> 400\n 1Bar 1Bar 1Bar ---------------------> 250\n Cherry Cherry Cherry ----------------------> 100\n \nIf three symbols are generated with the combination of all \ntypes of bars prize money will be 50 .\nIf three symbols are generated with the combination of two \nCs and remaining one empty or 0 prize money will be 50.\n 0 Cherry Cherry------------> 50\n Cherry 0 Cherry ------------> 50\n Cherry Cherry 0 -------------> 50"
},
{
"code": null,
"e": 26350,
"s": 26223,
"text": "If the 3 symbols are generated with the combination of one Cherry and the remaining two are empty or 0 prize money will be 20."
},
{
"code": null,
"e": 26414,
"s": 26350,
"text": "If the three symbols are zeros prize money will be obviously 0."
},
{
"code": null,
"e": 26578,
"s": 26414,
"text": "If we get Seven-Seven-Diamond, Diamond will be considered as Seven. If Diamond presents the prize money will be doubled so, the prize money will be 800 * 2 = 1600."
},
{
"code": null,
"e": 26720,
"s": 26578,
"text": "If we get Seven-Diamond-Diamond, here Diamond occurred twice so, the prize money will be doubled twice. The prize money will be ( 800 * 2 )2."
},
{
"code": null,
"e": 26845,
"s": 26720,
"text": "If the Diamond has occurred with any other characters, then it is called a wild card character. Like we call joker in cards."
},
{
"code": null,
"e": 26940,
"s": 26845,
"text": "If we get all diamonds, prize money will be doubled, and it will be considered as ” Jackpot “."
},
{
"code": null,
"e": 27011,
"s": 26940,
"text": "Symbols and prize money may be different, but the concept is the same."
},
{
"code": null,
"e": 27081,
"s": 27011,
"text": "Let us implement a slot machine project without wild card characters."
},
{
"code": null,
"e": 27151,
"s": 27081,
"text": "Program1: This code will take replace and probabilities as parameters"
},
{
"code": null,
"e": 27153,
"s": 27151,
"text": "R"
},
{
"code": "gsy <- function() { s <- c(\"Diamond\",\"Seven\",\"3Bars\",\"2Bars\",\"1Bar\",\"Cherry\",\"Zero\") sample(s,size=3,replace=TRUE,prob=c(0.02,0.04,0.05,0.12,0.15,0.04,0.5))} gs<- function(s) { # checking for same symbols s1<- s[1] == s[2] && s[2] == s[3] b <- s %in% c(\"1Bar\",\"2Bars\",\"3bars\") if (s1) { cash <- c(\"Diamond\"=1000,\"7\"=800,\"3Bars\"=600,\"2Bars\"=400, \"1Bar\"=250,\"Cherry\"=100,\"Zero\"=0) cash_p <- unname(cash[s[1]]) } # checking for all bars else if (all(b)) { cash_p <- 5 } # checking for cherries else { cherry1 <- sum(s == \"Cherry\") cash_p <- c(0,2,5)[cherry1+1] } # checking for diamondsd1 <- sum(s == \"Diamond\") cash_p * 2 ^ d1} # run function for calling gsy functionrun <- function() { s <- gsy() print(s) gs(s)} run()",
"e": 27954,
"s": 27153,
"text": null
},
{
"code": null,
"e": 27962,
"s": 27954,
"text": "Output:"
},
{
"code": null,
"e": 27974,
"s": 27962,
"text": "Testcase !:"
},
{
"code": null,
"e": 27987,
"s": 27974,
"text": "Test Case 2:"
},
{
"code": null,
"e": 28000,
"s": 27987,
"text": "Test case 3:"
},
{
"code": null,
"e": 28071,
"s": 28000,
"text": "Program 2: In this code, we are not passing probabilities parameters. "
},
{
"code": null,
"e": 28073,
"s": 28071,
"text": "R"
},
{
"code": "gsy <- function() { s <- c(\"Diamond\",\"Seven\",\"3Bars\",\"2Bars\",\"1Bar\",\"Cherry\",\"Zero\") sample(s,size=3,replace=TRUE)} gs<- function(s) { # checking for same symbols s1<- s[1] == s[2] && s[2] == s[3] b <- s %in% c(\"1Bar\",\"2Bars\",\"3bars\") if (s1) { cash <- c(\"Diamond\"=1000,\"7\"=800,\"3Bars\"=600,\"2Bars\"=400, \"1Bar\"=250,\"Cherry\"=100,\"Zero\"=0) cash_p <- unname(cash[s[1]]) } # checking for all bars else if (all(b)) { cash_p <- 5 } else { # checking for cherries cherry1 <- sum(s == \"Cherry\") cash_p <- c(0,2,5)[cherry1+1] } # checking for diamondsd1 <- sum(s == \"Diamond\") cash_p * 2 ^ d1} run <- function() { s <- gsy() print(s) gs(s)}run()",
"e": 28782,
"s": 28073,
"text": null
},
{
"code": null,
"e": 28790,
"s": 28782,
"text": "Output:"
},
{
"code": null,
"e": 28804,
"s": 28790,
"text": " Test case 1:"
},
{
"code": null,
"e": 28817,
"s": 28804,
"text": "Test case 2:"
},
{
"code": null,
"e": 28830,
"s": 28817,
"text": "Test case 3:"
},
{
"code": null,
"e": 28841,
"s": 28830,
"text": "R Language"
},
{
"code": null,
"e": 28939,
"s": 28841,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28948,
"s": 28939,
"text": "Comments"
},
{
"code": null,
"e": 28961,
"s": 28948,
"text": "Old Comments"
},
{
"code": null,
"e": 29013,
"s": 28961,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 29051,
"s": 29013,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 29086,
"s": 29051,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 29144,
"s": 29086,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 29187,
"s": 29144,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 29236,
"s": 29187,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 29286,
"s": 29236,
"text": "How to filter R dataframe by multiple conditions?"
},
{
"code": null,
"e": 29303,
"s": 29286,
"text": "R - if statement"
},
{
"code": null,
"e": 29340,
"s": 29303,
"text": "How to import an Excel File into R ?"
}
] |
Assignment, Shallow Copy, Or Deep Copy? | by Weilin Li | Towards Data Science | from copy import copy, deepcopy
The goal of this article is to describe what will happen in memory when we
Assign a variable B = A ,
Shallow copy it C = copy(A) , or
Deep copy it D = deepcopy(A).
I first describe a bit about memory management and optimization in Python. After laying the ground, I explain the difference between assignment statement, shallow copy, and deep copy. I then summarize the difference in a table.
If you prefer watching a video to reading an article, you can find the complementary video here.
int, float, list, dict, class instances, ... they are all objects in Python. In the CPython implementation, the built-in function id() returns the memory address of an object —
>>> L1 = [1, 2, 3]>>> id(L1)3061530120
If we create a new variable L2 that refers to an object with the same value as L1, L2 will have a new memory address —
>>> L2 = [1, 2, 3]>>> id(L2)3061527304
Every time a new object is created, it will have a new memory address. Except when it is —
a very short string
an integer within [-5, 256]
an empty immutable container (e.g. an empty tuple)
Let’s see an example with an integer object. Both x and y refer to the same value 10. While L1 and L2 in the previous example had two different memory addresses, x and y share the same memory address —
>>> x = 10>>> y = 10>>> id(x)2301840>>> id(y)2301840
That’s because, in those three exceptions, Python optimizes memory by having the second variable refer to the same object in memory — some call it a “shared object”.
Keep the concept of shared object in mind because we’ll need it later when we create a deepcopy of an object.
It says in the Python documentation that “Assignment statements in Python do not copy objects, they create bindings between a target and an object.” That means when we create a variable by assignment, the new variable refers to the same object as the original variable does —
>>> A = [1, 2, [10, 11], 3, [20, 21]]>>> B = A>>> id(A)3061527080>>> id(B)3061527080
Because the new variable B and the original variable A share the same object (i.e. the same list), they also contain same elements —
>>> id(A[2])3061527368>>> id(B[2])3061527368
As illustrated in the figure below, A and B share the same id, i.e., they refer to the same object in memory. And they contain the same elements as well.
When we create a variable by shallow copy, the new variable refers to a new object —
>>> A = [1, 2, [10, 11], 3, [20, 21]]>>> C = copy(A)>>> id(A)3062428488>>> id(C)3062428520
Although A and C refers to two different objects (i.e. two lists with different memory addresses), elements in the two lists refer to the same objects —
>>> id(A[0])2301696>>> id(C[0])2301696>>> id(A[2])3062464904>>> id(C[2])3062464904
The figure below illustrates how elements in A refer to same objects as elements in C.
Similar to shallow copy, when we create a variable by deep copy, the new variable refers to a new object —
>>> A = [1, 2, [10, 11], 3, [20, 21]]>>> D = deepcopy(A)>>> id(A)3062727496>>> id(D)3062428488
As described in Python documentation —
The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances):
- A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original.
- A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original.
So different from shallow copy, elements in the two lists now refer to different objects —
>>> id(A[0])2301696>>> id(D[0])2301696>>> id(A[2])3062464648>>> id(D[2])3062466376
But why do A[0] and D[0] share the same object (i.e. have the same memory address)? Because they both refer to integers, which is one of the three exceptions due to memory optimization we mentioned in the beginning.
The figure below shows A and D refer to two different lists in memory, and elements in A refer to different objects than elements in D except integer elements due to memory optimization.
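To see the practical consequence, here is a small script (not from the original article; the values are just illustrative) that mutates the nested list through A after each kind of copy:

from copy import copy, deepcopy

A = [1, 2, [10, 11], 3, [20, 21]]
B = A            # assignment: B is the very same object as A
C = copy(A)      # shallow copy: new outer list, shared inner objects
D = deepcopy(A)  # deep copy: new outer list and new inner lists

A[2].append(12)  # mutate a nested list through A

print(B[2])  # [10, 11, 12] -> B sees the change (same object as A)
print(C[2])  # [10, 11, 12] -> C shares the inner list with A
print(D[2])  # [10, 11]     -> D has its own copy of the inner list

In other words, only the deep copy is insulated from changes made to mutable elements of the original.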
If there is one take-away from this article, it must be the table below. Variable assignment doesn’t copy objects, so A and B have the same memory address and contain same elements. Shallow copy creates a new object for C, but elements in C still refer to the same objects as elements in A. Deep copy also creates a new object for D, and elements in D refer to different objects than those elements in A with three types of exceptions.
This article is inspired by...
Python documentation on copy
Understanding Python variables and Memory Management | [
{
"code": null,
"e": 204,
"s": 172,
"text": "from copy import copy, deepcopy"
},
{
"code": null,
"e": 279,
"s": 204,
"text": "The goal of this article is to describe what will happen in memory when we"
},
{
"code": null,
"e": 305,
"s": 279,
"text": "Assign a variable B = A ,"
},
{
"code": null,
"e": 338,
"s": 305,
"text": "Shallow copy it C = copy(A) , or"
},
{
"code": null,
"e": 368,
"s": 338,
"text": "Deep copy it D = deepcopy(A)."
},
{
"code": null,
"e": 596,
"s": 368,
"text": "I first describe a bit about memory management and optimization in Python. After laying the ground, I explain the difference between assignment statement, shallow copy, and deep copy. I then summarize the difference in a table."
},
{
"code": null,
"e": 693,
"s": 596,
"text": "If you prefer watching a video to reading an article, you can find the complementary video here."
},
{
"code": null,
"e": 858,
"s": 693,
"text": "int, float, list, dict, class instances, ... they are all objects in Python. In CPython implementation, built-in function id() returns memory address of an object —"
},
{
"code": null,
"e": 897,
"s": 858,
"text": ">>> L1 = [1, 2, 3]>>> id(L1)3061530120"
},
{
"code": null,
"e": 1016,
"s": 897,
"text": "If we create a new variable L2 that refers to an object with the same value as L1, L2 will have a new memory address —"
},
{
"code": null,
"e": 1055,
"s": 1016,
"text": ">>> L2 = [1, 2, 3]>>> id(L2)3061527304"
},
{
"code": null,
"e": 1146,
"s": 1055,
"text": "Every time a new object is created, it will have a new memory address. Except when it is —"
},
{
"code": null,
"e": 1236,
"s": 1146,
"text": "a very short stringan integer within [-5, 256]an empty immutable containers (e.g. tuples)"
},
{
"code": null,
"e": 1256,
"s": 1236,
"text": "a very short string"
},
{
"code": null,
"e": 1284,
"s": 1256,
"text": "an integer within [-5, 256]"
},
{
"code": null,
"e": 1328,
"s": 1284,
"text": "an empty immutable containers (e.g. tuples)"
},
{
"code": null,
"e": 1530,
"s": 1328,
"text": "Let’s see an example with an integer object. Both x and y refer to the same value 10. While L1 and L2 in the previous example had two different memory addresses, x and y share the same memory address —"
},
{
"code": null,
"e": 1583,
"s": 1530,
"text": ">>> x = 10>>> y = 10>>> id(x)2301840>>> id(y)2301840"
},
{
"code": null,
"e": 1748,
"s": 1583,
"text": "That’s because, in those three exceptions, Python optimizes memory by having the second variable refer to the same object in memory, — some call it “shared object”."
},
{
"code": null,
"e": 1858,
"s": 1748,
"text": "Keep the concept of shared object in mind because we’ll need it later when we create a deepcopy of an object."
},
{
"code": null,
"e": 2134,
"s": 1858,
"text": "It says in the Python documentation that “Assignment statements in Python do not copy objects, they create bindings between a target and an object.” That means when we create a variable by assignment, the new variable refers to the same object as the original variable does —"
},
{
"code": null,
"e": 2219,
"s": 2134,
"text": ">>> A = [1, 2, [10, 11], 3, [20, 21]]>>> B = A>>> id(A)3061527080>>> id(B)3061527080"
},
{
"code": null,
"e": 2352,
"s": 2219,
"text": "Because the new variable B and the original variable A share the same object (i.e. the same list), they also contain same elements —"
},
{
"code": null,
"e": 2397,
"s": 2352,
"text": ">>> id(A[2])3061527368>>> id(B[2])3061527368"
},
{
"code": null,
"e": 2551,
"s": 2397,
"text": "As illustrated in the figure below, A and B share the same id, i.e., they refer to the same object in memory. And they contain the same elements as well."
},
{
"code": null,
"e": 2636,
"s": 2551,
"text": "When we create a variable by shallow copy, the new variable refers to a new object —"
},
{
"code": null,
"e": 2727,
"s": 2636,
"text": ">>> A = [1, 2, [10, 11], 3, [20, 21]]>>> C = copy(A)>>> id(A)3062428488>>> id(C)3062428520"
},
{
"code": null,
"e": 2880,
"s": 2727,
"text": "Although A and C refers to two different objects (i.e. two lists with different memory addresses), elements in the two lists refer to the same objects —"
},
{
"code": null,
"e": 2963,
"s": 2880,
"text": ">>> id(A[0])2301696>>> id(C[0])2301696>>> id(A[2])3062464904>>> id(C[2])3062464904"
},
{
"code": null,
"e": 3050,
"s": 2963,
"text": "The figure below illustrates how elements in A refer to same objects as elements in C."
},
{
"code": null,
"e": 3157,
"s": 3050,
"text": "Similar to shallow copy, when we create a variable by deep copy, the new variable refers to a new object —"
},
{
"code": null,
"e": 3252,
"s": 3157,
"text": ">>> A = [1, 2, [10, 11], 3, [20, 21]]>>> D = deepcopy(A)>>> id(A)3062727496>>> id(D)3062428488"
},
{
"code": null,
"e": 3291,
"s": 3252,
"text": "As described in Python documentation —"
},
{
"code": null,
"e": 3446,
"s": 3291,
"text": "The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances):"
},
{
"code": null,
"e": 3595,
"s": 3446,
"text": "- A shallow copy constructs a new compound object and then (to the extent possible) inserts references into it to the objects found in the original."
},
{
"code": null,
"e": 3726,
"s": 3595,
"text": "- A deep copy constructs a new compound object and then, recursively, inserts copies into it of the objects found in the original."
},
{
"code": null,
"e": 3817,
"s": 3726,
"text": "So different from shallow copy, elements in the two lists now refer to different objects —"
},
{
"code": null,
"e": 3900,
"s": 3817,
"text": ">>> id(A[0])2301696>>> id(D[0])2301696>>> id(A[2])3062464648>>> id(D[2])3062466376"
},
{
"code": null,
"e": 4116,
"s": 3900,
"text": "But why do A[0] and D[0] share the same object (i.e. have the same memory address)? Because they both refer to integers, which is one of the three exceptions due to memory optimization we mentioned in the beginning."
},
{
"code": null,
"e": 4303,
"s": 4116,
"text": "The figure below shows A and D refer to two different lists in memory, and elements in A refer to different objects than elements in D except integer elements due to memory optimization."
},
{
"code": null,
"e": 4739,
"s": 4303,
"text": "If there is one take-away from this article, it must be the table below. Variable assignment doesn’t copy objects, so A and B have the same memory address and contain same elements. Shallow copy creates a new object for C, but elements in C still refer to the same objects as elements in A. Deep copy also creates a new object for D, and elements in D refer to different objects than those elements in A with three types of exceptions."
},
{
"code": null,
"e": 4770,
"s": 4739,
"text": "This article is inspired by..."
},
{
"code": null,
"e": 4799,
"s": 4770,
"text": "Python documentation on copy"
}
] |
Java Program to display double and single quote in a string | The following are our strings with single and double quote.
String str1 = "This is Jack's mobile";
String str2 = "\"This is it\"!";
Above, for a single quote, we can simply write it in the string as it is, like this:
This is Jack's mobile
However, for double quotes, use the following and add a backslash before the double quote at the beginning as well as at the end.
String str2 = "\"This is it\"!";
The following is an example.
public class Demo {
public static void main(String[] args) {
String str1 = "This is Jack's mobile";
String str2 = "\"This is it\"!";
System.out.println("Displaying Single Quote: "+str1);
System.out.println("Displaying Double Quotes: "+str2);
}
}
Displaying Single Quote: This is Jack's mobile
Displaying Double Quotes: "This is it"! | [
{
"code": null,
"e": 1122,
"s": 1062,
"text": "The following are our strings with single and double quote."
},
{
"code": null,
"e": 1194,
"s": 1122,
"text": "String str1 = \"This is Jack's mobile\";\nString str2 = \"\\\"This is it\\\"!\";"
},
{
"code": null,
"e": 1256,
"s": 1194,
"text": "Above, for single quote, we have to mention it normally like."
},
{
"code": null,
"e": 1278,
"s": 1256,
"text": "This is Jack's mobile"
},
{
"code": null,
"e": 1380,
"s": 1278,
"text": "However, for double quotes, use the following and add a slash in the beginning as well as at the end."
},
{
"code": null,
"e": 1413,
"s": 1380,
"text": "String str2 = \"\\\"This is it\\\"!\";"
},
{
"code": null,
"e": 1442,
"s": 1413,
"text": "The following is an example."
},
{
"code": null,
"e": 1453,
"s": 1442,
"text": " Live Demo"
},
{
"code": null,
"e": 1729,
"s": 1453,
"text": "public class Demo {\n public static void main(String[] args) {\n String str1 = \"This is Jack's mobile\";\n String str2 = \"\\\"This is it\\\"!\";\n System.out.println(\"Displaying Single Quote: \"+str1);\n System.out.println(\"Displaying Double Quotes: \"+str2);\n }\n}"
},
{
"code": null,
"e": 1816,
"s": 1729,
"text": "Displaying Single Quote: This is Jack's mobile\nDisplaying Double Quotes: \"This is it\"!"
}
] |
Java Examples - Solving Tower of Hanoi | How to use method for solving Tower of Hanoi problem?
This example displays the way of using a method for solving the Tower of Hanoi problem (for 3 disks).
public class MainClass {
public static void main(String[] args) {
int nDisks = 3;
doTowers(nDisks, 'A', 'B', 'C');
}
public static void doTowers(int topN, char from, char inter, char to) {
if (topN == 1) {
System.out.println("Disk 1 from " + from + " to " + to);
} else {
doTowers(topN - 1, from, to, inter);
System.out.println("Disk " + topN + " from " + from + " to " + to);
doTowers(topN - 1, inter, from, to);
}
}
}
The above code sample will produce the following result.
Disk 1 from A to C
Disk 2 from A to B
Disk 1 from C to B
Disk 3 from A to C
Disk 1 from B to A
Disk 2 from B to C
Disk 1 from A to C
The following is another sample example of the Tower of Hanoi.
public class TowersOfHanoi {
public static void move(int n, int startPole, int endPole) {
if (n == 0) {
return;
}
int intermediatePole = 6 - startPole - endPole;
move(n-1, startPole, intermediatePole);
System.out.println("Move " +n + " from " + startPole + " to " +endPole);
move(n-1, intermediatePole, endPole);
}
public static void main(String[] args) {
move(5, 1, 3);
}
}
The above code sample will produce the following result.
Move 1 from 1 to 3
Move 2 from 1 to 2
Move 1 from 3 to 2
Move 3 from 1 to 3
Move 1 from 2 to 1
Move 2 from 2 to 3
Move 1 from 1 to 3
Move 4 from 1 to 2
Move 1 from 3 to 2
Move 2 from 3 to 1
Move 1 from 2 to 1
Move 3 from 3 to 2
Move 1 from 1 to 3
Move 2 from 1 to 2
Move 1 from 3 to 2
Move 5 from 1 to 3
Move 1 from 2 to 1
Move 2 from 2 to 3
Move 1 from 1 to 3
Move 3 from 2 to 1
Move 1 from 3 to 2
Move 2 from 3 to 1
Move 1 from 2 to 1
Move 4 from 2 to 3
Move 1 from 1 to 3
Move 2 from 1 to 2
Move 1 from 3 to 2
Move 3 from 1 to 3
Move 1 from 2 to 1
Move 2 from 2 to 3
Move 1 from 1 to 3
Bookmark this page | [
{
"code": null,
"e": 2123,
"s": 2068,
"text": "How to use method for solving Tower of Hanoi problem?"
},
{
"code": null,
"e": 2220,
"s": 2123,
"text": "This example displays the way of using method for solving Tower of Hanoi problem( for 3 disks)."
},
{
"code": null,
"e": 2718,
"s": 2220,
"text": "public class MainClass {\n public static void main(String[] args) {\n int nDisks = 3;\n doTowers(nDisks, 'A', 'B', 'C');\n }\n public static void doTowers(int topN, char from, char inter, char to) {\n if (topN == 1) {\n System.out.println(\"Disk 1 from \" + from + \" to \" + to);\n } else {\n doTowers(topN - 1, from, to, inter);\n System.out.println(\"Disk \" + topN + \" from \" + from + \" to \" + to);\n doTowers(topN - 1, inter, from, to);\n }\n }\n}"
},
{
"code": null,
"e": 2775,
"s": 2718,
"text": "The above code sample will produce the following result."
},
{
"code": null,
"e": 2909,
"s": 2775,
"text": "Disk 1 from A to C\nDisk 2 from A to B\nDisk 1 from C to B\nDisk 3 from A to C\nDisk 1 from B to A\nDisk 2 from B to C\nDisk 1 from A to C\n"
},
{
"code": null,
"e": 2970,
"s": 2909,
"text": "The following is an another sample example of Tower of Hanoi"
},
{
"code": null,
"e": 3410,
"s": 2970,
"text": "public class TowersOfHanoi {\n public static void move(int n, int startPole, int endPole) {\n if (n == 0) {\n return;\n } \n int intermediatePole = 6 - startPole - endPole;\n move(n-1, startPole, intermediatePole);\n System.out.println(\"Move \" +n + \" from \" + startPole + \" to \" +endPole);\n move(n-1, intermediatePole, endPole);\n } \n public static void main(String[] args) {\n move(5, 1, 3);\n }\n}"
},
{
"code": null,
"e": 3467,
"s": 3410,
"text": "The above code sample will produce the following result."
},
{
"code": null,
"e": 4057,
"s": 3467,
"text": "Move 1 from 1 to 3\nMove 2 from 1 to 2\nMove 1 from 3 to 2\nMove 3 from 1 to 3\nMove 1 from 2 to 1\nMove 2 from 2 to 3\nMove 1 from 1 to 3\nMove 4 from 1 to 2\nMove 1 from 3 to 2\nMove 2 from 3 to 1\nMove 1 from 2 to 1\nMove 3 from 3 to 2\nMove 1 from 1 to 3\nMove 2 from 1 to 2\nMove 1 from 3 to 2\nMove 5 from 1 to 3\nMove 1 from 2 to 1\nMove 2 from 2 to 3\nMove 1 from 1 to 3\nMove 3 from 2 to 1\nMove 1 from 3 to 2\nMove 2 from 3 to 1\nMove 1 from 2 to 1\nMove 4 from 2 to 3\nMove 1 from 1 to 3\nMove 2 from 1 to 2\nMove 1 from 3 to 2\nMove 3 from 1 to 3\nMove 1 from 2 to 1\nMove 2 from 2 to 3\nMove 1 from 1 to 3\n"
},
{
"code": null,
"e": 4064,
"s": 4057,
"text": " Print"
},
{
"code": null,
"e": 4075,
"s": 4064,
"text": " Add Notes"
}
] |
Batch Script - Bitwise Operators | The following code snippet shows how the various operators can be used.
@echo off
SET /A "Result = 48 & 23"
echo %Result%
SET /A "Result = 16 | 16"
echo %Result%
SET /A "Result = 31 ^ 15"
echo %Result%
The above command produces the following output.
16
16
16
Redirection is the concept of taking the output of a command and redirecting that output to a different output medium. The following commands are available for redirection.
command > filename − Redirect command output to a file.
command > filename − Redirect command output to a file.
command >> filename − APPEND into a file.
command >> filename − APPEND into a file.
command < filename − Type a text file and pass the text to command.
command < filename − Type a text file and pass the text to command.
command 2> file − Write standard error of command to file (OS/2 and NT).
command 2> file − Write standard error of command to file (OS/2 and NT).
command 2>> file − Append standard error of command to file (OS/2 and NT).
command 2>> file − Append standard error of command to file (OS/2 and NT).
commandA | commandB − Redirect standard output of commandA to standard input of command.
commandA | commandB − Redirect standard output of commandA to standard input of command.
The following code snippet shows how the various redirection operations can be used.
This command redirects command output to a file.
@echo off
ipconfig>C:\details.txt
The output of the above program would be that all the details of the ipconfig command will be sent to the file C:\details.txt. If you open the above file, you might see information similar to the following.
Windows IP Configuration
Wireless LAN adapter Local Area Connection* 11:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Ethernet adapter Ethernet:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Wireless LAN adapter Wi-Fi:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Tunnel adapter Teredo Tunneling Pseudo-Interface:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
This command appends the output of the command into a file.
@echo off
systeminfo>>C:\details.txt
The output of the above program would be that all the details of the systeminfo command will be appended to the file C:\details.txt. If you open the above file, you might see information similar to the following.
Windows IP Configuration
Wireless LAN adapter Local Area Connection* 11:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Ethernet adapter Ethernet:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Wireless LAN adapter Wi-Fi:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Tunnel adapter Teredo Tunneling Pseudo-Interface:
Media State . . . . . . . . . . . : Media disconnected
Connection-specific DNS Suffix . :
Host Name: WIN-50GP30FGO75
OS Name: Microsoft Windows Server 2012 R2 Standard
OS Version: 6.3.9600 N/A Build 9600
OS Manufacturer: Microsoft Corporation
OS Configuration: Standalone Server
OS Build Type: Multiprocessor Free
Registered Owner: Windows User
Registered Organization:
Product ID: 00252-70000-00000-AA535
Original Install Date: 12/13/2015, 12:10:16 AM
System Boot Time: 12/30/2015, 5:52:11 AM
System Manufacturer: LENOVO
System Model: 20287
System Type: x64-based PC
This command types a text file and passes the text to command.
@echo off
SORT < Example.txt
If you define a file called Example.txt which has the following data.
4
3
2
1
The output of the above program would be
1
2
3
4
This command writes the standard error of command to file (OS/2 and NT).
DIR C:\ >List_of_C.txt 2>errorlog.txt
In the above example, if there is any error in processing the command of the directory listing of C, then it will be sent to the log file errorlog.txt.
Appends the standard error of command to file (OS/2 and NT).
DIR C:\ >List_of_C.txt 2>errorlog.txt
DIR D:\ >List_of_C.txt 2>>errorlog.txt
In the above example, if there is any error in processing the command of the directory listing of D, then it will be appended to the log file errorlog.txt.
This command redirects standard output of commandA to standard input of command.
Echo y | del *.txt
The above command will pass the option ‘y’ (which stands for ‘Yes’) to the del command. This will cause the deletion of all files with the .txt extension.
Bookmark this page | [
{
"code": null,
"e": 2241,
"s": 2169,
"text": "The following code snippet shows how the various operators can be used."
},
{
"code": null,
"e": 2371,
"s": 2241,
"text": "@echo off\nSET /A \"Result = 48 & 23\"\necho %Result%\nSET /A \"Result = 16 | 16\"\necho %Result%\nSET /A \"Result = 31 ^ 15\"\necho %Result%"
},
{
"code": null,
"e": 2420,
"s": 2371,
"text": "The above command produces the following output."
},
{
"code": null,
"e": 2430,
"s": 2420,
"text": "16\n16\n16\n"
},
{
"code": null,
"e": 2602,
"s": 2430,
"text": "Redirection is a concept of taking the output of a command and re-directing that output to a different output media. The following commands are available for re-direction."
},
{
"code": null,
"e": 2658,
"s": 2602,
"text": "command > filename − Redirect command output to a file."
},
{
"code": null,
"e": 2714,
"s": 2658,
"text": "command > filename − Redirect command output to a file."
},
{
"code": null,
"e": 2756,
"s": 2714,
"text": "command >> filename − APPEND into a file."
},
{
"code": null,
"e": 2798,
"s": 2756,
"text": "command >> filename − APPEND into a file."
},
{
"code": null,
"e": 2866,
"s": 2798,
"text": "command < filename − Type a text file and pass the text to command."
},
{
"code": null,
"e": 2934,
"s": 2866,
"text": "command < filename − Type a text file and pass the text to command."
},
{
"code": null,
"e": 3007,
"s": 2934,
"text": "command 2> file − Write standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 3080,
"s": 3007,
"text": "command 2> file − Write standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 3155,
"s": 3080,
"text": "command 2>> file − Append standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 3230,
"s": 3155,
"text": "command 2>> file − Append standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 3319,
"s": 3230,
"text": "commandA | commandB − Redirect standard output of commandA to standard input of command."
},
{
"code": null,
"e": 3408,
"s": 3319,
"text": "commandA | commandB − Redirect standard output of commandA to standard input of command."
},
{
"code": null,
"e": 3493,
"s": 3408,
"text": "The following code snippet shows how the various redirection operations can be used."
},
{
"code": null,
"e": 3542,
"s": 3493,
"text": "This command redirects command output to a file."
},
{
"code": null,
"e": 3577,
"s": 3542,
"text": "@echo off \nipconfig>C:\\details.txt"
},
{
"code": null,
"e": 3799,
"s": 3577,
"text": "The output of the above program would be that all the details of the ipconfig command will be sent to the file C:\\details.txt. If you open the above file, you might see the information similar to the one as the following."
},
{
"code": null,
"e": 4362,
"s": 3799,
"text": "Windows IP Configuration\nWireless LAN adapter Local Area Connection* 11:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nEthernet adapter Ethernet:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nWireless LAN adapter Wi-Fi:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nTunnel adapter Teredo Tunneling Pseudo-Interface:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\n"
},
{
"code": null,
"e": 4422,
"s": 4362,
"text": "This command appends the output of the command into a file."
},
{
"code": null,
"e": 4459,
"s": 4422,
"text": "@echo off\nsysteminfo>>C:\\details.txt"
},
{
"code": null,
"e": 4686,
"s": 4459,
"text": "The output of the above program would be that all the details of the systeminfo command will be appended to the file C:\\details.txt. if you open the above file you might see the information similar to the one as the following."
},
{
"code": null,
"e": 5864,
"s": 4686,
"text": "Windows IP Configuration\nWireless LAN adapter Local Area Connection* 11:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nEthernet adapter Ethernet:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nWireless LAN adapter Wi-Fi:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nTunnel adapter Teredo Tunneling Pseudo-Interface:\n Media State . . . . . . . . . . . : Media disconnected\n Connection-specific DNS Suffix . :\nHost Name: WIN-50GP30FGO75\nOS Name: Microsoft Windows Server 2012 R2 Standard\nOS Version: 6.3.9600 N/A Build 9600\nOS Manufacturer: Microsoft Corporation\nOS Configuration: Standalone Server\nOS Build Type: Multiprocessor Free\nRegistered Owner: Windows User\nRegistered Organization:\nProduct ID: 00252-70000-00000-AA535\nOriginal Install Date: 12/13/2015, 12:10:16 AM\nSystem Boot Time: 12/30/2015, 5:52:11 AM\nSystem Manufacturer: LENOVO\nSystem Model: 20287\nSystem Type: x64-based PC\n"
},
{
"code": null,
"e": 5927,
"s": 5864,
"text": "This command types a text file and passes the text to command."
},
{
"code": null,
"e": 5956,
"s": 5927,
"text": "@echo off\nSORT < Example.txt"
},
{
"code": null,
"e": 6026,
"s": 5956,
"text": "If you define a file called Example.txt which has the following data."
},
{
"code": null,
"e": 6035,
"s": 6026,
"text": "4\n3\n2\n1\n"
},
{
"code": null,
"e": 6076,
"s": 6035,
"text": "The output of the above program would be"
},
{
"code": null,
"e": 6085,
"s": 6076,
"text": "1\n2\n3\n4\n"
},
{
"code": null,
"e": 6158,
"s": 6085,
"text": "This command writes the standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 6196,
"s": 6158,
"text": "DIR C:\\ >List_of_C.txt 2>errorlog.txt"
},
{
"code": null,
"e": 6348,
"s": 6196,
"text": "In the above example, if there is any error in processing the command of the directory listing of C, then it will be sent to the log file errorlog.txt."
},
{
"code": null,
"e": 6409,
"s": 6348,
"text": "Appends the standard error of command to file (OS/2 and NT)."
},
{
"code": null,
"e": 6486,
"s": 6409,
"text": "DIR C:\\ >List_of_C.txt 2>errorlog.txt\nDIR D:\\ >List_of_C.txt 2>>errorlog.txt"
},
{
"code": null,
"e": 6642,
"s": 6486,
"text": "In the above example, if there is any error in processing the command of the directory listing of D, then it will be appended to the log file errorlog.txt."
},
{
"code": null,
"e": 6723,
"s": 6642,
"text": "This command redirects standard output of commandA to standard input of command."
},
{
"code": null,
"e": 6742,
"s": 6723,
"text": "Echo y | del *.txt"
},
{
"code": null,
"e": 6908,
"s": 6742,
"text": "The above command will pass the option of ‘y’ which is the value of ‘Yes’ to the command of del. This will cause the deletion of all files with the extension of txt."
},
{
"code": null,
"e": 6915,
"s": 6908,
"text": " Print"
},
{
"code": null,
"e": 6926,
"s": 6915,
"text": " Add Notes"
}
] |
How to implement interface in anonymous class in C#? | No, anonymous types cannot implement an interface. We need to create your own
type.
Anonymous types provide a convenient way to encapsulate a set of read-only
properties into a single object without having to explicitly define a type first.
The type name is generated by the compiler and is not available at the source code
level. The type of each property is inferred by the compiler.
You create anonymous types by using the new operator together with an object
initializer.
class Program{
public static void Main(){
var v = new { Amount = 108, Message = "Test" };
Console.WriteLine(v.Amount + v.Message);
Console.ReadLine();
}
}
108Test | [
{
"code": null,
"e": 1146,
"s": 1062,
"text": "No, anonymous types cannot implement an interface. We need to create your own\ntype."
},
{
"code": null,
"e": 1303,
"s": 1146,
"text": "Anonymous types provide a convenient way to encapsulate a set of read-only\nproperties into a single object without having to explicitly define a type first."
},
{
"code": null,
"e": 1448,
"s": 1303,
"text": "The type name is generated by the compiler and is not available at the source code\nlevel. The type of each property is inferred by the compiler."
},
{
"code": null,
"e": 1538,
"s": 1448,
"text": "You create anonymous types by using the new operator together with an object\ninitializer."
},
{
"code": null,
"e": 1717,
"s": 1538,
"text": "class Program{\n public static void Main(){\n var v = new { Amount = 108, Message = \"Test\" };\n Console.WriteLine(v.Amount + v.Message);\n Console.ReadLine();\n }\n}"
},
{
"code": null,
"e": 1725,
"s": 1717,
"text": "108Test"
}
] |
Convert elements of a Vector to Strings in R Language - toString() Function - GeeksforGeeks | 25 Oct, 2021
toString() function in R Programming Language is used to produce a single character string describing an R object.
Syntax: toString(x, width = NULL)
Parameters:
x: R object
width: Suggestion for the maximum field width. Values of NULL or 0 indicate no maximum. The minimum value accepted is 6 and smaller values are taken as 6
R
# R program to illustrate# toString function # Initializing a string vectorx <- c("GFG", "Geeks", "GeeksforGeekss") # Calling the toString() functiontoString(x)
Output :
[1] "GFG, Geeks, GeeksforGeekss"
R
# R program to illustrate# toString function # Initializing a string vectorx <- c("GFG", "Geeks", "GeeksforGeekss") # Calling the toString() functiontoString(x, width = 2)toString(x, width = 8)toString(x, width = 10)
Output:
[1] "GF...."
[1] "GFG, ...."
[1] "GFG, G...."
R
# Matrix having 3 rows and 3 columns# filled by a single constant 5mat <- (matrix(5, 3, 3))print(mat)str <- toString(mat)print("String")print(str)
Output:
[,1] [,2] [,3]
[1,] 5 5 5
[2,] 5 5 5
[3,] 5 5 5
[1] "String"
[1] "5, 5, 5, 5, 5, 5, 5, 5, 5"
kumar_satyam
R String-Functions
R Vector-Function
R Language
How to Replace specific values in column in R DataFrame ?
How to change Row Names of DataFrame in R ?
Filter data by multiple conditions in R using Dplyr
Change Color of Bars in Barchart using ggplot2 in R
Loops in R (for, while, repeat)
Printing Output of an R Program
How to Change Axis Scales in R Plots?
Group by function in R using Dplyr
How to Split Column Into Multiple Columns in R DataFrame?
K-Means Clustering in R Programming | [
{
"code": null,
"e": 24393,
"s": 24365,
"text": "\n25 Oct, 2021"
},
{
"code": null,
"e": 24508,
"s": 24393,
"text": "toString() function in R Programming Language is used to produce a single character string describing an R object."
},
{
"code": null,
"e": 24542,
"s": 24508,
"text": "Syntax: toString(x, width = NULL)"
},
{
"code": null,
"e": 24555,
"s": 24542,
"text": "Parameters: "
},
{
"code": null,
"e": 24567,
"s": 24555,
"text": "x: R object"
},
{
"code": null,
"e": 24723,
"s": 24567,
"text": "width: Suggestion for the maximum field width. Values of NULL or 0 indicate no maximum. The minimum value accepted is 6 and smaller values are taken as 6 "
},
{
"code": null,
"e": 24725,
"s": 24723,
"text": "R"
},
{
"code": "# R program to illustrate# toString function # Initializing a string vectorx <- c(\"GFG\", \"Geeks\", \"GeeksforGeekss\") # Calling the toString() functiontoString(x)",
"e": 24886,
"s": 24725,
"text": null
},
{
"code": null,
"e": 24896,
"s": 24886,
"text": "Output : "
},
{
"code": null,
"e": 24929,
"s": 24896,
"text": "[1] \"GFG, Geeks, GeeksforGeekss\""
},
{
"code": null,
"e": 24931,
"s": 24929,
"text": "R"
},
{
"code": "# R program to illustrate# toString function # Initializing a string vectorx <- c(\"GFG\", \"Geeks\", \"GeeksforGeekss\") # Calling the toString() functiontoString(x, width = 2)toString(x, width = 8)toString(x, width = 10)",
"e": 25148,
"s": 24931,
"text": null
},
{
"code": null,
"e": 25157,
"s": 25148,
"text": "Output: "
},
{
"code": null,
"e": 25203,
"s": 25157,
"text": "[1] \"GF....\"\n[1] \"GFG, ....\"\n[1] \"GFG, G....\""
},
{
"code": null,
"e": 25205,
"s": 25203,
"text": "R"
},
{
"code": "# Matrix having 3 rows and 3 columns# filled by a single constant 5mat <- (matrix(5, 3, 3))print(mat)str <- toString(mat)print(\"String\")print(str)",
"e": 25352,
"s": 25205,
"text": null
},
{
"code": null,
"e": 25360,
"s": 25352,
"text": "Output:"
},
{
"code": null,
"e": 25485,
"s": 25360,
"text": " [,1] [,2] [,3]\n[1,] 5 5 5\n[2,] 5 5 5\n[3,] 5 5 5\n[1] \"String\"\n[1] \"5, 5, 5, 5, 5, 5, 5, 5, 5\""
},
{
"code": null,
"e": 25498,
"s": 25485,
"text": "kumar_satyam"
},
{
"code": null,
"e": 25517,
"s": 25498,
"text": "R String-Functions"
},
{
"code": null,
"e": 25535,
"s": 25517,
"text": "R Vector-Function"
},
{
"code": null,
"e": 25546,
"s": 25535,
"text": "R Language"
},
{
"code": null,
"e": 25644,
"s": 25546,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 25653,
"s": 25644,
"text": "Comments"
},
{
"code": null,
"e": 25666,
"s": 25653,
"text": "Old Comments"
},
{
"code": null,
"e": 25724,
"s": 25666,
"text": "How to Replace specific values in column in R DataFrame ?"
},
{
"code": null,
"e": 25768,
"s": 25724,
"text": "How to change Row Names of DataFrame in R ?"
},
{
"code": null,
"e": 25820,
"s": 25768,
"text": "Filter data by multiple conditions in R using Dplyr"
},
{
"code": null,
"e": 25872,
"s": 25820,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 25904,
"s": 25872,
"text": "Loops in R (for, while, repeat)"
},
{
"code": null,
"e": 25936,
"s": 25904,
"text": "Printing Output of an R Program"
},
{
"code": null,
"e": 25974,
"s": 25936,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 26009,
"s": 25974,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 26067,
"s": 26009,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
}
] |
Creating a Proxy Webserver in Python | Set 1 - GeeksforGeeks | 15 Nov, 2017
Socket programming in Python is very user friendly as compared to C. The programmer need not worry about minute details regarding sockets. In Python, the user can focus more on the application layer rather than the network layer. In this tutorial we would be developing a simple multi-threaded proxy server capable of handling HTTP traffic. It would be mostly based on basic socket programming ideas. If you are not sure about the basics, then I would recommend that you brush them up before going through this tutorial.
This is a naive implementation of a proxy server. We would be gradually developing it into a quite useful server in the upcoming tutorials.
To begin with, we would achieve the process in 3 easy steps
1. Creating an incoming socket
We create a socket serverSocket in the __init__ method of the Server Class. This creates a socket for the incoming connections. We then bind the socket and then wait for the clients to connect.
def __init__(self, config):
# Shutdown on Ctrl+C
signal.signal(signal.SIGINT, self.shutdown)
# Create a TCP socket
self.serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Re-use the socket
self.serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# bind the socket to a public host, and a port
self.serverSocket.bind((config['HOST_NAME'], config['BIND_PORT']))
self.serverSocket.listen(10) # become a server socket
self.__clients = {}
2. Accept client and process
This is the easiest yet the most important of all the steps. We wait for the client’s connection request and once a successful connection is made, we dispatch the request in a separate thread, making ourselves available for the next request. This allows us to handle multiple requests simultaneously, which boosts the performance of the server many times over.
while True:
# Establish the connection
(clientSocket, client_address) = self.serverSocket.accept()
d = threading.Thread(name=self._getClientName(client_address),
target = self.proxy_thread, args=(clientSocket, client_address))
d.setDaemon(True)
d.start()
3. Redirecting the traffic
The main feature of a proxy server is to act as an intermediary between the source and the destination. Here, we would be fetching data from the source and then passing it to the client.
First, we extract the URL from the received request data.
# get the request from browser
request = conn.recv(config['MAX_REQUEST_LEN'])
# parse the first line
first_line = request.split('\n')[0]
# get url
url = first_line.split(' ')[1]
Then, we find the destination address of the request. Address is a tuple of (destination_ip_address, destination_port_no). We will be receiving data from this address.
http_pos = url.find("://") # find pos of ://
if (http_pos==-1):
temp = url
else:
temp = url[(http_pos+3):] # get the rest of url
port_pos = temp.find(":") # find the port pos (if any)
# find end of web server
webserver_pos = temp.find("/")
if webserver_pos == -1:
webserver_pos = len(temp)
webserver = ""
port = -1
if (port_pos==-1 or webserver_pos < port_pos):
# default port
port = 80
webserver = temp[:webserver_pos]
else: # specific port
port = int((temp[(port_pos+1):])[:webserver_pos-port_pos-1])
webserver = temp[:port_pos]
Now, we set up a new connection to the destination server (or remote server), and then send a copy of the original request to the server. The server will then respond. All the response messages use the generic message format of RFC 822.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(config['CONNECTION_TIMEOUT'])
s.connect((webserver, port))
s.sendall(request)
We then redirect the server’s response to the client. conn is the original connection to the client. The response may be bigger then MAX_REQUEST_LEN that we are receiving in one call, so, a null response marks the end of the response.
while 1:
# receive data from web server
data = s.recv(config['MAX_REQUEST_LEN'])
if (len(data) > 0):
conn.send(data) # send to browser/client
else:
break
We then close the server connections appropriately and do the error handling to make sure the server works as expected.
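The cleanup itself is not shown in the snippets above; a minimal sketch of a helper that releases both sockets (the names and structure here are illustrative, not the exact code of the tutorial) could look like this:

import socket
import threading

def close_connections(server_sock, client_conn):
    # server_sock: socket to the remote web server, client_conn: socket to the browser
    # release both ends of the relay, ignoring benign errors on close
    for sock in (server_sock, client_conn):
        try:
            if sock:
                sock.close()
        except socket.error as e:
            print("[%s] error while closing socket: %s"
                  % (threading.current_thread().name, e))

In the actual proxy_thread, such a cleanup would typically sit in a finally block so it runs whether or not the relay loop raised an exception.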
How to test the server?
1. Run the server in a terminal. Keep it running and switch to your favorite browser.
2. Go to your browser’s proxy settings and change the proxy server to ‘localhost’ and the port to ‘12345’.
3. Now open any HTTP website (not HTTPS), e.g. geeksforgeeks.org, and voila! You should be able to access the content in the browser.
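The port 12345 in step 2 corresponds to the BIND_PORT entry of the configuration dictionary passed to the server. The keys used throughout the code (HOST_NAME, BIND_PORT, MAX_REQUEST_LEN, CONNECTION_TIMEOUT) suggest a config along the following lines — apart from the port, the concrete values are illustrative assumptions, not the tutorial’s actual settings.

config = {
    'HOST_NAME': '127.0.0.1',   # interface the listening socket binds to
    'BIND_PORT': 12345,         # the port you point the browser's proxy settings at
    'MAX_REQUEST_LEN': 4096,    # bytes read per recv() call (assumed value)
    'CONNECTION_TIMEOUT': 5     # seconds to wait on the remote server (assumed value)
}

# server = Server(config)      # hypothetical usage; the Server class is defined above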
Once the server is running, we can monitor the requests coming from the clients. We can use that data to monitor the content that is being served, or we can develop statistics based on the content. We can even restrict access to a website or blacklist an IP address. We would be dealing with more such features in the upcoming tutorials.

What next?
We would be adding the following features to our proxy server in the upcoming tutorials:
– Blacklisting domains
– Content monitoring
– Logging
– HTTP WebServer + ProxyServer
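As a small taste of the first of those features, a domain blacklist check could sit just before the proxy connects to the destination. The BLACKLISTED set and the helper name below are hypothetical — a sketch of the idea, not code from this tutorial series.

BLACKLISTED = {"blocked-example.com", "ads.example.net"}   # hypothetical blocked domains

def is_blocked(webserver):
    # 'webserver' is the destination host extracted from the request, as shown earlier
    return webserver.lower() in BLACKLISTED

# Inside the request handler, before connecting to (webserver, port):
# if is_blocked(webserver):
#     conn.close()              # simply drop the connection (or send back an HTTP 403 page)
# else:
#     ...connect and relay the traffic as shown above...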
The whole working source code of this tutorial is available here
Creating a Proxy Webserver in Python | Set 2
If you have any questions/comments then feel free to post them in the comments section.
About the Author:
Pinkesh Badjatiya hails from IIIT Hyderabad. He is a geek at heart with ample projects worth looking at. His project work can be seen here.
If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks.
GBlog
Project
Python
Web Technologies
| [
{
"code": null,
"e": 24629,
"s": 24601,
"text": "\n15 Nov, 2017"
},
{
"code": null,
"e": 25166,
"s": 24629,
"text": "Socket programming in python is very user friendly as compared to c. The programmer need not worry about minute details regarding sockets. In python, the user has more chance of focusing on the application layer rather than the network layer. In this tutorial we would be developing a simple multi-threaded proxy server capable of handling HTTP traffic. It would be mostly based on the basic socket programming ideas. If you are not sure about the basics then i would recommend that you brush them up before going through this tutorial."
},
{
"code": null,
"e": 25306,
"s": 25166,
"text": "This is a naive implementation of a proxy server. We would be gradually developing it into a quite useful server in the upcoming tutorials."
},
{
"code": null,
"e": 25366,
"s": 25306,
"text": "To begin with, we would achieve the process in 3 easy steps"
},
{
"code": null,
"e": 25590,
"s": 25366,
"text": "1. Creating an incoming socketWe create a socket serverSocket in the __init__ method of the Server Class. This creates a socket for the incoming connections. We then bind the socket and then wait for the clients to connect."
},
{
"code": null,
"e": 26108,
"s": 25590,
"text": "def __init__(self, config):\n # Shutdown on Ctrl+C\n signal.signal(signal.SIGINT, self.shutdown) \n\n # Create a TCP socket\n self.serverSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n\n # Re-use the socket\n self.serverSocket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n\n # bind the socket to a public host, and a port \n self.serverSocket.bind((config['HOST_NAME'], config['BIND_PORT']))\n \n self.serverSocket.listen(10) # become a server socket\n self.__clients = {}\n"
},
{
"code": null,
"e": 26496,
"s": 26108,
"text": "2. Accept client and processThis is the easiest yet the most important of all the steps. We wait for the client’s connection request and once a successful connection is made, we dispatch the request in a separate thread, making ourselves available for the next request. This allows us to handle multiple requests simultaneously which boosts the performance of the server multifold times."
},
{
"code": null,
"e": 26784,
"s": 26496,
"text": "while True:\n\n # Establish the connection\n (clientSocket, client_address) = self.serverSocket.accept() \n \n d = threading.Thread(name=self._getClientName(client_address), \n target = self.proxy_thread, args=(clientSocket, client_address))\n d.setDaemon(True)\n d.start()\n"
},
{
"code": null,
"e": 26982,
"s": 26784,
"text": "3. Redirecting the trafficThe main feature of a proxy server is to act as an intermediate between source and destination. Here, we would be fetching data from source and then pass it to the client."
},
{
"code": null,
"e": 27040,
"s": 26982,
"text": "First, we extract the URL from the received request data."
},
{
"code": null,
"e": 27221,
"s": 27040,
"text": "# get the request from browser\nrequest = conn.recv(config['MAX_REQUEST_LEN']) \n\n# parse the first line\nfirst_line = request.split('\\n')[0]\n\n# get url\nurl = first_line.split(' ')[1]"
},
{
"code": null,
"e": 27389,
"s": 27221,
"text": "Then, we find the destination address of the request. Address is a tuple of (destination_ip_address, destination_port_no). We will be receiving data from this address."
},
{
"code": null,
"e": 27964,
"s": 27389,
"text": "http_pos = url.find(\"://\") # find pos of ://\nif (http_pos==-1):\n temp = url\nelse:\n temp = url[(http_pos+3):] # get the rest of url\n\nport_pos = temp.find(\":\") # find the port pos (if any)\n\n# find end of web server\nwebserver_pos = temp.find(\"/\")\nif webserver_pos == -1:\n webserver_pos = len(temp)\n\nwebserver = \"\"\nport = -1\nif (port_pos==-1 or webserver_pos < port_pos): \n\n # default port \n port = 80 \n webserver = temp[:webserver_pos] \n\nelse: # specific port \n port = int((temp[(port_pos+1):])[:webserver_pos-port_pos-1])\n webserver = temp[:port_pos] \n"
},
{
"code": null,
"e": 28216,
"s": 27964,
"text": "Now, we setup a new connection to the destination server (or remote server), and then send a copy of the original request to the server. The server will then respond with a response. All the response messages use the generic message format of RFC 822."
},
{
"code": null,
"e": 28363,
"s": 28216,
"text": "s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) \ns.settimeout(config['CONNECTION_TIMEOUT'])\ns.connect((webserver, port))\ns.sendall(request)\n"
},
{
"code": null,
"e": 28598,
"s": 28363,
"text": "We then redirect the server’s response to the client. conn is the original connection to the client. The response may be bigger then MAX_REQUEST_LEN that we are receiving in one call, so, a null response marks the end of the response."
},
{
"code": null,
"e": 28786,
"s": 28598,
"text": "while 1:\n # receive data from web server\n data = s.recv(config['MAX_REQUEST_LEN'])\n\n if (len(data) > 0):\n conn.send(data) # send to browser/client\n else:\n break\n"
},
{
"code": null,
"e": 28906,
"s": 28786,
"text": "We then close the server connections appropriately and do the error handling to make sure the server works as expected."
},
{
"code": null,
"e": 29254,
"s": 28906,
"text": "How to test the server?1. Run the server on a terminal. Keep it running and switch to your favorite browser.2. Go to your browser’s proxy settings and change the proxy server to ‘localhost’ and port to ‘12345’.3. Now open any HTTP website (not HTTPS), for eg. geeksforgeeks.org and volla !! you should be able to access the content on the browser."
},
{
"code": null,
"e": 29759,
"s": 29254,
"text": "Once the server is running, we can monitor the requests coming to the client. We can use that data to monitor the content that is going or we can develop statistics based on the content.We can even restrict access to a website or blacklist an IP address. We would be dealing with more such features in the upcoming tutorials.What next?We would be adding the following features in our proxy server in the upcoming tutorials.– Blacklisting Domains– Content monitoring– Logging– HTTP WebServer + ProxyServer"
},
{
"code": null,
"e": 29824,
"s": 29759,
"text": "The whole working source code of this tutorial is available here"
},
{
"code": null,
"e": 29869,
"s": 29824,
"text": "Creating a Proxy Webserver in Python | Set 2"
},
{
"code": null,
"e": 29957,
"s": 29869,
"text": "If you have any questions/comments then feel free to post them in the comments section."
},
{
"code": null,
"e": 29975,
"s": 29957,
"text": "About the Author:"
},
{
"code": null,
"e": 30117,
"s": 29975,
"text": "Pinkesh Badjatiya hails from IIIT Hyderabad .He is a geek at heart with ample projects worth looking for. His project work can be seen here. "
},
{
"code": null,
"e": 30220,
"s": 30117,
"text": "If you also wish to showcase your blog here, please see GBlog for guest blog writing on GeeksforGeeks."
},
{
"code": null,
"e": 30226,
"s": 30220,
"text": "GBlog"
},
{
"code": null,
"e": 30234,
"s": 30226,
"text": "Project"
},
{
"code": null,
"e": 30241,
"s": 30234,
"text": "Python"
},
{
"code": null,
"e": 30258,
"s": 30241,
"text": "Web Technologies"
},
{
"code": null,
"e": 30356,
"s": 30258,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 30365,
"s": 30356,
"text": "Comments"
},
{
"code": null,
"e": 30378,
"s": 30365,
"text": "Old Comments"
},
{
"code": null,
"e": 30434,
"s": 30378,
"text": "Top 10 Front End Developer Skills That You Need in 2022"
},
{
"code": null,
"e": 30459,
"s": 30434,
"text": "DSA Sheet by Love Babbar"
},
{
"code": null,
"e": 30491,
"s": 30459,
"text": "6 Best IDE's For Python in 2022"
},
{
"code": null,
"e": 30553,
"s": 30491,
"text": "Top 10 Projects For Beginners To Practice HTML and CSS Skills"
},
{
"code": null,
"e": 30586,
"s": 30553,
"text": "Working with csv files in Python"
},
{
"code": null,
"e": 30635,
"s": 30586,
"text": "SDE SHEET - A Complete Guide for SDE Preparation"
},
{
"code": null,
"e": 30657,
"s": 30635,
"text": "XML parsing in Python"
},
{
"code": null,
"e": 30712,
"s": 30657,
"text": "Implementing Web Scraping in Python with BeautifulSoup"
},
{
"code": null,
"e": 30757,
"s": 30712,
"text": "Python | Simple GUI calculator using Tkinter"
}
] |
C++ Program to Implement self Balancing Binary Search Tree | AVL tree is a self-balancing Binary Search Tree where the difference between heights of left and right subtrees cannot be more than one for all nodes.
This is a C++ Program to Implement self Balancing Binary Search Tree.
Begin
class avl_tree to declare following functions:
balance() = Balance the tree by getting balance factor. Put the
difference in bal_factor. If bal_factor > 1 then balance the
left subtree.
If bal_factor < -1 then balance the right subtree.
insert() = To insert the elements in the tree:
If tree is empty insert data as root.
If tree is not empty and data < root
Insert data as left child.
Else
Insert data as right child.
End.
#include<iostream>
#include<cstdio>
#include<sstream>
#include<algorithm>
#define pow2(n) (1 << (n))
using namespace std;
struct avl//node declaration
{
int d;
struct avl *l;
struct avl *r;
}*r;
class avl_tree
{
public://declare functions
int height(avl *);
int difference(avl *);
avl * rr_rotat(avl *);
avl * ll_rotat(avl *);
avl * lr_rotat(avl*);
avl * rl_rotat(avl *);
avl * balance(avl *);
avl * insert(avl *, int);
void show(avl *, int);
void inorder(avl *);
void preorder(avl *);
void postorder(avl*);
avl_tree()
{
r = NULL;
}
};
int avl_tree::height(avl *t)
{
int h = 0;
if (t != NULL)
{
int l_height = height(t->l);
int r_height = height(t->r);
int max_height = max(l_height, r_height);
h = max_height + 1;
}
return h;
}
int avl_tree::difference(avl *t)//calculate difference between left and right subtree heights
{
int l_height = height(t->l);
int r_height = height(t->r);
int b_factor = l_height - r_height;
return b_factor;
}
avl *avl_tree::rr_rotat(avl *parent)//right right rotation
{
avl *t;
t = parent->r;
parent->r = t->l;
t->l = parent;
cout<<"Right-Right Rotation";
return t;
}
avl *avl_tree::ll_rotat(avl *parent)//left left rotation
{
avl *t;
t = parent->l;
parent->l = t->r;
t->r = parent;
cout<<"Left-Left Rotation";
return t;
}
avl *avl_tree::lr_rotat(avl *parent)//left right rotation
{
avl *t;
t = parent->l;
parent->l = rr_rotat(t);
cout<<"Left-Right Rotation";
return ll_rotat(parent);
}
avl *avl_tree::rl_rotat(avl *parent)//right left rotation
{
avl *t;
t = parent->r;
parent->r = ll_rotat(t);
cout<<"Right-Left Rotation";
return rr_rotat(parent);
}
avl *avl_tree::balance(avl *t)
{
int bal_factor = difference(t);
if (bal_factor > 1)
{
if (difference(t->l) > 0)
t = ll_rotat(t);
else
t = lr_rotat(t);
}
else if (bal_factor < -1)
{
if (difference(t->r) > 0)
t = rl_rotat(t);
else
t = rr_rotat(t);
}
return t;
}
avl *avl_tree::insert(avl *r, int v)
{
if (r == NULL)
{
r = new avl;
r->d = v;
r->l = NULL;
r->r= NULL;
return r;
}
else if (v < r->d)
{
r->l = insert(r->l, v);
r = balance(r);
}
else if (v >= r->d)
{
r->r = insert(r->r, v);
r = balance(r);
}
return r;
}
void avl_tree::show(avl *p, int l)//show the tree
{
int i;
if (p != NULL)
{
show(p->r, l+ 1);
cout<<" ";
if (p == r)
cout << "Root → ";
for (i = 0; i < l&& p != r; i++)
cout << " ";
cout << p->d;
show(p->l, l + 1);
}
}
void avl_tree::inorder(avl *t)//inorder traversal
{
if (t == NULL)
return;
inorder(t->l);
cout << t->d << " ";
inorder(t->r);
}
void avl_tree::preorder(avl *t)//preorder traversal
{
if (t == NULL)
return;
cout << t->d << " ";
preorder(t->l);
preorder(t->r);
}
void avl_tree::postorder(avl *t)//postorder traversal
{
if (t == NULL)
return;
postorder(t->l);
postorder(t->r);
cout << t->d << " ";
}
int main()
{
int c, i;
avl_tree avl;
while (1)
{
cout << "1.Insert Element into the tree" << endl;
cout << "2.show Balanced AVL Tree" << endl;
cout << "3.InOrder traversal" << endl;
cout << "4.PreOrder traversal" << endl;
cout << "5.PostOrder traversal" << endl;
cout << "6.Exit" << endl;
cout << "Enter your Choice: ";
cin >> c;
switch (c)//perform switch operation
{
case 1:
cout << "Enter value to be inserted: ";
cin >> i;
r= avl.insert(r, i);
break;
case 2:
if (r == NULL)
{
cout << "Tree is Empty" << endl;
continue;
}
cout << "Balanced AVL Tree:" << endl;
avl.show(r, 1);
cout<<endl;
break;
case 3:
cout << "Inorder Traversal:" << endl;
avl.inorder(r);
cout << endl;
break;
case 4:
cout << "Preorder Traversal:" << endl;
avl.preorder(r);
cout << endl;
break;
case 5:
cout << "Postorder Traversal:" << endl;
avl.postorder(r);
cout << endl;
break;
case 6:
exit(1);
break;
default:
cout << "Wrong Choice" << endl;
}
}
return 0;
}
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 13
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 10
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 15
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 5
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 11
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 4
Left-Left Rotation1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 8
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 16
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 3
Inorder Traversal:
4 5 8 10 11 13 15 16
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 4
Preorder Traversal:
10 5 4 8 13 11 15 16
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 5
Postorder Traversal:
4 8 5 11 16 15 13 10
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 14
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 3
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 7
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 9
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 1
Enter value to be inserted: 52
Right-Right Rotation
1.Insert Element into the tree
2.show Balanced AVL Tree
3.InOrder traversal
4.PreOrder traversal
5.PostOrder traversal
6.Exit
Enter your Choice: 6 | [
{
"code": null,
"e": 1213,
"s": 1062,
"text": "AVL tree is a self-balancing Binary Search Tree where the difference between heights of left and right subtrees cannot be more than one for all nodes."
},
{
"code": null,
"e": 1283,
"s": 1213,
"text": "This is a C++ Program to Implement self Balancing Binary Search Tree."
},
{
"code": null,
"e": 1752,
"s": 1283,
"text": "Begin\nclass avl_tree to declare following functions:\nbalance() = Balance the tree by getting balance factor. Put the\n difference in bal_factor. If bal_factor > 1 then balance the\nleft subtree.\n If bal_factor < -1 then balance the right subtree.\ninsert() = To insert the elements in the tree:\n If tree is empty insert data as root.\n If tree is not empty and data > root\n Insert data as left child.\n Else\n Insert data as right child.\nEnd."
},
{
"code": null,
"e": 6647,
"s": 1752,
"text": "#include<iostream>\n#include<cstdio>\n#include<sstream>\n#include<algorithm>\n#define pow2(n) (1 << (n))\nusing namespace std;\nstruct avl//node declaration\n{\n int d;\n struct avl *l;\n struct avl *r;\n}*r;\nclass avl_tree\n{\n public://declare functions\n int height(avl *);\n int difference(avl *);\n avl * rr_rotat(avl *);\n avl * ll_rotat(avl *);\n avl * lr_rotat(avl*);\n avl * rl_rotat(avl *);\n avl * balance(avl *);\n avl * insert(avl *, int);\n void show(avl *, int);\n void inorder(avl *);\n void preorder(avl *);\n void postorder(avl*);\n avl_tree()\n {\n r = NULL;\n }\n};\nint avl_tree::height(avl *t)\n{\n int h = 0;\n if (t != NULL)\n {\n int l_height = height(t→l);\n int r_height = height(t→r);\n int max_height = max(l_height, r_height);\n h = max_height + 1;\n }\n return h;\n}\nint avl_tree::difference(avl *t)//calculte difference between left and\nright tree\n{\n int l_height = height(t→l);\n int r_height = height(t→r);\n int b_factor = l_height - r_height;\n return b_factor;\n}\navl *avl_tree::rr_rotat(avl *parent)//right right rotation\n{\n avl *t;\n t = parent→r;\n parent→r = t→l;\n t->l = parent;\n cout<<\"Right-Right Rotation\";\n return t;\n}\navl *avl_tree::ll_rotat(avl *parent)//left left rotation\n{\n avl *t;\n t = parent→l;\n parent→l = t->r; \n t->r = parent;\n cout<<\"Left-Left Rotation\";\n return t;\n}\navl *avl_tree::lr_rotat(avl *parent)//left right rotation\n{\n avl *t;\n t = parent→l;\n parent->l = rr_rotat(t);\n cout<<\"Left-Right Rotation\";\n return ll_rotat(parent);\n}\navl *avl_tree::rl_rotat(avl *parent)//right left rotation\n{\n avl *t;\n t= parent→r;\n parent->r = ll_rotat(t);\n cout<<\"Right-Left Rotation\";\n return rr_rotat(parent);\n}\navl *avl_tree::balance(avl *t)\n{\n int bal_factor = difference(t);\n if (bal_factor > 1)\n {\n if (difference(t->l) > 0)\n t = ll_rotat(t);\n else\n t = lr_rotat(t);\n }\n else if (bal_factor < -1)\n {\n if (difference(t->r) > 0)\n t = rl_rotat(t);\n else\n t = rr_rotat(t);\n }\n return t;\n }\n avl *avl_tree::insert(avl *r, int v)\n {\n if (r == NULL)\n {\n r = new avl;\n r->d = v;\n r->l = NULL;\n r->r= NULL;\n return r;\n }\n else if (v< r→d)\n {\n r->l= insert(r→l, v);\n r = balance(r);\n }\n else if (v >= r→d)\n {\n r->r= insert(r→r, v);\n r = balance(r);\n }\n return r;\n }\n void avl_tree::show(avl *p, int l)//show the tree\n {\n int i;\n if (p != NULL)\n {\n show(p->r, l+ 1);\n cout<<\" \";\n if (p == r)\n cout << \"Root → \";\n for (i = 0; i < l&& p != r; i++)\n cout << \" \";\n cout << p→d;\n show(p->l, l + 1);\n }\n }\n void avl_tree::inorder(avl *t)//inorder traversal\n {\n if (t == NULL)\n return;\n inorder(t->l);\n cout << t->d << \" \";\n inorder(t->r);\n }\n void avl_tree::preorder(avl *t)//preorder traversal\n {\n if (t == NULL)\n return;\n cout << t->d << \" \";\n preorder(t->l);\n preorder(t->r);\n }\n void avl_tree::postorder(avl *t)//postorder traversal\n {\n if (t == NULL)\n return;\n postorder(t ->l);\n postorder(t ->r);\n cout << t→d << \" \";\n }\n int main()\n {\n int c, i;\n avl_tree avl;\n while (1)\n {\n cout << \"1.Insert Element into the tree\" << endl;\n cout << \"2.show Balanced AVL Tree\" << endl;\n cout << \"3.InOrder traversal\" << endl;\n cout << \"4.PreOrder traversal\" << endl;\n cout << \"5.PostOrder traversal\" << endl;\n cout << \"6.Exit\" << endl;\n cout << \"Enter your Choice: \";\n cin >> c;\n switch ©//perform switch operation\n {\n case 1:\n cout << \"Enter value to be inserted: \";\n cin >> i;\n r= avl.insert(r, i);\n break;\n case 2:\n if (r == NULL)\n {\n 
cout << \"Tree is Empty\" << endl;\n continue;\n }\n cout << \"Balanced AVL Tree:\" << endl;\n avl.show(r, 1);\n cout<<endl;\n break;\n case 3:\n cout << \"Inorder Traversal:\" << endl;\n avl.inorder(r);\n cout << endl;\n break;\n case 4:\n cout << \"Preorder Traversal:\" << endl;\n avl.preorder(r);\n cout << endl;\n break;\n case 5:\n cout << \"Postorder Traversal:\" << endl;\n avl.postorder(r);\n cout << endl;\n break;\n case 6:\n exit(1);\n break;\n default:\n cout << \"Wrong Choice\" << endl;\n }\n }\n return 0;\n}"
},
{
"code": null,
"e": 9705,
"s": 6647,
"text": "1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 13\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 10\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 15\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 5\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 11\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 4\nLeft-Left Rotation1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 8\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 16\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 3\nInorder Traversal:\n4 5 8 10 11 13 15 16\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 4\nPreorder Traversal:\n10 5 4 8 13 11 15 16\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 5\nPostorder Traversal:\n4 8 5 11 16 15 13 10\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 14\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 3\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 7\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 9\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 1\nEnter value to be inserted: 52\nRight-Right Rotation\n1.Insert Element into the tree\n2.show Balanced AVL Tree\n3.InOrder traversal\n4.PreOrder traversal\n5.PostOrder traversal\n6.Exit\nEnter your Choice: 6"
}
] |
VB.Net - Nested Select Case Statement | It is possible to have a select statement as part of the statement sequence of an outer select statement. Even if the case constants of the inner and outer select contain common values, no conflicts will arise.
Module decisions
Sub Main()
'local variable definition
Dim a As Integer = 100
Dim b As Integer = 200
Select a
Case 100
Console.WriteLine("This is part of outer case ")
Select Case b
Case 200
Console.WriteLine("This is part of inner case ")
End Select
End Select
Console.WriteLine("Exact value of a is : {0}", a)
Console.WriteLine("Exact value of b is : {0}", b)
Console.ReadLine()
End Sub
End Module
When the above code is compiled and executed, it produces the following result −
This is part of outer case
This is part of inner case
Exact value of a is : 100
Exact value of b is : 200
| [
{
"code": null,
"e": 2511,
"s": 2300,
"text": "It is possible to have a select statement as part of the statement sequence of an outer select statement. Even if the case constants of the inner and outer select contain common values, no conflicts will arise."
},
{
"code": null,
"e": 3046,
"s": 2511,
"text": "Module decisions\n Sub Main()\n 'local variable definition\n Dim a As Integer = 100\n Dim b As Integer = 200\n Select a\n Case 100\n Console.WriteLine(\"This is part of outer case \")\n Select Case b\n Case 200\n Console.WriteLine(\"This is part of inner case \")\n End Select\n End Select\n Console.WriteLine(\"Exact value of a is : {0}\", a)\n Console.WriteLine(\"Exact value of b is : {0}\", b)\n Console.ReadLine()\n End Sub\nEnd Module"
},
{
"code": null,
"e": 3127,
"s": 3046,
"text": "When the above code is compiled and executed, it produces the following result −"
},
{
"code": null,
"e": 3234,
"s": 3127,
"text": "This is part of outer case\nThis is part of inner case\nExact value of a is : 100\nExact value of b is : 200\n"
},
{
"code": null,
"e": 3267,
"s": 3234,
"text": "\n 63 Lectures \n 4 hours \n"
},
{
"code": null,
"e": 3284,
"s": 3267,
"text": " Frahaan Hussain"
},
{
"code": null,
"e": 3319,
"s": 3284,
"text": "\n 103 Lectures \n 12 hours \n"
},
{
"code": null,
"e": 3334,
"s": 3319,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 3369,
"s": 3334,
"text": "\n 60 Lectures \n 9.5 hours \n"
},
{
"code": null,
"e": 3384,
"s": 3369,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 3417,
"s": 3384,
"text": "\n 97 Lectures \n 9 hours \n"
},
{
"code": null,
"e": 3432,
"s": 3417,
"text": " Arnold Higuit"
},
{
"code": null,
"e": 3439,
"s": 3432,
"text": " Print"
},
{
"code": null,
"e": 3450,
"s": 3439,
"text": " Add Notes"
}
] |
C++ Program to Find the Number of Vowels, Consonants, Digits and White Spaces in a String | A string is a one dimensional character array that is terminated by a null character. There can be many vowels, consonants, digits and white spaces in a string.
For example.
String: There are 7 colours in the rainbow
Vowels: 12
Consonants: 15
Digits: 1
White spaces: 6
A program to find the number of vowels, consonants, digits and white spaces in a string is given as follows.
#include <iostream>
using namespace std;
int main() {
char str[] = {"Abracadabra 123"};
int vowels, consonants, digits, spaces;
vowels = consonants = digits = spaces = 0;
for(int i = 0; str[i]!='\0'; ++i) {
if(str[i]=='a' || str[i]=='e' || str[i]=='i' ||
str[i]=='o' || str[i]=='u' || str[i]=='A' ||
str[i]=='E' || str[i]=='I' || str[i]=='O' ||
str[i]=='U') {
++vowels;
} else if((str[i]>='a'&& str[i]<='z') || (str[i]>='A'&& str[i]<='Z')) {
++consonants;
} else if(str[i]>='0' && str[i]<='9') {
++digits;
} else if (str[i]==' ') {
++spaces;
}
}
cout << "The string is: " << str << endl;
cout << "Vowels: " << vowels << endl;
cout << "Consonants: " << consonants << endl;
cout << "Digits: " << digits << endl;
cout << "White spaces: " << spaces << endl;
return 0;
}
The string is: Abracadabra 123
Vowels: 5
Consonants: 6
Digits: 3
White spaces: 1
In the above program, the variables vowels, consonants, digits and spaces are used to store the number of vowels, consonants, digits and spaces in the string. A for loop is used to examine each character of the string. If a character is a vowel, then the vowels variable is incremented by 1; the same goes for consonants, digits and spaces. The code snippet that demonstrates this is as follows.
for(int i = 0; str[i]!='\0'; ++i) {
if(str[i]=='a' || str[i]=='e' || str[i]=='i' ||
str[i]=='o' || str[i]=='u' || str[i]=='A' ||
str[i]=='E' || str[i]=='I' || str[i]=='O' ||
str[i]=='U') {
++vowels;
} else if((str[i]>='a'&& str[i]<='z') || (str[i]>='A'&& str[i]<='Z')) {
++consonants;
} else if(str[i]>='0' && str[i]<='9') {
++digits;
} else if (str[i]==' ') {
++spaces;
}
}
After the occurrences of the vowels, consonants, digits and spaces in the string are calculated, they are displayed. This is shown in the following output.
The string is: Abracadabra 123
Vowels: 5
Consonants: 6
Digits: 3
White spaces: 1 | [
{
"code": null,
"e": 1223,
"s": 1062,
"text": "A string is a one dimensional character array that is terminated by a null character. There can be many vowels, consonants, digits and white spaces in a string."
},
{
"code": null,
"e": 1236,
"s": 1223,
"text": "For example."
},
{
"code": null,
"e": 1331,
"s": 1236,
"text": "String: There are 7 colours in the rainbow\nVowels: 12\nConsonants: 15\nDigits: 1\nWhite spaces: 6"
},
{
"code": null,
"e": 1440,
"s": 1331,
"text": "A program to find the number of vowels, consonants, digits and white spaces in a string is given as follows."
},
{
"code": null,
"e": 1451,
"s": 1440,
"text": " Live Demo"
},
{
"code": null,
"e": 2336,
"s": 1451,
"text": "#include <iostream>\nusing namespace std;\nint main() {\n char str[] = {\"Abracadabra 123\"};\n int vowels, consonants, digits, spaces;\n vowels = consonants = digits = spaces = 0;\n for(int i = 0; str[i]!='\\0'; ++i) {\n if(str[i]=='a' || str[i]=='e' || str[i]=='i' ||\n str[i]=='o' || str[i]=='u' || str[i]=='A' ||\n str[i]=='E' || str[i]=='I' || str[i]=='O' ||\n str[i]=='U') {\n ++vowels;\n } else if((str[i]>='a'&& str[i]<='z') || (str[i]>='A'&& str[i]<='Z')) {\n ++consonants;\n } else if(str[i]>='0' && str[i]<='9') {\n ++digits;\n } else if (str[i]==' ') {\n ++spaces;\n } \n }\n cout << \"The string is: \" << str << endl;\n cout << \"Vowels: \" << vowels << endl;\n cout << \"Consonants: \" << consonants << endl;\n cout << \"Digits: \" << digits << endl;\n cout << \"White spaces: \" << spaces << endl;\n return 0;\n}"
},
{
"code": null,
"e": 2417,
"s": 2336,
"text": "The string is: Abracadabra 123\nVowels: 5\nConsonants: 6\nDigits: 3\nWhite spaces: 1"
},
{
"code": null,
"e": 2802,
"s": 2417,
"text": "In the above program, the variables vowels, consonants, digits and spaces are used to store the number of vowels, consonants, digits and spaces in the string. A for loop is used to examine each character of a string. If that character is a vowels, then vowels variable is incremented by 1. Same for consonants, digits and spaces. The code snippet that demonstrates this is as follows."
},
{
"code": null,
"e": 3210,
"s": 2802,
"text": "for(int i = 0; str[i]!='\\0'; ++i) {\nif(str[i]=='a' || str[i]=='e' || str[i]=='i' ||\nstr[i]=='o' || str[i]=='u' || str[i]=='A' ||\nstr[i]=='E' || str[i]=='I' || str[i]=='O' ||\nstr[i]=='U') {\n ++vowels;\n } else if((str[i]>='a'&& str[i]<='z') || (str[i]>='A'&& str[i]<='Z')) {\n ++consonants;\n } else if(str[i]>='0' && str[i]<='9') {\n ++digits;\n } else if (str[i]==' ') {\n ++spaces;\n }\n}"
},
{
"code": null,
"e": 3372,
"s": 3210,
"text": "After the occurrences of the vowels, consonants, digits and spaces in the string are calculated, they are displayed. This is shown in the following code snippet."
},
{
"code": null,
"e": 3453,
"s": 3372,
"text": "The string is: Abracadabra 123\nVowels: 5\nConsonants: 6\nDigits: 3\nWhite spaces: 1"
}
] |
Java Program to Implement the RSA Algorithm - GeeksforGeeks | 27 Apr, 2021
RSA, or Rivest–Shamir–Adleman, is an algorithm employed by modern computers to encrypt and decrypt messages. It is an asymmetric cryptographic algorithm. Asymmetric means that there are two different keys. This is also called public-key cryptography because one of the keys can be given to anyone, while the other is the private key, which is kept secret. The algorithm relies on the fact that finding the factors of a large number is difficult: when the factors are prime numbers, the problem is called prime factorization. RSA is also a key pair (public and private key) generator.
Example:
Generating Public Key
1. Select two prime no's. Suppose P = 53 and Q = 59.
Now First part of the Public key : n = P*Q = 3127.
2. We also need a small exponent say e :
But e must be
-An integer.
-Coprime with Φ(n), i.e. gcd(e, Φ(n)) = 1.
-1 < e < Φ(n) [Φ(n) is discussed below].
Let us now consider it to be equal to 3.
The public key has been made of n and e
Generating Private Key
1. We need to calculate Φ(n) :
Such that Φ(n) = (P-1)(Q-1)
so, Φ(n) = 3016
2. Now calculate Private Key, d :
d = (k*Φ(n) + 1) / e for some integer k
3. For k = 2, value of d is 2011.
The private key has been made of d
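The numbers in this worked example can be verified with a few lines of Python, shown here purely as an arithmetic sanity check (the article’s actual implementation below is in Java):

# Verify the worked example above: p = 53, q = 59, e = 3, k = 2
p, q, e = 53, 59, 3
n = p * q                     # 3127
phi = (p - 1) * (q - 1)       # 3016
d = (2 * phi + 1) // e        # k = 2 gives d = 2011
assert (e * d) % phi == 1     # d is the modular inverse of e mod phi

msg = 89                      # any plaintext smaller than n
cipher = pow(msg, e, n)       # encryption: C = P^e mod n
plain = pow(cipher, d, n)     # decryption: P = C^d mod n
print(cipher, plain)          # plain equals 89 again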
Implementation of RSA Algorithm:
Consider two prime numbers p and q. Compute n = p*q. Compute φ(n) = (p – 1) * (q – 1). Choose e such that gcd(e, φ(n)) = 1. Calculate d such that e*d mod φ(n) = 1. Public Key {e, n}, Private Key {d, n}. Cipher text C = P^e mod n, where P = plaintext. For decryption, P = C^d mod n, which recovers the plaintext.
Consider two prime numbers p and q.
Compute n = p*q
Compute φ(n) = (p – 1) * (q – 1)
Choose e such that gcd(e, φ(n)) = 1
Calculate d such that e*d mod φ(n) = 1
Public Key {e, n} Private Key {d, n}
Cipher text C = P^e mod n, where P = plaintext
For decryption, P = C^d mod n, which recovers the plaintext.
Below is the implementation of the above approach:
Java
// Java Program to Implement the RSA Algorithmimport java.math.*;import java.util.*; class RSA { public static void main(String args[]) { int p, q, n, z, d = 0, e, i; // The number to be encrypted and decrypted int msg = 12; double c; BigInteger msgback; // 1st prime number p p = 3; // 2nd prime number q q = 11; n = p * q; z = (p - 1) * (q - 1); System.out.println("the value of z = " + z); for (e = 2; e < z; e++) { // e is for public key exponent if (gcd(e, z) == 1) { break; } } System.out.println("the value of e = " + e); for (i = 0; i <= 9; i++) { int x = 1 + (i * z); // d is for private key exponent if (x % e == 0) { d = x / e; break; } } System.out.println("the value of d = " + d); c = (Math.pow(msg, e)) % n; System.out.println("Encrypted message is : " + c); // converting int value of n to BigInteger BigInteger N = BigInteger.valueOf(n); // converting float value of c to BigInteger BigInteger C = BigDecimal.valueOf(c).toBigInteger(); msgback = (C.pow(d)).mod(N); System.out.println("Decrypted message is : " + msgback); } static int gcd(int e, int z) { if (e == 0) return z; else return gcd(z % e, e); }}
sweetyty
Picked
Technical Scripter 2020
Java
Java Programs
Technical Scripter
Java
| [
{
"code": null,
"e": 24125,
"s": 24097,
"text": "\n27 Apr, 2021"
},
{
"code": null,
"e": 24723,
"s": 24127,
"text": "RSA or Rivest–Shamir–Adleman is an algorithm employed by modern computers to encrypt and decrypt messages. It is an asymmetric cryptographic algorithm. Asymmetric means that there are two different keys. This is also called public-key cryptography because one among the keys are often given to anyone. The other is the private key which is kept private. The algorithm is predicated on the very fact that finding the factors of an outsized number is difficult: when the factors are prime numbers, the matter is named prime factorization. It is also a key pair (public and personal key) generator."
},
{
"code": null,
"e": 24732,
"s": 24723,
"text": "Example:"
},
{
"code": null,
"e": 25376,
"s": 24732,
"text": "Generating Public Key\n\n1. Select two prime no's. Suppose P = 53 and Q = 59.\nNow First part of the Public key : n = P*Q = 3127.\n\n2. We also need a small exponent say e : \n But e Must be \n\n -An integer.\n\n -Not be a factor of n.\n \n -1 < e < Φ(n) [Φ(n) is discussed below], \n Let us now consider it to be equal to 3.\n \nThe public key has been made of n and e\n\nGenerating Private Key\n\n1. We need to calculate Φ(n) :\n Such that Φ(n) = (P-1)(Q-1) \n so, Φ(n) = 3016\n\n \n2. Now calculate Private Key, d : \n d = (k*Φ(n) + 1) / e for some integer k\n3. For k = 2, value of d is 2011.\n\nThe private key has been made of d"
},
{
"code": null,
"e": 25409,
"s": 25376,
"text": "Implementation of RSA Algorithm:"
},
{
"code": null,
"e": 25697,
"s": 25409,
"text": "Consider two prime numbers p and q.Compute n = p*qCompute φ(n) = (p – 1) * (q – 1)Choose e such gcd(e , φ(n) ) = 1Calculate d such e*d mod φ(n) = 1Public Key {e,n} Private Key {d,n}Cipher text C = Pe mod n where P = plaintextFor Decryption D = Dd mod n where D will refund the plaintext."
},
{
"code": null,
"e": 25733,
"s": 25697,
"text": "Consider two prime numbers p and q."
},
{
"code": null,
"e": 25749,
"s": 25733,
"text": "Compute n = p*q"
},
{
"code": null,
"e": 25782,
"s": 25749,
"text": "Compute φ(n) = (p – 1) * (q – 1)"
},
{
"code": null,
"e": 25815,
"s": 25782,
"text": "Choose e such gcd(e , φ(n) ) = 1"
},
{
"code": null,
"e": 25849,
"s": 25815,
"text": "Calculate d such e*d mod φ(n) = 1"
},
{
"code": null,
"e": 25884,
"s": 25849,
"text": "Public Key {e,n} Private Key {d,n}"
},
{
"code": null,
"e": 25929,
"s": 25884,
"text": "Cipher text C = Pe mod n where P = plaintext"
},
{
"code": null,
"e": 25992,
"s": 25929,
"text": "For Decryption D = Dd mod n where D will refund the plaintext."
},
{
"code": null,
"e": 26043,
"s": 25992,
"text": "Below is the implementation of the above approach:"
},
{
"code": null,
"e": 26048,
"s": 26043,
"text": "Java"
},
{
"code": "// Java Program to Implement the RSA Algorithmimport java.math.*;import java.util.*; class RSA { public static void main(String args[]) { int p, q, n, z, d = 0, e, i; // The number to be encrypted and decrypted int msg = 12; double c; BigInteger msgback; // 1st prime number p p = 3; // 2nd prime number q q = 11; n = p * q; z = (p - 1) * (q - 1); System.out.println(\"the value of z = \" + z); for (e = 2; e < z; e++) { // e is for public key exponent if (gcd(e, z) == 1) { break; } } System.out.println(\"the value of e = \" + e); for (i = 0; i <= 9; i++) { int x = 1 + (i * z); // d is for private key exponent if (x % e == 0) { d = x / e; break; } } System.out.println(\"the value of d = \" + d); c = (Math.pow(msg, e)) % n; System.out.println(\"Encrypted message is : \" + c); // converting int value of n to BigInteger BigInteger N = BigInteger.valueOf(n); // converting float value of c to BigInteger BigInteger C = BigDecimal.valueOf(c).toBigInteger(); msgback = (C.pow(d)).mod(N); System.out.println(\"Decrypted message is : \" + msgback); } static int gcd(int e, int z) { if (e == 0) return z; else return gcd(z % e, e); }}",
"e": 27565,
"s": 26048,
"text": null
},
{
"code": null,
"e": 27579,
"s": 27570,
"text": "sweetyty"
},
{
"code": null,
"e": 27586,
"s": 27579,
"text": "Picked"
},
{
"code": null,
"e": 27610,
"s": 27586,
"text": "Technical Scripter 2020"
},
{
"code": null,
"e": 27615,
"s": 27610,
"text": "Java"
},
{
"code": null,
"e": 27629,
"s": 27615,
"text": "Java Programs"
},
{
"code": null,
"e": 27648,
"s": 27629,
"text": "Technical Scripter"
},
{
"code": null,
"e": 27653,
"s": 27648,
"text": "Java"
},
{
"code": null,
"e": 27751,
"s": 27653,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 27760,
"s": 27751,
"text": "Comments"
},
{
"code": null,
"e": 27773,
"s": 27760,
"text": "Old Comments"
},
{
"code": null,
"e": 27824,
"s": 27773,
"text": "Object Oriented Programming (OOPs) Concept in Java"
},
{
"code": null,
"e": 27854,
"s": 27824,
"text": "HashMap in Java with Examples"
},
{
"code": null,
"e": 27885,
"s": 27854,
"text": "How to iterate any Map in Java"
},
{
"code": null,
"e": 27917,
"s": 27885,
"text": "Initialize an ArrayList in Java"
},
{
"code": null,
"e": 27936,
"s": 27917,
"text": "Interfaces in Java"
},
{
"code": null,
"e": 27980,
"s": 27936,
"text": "Convert a String to Character array in Java"
},
{
"code": null,
"e": 28008,
"s": 27980,
"text": "Initializing a List in Java"
},
{
"code": null,
"e": 28034,
"s": 28008,
"text": "Java Programming Examples"
},
{
"code": null,
"e": 28081,
"s": 28034,
"text": "Implementing a Linked List in Java using Class"
}
] |
How to serialize a JavaScript array? | The serializeArray( ) method serializes all forms and forms elements like the .serialize() method but returns a JSON data structure for you to work with.
Let’s say the following is the content in PHP file serialize.php −
<?php
if( $_REQUEST["name"] ) {
$name = $_REQUEST['name'];
echo "Welcome ". $name;
$age = $_REQUEST['age'];
echo "<br />Your age : ". $age;
$sex = $_REQUEST['sex'];
echo "<br />Your gender : ". $sex;
}
?>
The following is the code to implement serializeArray() method in JavaScript −
Live Demo
<html>
<head>
<title>The jQuery Example</title>
<script type = "text/javascript"
src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js">
</script>
<script type = "text/javascript" language = "javascript">
$(document).ready(function() {
$("#driver").click(function(event){
$.post(
"/jquery/serialize.php",
$("#testform").serializeArray(),
function(data) {
$('#stage1').html(data);
}
);
var fields = $("#testform").serializeArray();
$("#stage2").empty();
jQuery.each(fields, function(i, field){
$("#stage2").append(field.value + " ");
});
});
});
</script>
</head>
<body>
<p>Click on the button to load result.html file:</p>
<div id = "stage1" style = "background-color:blue;">
STAGE - 1
</div>
<br />
<div id = "stage2" style = "background-color:blue;">
STAGE - 2
</div>
<form id = "testform">
<table>
<tr>
<td><p>Name:</p></td>
<td><input type = "text" name = "name" size = "40" /></td>
</tr>
<tr>
<td><p>Age:</p></td>
<td><input type = "text" name = "age" size = "40" /></td>
</tr>
<tr>
<td><p>Sex:</p></td>
<td> <select name = "sex">
<option value = "Male" selected>Male</option>
<option value = "Female" selected>Female</option>
</select></td>
</tr>
<tr>
<td colspan = "2">
<input type = "button" id = "driver" value = "Load Data" />
</td>
</tr>
</table>
</form>
</body>
</html> | [
{
"code": null,
"e": 1216,
"s": 1062,
"text": "The serializeArray( ) method serializes all forms and forms elements like the .serialize() method but returns a JSON data structure for you to work with."
},
{
"code": null,
"e": 1283,
"s": 1216,
"text": "Let’s say the following is the content in PHP file serialize.php −"
},
{
"code": null,
"e": 1530,
"s": 1283,
"text": "<?php\n if( $_REQUEST[\"name\"] ) {\n $name = $_REQUEST['name'];\n echo \"Welcome \". $name;\n $age = $_REQUEST['age'];\n echo \"<br />Your age : \". $age;\n $sex = $_REQUEST['sex'];\n echo \"<br />Your gender : \". $sex;\n }\n?>"
},
{
"code": null,
"e": 1609,
"s": 1530,
"text": "The following is the code to implement serializeArray() method in JavaScript −"
},
{
"code": null,
"e": 1619,
"s": 1609,
"text": "Live Demo"
},
{
"code": null,
"e": 3612,
"s": 1619,
"text": "<html>\n <head>\n <title>The jQuery Example</title>\n <script type = \"text/javascript\"\n src = \"https://ajax.googleapis.com/ajax/libs/jquery/2.1.3/jquery.min.js\">\n </script>\n <script type = \"text/javascript\" language = \"javascript\">\n $(document).ready(function() {\n $(\"#driver\").click(function(event){\n $.post(\n \"/jquery/serialize.php\",\n $(\"#testform\").serializeArray(),\n function(data) {\n $('#stage1').html(data);\n }\n );\n var fields = $(\"#testform\").serializeArray();\n $(\"#stage2\").empty();\n jQuery.each(fields, function(i, field){\n $(\"#stage2\").append(field.value + \" \");\n });\n });\n });\n </script>\n </head>\n <body>\n <p>Click on the button to load result.html file:</p>\n <div id = \"stage1\" style = \"background-color:blue;\">\n STAGE - 1\n </div>\n <br />\n <div id = \"stage2\" style = \"background-color:blue;\">\n STAGE - 2\n </div>\n <form id = \"testform\">\n <table>\n <tr>\n <td><p>Name:</p></td>\n <td><input type = \"text\" name = \"name\" size = \"40\" /></td>\n </tr>\n <tr>\n <td><p>Age:</p></td>\n <td><input type = \"text\" name = \"age\" size = \"40\" /></td>\n </tr>\n <tr>\n <td><p>Sex:</p></td>\n <td> <select name = \"sex\">\n <option value = \"Male\" selected>Male</option>\n <option value = \"Female\" selected>Female</option>\n </select></td>\n </tr>\n <tr>\n <td colspan = \"2\">\n <input type = \"button\" id = \"driver\" value = \"Load Data\" />\n </td>\n </tr>\n </table>\n </form>\n </body>\n</html>"
}
] |
Create a Range of Colors between the Specified Colors in R Programming - colorRampPalette() Function - GeeksforGeeks | 30 Jun, 2020
colorRampPalette() function in R Language is used to create a color range between two colors specified as arguments to the function. It takes the starting and ending colors of the range and returns a palette function that generates any number of intermediate colors.
Syntax:colorRampPalette(c(“color1”, “color2”))
Parameters:color1, color2: starting and ending colors of the range.
Returns: a palette function that generates a color range between the specified colors
Example:
# R program to create a color range # Apply colorRampPalette Functionfun_color_range <- colorRampPalette(c("Green", "darkgreen")) my_colors <- fun_color_range(100) # Plotting a graphplot(1:100, pch = 20, col = my_colors)
Output:
Example 2:
# R program to create a color range # Apply colorRampPalette Functionfun_color_range <- colorRampPalette(c("blue", "orange")) my_colors <- fun_color_range(100) # Plotting a graphplot(1:100, pch = 40, col = my_colors)
Output :
R-Functions
R Language
| [
{
"code": null,
"e": 25242,
"s": 25214,
"text": "\n30 Jun, 2020"
},
{
"code": null,
"e": 25452,
"s": 25242,
"text": "colorRampPalette() function in R Language is used to create a color range between two colors specified as arguments to the function. This function is used to specify the starting and ending color of the range."
},
{
"code": null,
"e": 25499,
"s": 25452,
"text": "Syntax:colorRampPalette(c(“color1”, “color2”))"
},
{
"code": null,
"e": 25567,
"s": 25499,
"text": "Parameters:color1, color2: starting and ending colors of the range."
},
{
"code": null,
"e": 25613,
"s": 25567,
"text": "Returns: color range between specified colors"
},
{
"code": null,
"e": 25622,
"s": 25613,
"text": "Example:"
},
{
"code": "# R program to create a color range # Apply colorRampPalette Functionfun_color_range <- colorRampPalette(c(\"Green\", \"darkgreen\")) my_colors <- fun_color_range(100) # Plotting a graphplot(1:100, pch = 20, col = my_colors)",
"e": 25849,
"s": 25622,
"text": null
},
{
"code": null,
"e": 25857,
"s": 25849,
"text": "Output:"
},
{
"code": null,
"e": 25868,
"s": 25857,
"text": "Example 2:"
},
{
"code": "# R program to create a color range # Apply colorRampPalette Functionfun_color_range <- colorRampPalette(c(\"blue\", \"orange\")) my_colors <- fun_color_range(100) # Plotting a graphplot(1:100, pch = 40, col = my_colors)",
"e": 26091,
"s": 25868,
"text": null
},
{
"code": null,
"e": 26100,
"s": 26091,
"text": "Output :"
},
{
"code": null,
"e": 26112,
"s": 26100,
"text": "R-Functions"
},
{
"code": null,
"e": 26123,
"s": 26112,
"text": "R Language"
},
{
"code": null,
"e": 26221,
"s": 26123,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 26273,
"s": 26221,
"text": "Change Color of Bars in Barchart using ggplot2 in R"
},
{
"code": null,
"e": 26311,
"s": 26273,
"text": "How to Change Axis Scales in R Plots?"
},
{
"code": null,
"e": 26346,
"s": 26311,
"text": "Group by function in R using Dplyr"
},
{
"code": null,
"e": 26404,
"s": 26346,
"text": "How to Split Column Into Multiple Columns in R DataFrame?"
},
{
"code": null,
"e": 26453,
"s": 26404,
"text": "How to filter R DataFrame by values in a column?"
},
{
"code": null,
"e": 26503,
"s": 26453,
"text": "How to filter R dataframe by multiple conditions?"
},
{
"code": null,
"e": 26520,
"s": 26503,
"text": "R - if statement"
},
{
"code": null,
"e": 26563,
"s": 26520,
"text": "Replace Specific Characters in String in R"
},
{
"code": null,
"e": 26589,
"s": 26563,
"text": "Time Series Analysis in R"
}
] |
5 Different methods to find the length of a string in C++? | A sequence of characters or a linear array of character is known as String. Its declaration is same as define other arrays.
Length of array is the number of characters in the String. There are many in-built method and other methods to find the length of string. Here, we are discussing 5 different methods to find the length of a string in C++.
1) Using the strlen() method of C − This C library function returns the length of the string as an integer value. For this, you need to pass the string in the form of a character array.
#include <iostream>
#include <cstring>
using namespace std;
int main() {
char charr[] = "I Love Tutorialspoint";
int length = strlen(charr);
cout << "the length of the character array is " << length;
return 0;
}
the length of the character array is 21
2) Using the size() method of C++ − It is included in the string library of C++. It returns an integer value equal to the number of characters in the string.
#include <iostream>
#include <string>
using namespace std;
int main() {
string str = "I love tutorialspoint";
int length = str.size();
cout << "the length of the string is " << length;
return 0;
}
The length of the string is 21
3) Using the for loop − This method does not require any function. It loops through the array and counts the number of elements in it. The loop runs until the '\0' character is encountered.
#include <iostream>
#include <string>
using namespace std;
int main() {
string str = "I love tutorialspoint";
int i;
for(i=0; str[i]!='\0'; i++){ }
cout << "the length of the string is " << i;
return 0;
}
The length of the string is 21
4) Using the length() method − In C++ there is a method length() in the string library that returns the number of characters in the string.
#include <iostream>
#include <string>
using namespace std;
int main() {
string str = "I love tutorialspoint";
int length = str.length();
cout << "the length of the string is " << length;
return 0;
}
The length of the string is 21
5) Finding length of string using the while loop − You can count the number of characters in a string using the while loop also. To count the number of characters, you have to use a counter in the while loop and specify the end condition as != ‘\0’ for the string.
#include <iostream>
#include <string>
using namespace std;
int main() {
string str = "I love tutorialspoint";
int length = 0;
while(str[length] !='\0' ){
length++;
}
cout<<"The length of the string is "<< length;
return 0;
}
The length of the string is 21 | [
{
"code": null,
"e": 1186,
"s": 1062,
"text": "A sequence of characters or a linear array of character is known as String. Its declaration is same as define other arrays."
},
{
"code": null,
"e": 1407,
"s": 1186,
"text": "Length of array is the number of characters in the String. There are many in-built method and other methods to find the length of string. Here, we are discussing 5 different methods to find the length of a string in C++."
},
{
"code": null,
"e": 1561,
"s": 1407,
"text": "1) Using the strlen() method of C − This function returns an integer value of the C. For this you need to pass the string in the form of character array."
},
{
"code": null,
"e": 1785,
"s": 1561,
"text": "#include <iostream>\n#include <cstring>\nusing namespace std;\nint main() {\n char charr[] = \"I Love Tutorialspoint\";\n int length = strlen(charr);\n cout << \"the length of the character array is \" << length;\n return 0;\n}"
},
{
"code": null,
"e": 1825,
"s": 1785,
"text": "the length of the character array is 21"
},
{
"code": null,
"e": 1977,
"s": 1825,
"text": "2) Using the size() method of c++ − It is included in the string library of C++. The return an integer value of the number of characters in the string."
},
{
"code": null,
"e": 2186,
"s": 1977,
"text": "#include <iostream>\n#include <string>\nusing namespace std;\nint main() {\n string str = \"I love tutorialspoint\";\n int length = str.size();\n cout << \"the length of the string is \" << length;\n return 0;\n}"
},
{
"code": null,
"e": 2217,
"s": 2186,
"text": "The length of the string is 21"
},
{
"code": null,
"e": 2397,
"s": 2217,
"text": "3) Using the for loop − This method does not require any function. It loops through the array and counts the number of elements in it. The loop runs until the ‘/0’ is encountered."
},
{
"code": null,
"e": 2620,
"s": 2397,
"text": "#include <iostream>\n#include <string>\nusing namespace std;\nint main() {\n string str = \"I love tutorialspoint\";\n int i;\n for(i=0; str[i]!='\\0'; i++){ }\n cout << \"the length of the string is \" << i;\n return 0;\n}"
},
{
"code": null,
"e": 2651,
"s": 2620,
"text": "The length of the string is 21"
},
{
"code": null,
"e": 2791,
"s": 2651,
"text": "4) Using the length() method − In C++ their is a method length() in the string library that returns the number of characters in the string."
},
{
"code": null,
"e": 3005,
"s": 2791,
"text": "#include <iostream>\n#include <string>\nusing namespace std;\nint main() {\n string str = \"I love tutorialspoint\";\n int length = str.length();\n cout << \"the length of the string is \" << length;\n return 0;\n}"
},
{
"code": null,
"e": 3036,
"s": 3005,
"text": "The length of the string is 21"
},
{
"code": null,
"e": 3301,
"s": 3036,
"text": "5) Finding length of string using the while loop − You can count the number of characters in a string using the while loop also. To count the number of characters, you have to use a counter in the while loop and specify the end condition as != ‘\\0’ for the string."
},
{
"code": null,
"e": 3550,
"s": 3301,
"text": "#include <iostream>\n#include <string>\nusing namespace std;\nint main() {\n string str = \"I love tutorialspoint\";\n int length = 0;\n while(str[length] !='\\0' ){\n length++;\n }\n cout<<\"The length of the string is \"<< length;\n return 0;\n}"
},
{
"code": null,
"e": 3581,
"s": 3550,
"text": "The length of the string is 21"
}
] |
PDFBox - Extracting Image | In the previous chapter, we have seen how to merge multiple PDF documents. In this chapter, we will understand how to extract an image from a page of a PDF document.
The PDFBox library provides a class named PDFRenderer, which renders a PDF document into an AWT BufferedImage.
Following are the steps to generate an image from a PDF document.
Load an existing PDF document using the static method load() of the PDDocument class. This method accepts a file object as a parameter; since it is a static method, you can invoke it using the class name, as shown below.
File file = new File("path of the document")
PDDocument document = PDDocument.load(file);
The PDFRenderer class renders a PDF document into an AWT BufferedImage, so you need to instantiate it as shown below. Its constructor accepts a document object; pass the document object created in the previous step.
PDFRenderer renderer = new PDFRenderer(document);
You can render the image on a particular page using the renderImage() method of the PDFRenderer class; to this method you need to pass the index of the page that contains the image to be rendered.
BufferedImage image = renderer.renderImage(0);
You can write the image rendered in the previous step to a file using the write() method. To this method, you need to pass three parameters −
The rendered image object.
String representing the type of the image (jpg or png).
File object to which you need to save the extracted image.
ImageIO.write(image, "JPEG", new File("C:/PdfBox_Examples/myimage.jpg"));
Finally, close the document using the close() method of the PDDocument class as shown below.
document.close();
Suppose we have a PDF document named sample.pdf in the path C:\PdfBox_Examples\, and it contains an image on its first page as shown below.
This example demonstrates how to convert the above PDF document into an image file. Here, we will retrieve the image in the 1st page of the PDF document and save it as myimage.jpg. Save this code as PdfToImage.java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.rendering.PDFRenderer;
public class PdfToImage {
public static void main(String args[]) throws Exception {
//Loading an existing PDF document
File file = new File("C:/PdfBox_Examples/sample.pdf");
PDDocument document = PDDocument.load(file);
//Instantiating the PDFRenderer class
PDFRenderer renderer = new PDFRenderer(document);
//Rendering an image from the PDF document
BufferedImage image = renderer.renderImage(0);
//Writing the image to a file
ImageIO.write(image, "JPEG", new File("C:/PdfBox_Examples/myimage.jpg"));
System.out.println("Image created");
//Closing the document
document.close();
}
}
Compile and execute the saved Java file from the command prompt using the following commands.
javac PdfToImage.java
java PdfToImage
Upon execution, the above program retrieves the image in the given PDF document displaying the following message.
Image created
If you verify the given path, you can observe that the image is generated and saved as myimage.jpg as shown below.
| [
{
"code": null,
"e": 2193,
"s": 2027,
"text": "In the previous chapter, we have seen how to merge multiple PDF documents. In this chapter, we will understand how to extract an image from a page of a PDF document."
},
{
"code": null,
"e": 2303,
"s": 2193,
"text": "PDFBox library provides you a class named PDFRenderer which renders a PDF document into an AWT BufferedImage."
},
{
"code": null,
"e": 2369,
"s": 2303,
"text": "Following are the steps to generate an image from a PDF document."
},
{
"code": null,
"e": 2586,
"s": 2369,
"text": "Load an existing PDF document using the static method load() of the PDDocument class. This method accepts a file object as a parameter, since this is a static method you can invoke it using class name as shown below."
},
{
"code": null,
"e": 2678,
"s": 2586,
"text": "File file = new File(\"path of the document\") \nPDDocument document = PDDocument.load(file);\n"
},
{
"code": null,
"e": 2945,
"s": 2678,
"text": "The class named PDFRenderer renders a PDF document into an AWT BufferedImage. Therefore, you need to instantiate this class as shown below. The constructor of this class accepts a document object; pass the document object created in the previous step as shown below."
},
{
"code": null,
"e": 2996,
"s": 2945,
"text": "PDFRenderer renderer = new PDFRenderer(document);\n"
},
{
"code": null,
"e": 3199,
"s": 2996,
"text": "You can render the image in a particular page using the method renderImage() of the Renderer class, to this method you need to pass the index of the page where you have the image that is to be rendered."
},
{
"code": null,
"e": 3247,
"s": 3199,
"text": "BufferedImage image = renderer.renderImage(0);\n"
},
{
"code": null,
"e": 3389,
"s": 3247,
"text": "You can write the image rendered in the previous step to a file using the write() method. To this method, you need to pass three parameters −"
},
{
"code": null,
"e": 3416,
"s": 3389,
"text": "The rendered image object."
},
{
"code": null,
"e": 3472,
"s": 3416,
"text": "String representing the type of the image (jpg or png)."
},
{
"code": null,
"e": 3531,
"s": 3472,
"text": "File object to which you need to save the extracted image."
},
{
"code": null,
"e": 3606,
"s": 3531,
"text": "ImageIO.write(image, \"JPEG\", new File(\"C:/PdfBox_Examples/myimage.jpg\"));\n"
},
{
"code": null,
"e": 3699,
"s": 3606,
"text": "Finally, close the document using the close() method of the PDDocument class as shown below."
},
{
"code": null,
"e": 3718,
"s": 3699,
"text": "document.close();\n"
},
{
"code": null,
"e": 3856,
"s": 3718,
"text": "Suppose, we have a PDF document — sample.pdf in the path C:\\PdfBox_Examples\\ and this contains an image in its first page as shown below."
},
{
"code": null,
"e": 4071,
"s": 3856,
"text": "This example demonstrates how to convert the above PDF document into an image file. Here, we will retrieve the image in the 1st page of the PDF document and save it as myimage.jpg. Save this code as PdfToImage.java"
},
{
"code": null,
"e": 4943,
"s": 4071,
"text": "import java.awt.image.BufferedImage;\nimport java.io.File;\n\nimport javax.imageio.ImageIO;\nimport org.apache.pdfbox.pdmodel.PDDocument;\nimport org.apache.pdfbox.rendering.PDFRenderer;\npublic class PdfToImage {\n\n public static void main(String args[]) throws Exception {\n\n //Loading an existing PDF document\n File file = new File(\"C:/PdfBox_Examples/sample.pdf\");\n PDDocument document = PDDocument.load(file);\n \n //Instantiating the PDFRenderer class\n PDFRenderer renderer = new PDFRenderer(document);\n\n //Rendering an image from the PDF document\n BufferedImage image = renderer.renderImage(0);\n\n //Writing the image to a file\n ImageIO.write(image, \"JPEG\", new File(\"C:/PdfBox_Examples/myimage.jpg\"));\n \n System.out.println(\"Image created\");\n \n //Closing the document\n document.close();\n\n }\n}"
},
{
"code": null,
"e": 5037,
"s": 4943,
"text": "Compile and execute the saved Java file from the command prompt using the following commands."
},
{
"code": null,
"e": 5077,
"s": 5037,
"text": "javac PdfToImage.java \njava PdfToImage\n"
},
{
"code": null,
"e": 5191,
"s": 5077,
"text": "Upon execution, the above program retrieves the image in the given PDF document displaying the following message."
},
{
"code": null,
"e": 5206,
"s": 5191,
"text": "Image created\n"
},
{
"code": null,
"e": 5321,
"s": 5206,
"text": "If you verify the given path, you can observe that the image is generated and saved as myimage.jpg as shown below."
},
{
"code": null,
"e": 5328,
"s": 5321,
"text": " Print"
},
{
"code": null,
"e": 5339,
"s": 5328,
"text": " Add Notes"
}
] |
Dynamic Programming - GeeksforGeeks | 04 May, 2020
l(i,j) = 0, if either i=0 or j=0
= expr1, if i,j > 0 and X[i-1] = Y[j-1]
= expr2, if i,j > 0 and X[i-1] != Y[j-1]
1) The last characters of two strings match.
   The length of lcs is 1 + length of lcs of X[0..i-1] and Y[0..j-1]
2) The last characters don't match.
The length of lcs is max of following two lcs values
a) LCS of X[0..i-1] and Y[0..j]
b) LCS of X[0..i] and Y[0..j-1]
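To make the recurrence concrete, here is a minimal bottom-up tabulation sketch in Python; the function name and the sample strings are illustrative choices, not taken from the original article. Here, expr1 corresponds to l(i-1, j-1) + 1 and expr2 to max(l(i-1, j), l(i, j-1)).
def lcs_length(X, Y):
    # dp[i][j] holds l(i, j): the LCS length of X[0..i-1] and Y[0..j-1]
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                # expr1: the last characters match, so extend the LCS of the prefixes
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # expr2: drop the last character of X or of Y, whichever gives a longer LCS
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # prints 4, for the common subsequence "GTAB"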
| [
{
"code": null,
"e": 34492,
"s": 34464,
"text": "\n04 May, 2020"
},
{
"code": null,
"e": 34621,
"s": 34492,
"text": "l(i,j) = 0, if either i=0 or j=0\n = expr1, if i,j > 0 and X[i-1] = Y[j-1]\n = expr2, if i,j > 0 and X[i-1] != Y[j-1] "
},
{
"code": null,
"e": 34895,
"s": 34621,
"text": "1) The last characters of two strings match. \n The length of lcs is length of lcs of X[0..i-1] and Y[0..j-1]\n2) The last characters don't match.\n The length of lcs is max of following two lcs values\n a) LCS of X[0..i-1] and Y[0..j]\n b) LCS of X[0..i] and Y[0..j-1]\n"
},
{
"code": null,
"e": 34993,
"s": 34895,
"text": "Writing code in comment? Please use ide.geeksforgeeks.org, generate link and share the link here."
},
{
"code": null,
"e": 35025,
"s": 34993,
"text": "Best Time to Buy and Sell Stock"
},
{
"code": null,
"e": 35078,
"s": 35025,
"text": "Must Do Coding Questions for Product Based Companies"
},
{
"code": null,
"e": 35134,
"s": 35078,
"text": "Converting Epsilon-NFA to DFA using Python and Graphviz"
},
{
"code": null,
"e": 35184,
"s": 35134,
"text": "C++ Program For Inserting A Node In A Linked List"
},
{
"code": null,
"e": 35216,
"s": 35184,
"text": "CBSE Notes for Class 10 Physics"
},
{
"code": null,
"e": 35302,
"s": 35216,
"text": "Largest value of K such that both K and -K exist in Array in given index range [L, R]"
},
{
"code": null,
"e": 35344,
"s": 35302,
"text": "BigInt (BIG INTEGERS) in C++ with Example"
},
{
"code": null,
"e": 35381,
"s": 35344,
"text": "Different Ways to Run Applet in Java"
},
{
"code": null,
"e": 35427,
"s": 35381,
"text": "How to Install Flutter on Visual Studio Code?"
}
] |
How do I convert (or scale) axis values and redefine the tick frequency in Matplotlib? | To convert or scale the axis values and redefine the tick frequency in Matplotlib, we can build lists of tick positions and tick labels and pass them to the xticks() method, which places the ticks and redefines their frequency.
Set the figure size and adjust the padding between and around the subplots.
Initialize a variable, n, for the number of data points.
Create x and y data points using numpy.
Plot x and y data points using plot() method.
Make lists of ticks and tick labels.
Use xticks() method to place axis scale and redefine tick frequency.
To display the figure, use show() method.
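One point worth noting before the full example: xticks() only relabels chosen tick positions with custom strings. If the goal is instead to rescale the displayed axis values themselves (for example, show x multiplied by 100), matplotlib's FuncFormatter can be used. The short sketch below is an illustrative addition under that assumption and is separate from the original example that follows.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.ticker import FuncFormatter

x = np.linspace(-2, 2, 10)
y = np.exp(x)

fig, ax = plt.subplots()
ax.plot(x, y)

# Display every x tick value multiplied by 100, without changing the data itself.
ax.xaxis.set_major_formatter(FuncFormatter(lambda val, pos: f"{val * 100:.0f}"))

plt.show()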
import numpy as np
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = [7.50, 3.50]
plt.rcParams["figure.autolayout"] = True
n = 10
x = np.linspace(-2, 2, n)
y = np.exp(x)
plt.plot(x, y)
xticks = [i for i in range(int(n/2))]
xtick_labels = ["x"+str(i) for i in range(int(n/2))]
plt.xticks(xticks, xtick_labels)
plt.show() | [
{
"code": null,
"e": 1268,
"s": 1062,
"text": "To convert or scale the axis values and redefine the tick frequency in matplotlib, we can make a list of xticks and xtick_labels using xticks() method. Place the axis scale and redefine the tick frequency."
},
{
"code": null,
"e": 1344,
"s": 1268,
"text": "Set the figure size and adjust the padding between and around the subplots."
},
{
"code": null,
"e": 1420,
"s": 1344,
"text": "Set the figure size and adjust the padding between and around the subplots."
},
{
"code": null,
"e": 1477,
"s": 1420,
"text": "Initialize a variable, n, for the number of data points."
},
{
"code": null,
"e": 1534,
"s": 1477,
"text": "Initialize a variable, n, for the number of data points."
},
{
"code": null,
"e": 1574,
"s": 1534,
"text": "Create x and y data points using numpy."
},
{
"code": null,
"e": 1614,
"s": 1574,
"text": "Create x and y data points using numpy."
},
{
"code": null,
"e": 1660,
"s": 1614,
"text": "Plot x and y data points using plot() method."
},
{
"code": null,
"e": 1706,
"s": 1660,
"text": "Plot x and y data points using plot() method."
},
{
"code": null,
"e": 1743,
"s": 1706,
"text": "Make lists of ticks and tick labels."
},
{
"code": null,
"e": 1780,
"s": 1743,
"text": "Make lists of ticks and tick labels."
},
{
"code": null,
"e": 1849,
"s": 1780,
"text": "Use xticks() method to place axis scale and redefine tick frequency."
},
{
"code": null,
"e": 1918,
"s": 1849,
"text": "Use xticks() method to place axis scale and redefine tick frequency."
},
{
"code": null,
"e": 1960,
"s": 1918,
"text": "To display the figure, use show() method."
},
{
"code": null,
"e": 2002,
"s": 1960,
"text": "To display the figure, use show() method."
},
{
"code": null,
"e": 2342,
"s": 2002,
"text": "import numpy as np\nfrom matplotlib import pyplot as plt\nplt.rcParams[\"figure.figsize\"] = [7.50, 3.50]\nplt.rcParams[\"figure.autolayout\"] = True\nn = 10\nx = np.linspace(-2, 2, n)\ny = np.exp(x)\nplt.plot(x, y)\nxticks = [i for i in range(int(n/2))]\nxtick_labels = [\"x\"+str(i) for i in range(int(n/2))]\nplt.xticks(xticks, xtick_labels)\nplt.show()"
}
] |
Covariant return types in Java | A covariant return type refers to the return type of an overriding method. It allows the return type of an overridden method to be narrowed without any need to cast the type or check the return type. Covariant return types work only for non-primitive return types.
From Java 5 onwards, we can override a method and change its return type, provided that the new return type is a subclass of the overridden method's return type.
Following example showcases the same.
class SuperClass {
SuperClass get() {
System.out.println("SuperClass");
return this;
}
}
public class Tester extends SuperClass {
Tester get() {
System.out.println("SubClass");
return this;
}
public static void main(String[] args) {
SuperClass tester = new Tester();
tester.get();
}
}
Output
SubClass | [
{
"code": null,
"e": 1317,
"s": 1062,
"text": "Covariant return type refers to return type of an overriding method. It allows to narrow down return type of an overridden method without any need to cast the type or check the return type. Covariant return type works only for non-primitive return types."
},
{
"code": null,
"e": 1494,
"s": 1317,
"text": "From Java 5 onwards, we can override a method by changing its return type only by abiding the condition that return type is a subclass of that of overridden method return type."
},
{
"code": null,
"e": 1532,
"s": 1494,
"text": "Following example showcases the same."
},
{
"code": null,
"e": 1543,
"s": 1532,
"text": " Live Demo"
},
{
"code": null,
"e": 1882,
"s": 1543,
"text": "class SuperClass {\n SuperClass get() {\n System.out.println(\"SuperClass\");\n return this;\n }\n}\npublic class Tester extends SuperClass {\n Tester get() {\n System.out.println(\"SubClass\");\n return this;\n }\n public static void main(String[] args) {\n SuperClass tester = new Tester();\n tester.get();\n }\n}"
},
{
"code": null,
"e": 1889,
"s": 1882,
"text": "Output"
},
{
"code": null,
"e": 1898,
"s": 1889,
"text": "Subclass"
}
] |
React Native Modal Component - GeeksforGeeks | 10 May, 2021
The following approach covers how to create a modal in React Native. For this, we are going to use the Modal component, which is a basic way to present content above an enclosing view.
Syntax:
<Modal
animationType=""
transparent={}
visible={}
onRequestClose={}
>
Props in Modal:
animationType: This prop controls how the modal animates.
animated: This prop is deprecated; use animationType instead.
hardwareAccelerated: This prop controls whether to force hardware acceleration for the underlying window. It is only for android devices.
onDismiss: This prop allows passing a function that will be called once the modal has been dismissed. It is only for ios devices.
onOrientationChange: It is called when the orientation changes while the modal is being displayed. It is only for ios devices.
onRequestClose: It is called when the user taps the hardware back button on Android.
onShow: This prop allows passing a function that will be called once the modal has been shown.
presentationStyle: This prop controls how the modal appears. It is only for ios devices.
statusBarTranslucent: This prop determines whether your modal should go under the system statusbar.
supportedOrientations: This prop allows the modal to be rotated to any of the specified orientations. It is only for ios devices.
transparent: This prop determines whether your modal will fill the entire view.
visible: This prop determines whether your modal is visible or not.
Now let’s start with the implementation:
Step 1: Open your terminal and install expo-cli by the following command.
npm install -g expo-cli
Step 2: Now create a project by the following command.
expo init myapp
Step 3: Now go into your project folder i.e. myapp
cd myapp
Project Structure: It will look like this.
Example: Now let’s implement the Modal. Here we created a Modal that comes up when we click on the button.
App.js
import React, { useState } from 'react';
import { StyleSheet, View, Text, Modal, Button } from 'react-native';

export default function App() {
  const [active, setactive] = useState(false);
  return (
    <View style={styles.container}>
      <Modal
        animationType="slide"
        transparent={true}
        visible={active}
        onRequestClose={() => {
          console.warn("closed");
        }}
      >
        <View style={styles.container}>
          <View style={styles.View}>
            <Text style={styles.text}>GeeksforGeeks</Text>
            <Button title="close" onPress={() => { setactive(!active) }} />
          </View>
        </View>
      </Modal>
      <Button title={"click"} onPress={() => { setactive(!active) }} />
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: "black",
    alignItems: 'center',
    justifyContent: 'center',
  },
  View: {
    backgroundColor: "white",
    height: 140,
    width: 250,
    borderRadius: 15,
    alignItems: "center",
    justifyContent: "center",
    borderColor: "black",
    borderWidth: 2,
  },
  text: {
    fontSize: 20,
    color: "green",
    marginBottom: 20
  },
  button: {
    margin: 20,
    width: 200,
  }
});
Start the server by using the following command.
npm run android
Output: If your emulator did not open automatically then you need to do it manually. First, go to your android studio and run the emulator. Now start the server again.
Reference: https://reactnative.dev/docs/modal
| [
{
"code": null,
"e": 24836,
"s": 24808,
"text": "\n10 May, 2021"
},
{
"code": null,
"e": 25016,
"s": 24836,
"text": "The following approach covers how to create Modal in react-native. For this, we are going to use the Modal component. It is a basic way to present content above an enclosing view."
},
{
"code": null,
"e": 25024,
"s": 25016,
"text": "Syntax:"
},
{
"code": null,
"e": 25102,
"s": 25024,
"text": "<Modal\n animationType=\"\"\n transparent={}\n visible={}\n onRequestClose={}\n>"
},
{
"code": null,
"e": 25118,
"s": 25102,
"text": "Props in Modal:"
},
{
"code": null,
"e": 25176,
"s": 25118,
"text": "animationType: This prop controls how the modal animates."
},
{
"code": null,
"e": 25241,
"s": 25176,
"text": "animated: This prop is deprecated now use animationType instead."
},
{
"code": null,
"e": 25379,
"s": 25241,
"text": "hardwareAccelerated: This prop controls whether to force hardware acceleration for the underlying window. It is only for android devices."
},
{
"code": null,
"e": 25509,
"s": 25379,
"text": "onDismiss: This prop allows passing a function that will be called once the modal has been dismissed. It is only for ios devices."
},
{
"code": null,
"e": 25636,
"s": 25509,
"text": "onOrientationChange: It is called when the orientation changes while the modal is being displayed. It is only for ios devices."
},
{
"code": null,
"e": 25721,
"s": 25636,
"text": "onRequestClose: It is called when the user taps the hardware back button on Android."
},
{
"code": null,
"e": 25817,
"s": 25721,
"text": "onShow: This prop allows passing a function that will be called once the modal has been shown."
},
{
"code": null,
"e": 25906,
"s": 25817,
"text": "presentationStyle: This prop controls how the modal appears. It is only for ios devices."
},
{
"code": null,
"e": 26006,
"s": 25906,
"text": "statusBarTranslucent: This prop determines whether your modal should go under the system statusbar."
},
{
"code": null,
"e": 26136,
"s": 26006,
"text": "supportedOrientations: This prop allows the modal to be rotated to any of the specified orientations. It is only for ios devices."
},
{
"code": null,
"e": 26216,
"s": 26136,
"text": "transparent: This prop determines whether your modal will fill the entire view."
},
{
"code": null,
"e": 26284,
"s": 26216,
"text": "visible: This prop determines whether your modal is visible or not."
},
{
"code": null,
"e": 26325,
"s": 26284,
"text": "Now let’s start with the implementation:"
},
{
"code": null,
"e": 26422,
"s": 26325,
"text": "Step 1: Open your terminal and install expo-cli by the following command.npm install -g expo-cli"
},
{
"code": null,
"e": 26496,
"s": 26422,
"text": "Step 1: Open your terminal and install expo-cli by the following command."
},
{
"code": null,
"e": 26520,
"s": 26496,
"text": "npm install -g expo-cli"
},
{
"code": null,
"e": 26590,
"s": 26520,
"text": "Step 2: Now create a project by the following command.expo init myapp"
},
{
"code": null,
"e": 26645,
"s": 26590,
"text": "Step 2: Now create a project by the following command."
},
{
"code": null,
"e": 26661,
"s": 26645,
"text": "expo init myapp"
},
{
"code": null,
"e": 26720,
"s": 26661,
"text": "Step 3: Now go into your project folder i.e. myappcd myapp"
},
{
"code": null,
"e": 26771,
"s": 26720,
"text": "Step 3: Now go into your project folder i.e. myapp"
},
{
"code": null,
"e": 26780,
"s": 26771,
"text": "cd myapp"
},
{
"code": null,
"e": 26823,
"s": 26780,
"text": "Project Structure: It will look like this."
},
{
"code": null,
"e": 26930,
"s": 26823,
"text": "Example: Now let’s implement the Modal. Here we created a Modal that comes up when we click on the button."
},
{
"code": null,
"e": 26937,
"s": 26930,
"text": "App.js"
},
{
"code": null,
"e": 26944,
"s": 26937,
"text": "App.js"
},
{
"code": "import React , {useState} from 'react';import { StyleSheet, View , Text , Modal , Button } from 'react-native';export default function App() { const [active , setactive] = useState(false); return ( <View style={styles.container}> <Modal animationType=\"slide\" transparent={true} visible={active} onRequestClose={() => { console.warn(\"closed\"); }} > <View style={styles.container}> <View style={styles.View}> <Text style={styles.text}>GeeksforGeeks</Text> <Button title=\"close\" onPress={()=>{setactive(!active)}}/> </View> </View> </Modal> <Button title={\"click\"} onPress={()=>{setactive(!active)}} /> </View> );} const styles = StyleSheet.create({ container: { flex: 1, backgroundColor : \"black\", alignItems: 'center', justifyContent: 'center', }, View : { backgroundColor : \"white\" , height : 140 , width : 250, borderRadius : 15, alignItems : \"center\", justifyContent : \"center\", borderColor : \"black\", borderWidth:2, }, text : { fontSize : 20, color : \"green\", marginBottom:20 }, button : { margin : 20, width:200, }});",
"e": 28215,
"s": 26944,
"text": null
},
{
"code": null,
"e": 28264,
"s": 28215,
"text": "Start the server by using the following command."
},
{
"code": null,
"e": 28280,
"s": 28264,
"text": "npm run android"
},
{
"code": null,
"e": 28448,
"s": 28280,
"text": "Output: If your emulator did not open automatically then you need to do it manually. First, go to your android studio and run the emulator. Now start the server again."
},
{
"code": null,
"e": 28494,
"s": 28448,
"text": "Reference: https://reactnative.dev/docs/modal"
},
{
"code": null,
"e": 28501,
"s": 28494,
"text": "Picked"
},
{
"code": null,
"e": 28514,
"s": 28501,
"text": "React-Native"
},
{
"code": null,
"e": 28531,
"s": 28514,
"text": "Web Technologies"
},
{
"code": null,
"e": 28629,
"s": 28531,
"text": "Writing code in comment?\nPlease use ide.geeksforgeeks.org,\ngenerate link and share the link here."
},
{
"code": null,
"e": 28638,
"s": 28629,
"text": "Comments"
},
{
"code": null,
"e": 28651,
"s": 28638,
"text": "Old Comments"
},
{
"code": null,
"e": 28693,
"s": 28651,
"text": "Roadmap to Become a Web Developer in 2022"
},
{
"code": null,
"e": 28736,
"s": 28693,
"text": "How to fetch data from an API in ReactJS ?"
},
{
"code": null,
"e": 28780,
"s": 28736,
"text": "Top 10 Angular Libraries For Web Developers"
},
{
"code": null,
"e": 28825,
"s": 28780,
"text": "Convert a string to an integer in JavaScript"
},
{
"code": null,
"e": 28886,
"s": 28825,
"text": "Difference between var, let and const keywords in JavaScript"
},
{
"code": null,
"e": 28958,
"s": 28886,
"text": "Differences between Functional Components and Class Components in React"
},
{
"code": null,
"e": 29008,
"s": 28958,
"text": "How to Insert Form Data into Database using PHP ?"
},
{
"code": null,
"e": 29053,
"s": 29008,
"text": "How to redirect to another page in ReactJS ?"
},
{
"code": null,
"e": 29098,
"s": 29053,
"text": "How to execute PHP code using command line ?"
}
] |
Implementing a Trading Algorithm with R | by Adam Gajtkowski | Towards Data Science | This story explains how to implement the moving average trading algorithm with R. If you’re interested in setting up your automated trading pipeline, you should first read this article. This story is a purely technical guide focusing on programming and statistics, not financial advice.
Throughout this story, we will build an R function which takes historical stock data and arbitrary threshold as inputs and based on it decides whether it is a good time to purchase given stock. We will look at Apple stocks. This article may require a certain level of statistical knowledge. University level introduction to statistics modules should be sufficient.
The moving average trading algorithm takes advantage of fluctuations around the stock's trend. We first identify whether the slope of the given time series is positive. For simplicity, we designed this algorithm to work only for positively trending stocks. We then detrend the historical time series and check whether the most recent fluctuation is above or below the moving average. If the current price is below the moving average and we don't have the stock in our holdings, we buy the stock. If it's above the moving average and the stock is currently in our holdings, we sell the stock.
Figure 1 represents the closing price of Apple stock in the past 50 days. We detrend this time-series, so that the red dotted line is aligned with the x-axis.
Figure 2 represents the detrended Apple stock for the past 50 days. Detrended time series looks stationary, that is it has constant mean and variance. If we wanted to be very rigorous, we could test for stationarity to make sure that it has desirable properties. We choose the arbitrary threshold to be +-5%. We purchase the stock if its 5 days average is 5% below the detrended time series, and sell it if it’s 5% above detrended time series. If the trade is successful, we gain 10% + any increase along the trend within this period.
The function takes 2 inputs. Historic data from Yahoo’s API and arbitrary threshold for buying/selling the stock.
moving_average_model <- function(data,trend_deviation_treshold = -5) {...}
We then access our current holdings to check which stocks we have purchased in the past. Below, we check if we currently hold Apple stocks in our portfolio.
holdings <- current_holdings()

if ((holdings %>% filter(stock == "APPL"))$stock == "APPL") {
  stock_in_portfolio = TRUE
} else {
  stock_in_portfolio = FALSE
}
We then declare the historic data as a data frame, transform row names including date into a separate column, consider only closing price and dates in the past 70 days (about 50 working days when the stock market is open). Finally, we create an additional column which gives each day a number. Most recent day will have the highest number.
data <- as.data.frame(data) %>%
  tibble::rownames_to_column("date")

data_close <- data %>%
  select(date, close = AAPL.Close) %>%
  filter(date > as.Date(Sys.Date() - 70)) %>%
  arrange(date) %>%
  mutate(day_number = row_number())
The next step is detrending the time series. We have created a variable with increasing numbers for each date above. If we were to treat this as a separate variable and plot it against date, this would form a straight line. Hence, we can use this variable for detrending. If we regress the stock price on this variable, this detrends the time series. We can then work on the residuals from the regression to determine whether the current price is above or below the moving average.
formula <- close ~ day_number

regression_model <- lm(formula = formula,
                       data = data_close)
You may note that we do not suppress the constant term, nor consider that during further analysis. We simply care about the position of current price relative to the average. Hence, we may only analyse residuals.
recent_5 <- residual_values %>%
  rename("residual" = "regression_model$residuals") %>%
  filter(day_number > max(day_number) - 6) %>%
  summarise(close_mean = mean(residual, na.rm = TRUE))
First of all, we compute the average closing mean for the most recent 5 working days.
recent_5_50 <- residual_values %>%
  rename("residual" = "regression_model$residuals") %>%
  filter(day_number <= max(day_number) - 6) %>%
  summarise(close_mean = mean(residual, na.rm = TRUE))
We then compute the mean over the 6 to 50 most recent working days. Next, we compute the deviation of the average of the most recent 5 days from the average of those 6 to 50 working days.
trend_deviation <- ((recent_5$close_mean - recent_5_50$close_mean) /recent_5_50$close_mean) * 100
We purchase the stock if the current deviation of the past 5 days is below 5 to 50 days. If we already have APPLE stock in our holdings, we sell the stock if the deviation is above threshold. We also check every time that the time-series of a given stock has a positive slope.
if (trend_deviation < trend_deviation_treshold &
    regression_model$coefficients[[2]] > 0 &
    stock_in_portfolio == FALSE) {
  decision_sell <- FALSE
  decision_buy <- TRUE
} else if (trend_deviation > -trend_deviation_treshold &
           regression_model$coefficients[[2]] > 0 &
           stock_in_portfolio == TRUE) {
  decision_sell <- TRUE
  decision_buy <- FALSE
}
The last step is creating a data frame recording the details of the decision made by the algorithm. We create a unique id including the stock name and date, and record the run time, the date the stock price was accessed, the stock name, the closing price of the given stock, the computed trend deviation, the threshold chosen by us, the 5-day and 6-to-50-day averages, the decision to buy, and the decision to sell.
output <- data.frame(
  id = paste0("APPL-", Sys.Date()),
  run_time = as.character(Sys.time()),
  stock_date = (data %>% filter(date == max(date)))$date,
  stock = "APPL",
  close_price = (data %>% filter(date == max(date)))$AAPL.Close,
  trend_deviation = trend_deviation,
  threshold = trend_deviation_treshold,
  recent_5_avg = recent_5$close_mean,
  recent_5_50_avg = recent_5_50$close_mean,
  decision_buy = decision_buy,
  decision_sell = decision_sell
)
We have implemented the moving average trading strategy using R. We simply looked at the historical data of a given stock, checked if it has a positive trend, computed the current average trend deviation and made a decision based on that.
We then call this algorithm in our main trading pipeline, and record details of this decision into google sheets.
In the future, I will extend the above algorithm to multiple stocks, so that we can provide a list of stocks we’re interested in and the algorithm/pipeline automatically trades them. We could also include additional sanity checks of the inputs, and stationarity checks to make sure that the detrended time-series has desirable properties.
Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details.
Appendix
Full code for the moving-average trading function
moving_average_model <- function(data, trend_deviation_treshold = 3) {
  holdings <- current_holdings()
  if ((holdings %>% filter(stock == "APPL"))$stock == "APPL") {
    stock_in_portfolio = TRUE
  } else {
    stock_in_portfolio = FALSE
  }
  data <- as.data.frame(data) %>%
    tibble::rownames_to_column("date")
  data_close <- data %>%
    select(date, close = AAPL.Close) %>%
    filter(date > as.Date(Sys.Date() - 70)) %>%
    arrange(date) %>%
    mutate(day_number = row_number())
  formula <- close ~ day_number
  regression_model <- lm(formula = formula, data = data_close)
  residual_values <- bind_cols(as.data.frame(regression_model$residuals), data_close)
  recent_5 <- residual_values %>%
    rename("residual" = "regression_model$residuals") %>%
    filter(day_number > max(day_number) - 6) %>%
    summarise(close_mean = mean(residual, na.rm = TRUE))
  recent_5_50 <- residual_values %>%
    rename("residual" = "regression_model$residuals") %>%
    filter(day_number <= max(day_number) - 6) %>%
    summarise(close_mean = mean(residual, na.rm = TRUE))
  trend_deviation <- ((recent_5$close_mean - recent_5_50$close_mean) /
    recent_5_50$close_mean) * 100
  if (trend_deviation < trend_deviation_treshold &
      regression_model$coefficients[[2]] > 0 &
      stock_in_portfolio == FALSE) {
    decision_sell <- FALSE
    decision_buy <- TRUE
  } else if (trend_deviation > -trend_deviation_treshold &
             regression_model$coefficients[[2]] > 0 &
             stock_in_portfolio == TRUE) {
    decision_sell <- TRUE
    decision_buy <- FALSE
  }
  output <- data.frame(
    id = paste0("APPL-", Sys.Date()),
    run_time = as.character(Sys.time()),
    stock_date = (data %>% filter(date == max(date)))$date,
    stock = "APPL",
    close_price = (data %>% filter(date == max(date)))$AAPL.Close,
    trend_deviation = trend_deviation,
    threshold = trend_deviation_treshold,
    recent_5_avg = recent_5$close_mean,
    recent_5_50_avg = recent_5_50$close_mean,
    decision_buy = decision_buy,
    decision_sell = decision_sell
  )
  output
} | [
{
"code": null,
"e": 458,
"s": 171,
"text": "This story explains how to implement the moving average trading algorithm with R. If you’re interested in setting up your automated trading pipeline, you should first read this article. This story is a purely technical guide focusing on programming and statistics, not financial advice."
},
{
"code": null,
"e": 823,
"s": 458,
"text": "Throughout this story, we will build an R function which takes historical stock data and arbitrary threshold as inputs and based on it decides whether it is a good time to purchase given stock. We will look at Apple stocks. This article may require a certain level of statistical knowledge. University level introduction to statistics modules should be sufficient."
},
{
"code": null,
"e": 1391,
"s": 823,
"text": "The moving average trading algorithm takes an advantage of fluctuations around the stocks trend. We first identify if the slope of the given time series is positive. For simplicity we designed this algorithm to work only for positively trending stocks. We then detrend the historical time series, check if the most recent fluctuation is above, or below moving average. If the current price is below moving average and we don’t have the stock in our holdings, we buy the stock. If it’s above moving average and the stock is currently in our holdings we sell the stock."
},
{
"code": null,
"e": 1550,
"s": 1391,
"text": "Figure 1 represents the closing price of Apple stock in the past 50 days. We detrend this time-series, so that the red dotted line is aligned with the x-axis."
},
{
"code": null,
"e": 2085,
"s": 1550,
"text": "Figure 2 represents the detrended Apple stock for the past 50 days. Detrended time series looks stationary, that is it has constant mean and variance. If we wanted to be very rigorous, we could test for stationarity to make sure that it has desirable properties. We choose the arbitrary threshold to be +-5%. We purchase the stock if its 5 days average is 5% below the detrended time series, and sell it if it’s 5% above detrended time series. If the trade is successful, we gain 10% + any increase along the trend within this period."
},
{
"code": null,
"e": 2199,
"s": 2085,
"text": "The function takes 2 inputs. Historic data from Yahoo’s API and arbitrary threshold for buying/selling the stock."
},
{
"code": null,
"e": 2274,
"s": 2199,
"text": "moving_average_model <- function(data,trend_deviation_treshold = -5) {...}"
},
{
"code": null,
"e": 2431,
"s": 2274,
"text": "We then access our current holdings to check which stocks we have purchased in the past. Below, we check if we currently hold Apple stocks in our portfolio."
},
{
"code": null,
"e": 2583,
"s": 2431,
"text": "holdings <- current_holdings()if ((holdings %>% filter(stock == \"APPL\"))$stock == \"APPL\") {stock_in_portfolio = TRUE} else {stock_in_portfolio = FALSE}"
},
{
"code": null,
"e": 2923,
"s": 2583,
"text": "We then declare the historic data as a data frame, transform row names including date into a separate column, consider only closing price and dates in the past 70 days (about 50 working days when the stock market is open). Finally, we create an additional column which gives each day a number. Most recent day will have the highest number."
},
{
"code": null,
"e": 3140,
"s": 2923,
"text": "data <- as.data.frame(data) %>%tibble::rownames_to_column(\"date\")data_close <- data %>%select(date, close = AAPL.Close) %>%filter(date > as.Date(Sys.Date() - 70)) %>%arrange(date) %>%mutate(day_number = row_number())"
},
{
"code": null,
"e": 3616,
"s": 3140,
"text": "Next step is detrending the time series. We have created a variable with increasing numbers for each date above. If we were to treat this as a separate variable and plot it agains date, this would form a straight line. Hence, we can use this variable for detrending. If we regress the stock price on this variable, this would de trend the time series. We then could work on residuals from the regression to determine if the current price is above or below the moving average."
},
{
"code": null,
"e": 3705,
"s": 3616,
"text": "formula <- close ~ day_numberregression_model <- lm(formula = formula,data = data_close)"
},
{
"code": null,
"e": 3918,
"s": 3705,
"text": "You may note that we do not suppress the constant term, nor consider that during further analysis. We simply care about the position of current price relative to the average. Hence, we may only analyse residuals."
},
{
"code": null,
"e": 4099,
"s": 3918,
"text": "recent_5 <- residual_values %>%rename(\"residual\" = \"regression_model$residuals\") %>%filter(day_number > max(day_number) - 6) %>%summarise(close_mean = mean(residual, na.rm = TRUE))"
},
{
"code": null,
"e": 4185,
"s": 4099,
"text": "First of all, we compute the average closing mean for the most recent 5 working days."
},
{
"code": null,
"e": 4371,
"s": 4185,
"text": "recent_5_50 <- residual_values %>%rename(\"residual\" = \"regression_model$residuals\") %>%filter(day_number <= max(day_number) - 6) %>%summarise(close_mean = mean(residual, na.rm = TRUE))"
},
{
"code": null,
"e": 4542,
"s": 4371,
"text": "We then compute the mean in the past 5 to 50 working days. Next, we compute the deviation of the average of the most recent 5 days to the average of 6 to 50 working days."
},
{
"code": null,
"e": 4640,
"s": 4542,
"text": "trend_deviation <- ((recent_5$close_mean - recent_5_50$close_mean) /recent_5_50$close_mean) * 100"
},
{
"code": null,
"e": 4917,
"s": 4640,
"text": "We purchase the stock if the current deviation of the past 5 days is below 5 to 50 days. If we already have APPLE stock in our holdings, we sell the stock if the deviation is above threshold. We also check every time that the time-series of a given stock has a positive slope."
},
{
"code": null,
"e": 5248,
"s": 4917,
"text": "if (trend_deviation < trend_deviation_treshold ®ression_model$coefficients[[2]] > 0 &stock_in_portfolio == FALSE) {decision_sell <- FALSEdecision_buy <- TRUE} else if (trend_deviation > -trend_deviation_treshold ®ression_model$coefficients[[2]] > 0 &stock_in_portfolio == TRUE) {decision_sell <- TRUEdecision_buy <- FALSE}"
},
{
"code": null,
"e": 5618,
"s": 5248,
"text": "The last step is creating a data frame, recording details of decisions made by the algorithm. We create a unique id including name of stock and date, record a run_date, the date stock price has been accessed, stock name, closing price of a given stock, computed trend deviation, threshold chosen by us, 5 days and 45 days average, decision to buy, and decision to sell."
},
{
"code": null,
"e": 6046,
"s": 5618,
"text": "output <- data.frame(id = paste0(\"APPL-\", Sys.Date()),run_time = as.character(Sys.time()),stock_date = (data %>% filter(date == max(date)))$date,stock = \"APPL\",close_price = (data %>% filter(date == max(date)))$AAPL.Close,trend_deviation = trend_deviation,threshold = trend_deviation_treshold,recent_5_avg = recent_5$close_mean,recent_5_50_avg = recent_5_50$close_mean,decision_buy = decision_buy,decision_sell = decision_sell)"
},
{
"code": null,
"e": 6285,
"s": 6046,
"text": "We have implemented the moving average trading strategy using R. We simply looked at the historical data of a given stock, checked if it has a positive trend, computed the current average trend deviation and made a decision based on that."
},
{
"code": null,
"e": 6399,
"s": 6285,
"text": "We then call this algorithm in our main trading pipeline, and record details of this decision into google sheets."
},
{
"code": null,
"e": 6738,
"s": 6399,
"text": "In the future, I will extend the above algorithm to multiple stocks, so that we can provide a list of stocks we’re interested in and the algorithm/pipeline automatically trades them. We could also include additional sanity checks of the inputs, and stationarity checks to make sure that the detrended time-series has desirable properties."
},
{
"code": null,
"e": 7038,
"s": 6738,
"text": "Note from Towards Data Science’s editors: While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details."
},
{
"code": null,
"e": 7047,
"s": 7038,
"text": "Appendix"
},
{
"code": null,
"e": 7097,
"s": 7047,
"text": "Full code for the moving-average trading function"
}
] |
What is the scope of variables in JavaScript | The scope of a variable is the region of your program in which it is defined. JavaScript variables have only two scopes.
Global Variables − A global variable has global scope which means it can be defined anywhere in your JavaScript code.
Local Variables − A local variable will be visible only within a function where it is defined. Function parameters are always local to that function.
Within the body of a function, a local variable takes precedence over a global variable with the same name. If you declare a local variable or function parameter with the same name as a global variable, you effectively hide the global variable.
You can try to run the following code to learn how to work with scope of variables in JavaScript:
<html>
<body onload = checkscope();>
<script>
<!--
var myVar = "global"; // Declare a global variable
function checkscope( ) {
var myVar = "local"; // Declare a local variable
document.write(myVar);
}
//-->
</script>
</body>
</html> | [
{
"code": null,
"e": 1183,
"s": 1062,
"text": "The scope of a variable is the region of your program in which it is defined. JavaScript variables have only two scopes."
},
{
"code": null,
"e": 1301,
"s": 1183,
"text": "Global Variables − A global variable has global scope which means it can be defined anywhere in your JavaScript code."
},
{
"code": null,
"e": 1451,
"s": 1301,
"text": "Local Variables − A local variable will be visible only within a function where it is defined. Function parameters are always local to that function."
},
{
"code": null,
"e": 1696,
"s": 1451,
"text": "Within the body of a function, a local variable takes precedence over a global variable with the same name. If you declare a local variable or function parameter with the same name as a global variable, you effectively hide the global variable."
},
{
"code": null,
"e": 1794,
"s": 1696,
"text": "You can try to run the following code to learn how to work with scope of variables in JavaScript:"
},
{
"code": null,
"e": 2114,
"s": 1794,
"text": "<html>\n <body onload = checkscope();>\n <script>\n <!--\n var myVar = \"global\"; // Declare a global variable\n function checkscope( ) {\n var myVar = \"local\"; // Declare a local variable\n document.write(myVar);\n }\n //-->\n </script>\n </body>\n</html>"
}
] |
ALBERT - A Light BERT for Supervised Learning - GeeksforGeeks | 27 Jan, 2022
BERT was proposed by researchers at Google AI in 2018. BERT has created something like a transformation in NLP, similar to that caused by AlexNet in computer vision in 2012. It allows one to leverage the large amounts of text data that are available for training the model in a self-supervised way.
ALBERT was proposed by researchers at Google Research in 2019. The goal of the paper is to improve the training and results of the BERT architecture by using techniques such as parameter sharing, factorization of the embedding matrix, and an inter-sentence coherence loss.
Model architecture:
The backbone of the ALBERT architecture is similar to BERT's, that is, encoder layers with the GELU (Gaussian Error Linear Unit) activation function. However, below are the three main changes that are present in ALBERT but not in BERT.
Factorization of the Embedding matrix: In the BERT model and its improvements such as XLNet and RoBERTa, the input-layer embeddings and the hidden-layer embeddings have the same size. But in this model, the authors separated the two embedding matrices. This is because the input-level embedding (E) only needs to learn context-independent representations, while the hidden-level embedding (H) requires context-dependent representations. This step leads to a reduction in parameters by 80% with a minor drop in performance when compared to BERT.
Cross-layer parameter sharing: The authors of this model also proposed parameter sharing between different layers of the model to improve efficiency and decrease redundancy. The paper observes that since previous models such as BERT, XLNet, and RoBERTa have encoder layers stacked on top of one another, the model ends up learning similar operations in different layers. The authors proposed three types of parameter sharing in this paper:
Only share Feed Forward network parameter
Only share attention parameters
Share all parameters. Default setting used by authors unless stated otherwise.
The above step leads to a 70% reduction in the overall number of parameters.
Inter-Sentence Coherence Prediction: Similar to BERT, ALBERT also uses a masked language model in training. However, instead of using the NSP (Next Sentence Prediction) loss, ALBERT uses a new loss called SOP (Sentence Order Prediction). NSP is a binary classification loss for predicting whether two segments appear consecutively in the original text; its disadvantage is that it relies on the topic as well as coherence to identify the next sentence. SOP, in contrast, tests only for sentence coherence.
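As a concrete illustration of how SOP differs from NSP, the toy Python sketch below builds SOP training pairs the way the paper describes: a positive example keeps two consecutive segments in their original order, while a negative example swaps them. The function and the sample segments are illustrative, not taken from the article or the ALBERT codebase.
def make_sop_example(segment_a, segment_b, positive=True):
    # Positive: the two consecutive segments kept in their original order (label 1).
    # Negative: the same two segments with their order swapped (label 0).
    if positive:
        return (segment_a, segment_b, 1)
    return (segment_b, segment_a, 0)

print(make_sop_example("the model is pre-trained", "on english wikipedia", positive=True))
print(make_sop_example("the model is pre-trained", "on english wikipedia", positive=False))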
ALBERT is released in 4 different model sizes,
As we can see from the above table, the ALBERT models have far fewer parameters than the corresponding BERT models, thanks to the changes the authors made in the architecture. For example, BERT base has 9x more parameters than the ALBERT base, and BERT Large has 18x more parameters than ALBERT Large.
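To make the effect of the embedding factorization concrete, here is a small back-of-the-envelope sketch in Python. The sizes used (V = 30,000, H = 768, E = 128) are assumptions chosen to roughly match the ALBERT-base configuration and are not taken from the table above.
# Rough parameter count for the embedding block only,
# with and without ALBERT's factorization.
V = 30000  # vocabulary size (assumed)
H = 768    # hidden size (assumed)
E = 128    # factorized embedding size (assumed)

unfactorized = V * H          # BERT-style: a single V x H embedding matrix
factorized = V * E + E * H    # ALBERT-style: V x E embedding plus an E x H projection

print(unfactorized)  # 23040000
print(factorized)    # 3938304, roughly 83% fewer embedding parameters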
Dataset used:
Similar to BERT, ALBERT is pre-trained on the English Wikipedia and BookCorpus datasets, which together contain 16 GB of uncompressed data.
Implementation:
In this implementation, we will use a pre-trained ALBERT model from TF-Hub and the ALBERT GitHub repository. We will run the model on the Microsoft Research Paraphrase Corpus (MRPC) dataset from the GLUE benchmark.
Python3
# Clone ALBERT Repo
! git clone https://github.com/google-research/albert

# Install Requirements of ALBERT
! pip install -r albert/requirements.txt

# Clone GLUE repo into a folder
! test -d download_glue_repo || git clone https://gist.github.com/60c2bdb54d156a41194446737ce03e2e.git glue_repo

# Download MRPC dataset
!python glue_repo/download_glue_data.py --data_dir=/content/MRPC --tasks='MRPC'

# Describe the URL of the TF-Hub ALBERT BASE model
ALBERT_MODEL_HUB = 'https://tfhub.dev/google/albert_base/3'

# Fine-tune the ALBERT classifier on the MRPC dataset.
# To select the best hyperparameters for any GLUE benchmark task, look into run_glue.sh
!python -m albert.run_classifier \
  --data_dir=MRPC/ \
  --output_dir=output/ \
  --albert_hub_module_handle=$ALBERT_MODEL_HUB \
  --spm_model_file="from_tf_hub" \
  --do_train=False \
  --do_eval=True \
  --do_predict=True \
  --max_seq_length=512 \
  --optimizer=adamw \
  --task_name=MRPC \
  --warmup_step=200 \
  --learning_rate=2e-5 \
  --train_step=800 \
  --save_checkpoints_steps=100 \
  --train_batch_size=32
Results & Conclusion:
Despite having far fewer parameters, ALBERT has achieved state-of-the-art results on many NLP tasks. Below are the results of ALBERT on the GLUE benchmark datasets.
ALBERT results as compared to other models on GLUE benchmark.
Below are the results of the ALBERT-xxl model on SQuAD and RACE benchmark datasets.
Here, ALBERT (1M) means the model was trained for 1M steps, whereas ALBERT (1.5M) means the model was trained for 1.5M steps.
As of now, the authors have also released a new version of ALBERT (V2), with improvements in the average accuracy of the BASE, LARGE, and X-LARGE models as compared to V1.
References:
ALBERT model paper
ALBERT GitHub Repo
| [
{
"code": null,
"e": 23953,
"s": 23925,
"text": "\n27 Jan, 2022"
},
{
"code": null,
"e": 24250,
"s": 23953,
"text": "The BERT was proposed by researchers at Google AI in 2018. BERT has created something like a transformation in NLP similar to that caused by AlexNet in computer vision in 2012. It allows one to leverage large amounts of text data that is available for training the model in a self-supervised way."
},
{
"code": null,
"e": 24513,
"s": 24250,
"text": "ALBERT was proposed by researchers at Google Research in 2019. The goal of this paper to improve the training and results of BERT architecture by using different techniques like parameter sharing, factorization of embedding matrix, Inter sentence Coherence loss."
},
{
"code": null,
"e": 24533,
"s": 24513,
"text": "Model architecture:"
},
{
"code": null,
"e": 24757,
"s": 24533,
"text": "The backbone of ALBERT architecture is similar to BERT that is encoder layers with GELU (Gaussian Error Linear Unit) activation function. However, below are the three main changes that are present in ALBERT but not in BERT."
},
{
"code": null,
"e": 25274,
"s": 24757,
"text": "Factorization of the Embedding matrix: In the BERT model and its improvements such as XLNet and ROBERTa, the input layer embeddings and hidden layer embeddings have the same size. But in this model, the authors separated the two embedding matrices. This is because input-level embedding (E) needs to refine only context-independent learning but hidden level embedding (H) requires context-dependent learning. This step leads to a reduction in parameters by 80% with a minor drop in performance when compared to BERT."
},
{
"code": null,
"e": 25866,
"s": 25274,
"text": "Cross-layer parameter sharing: The authors of this model also proposed the parameter sharing between different layers of the model to improve efficiency and decrease redundancy. The paper proposed that since the previous versions of BERT, XLNet, and ROBERTa have encoder layer stacked on top of one another causes the model to learn similar operations on different layers. The authors proposed three types of parameter sharing in this paper:Only share Feed Forward network parameterOnly share attention parametersShare all parameters. Default setting used by authors unless stated otherwise."
},
{
"code": null,
"e": 25908,
"s": 25866,
"text": "Only share Feed Forward network parameter"
},
{
"code": null,
"e": 25940,
"s": 25908,
"text": "Only share attention parameters"
},
{
"code": null,
"e": 26019,
"s": 25940,
"text": "Share all parameters. Default setting used by authors unless stated otherwise."
},
{
"code": null,
"e": 26106,
"s": 26019,
"text": " The above step leads to a 70% reduction in the overall number of parameters."
},
{
"code": null,
"e": 26623,
"s": 26106,
"text": "Inter Sentence Coherence Prediction: Similar to the BERT, ALBERT also used Masked Language model in training. However, Instead of using NSP (Next Sentence Prediction) loss, ALBERT used a new loss called SOP (Sentence Order Prediction). NSP is a binary classification loss for predicting whether two segments appear consecutively in the original text, the disadvantage of this loss is that it checks for coherence as well as the topic to identify the next sentence. However, the SOP only looks for sentence coherence."
},
{
"code": null,
"e": 26671,
"s": 26623,
"text": "ALBERT is released in 4 different model sizes, "
},
{
"code": null,
"e": 26981,
"s": 26671,
"text": "As we can see from the above table is the ALBERT model has a smaller parameter size as compared to corresponding BERT models due to the above changes authors made in the architecture. For Example, BERT base has 9x more parameters than the ALBERT base, and BERT Large has 18x more parameters than ALBERT Large."
},
{
"code": null,
"e": 26996,
"s": 26981,
"text": "Dataset used: "
},
{
"code": null,
"e": 27145,
"s": 26996,
"text": "Similar to the BERT, ALBERT is also pre-trained on the English Wikipedia and Book CORPUS dataset which together contains 16 GB of uncompressed data."
},
{
"code": null,
"e": 27161,
"s": 27145,
"text": "Implementation:"
},
{
"code": null,
"e": 27363,
"s": 27161,
"text": "In this implementation, we will use a pre-trained ALBERT model using TF-Hub and ALBERT GitHub repository. We will run the model on Microsoft Research Paraphrase Corpus (MRPC) dataset on GLUE benchmark."
},
{
"code": null,
"e": 27371,
"s": 27363,
"text": "Python3"
},
{
"code": "# Clone ALBERT Repo! git clone https://github.com/google-research/albert# Install Requirements of ALBERT! pip install -r albert/requirements.txt# clone GLUE repo into a folder! test -d download_glue_repo ||git clone https://gist.github.com/60c2bdb54d156a41194446737ce03e2e.git glue_repo# Download MRPC dataset!python glue_repo/download_glue_data.py --data_dir=/content/MRPC --tasks='MRPC'# Describe the URL of TFhub ALBERT BASE modelALBERT_MODEL_HUB = 'https://tfhub.dev/google/albert_base/3'# Fine Tune ALBERT classifier on MRPC dataset# To select best hyperparameter of any task of GLUE# benchamrk look into run_glue.sh!python -m albert.run_classifier \\ --data_dir=MRPC/ \\ --output_dir=output/ \\ --albert_hub_module_handle=$ALBERT_MODEL_HUB \\ --spm_model_file=\"from_tf_hub\" \\ --do_train=False \\ --do_eval=True \\ --do_predict=True \\ --max_seq_length=512 \\ --optimizer=adamw \\ --task_name=MRPC \\ --warmup_step=200 \\ --learning_rate=2e-5 \\ --train_step=800 \\ --save_checkpoints_steps=100 \\ --train_batch_size=32",
"e": 28398,
"s": 27371,
"text": null
},
{
"code": null,
"e": 28420,
"s": 28398,
"text": "Results & Conclusion:"
},
{
"code": null,
"e": 28595,
"s": 28420,
"text": "Despite the much fewer number of parameters, ALBERT has achieved the state-of-the-art of many NLP tasks. Below are the results of ALBERT on GLUE benchmark datasets. The ALBER"
},
{
"code": null,
"e": 28657,
"s": 28595,
"text": "ALBERT results as compared to other models on GLUE benchmark."
},
{
"code": null,
"e": 28741,
"s": 28657,
"text": "Below are the results of the ALBERT-xxl model on SQuAD and RACE benchmark datasets."
},
{
"code": null,
"e": 28871,
"s": 28741,
"text": "Here, ALBERT (1M) represents model is trained with 1M steps whereas, ALBERT 1.5M represents the model is trained with 1.5M epoch."
},
{
"code": null,
"e": 29037,
"s": 28871,
"text": "As of now, the authors have also released a new version of ALBERT (V2), with improvement in the average accuracy of the BASE, LARGE, X-LARGE model as compared to V1."
},
{
"code": null,
"e": 29049,
"s": 29037,
"text": "References:"
},
{
"code": null,
"e": 29068,
"s": 29049,
"text": "ALBERT model paper"
},
{
"code": null,
"e": 29087,
"s": 29068,
"text": "ALBERT GitHub Repo"
}
] |
Apache Pig - Quick Guide | Apache Pig is an abstraction over MapReduce. It is a tool/platform which is used to analyze larger sets of data representing them as data flows. Pig is generally used with Hadoop; we can perform all the data manipulation operations in Hadoop using Apache Pig.
To write data analysis programs, Pig provides a high-level language known as Pig Latin. This language provides various operators using which programmers can develop their own functions for reading, writing, and processing data.
To analyze data using Apache Pig, programmers need to write scripts using Pig Latin language. All these scripts are internally converted to Map and Reduce tasks. Apache Pig has a component known as Pig Engine that accepts the Pig Latin scripts as input and converts those scripts into MapReduce jobs.
Programmers who are not very comfortable with Java often struggle while working with Hadoop, especially when performing MapReduce tasks. Apache Pig is a boon for all such programmers.
Using Pig Latin, programmers can perform MapReduce tasks easily without having to type complex codes in Java.
Apache Pig uses a multi-query approach, thereby reducing the length of codes. For example, an operation that would require you to type 200 lines of code (LoC) in Java can be easily done by typing as few as just 10 LoC in Apache Pig. Ultimately Apache Pig reduces the development time by almost 16 times.
Pig Latin is a SQL-like language and it is easy to learn Apache Pig when you are familiar with SQL.
Apache Pig provides many built-in operators to support data operations like joins, filters, ordering, etc. In addition, it also provides nested data types like tuples, bags, and maps that are missing from MapReduce.
Apache Pig comes with the following features −
Rich set of operators − It provides many operators to perform operations like join, sort, filter, etc.
Ease of programming − Pig Latin is similar to SQL and it is easy to write a Pig script if you are good at SQL.
Optimization opportunities − The tasks in Apache Pig optimize their execution automatically, so the programmers need to focus only on semantics of the language.
Extensibility − Using the existing operators, users can develop their own functions to read, process, and write data.
UDF’s − Pig provides the facility to create User-defined Functions in other programming languages such as Java and invoke or embed them in Pig Scripts.
Handles all kinds of data − Apache Pig analyzes all kinds of data, both structured as well as unstructured. It stores the results in HDFS.
Listed below are the major differences between Apache Pig and MapReduce.
Listed below are the major differences between Apache Pig and SQL.
In addition to the above differences, Apache Pig Latin (see the sketch after this list) −
Allows splits in the pipeline.
Allows developers to store data anywhere in the pipeline.
Declares execution plans.
Provides operators to perform ETL (Extract, Transform, and Load) functions.
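For instance, a minimal Pig Latin sketch of a split in the pipeline together with an intermediate store could look as follows; the relation emp and its salary field are assumed here purely for illustration −
SPLIT emp INTO low_salary IF salary < 3000, high_salary IF salary >= 3000;
STORE low_salary INTO 'low_salary_out' USING PigStorage(',');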
Both Apache Pig and Hive are used to create MapReduce jobs. And in some cases, Hive operates on HDFS in a similar way Apache Pig does. In the following table, we have listed a few significant points that set Apache Pig apart from Hive.
Apache Pig is generally used by data scientists for performing tasks involving ad-hoc processing and quick prototyping. Apache Pig is used −
To process huge data sources such as web logs.
To perform data processing for search platforms.
To process time sensitive data loads.
In 2006, Apache Pig was developed as a research project at Yahoo, especially to create and execute MapReduce jobs on every dataset. In 2007, Apache Pig was open sourced via Apache incubator. In 2008, the first release of Apache Pig came out. In 2010, Apache Pig graduated as an Apache top-level project.
The language used to analyze data in Hadoop using Pig is known as Pig Latin. It is a highlevel data processing language which provides a rich set of data types and operators to perform various operations on the data.
To perform a particular task using Pig, programmers need to write a Pig script using the Pig Latin language, and execute it using any of the execution mechanisms (Grunt Shell, UDFs, Embedded). After execution, these scripts go through a series of transformations applied by the Pig Framework, to produce the desired output.
Internally, Apache Pig converts these scripts into a series of MapReduce jobs, and thus, it makes the programmer’s job easy. The architecture of Apache Pig is shown below.
As shown in the figure, there are various components in the Apache Pig framework. Let us take a look at the major components.
Initially the Pig Scripts are handled by the Parser. It checks the syntax of the script, does type checking, and other miscellaneous checks. The output of the parser will be a DAG (directed acyclic graph), which represents the Pig Latin statements and logical operators.
In the DAG, the logical operators of the script are represented as the nodes and the data flows are represented as edges.
The logical plan (DAG) is passed to the logical optimizer, which carries out the logical optimizations such as projection and pushdown.
The compiler compiles the optimized logical plan into a series of MapReduce jobs.
Finally the MapReduce jobs are submitted to Hadoop in a sorted order. Finally, these MapReduce jobs are executed on Hadoop producing the desired results.
The data model of Pig Latin is fully nested and it allows complex non-atomic datatypes such as map and tuple. Given below is the diagrammatical representation of Pig Latin’s data model.
Any single value in Pig Latin, irrespective of their data, type is known as an Atom. It is stored as string and can be used as string and number. int, long, float, double, chararray, and bytearray are the atomic values of Pig. A piece of data or a simple atomic value is known as a field.
Example − ‘raja’ or ‘30’
A record that is formed by an ordered set of fields is known as a tuple, the fields can be of any type. A tuple is similar to a row in a table of RDBMS.
Example − (Raja, 30)
A bag is an unordered set of tuples. In other words, a collection of tuples (non-unique) is known as a bag. Each tuple can have any number of fields (flexible schema). A bag is represented by ‘{}’. It is similar to a table in RDBMS, but unlike a table in RDBMS, it is not necessary that every tuple contain the same number of fields or that the fields in the same position (column) have the same type.
Example − {(Raja, 30), (Mohammad, 45)}
A bag can be a field in a relation; in that context, it is known as inner bag.
Example − {Raja, 30, {9848022338, [email protected],}}
A map (or data map) is a set of key-value pairs. The key needs to be of type chararray and should be unique. The value might be of any type. It is represented by ‘[]’
Example − [name#Raja, age#30]
A relation is a bag of tuples. The relations in Pig Latin are unordered (there is no guarantee that tuples are processed in any particular order).
This chapter explains how to download, install, and set up Apache Pig in your system.
It is essential that you have Hadoop and Java installed on your system before you go for Apache Pig. Therefore, prior to installing Apache Pig, install Hadoop and Java by following the steps given in the following link −
http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm
First of all, download the latest version of Apache Pig from the following website − https://pig.apache.org/
Open the homepage of Apache Pig website. Under the section News, click on the link release page as shown in the following snapshot.
On clicking the specified link, you will be redirected to the Apache Pig Releases page. On this page, under the Download section, you will have two links, namely, Pig 0.8 and later and Pig 0.7 and before. Click on the link Pig 0.8 and later, then you will be redirected to the page having a set of mirrors.
Choose and click any one of these mirrors as shown below.
These mirrors will take you to the Pig Releases page. This page contains various versions of Apache Pig. Click the latest version among them.
Within these folders, you will have the source and binary files of Apache Pig in various distributions. Download the tar files of the source and binary files of Apache Pig 0.15, pig-0.15.0-src.tar.gz and pig-0.15.0.tar.gz.
After downloading the Apache Pig software, install it in your Linux environment by following the steps given below.
Create a directory with the name Pig in the same directory where the installation directories of Hadoop, Java, and other software were installed. (In our tutorial, we have created the Pig directory in the user named Hadoop).
$ mkdir Pig
Extract the downloaded tar files as shown below.
$ cd Downloads/
$ tar zxvf pig-0.15.0-src.tar.gz
$ tar zxvf pig-0.15.0.tar.gz
Move the content of the extracted pig-0.15.0-src directory to the Pig directory created earlier as shown below.
$ mv pig-0.15.0-src/* /home/Hadoop/Pig/
After installing Apache Pig, we have to configure it. To configure, we need to edit two files − bashrc and pig.properties.
In the .bashrc file, set the following variables −
PIG_HOME folder to the Apache Pig’s installation folder,
PATH environment variable to the bin folder, and
PIG_CLASSPATH environment variable to the etc (configuration) folder of your Hadoop installations (the directory that contains the core-site.xml, hdfs-site.xml and mapred-site.xml files).
export PIG_HOME=/home/Hadoop/Pig
export PATH=$PATH:/home/Hadoop/Pig/bin
export PIG_CLASSPATH=$HADOOP_HOME/conf
In the conf folder of Pig, we have a file named pig.properties. In the pig.properties file, you can set various parameters as given below.
pig -h properties
The following properties are supported −
Logging: verbose = true|false; default is false. This property is the same as -v
switch brief=true|false; default is false. This property is the same
as -b switch debug=OFF|ERROR|WARN|INFO|DEBUG; default is INFO.
This property is the same as -d switch aggregate.warning = true|false; default is true.
If true, prints count of warnings of each type rather than logging each warning.
Performance tuning: pig.cachedbag.memusage=<mem fraction>; default is 0.2 (20% of all memory).
Note that this memory is shared across all large bags used by the application.
pig.skewedjoin.reduce.memusage=<mem fraction>; default is 0.3 (30% of all memory).
Specifies the fraction of heap available for the reducer to perform the join.
pig.exec.nocombiner = true|false; default is false.
Only disable combiner as a temporary workaround for problems.
opt.multiquery = true|false; multiquery is on by default.
Only disable multiquery as a temporary workaround for problems.
opt.fetch=true|false; fetch is on by default.
Scripts containing Filter, Foreach, Limit, Stream, and Union can be dumped without MR jobs.
pig.tmpfilecompression = true|false; compression is off by default.
Determines whether output of intermediate jobs is compressed.
pig.tmpfilecompression.codec = lzo|gzip; default is gzip.
Used in conjunction with pig.tmpfilecompression. Defines compression type.
pig.noSplitCombination = true|false. Split combination is on by default.
Determines if multiple small files are combined into a single map.
pig.exec.mapPartAgg = true|false. Default is false.
Determines if partial aggregation is done within map phase, before records are sent to combiner.
pig.exec.mapPartAgg.minReduction=<min aggregation factor>. Default is 10.
If the in-map partial aggregation does not reduce the output num records by this factor, it gets disabled.
Miscellaneous: exectype = mapreduce|tez|local; default is mapreduce. This property is the same as -x switch
pig.additional.jars.uris=<comma seperated list of jars>. Used in place of register command.
udf.import.list=<comma seperated list of imports>. Used to avoid package names in UDF.
stop.on.failure = true|false; default is false. Set to true to terminate on the first error.
pig.datetime.default.tz=<UTC time offset>. e.g. +08:00. Default is the default timezone of the host.
Determines the timezone used to handle datetime datatype and UDFs.
Additionally, any Hadoop property can be specified.
Verify the installation of Apache Pig by typing the version command. If the installation is successful, you will get the version of Apache Pig as shown below.
$ pig -version
Apache Pig version 0.15.0 (r1682971)
compiled Jun 01 2015, 11:44:35
In the previous chapter, we explained how to install Apache Pig. In this chapter, we will discuss how to execute Apache Pig.
You can run Apache Pig in two modes, namely, Local Mode and HDFS mode.
In this mode, all the files are installed and run from your local host and local file system. There is no need of Hadoop or HDFS. This mode is generally used for testing purpose.
MapReduce mode is where we load or process the data that exists in the Hadoop File System (HDFS) using Apache Pig. In this mode, whenever we execute the Pig Latin statements to process the data, a MapReduce job is invoked in the back-end to perform a particular operation on the data that exists in the HDFS.
Apache Pig scripts can be executed in three ways, namely, interactive mode, batch mode, and embedded mode.
Interactive Mode (Grunt shell) − You can run Apache Pig in interactive mode using the Grunt shell. In this shell, you can enter the Pig Latin statements and get the output (using Dump operator).
Batch Mode (Script) − You can run Apache Pig in Batch mode by writing the Pig Latin script in a single file with .pig extension.
Embedded Mode (UDF) − Apache Pig provides the provision of defining our own functions (User Defined Functions) in programming languages such as Java, and using them in our script.
You can invoke the Grunt shell in a desired mode (local/MapReduce) using the -x option as shown below.
Local mode command −
$ ./pig -x local
MapReduce mode command −
$ ./pig -x mapreduce
Output −
Either of these commands gives you the Grunt shell prompt as shown below.
grunt>
You can exit the Grunt shell using ‘ctrl + d’.
After invoking the Grunt shell, you can execute a Pig script by directly entering the Pig Latin statements in it.
grunt> customers = LOAD 'customers.txt' USING PigStorage(',');
You can write an entire Pig Latin script in a file and execute it by passing the file to the pig command, using the -x option to choose the execution mode. Let us suppose we have a Pig script in a file named sample_script.pig as shown below.
student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING
PigStorage(',') as (id:int,name:chararray,city:chararray);
Dump student;
Now, you can execute the script in the above file as shown below.
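A minimal command for this (assuming MapReduce mode, as used elsewhere in this guide, and that sample_script.pig is present in the current local directory) is −
$ pig -x mapreduce sample_script.pig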
Note − We will discuss in detail how to run a Pig script in Bach mode and in embedded mode in subsequent chapters.
After invoking the Grunt shell, you can run your Pig scripts in the shell. In addition to that, there are certain useful shell and utility commands provided by the Grunt shell. This chapter explains the shell and utility commands provided by the Grunt shell.
Note − In some portions of this chapter, the commands like Load and Store are used. Refer the respective chapters to get in-detail information on them.
The Grunt shell of Apache Pig is mainly used to write Pig Latin scripts. Prior to that, we can invoke any shell commands using sh and fs.
Using sh command, we can invoke any shell commands from the Grunt shell. Using sh command from the Grunt shell, we cannot execute the commands that are a part of the shell environment (ex − cd).
Syntax
Given below is the syntax of sh command.
grunt> sh shell command parameters
Example
We can invoke the ls command of Linux shell from the Grunt shell using the sh option as shown below. In this example, it lists out the files in the /pig/bin/ directory.
grunt> sh ls
pig
pig_1444799121955.log
pig.cmd
pig.py
Using the fs command, we can invoke any FsShell commands from the Grunt shell.
Syntax
Given below is the syntax of fs command.
grunt> fs File System command parameters
Example
We can invoke the ls command of HDFS from the Grunt shell using fs command. In the following example, it lists the files in the HDFS root directory.
grunt> fs -ls
Found 3 items
drwxrwxrwx - Hadoop supergroup 0 2015-09-08 14:13 Hbase
drwxr-xr-x - Hadoop supergroup 0 2015-09-09 14:52 seqgen_data
drwxr-xr-x - Hadoop supergroup 0 2015-09-08 11:30 twitter_data
In the same way, we can invoke all the other file system shell commands from the Grunt shell using the fs command.
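For instance (the paths used here are only illustrative), other FsShell commands such as mkdir and cat can be invoked in the same manner −
grunt> fs -mkdir /pig_data
grunt> fs -cat /pig_data/student.txt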
The Grunt shell provides a set of utility commands. These include utility commands such as clear, help, history, quit, and set; and commands such as exec, kill, and run to control Pig from the Grunt shell. Given below is the description of the utility commands provided by the Grunt shell.
The clear command is used to clear the screen of the Grunt shell.
Syntax
You can clear the screen of the grunt shell using the clear command as shown below.
grunt> clear
The help command gives you a list of Pig commands or Pig properties.
Usage
You can get a list of Pig commands using the help command as shown below.
grunt> help
Commands: <pig latin statement>; - See the PigLatin manual for details:
http://hadoop.apache.org/pig
File system commands:fs <fs arguments> - Equivalent to Hadoop dfs command:
http://hadoop.apache.org/common/docs/current/hdfs_shell.html
Diagnostic Commands:describe <alias>[::<alias] - Show the schema for the alias.
Inner aliases can be described as A::B.
explain [-script <pigscript>] [-out <path>] [-brief] [-dot|-xml]
[-param <param_name>=<param_value>]
[-param_file <file_name>] [<alias>] -
Show the execution plan to compute the alias or for entire script.
-script - Explain the entire script.
-out - Store the output into directory rather than print to stdout.
-brief - Don't expand nested plans (presenting a smaller graph for overview).
-dot - Generate the output in .dot format. Default is text format.
-xml - Generate the output in .xml format. Default is text format.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
alias - Alias to explain.
dump <alias> - Compute the alias and writes the results to stdout.
Utility Commands: exec [-param <param_name>=param_value] [-param_file <file_name>] <script> -
Execute the script with access to grunt environment including aliases.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
script - Script to be executed.
run [-param <param_name>=param_value] [-param_file <file_name>] <script> -
Execute the script with access to grunt environment.
-param <param_name - See parameter substitution for details.
-param_file <file_name> - See parameter substitution for details.
script - Script to be executed.
sh <shell command> - Invoke a shell command.
kill <job_id> - Kill the hadoop job specified by the hadoop job id.
set <key> <value> - Provide execution parameters to Pig. Keys and values are case sensitive.
The following keys are supported:
default_parallel - Script-level reduce parallelism. Basic input size heuristics used
by default.
debug - Set debug on or off. Default is off.
job.name - Single-quoted name for jobs. Default is PigLatin:<script name>
job.priority - Priority for jobs. Values: very_low, low, normal, high, very_high.
Default is normal.
stream.skippath - String that contains the path. This is used by streaming.
any hadoop property.
help - Display this message.
history [-n] - Display the list statements in cache.
-n Hide line numbers.
quit - Quit the grunt shell.
This command displays a list of statements executed / used so far since the Grunt shell was invoked.
Usage
Assume we have executed three statements since opening the Grunt shell.
grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',');
grunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',');
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',');
Then, using the history command will produce the following output.
grunt> history
customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',');
orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',');
student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',');
The set command is used to show/assign values to keys used in Pig.
Usage
Using this command, you can set values to the following keys.
You can set the job priority of a job by passing one of the following values to this key (see the example after this list) −
very_low
low
normal
high
very_high
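For instance, a minimal sketch of assigning values to keys from the Grunt shell (the values chosen are only illustrative) −
grunt> set job.priority high
grunt> set default_parallel 10
grunt> set job.name 'My Pig job'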
You can quit from the Grunt shell using this command.
Usage
Quit from the Grunt shell as shown below.
grunt> quit
Let us now take a look at the commands using which you can control Apache Pig from the Grunt shell.
Using the exec command, we can execute Pig scripts from the Grunt shell.
Syntax
Given below is the syntax of the utility command exec.
grunt> exec [-param param_name = param_value] [-param_file file_name] [script]
Example
Let us assume there is a file named student.txt in the /pig_data/ directory of HDFS with the following content.
Student.txt
001,Rajiv,Hyderabad
002,siddarth,Kolkata
003,Rajesh,Delhi
And, assume we have a script file named sample_script.pig in the /pig_data/ directory of HDFS with the following content.
Sample_script.pig
student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',')
as (id:int,name:chararray,city:chararray);
Dump student;
Now, let us execute the above script from the Grunt shell using the exec command as shown below.
grunt> exec /sample_script.pig
Output
The exec command executes the script in the sample_script.pig. As directed in the script, it loads the student.txt file into Pig and gives you the result of the Dump operator displaying the following content.
(1,Rajiv,Hyderabad)
(2,siddarth,Kolkata)
(3,Rajesh,Delhi)
You can kill a job from the Grunt shell using this command.
Syntax
Given below is the syntax of the kill command.
grunt> kill JobId
Example
Suppose there is a running Pig job having id Id_0055, you can kill it from the Grunt shell using the kill command, as shown below.
grunt> kill Id_0055
You can run a Pig script from the Grunt shell using the run command.
Syntax
Given below is the syntax of the run command.
grunt> run [-param param_name = param_value] [-param_file file_name] script
Example
Let us assume there is a file named student.txt in the /pig_data/ directory of HDFS with the following content.
Student.txt
001,Rajiv,Hyderabad
002,siddarth,Kolkata
003,Rajesh,Delhi
And, assume we have a script file named sample_script.pig in the local filesystem with the following content.
Sample_script.pig
student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING
PigStorage(',') as (id:int,name:chararray,city:chararray);
Now, let us run the above script from the Grunt shell using the run command as shown below.
grunt> run /sample_script.pig
You can see the output of the script using the Dump operator as shown below.
grunt> Dump;
(1,Rajiv,Hyderabad)
(2,siddarth,Kolkata)
(3,Rajesh,Delhi)
Note − The difference between exec and the run command is that if we use run, the statements from the script are available in the command history.
Pig Latin is the language used to analyze data in Hadoop using Apache Pig. In this chapter, we are going to discuss the basics of Pig Latin such as Pig Latin statements, data types, general and relational operators, and Pig Latin UDF’s.
As discussed in the previous chapters, the data model of Pig is fully nested. A Relation is the outermost structure of the Pig Latin data model. And it is a bag where −
A bag is a collection of tuples.
A tuple is an ordered set of fields.
A field is a piece of data.
While processing data using Pig Latin, statements are the basic constructs.
These statements work with relations. They include expressions and schemas.
Every statement ends with a semicolon (;).
We will perform various operations using operators provided by Pig Latin, through statements.
Except LOAD and STORE, while performing all other operations, Pig Latin statements take a relation as input and produce another relation as output.
As soon as you enter a Load statement in the Grunt shell, its semantic checking will be carried out. To see the contents of the schema, you need to use the Dump operator. Only after performing the dump operation, the MapReduce job for loading the data into the file system will be carried out.
Given below is a Pig Latin statement, which loads data to Apache Pig.
grunt> Student_data = LOAD 'student_data.txt' USING PigStorage(',')as
( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );
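As noted above, this statement by itself does not trigger a MapReduce job; the job is invoked only when the output is requested, for example with the Dump operator −
grunt> Dump Student_data;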
Given below table describes the Pig Latin data types.
Represents a signed 32-bit integer.
Example : 8
Represents a signed 64-bit integer.
Example : 5L
Represents a signed 32-bit floating point.
Example : 5.5F
Represents a 64-bit floating point.
Example : 10.5
Represents a character array (string) in Unicode UTF-8 format.
Example : ‘tutorials point’
Represents a Byte array (blob).
Represents a Boolean value.
Example : true/ false.
Represents a date-time.
Example : 1970-01-01T00:00:00.000+00:00
Represents a Java BigInteger.
Example : 60708090709
Represents a Java BigDecimal
Example : 185.98376256272893883
A tuple is an ordered set of fields.
Example : (raja, 30)
A bag is a collection of tuples.
Example : {(raju,30),(Mohhammad,45)}
A Map is a set of key-value pairs.
Example : [ ‘name’#’Raju’, ‘age’#30]
Values for all the above data types can be NULL. Apache Pig treats null values in a similar way as SQL does.
A null can be an unknown value or a non-existent value. It is used as a placeholder for optional values. These nulls can occur naturally or can be the result of an operation.
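As a brief sketch (assuming a relation such as the student relation used later in this guide), nulls can be tested with the is null / is not null operators −
grunt> valid_phone = FILTER student BY phone is not null;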
The following table describes the arithmetic operators of Pig Latin. Suppose a = 10 and b = 20.
Addition − Adds values on either side of the operator
Subtraction − Subtracts right hand operand from left hand operand
Multiplication − Multiplies values on either side of the operator
Division − Divides left hand operand by right hand operand
Modulus − Divides left hand operand by right hand operand and returns remainder
Bincond − Evaluates the Boolean operators. It has three operands as shown below.
variable x = (expression) ? value1 if true : value2 if false.
b = (a == 1)? 20: 30;
if a=1 the value of b is 20.
if a!=1 the value of b is 30.
CASE
WHEN
THEN
ELSE END
Case − The case operator is equivalent to nested bincond operator.
CASE f2 % 2
WHEN 0 THEN 'even'
WHEN 1 THEN 'odd'
END
The following table describes the comparison operators of Pig Latin (a short usage sketch follows the table).
Equal − Checks if the values of two operands are equal or not; if yes, then the condition becomes true.
Not Equal − Checks if the values of two operands are equal or not. If the values are not equal, then condition becomes true.
Greater than − Checks if the value of the left operand is greater than the value of the right operand. If yes, then the condition becomes true.
Less than − Checks if the value of the left operand is less than the value of the right operand. If yes, then the condition becomes true.
Greater than or equal to − Checks if the value of the left operand is greater than or equal to the value of the right operand. If yes, then the condition becomes true.
Less than or equal to − Checks if the value of the left operand is less than or equal to the value of the right operand. If yes, then the condition becomes true.
Pattern matching − Checks whether the string in the left-hand side matches with the constant in the right-hand side.
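As a short usage sketch (assuming a relation named student_details with the fields firstname and age, similar to the one used later in this guide), these comparison operators typically appear inside FILTER conditions −
grunt> same_age = FILTER student_details BY age == 22;
grunt> adults = FILTER student_details BY age >= 21;
grunt> r_names = FILTER student_details BY firstname MATCHES 'R.*';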
The following table describes the Type construction operators of Pig Latin.
Tuple constructor operator − This operator is used to construct a tuple.
Bag constructor operator − This operator is used to construct a bag.
Map constructor operator − This operator is used to construct a map (a short usage sketch follows this table).
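As a short usage sketch, the equivalent built-in functions TOTUPLE, TOBAG, and TOMAP express the same constructions inside a FOREACH ... GENERATE statement (the student_details relation is assumed here, as above) −
grunt> constructed = FOREACH student_details GENERATE TOTUPLE(firstname, lastname), TOMAP('city', city);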
The following table describes the relational operators of Pig Latin.
In general, Apache Pig works on top of Hadoop. It is an analytical tool that analyzes large datasets that exist in the Hadoop File System. To analyze data using Apache Pig, we have to initially load the data into Apache Pig. This chapter explains how to load data to Apache Pig from HDFS.
In MapReduce mode, Pig reads (loads) data from HDFS and stores the results back in HDFS. Therefore, let us start HDFS and create the following sample data in HDFS.
The above dataset contains personal details like id, first name, last name, phone number and city, of six students.
First of all, verify the installation using Hadoop version command, as shown below.
$ hadoop version
If your system contains Hadoop, and if you have set the PATH variable, then you will get the following output −
Hadoop 2.6.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r
e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1
Compiled by jenkins on 2014-11-13T21:10Z
Compiled with protoc 2.5.0
From source with checksum 18e43357c8f927c0695f1e9522859d6a
This command was run using /home/Hadoop/hadoop/share/hadoop/common/hadoop
common-2.6.0.jar
Browse through the sbin directory of Hadoop and start yarn and Hadoop dfs (distributed file system) as shown below.
cd /$Hadoop_Home/sbin/
$ start-dfs.sh
localhost: starting namenode, logging to /home/Hadoop/hadoop/logs/hadoopHadoop-namenode-localhost.localdomain.out
localhost: starting datanode, logging to /home/Hadoop/hadoop/logs/hadoopHadoop-datanode-localhost.localdomain.out
Starting secondary namenodes [0.0.0.0]
starting secondarynamenode, logging to /home/Hadoop/hadoop/logs/hadoop-Hadoopsecondarynamenode-localhost.localdomain.out
$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/Hadoop/hadoop/logs/yarn-Hadoopresourcemanager-localhost.localdomain.out
localhost: starting nodemanager, logging to /home/Hadoop/hadoop/logs/yarnHadoop-nodemanager-localhost.localdomain.out
In Hadoop DFS, you can create directories using the command mkdir. Create a new directory in HDFS with the name Pig_Data in the required path as shown below.
$cd /$Hadoop_Home/bin/
$ hdfs dfs -mkdir hdfs://localhost:9000/Pig_Data
The input file of Pig contains each tuple/record in individual lines. And the entities of the record are separated by a delimiter (In our example we used “,”).
In the local file system, create an input file student_data.txt containing data as shown below.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
Now, move the file from the local file system to HDFS using put command as shown below. (You can use copyFromLocal command as well.)
$ cd $HADOOP_HOME/bin
$ hdfs dfs -put /home/Hadoop/Pig/Pig_Data/student_data.txt hdfs://localhost:9000/pig_data/
You can use the cat command to verify whether the file has been moved into the HDFS, as shown below.
$ cd $HADOOP_HOME/bin
$ hdfs dfs -cat hdfs://localhost:9000/pig_data/student_data.txt
You can see the content of the file as shown below.
15/10/01 12:16:55 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai
You can load data into Apache Pig from the file system (HDFS/ Local) using LOAD operator of Pig Latin.
The load statement consists of two parts divided by the “=” operator. On the left-hand side, we need to mention the name of the relation where we want to store the data, and on the right-hand side, we have to define how we store the data. Given below is the syntax of the Load operator.
Relation_name = LOAD 'Input file path' USING function as schema;
Where,
relation_name − We have to mention the relation in which we want to store the data.
Input file path − We have to mention the HDFS directory where the file is stored. (In MapReduce mode)
function − We have to choose a function from the set of load functions provided by Apache Pig (BinStorage, JsonLoader, PigStorage, TextLoader).
Schema − We have to define the schema of the data. We can define the required schema as follows −
(column1 : data type, column2 : data type, column3 : data type);
Note − We can also load the data without specifying the schema. In that case, the columns will be addressed by position as $0, $1, and so on, as illustrated below.
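To illustrate the note above, a minimal sketch of loading without a schema and then projecting fields by position (the relation names are only illustrative) −
grunt> raw_student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',');
grunt> names = FOREACH raw_student GENERATE $1, $2;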
As an example, let us load the data in student_data.txt in Pig under the schema named Student using the LOAD command.
First of all, open the Linux terminal. Start the Pig Grunt shell in MapReduce mode as shown below.
$ pig -x mapreduce
It will start the Pig Grunt shell as shown below.
15/10/01 12:33:37 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/10/01 12:33:37 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/10/01 12:33:37 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-10-01 12:33:38,080 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35
2015-10-01 12:33:38,080 [main] INFO org.apache.pig.Main - Logging error messages to: /home/Hadoop/pig_1443683018078.log
2015-10-01 12:33:38,242 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/Hadoop/.pigbootup not found
2015-10-01 12:33:39,630 [main]
INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000
grunt>
Now load the data from the file student_data.txt into Pig by executing the following Pig Latin statement in the Grunt shell.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt'
USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray,
city:chararray );
Following is the description of the above statement.
We have stored the data using the following schema.
Note − The load statement will simply load the data into the specified relation in Pig. To verify the execution of the Load statement, you have to use the Diagnostic Operators which are discussed in the next chapters.
In the previous chapter, we learnt how to load data into Apache Pig. You can store the loaded data in the file system using the store operator. This chapter explains how to store data in Apache Pig using the Store operator.
Given below is the syntax of the Store statement.
STORE Relation_name INTO ' required_directory_path ' [USING function];
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
And we have read it into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt'
USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray,
city:chararray );
Now, let us store the relation in the HDFS directory “/pig_Output/” as shown below.
grunt> STORE student INTO 'hdfs://localhost:9000/pig_Output/' USING PigStorage(',');
After executing the store statement, you will get the following output. A directory is created with the specified name and the data will be stored in it.
2015-10-05 13:05:05,429 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.
MapReduceLau ncher - 100% complete
2015-10-05 13:05:05,429 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats -
Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0 0.15.0 Hadoop 2015-10-0 13:03:03 2015-10-05 13:05:05 UNKNOWN
Success!
Job Stats (time in seconds):
JobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime
job_14459_06 1 0 n/a n/a n/a n/a
MaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature
0 0 0 0 student MAP_ONLY
OutPut folder
hdfs://localhost:9000/pig_Output/
Input(s): Successfully read 0 records from: "hdfs://localhost:9000/pig_data/student_data.txt"
Output(s): Successfully stored 0 records in: "hdfs://localhost:9000/pig_Output"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG: job_1443519499159_0006
2015-10-05 13:06:06,192 [main] INFO org.apache.pig.backend.hadoop.executionengine
.mapReduceLayer.MapReduceLau ncher - Success!
You can verify the stored data as shown below.
First of all, list out the files in the directory named pig_output using the ls command as shown below.
hdfs dfs -ls 'hdfs://localhost:9000/pig_Output/'
Found 2 items
rw-r--r- 1 Hadoop supergroup 0 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/_SUCCESS
rw-r--r- 1 Hadoop supergroup 224 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/part-m-00000
You can observe that two files were created after executing the store statement.
Using cat command, list the contents of the file named part-m-00000 as shown below.
$ hdfs dfs -cat 'hdfs://localhost:9000/pig_Output/part-m-00000'
1,Rajiv,Reddy,9848022337,Hyderabad
2,siddarth,Battacharya,9848022338,Kolkata
3,Rajesh,Khanna,9848022339,Delhi
4,Preethi,Agarwal,9848022330,Pune
5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
6,Archana,Mishra,9848022335,Chennai
The load statement will simply load the data into the specified relation in Apache Pig. To verify the execution of the Load statement, you have to use the Diagnostic Operators. Pig Latin provides four different types of diagnostic operators −
Dump operator
Describe operator
Explanation operator
Illustration operator
In this chapter, we will discuss the Dump operators of Pig Latin.
The Dump operator is used to run the Pig Latin statements and display the results on the screen. It is generally used for debugging Purpose.
Given below is the syntax of the Dump operator.
grunt> Dump Relation_Name
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
And we have read it into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt'
USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray,
city:chararray );
Now, let us print the contents of the relation using the Dump operator as shown below.
grunt> Dump student
Once you execute the above Pig Latin statement, it will start a MapReduce job to read data from HDFS. It will produce the following output.
2015-10-01 15:05:27,642 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher -
100% complete
2015-10-01 15:05:27,652 [main]
INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0 0.15.0 Hadoop 2015-10-01 15:03:11 2015-10-01 05:27 UNKNOWN
Success!
Job Stats (time in seconds):
JobId job_14459_0004
Maps 1
Reduces 0
MaxMapTime n/a
MinMapTime n/a
AvgMapTime n/a
MedianMapTime n/a
MaxReduceTime 0
MinReduceTime 0
AvgReduceTime 0
MedianReducetime 0
Alias student
Feature MAP_ONLY
Outputs hdfs://localhost:9000/tmp/temp580182027/tmp757878456,
Input(s): Successfully read 0 records from: "hdfs://localhost:9000/pig_data/
student_data.txt"
Output(s): Successfully stored 0 records in: "hdfs://localhost:9000/tmp/temp580182027/
tmp757878456"
Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager
spill count : 0Total bags proactively spilled: 0 Total records proactively spilled: 0
Job DAG: job_1443519499159_0004
2015-10-01 15:06:28,403 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLau ncher - Success!
2015-10-01 15:06:28,441 [main] INFO org.apache.pig.data.SchemaTupleBackend -
Key [pig.schematuple] was not set... will not generate code.
2015-10-01 15:06:28,485 [main]
INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths
to process : 1
2015-10-01 15:06:28,485 [main]
INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths
to process : 1
(1,Rajiv,Reddy,9848022337,Hyderabad)
(2,siddarth,Battacharya,9848022338,Kolkata)
(3,Rajesh,Khanna,9848022339,Delhi)
(4,Preethi,Agarwal,9848022330,Pune)
(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,9848022335,Chennai)
The describe operator is used to view the schema of a relation.
The syntax of the describe operator is as follows −
grunt> Describe Relation_name
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
And we have read it into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );
Now, let us describe the relation named student and verify the schema as shown below.
grunt> describe student;
Once you execute the above Pig Latin statement, it will produce the following output.
grunt> student: { id: int,firstname: chararray,lastname: chararray,phone: chararray,city: chararray }
The explain operator is used to display the logical, physical, and MapReduce execution plans of a relation.
Given below is the syntax of the explain operator.
grunt> explain Relation_name;
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
And we have read it into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );
Now, let us explain the relation named student using the explain operator as shown below.
grunt> explain student;
It will produce the following output.
$ explain student;
2015-10-05 11:32:43,660 [main]
2015-10-05 11:32:43,660 [main] INFO org.apache.pig.newplan.logical.optimizer
.LogicalPlanOptimizer -
{RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator,
GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter,
MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer,
PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
#-----------------------------------------------
# New Logical Plan:
#-----------------------------------------------
student: (Name: LOStore Schema:
id#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#
35:chararray)
|
|---student: (Name: LOForEach Schema:
id#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#
35:chararray)
| |
| (Name: LOGenerate[false,false,false,false,false] Schema:
id#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#
35:chararray)ColumnPrune:InputUids=[34, 35, 32, 33,
31]ColumnPrune:OutputUids=[34, 35, 32, 33, 31]
| | |
| | (Name: Cast Type: int Uid: 31)
| | | | | |---id:(Name: Project Type: bytearray Uid: 31 Input: 0 Column: (*))
| | |
| | (Name: Cast Type: chararray Uid: 32)
| | |
| | |---firstname:(Name: Project Type: bytearray Uid: 32 Input: 1
Column: (*))
| | |
| | (Name: Cast Type: chararray Uid: 33)
| | |
| | |---lastname:(Name: Project Type: bytearray Uid: 33 Input: 2
Column: (*))
| | |
| | (Name: Cast Type: chararray Uid: 34)
| | |
| | |---phone:(Name: Project Type: bytearray Uid: 34 Input: 3 Column:
(*))
| | |
| | (Name: Cast Type: chararray Uid: 35)
| | |
| | |---city:(Name: Project Type: bytearray Uid: 35 Input: 4 Column:
(*))
| |
| |---(Name: LOInnerLoad[0] Schema: id#31:bytearray)
| |
| |---(Name: LOInnerLoad[1] Schema: firstname#32:bytearray)
| |
| |---(Name: LOInnerLoad[2] Schema: lastname#33:bytearray)
| |
| |---(Name: LOInnerLoad[3] Schema: phone#34:bytearray)
| |
| |---(Name: LOInnerLoad[4] Schema: city#35:bytearray)
|
|---student: (Name: LOLoad Schema:
id#31:bytearray,firstname#32:bytearray,lastname#33:bytearray,phone#34:bytearray
,city#35:bytearray)RequiredFields:null
#-----------------------------------------------
# Physical Plan: #-----------------------------------------------
student: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-36
|
|---student: New For Each(false,false,false,false,false)[bag] - scope-35
| |
| Cast[int] - scope-21
| |
| |---Project[bytearray][0] - scope-20
| |
| Cast[chararray] - scope-24
| |
| |---Project[bytearray][1] - scope-23
| |
| Cast[chararray] - scope-27
| |
| |---Project[bytearray][2] - scope-26
| |
| Cast[chararray] - scope-30
| |
| |---Project[bytearray][3] - scope-29
| |
| Cast[chararray] - scope-33
| |
| |---Project[bytearray][4] - scope-32
|
|---student: Load(hdfs://localhost:9000/pig_data/student_data.txt:PigStorage(',')) - scope19
2015-10-05 11:32:43,682 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler -
File concatenation threshold: 100 optimistic? false
2015-10-05 11:32:43,684 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOp timizer -
MR plan size before optimization: 1 2015-10-05 11:32:43,685 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.
MultiQueryOp timizer - MR plan size after optimization: 1
#--------------------------------------------------
# Map Reduce Plan
#--------------------------------------------------
MapReduce node scope-37
Map Plan
student: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-36
|
|---student: New For Each(false,false,false,false,false)[bag] - scope-35
| |
| Cast[int] - scope-21
| |
| |---Project[bytearray][0] - scope-20
| |
| Cast[chararray] - scope-24
| |
| |---Project[bytearray][1] - scope-23
| |
| Cast[chararray] - scope-27
| |
| |---Project[bytearray][2] - scope-26
| |
| Cast[chararray] - scope-30
| |
| |---Project[bytearray][3] - scope-29
| |
| Cast[chararray] - scope-33
| |
| |---Project[bytearray][4] - scope-32
|
|---student:
Load(hdfs://localhost:9000/pig_data/student_data.txt:PigStorage(',')) - scope
19-------- Global sort: false
----------------
The illustrate operator gives you the step-by-step execution of a sequence of statements.
Given below is the syntax of the illustrate operator.
grunt> illustrate Relation_name;
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
And we have read it into a relation student using the LOAD operator as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')
as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );
Now, let us illustrate the relation named student as shown below.
grunt> illustrate student;
On executing the above statement, you will get the following output.
grunt> illustrate student;
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$M ap - Aliases
being processed per job phase (AliasName[line,offset]): M: student[1,10] C: R:
---------------------------------------------------------------------------------------------
|student | id:int | firstname:chararray | lastname:chararray | phone:chararray | city:chararray |
---------------------------------------------------------------------------------------------
| | 002 | siddarth | Battacharya | 9848022338 | Kolkata |
---------------------------------------------------------------------------------------------
The GROUP operator is used to group the data in one or more relations. It collects the data having the same key.
Given below is the syntax of the group operator.
grunt> Group_data = GROUP Relation_name BY age;
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Apache Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
Now, let us group the records/tuples in the relation by age as shown below.
grunt> group_data = GROUP student_details by age;
Verify the relation group_data using the DUMP operator as shown below.
grunt> Dump group_data;
Then you will get output displaying the contents of the relation named group_data as shown below. Here you can observe that the resulting schema has two columns −
One is age, by which we have grouped the relation.
The other is a bag, which contains the group of tuples, student records with the respective age.
(21,{(4,Preethi,Agarwal,21,9848022330,Pune),(1,Rajiv,Reddy,21,9848022337,Hydera bad)})
(22,{(3,Rajesh,Khanna,22,9848022339,Delhi),(2,siddarth,Battacharya,22,984802233 8,Kolkata)})
(23,{(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336 ,Bhuwaneshwar)})
(24,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334, trivendram)})
You can see the schema of the table after grouping the data using the describe command as shown below.
grunt> Describe group_data;
group_data: {group: int,student_details: {(id: int,firstname: chararray,
lastname: chararray,age: int,phone: chararray,city: chararray)}}
In the same way, you can get the sample illustration of the schema using the illustrate command as shown below.
$ Illustrate group_data;
It will produce the following output −
-------------------------------------------------------------------------------------------------
|group_data| group:int | student_details:bag{:tuple(id:int,firstname:chararray,lastname:chararray,age:int,phone:chararray,city:chararray)}|
-------------------------------------------------------------------------------------------------
| | 21 | {(4, Preethi, Agarwal, 21, 9848022330, Pune), (1, Rajiv, Reddy, 21, 9848022337, Hyderabad)}|
| | 22 | {(2,siddarth,Battacharya,22,9848022338,Kolkata),(3,Rajesh,Khanna,22,9848022339,Delhi)}|
-------------------------------------------------------------------------------------------------
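Once the data is grouped, the inner bag can be processed further with FOREACH. As a small illustrative sketch (assuming the group_data relation created above; the alias count_by_age is only an example), the number of students in each age group can be computed as shown below.
grunt> count_by_age = FOREACH group_data GENERATE group AS age, COUNT(student_details) AS total;
grunt> Dump count_by_age;
With the sample data above, each age group contains two students, so every tuple of count_by_age should carry a count of 2.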
Let us group the relation by age and city as shown below.
grunt> group_multiple = GROUP student_details by (age, city);
You can verify the content of the relation named group_multiple using the Dump operator as shown below.
grunt> Dump group_multiple;
((21,Pune),{(4,Preethi,Agarwal,21,9848022330,Pune)})
((21,Hyderabad),{(1,Rajiv,Reddy,21,9848022337,Hyderabad)})
((22,Delhi),{(3,Rajesh,Khanna,22,9848022339,Delhi)})
((22,Kolkata),{(2,siddarth,Battacharya,22,9848022338,Kolkata)})
((23,Chennai),{(6,Archana,Mishra,23,9848022335,Chennai)})
((23,Bhuwaneshwar),{(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)})
((24,Chennai),{(8,Bharathi,Nambiayar,24,9848022333,Chennai)})
((24,trivendram),{(7,Komal,Nayak,24,9848022334,trivendram)})
You can group a relation by all the columns as shown below.
grunt> group_all = GROUP student_details All;
Now, verify the content of the relation group_all as shown below.
grunt> Dump group_all;
(all,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334 ,trivendram),
(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336,Bhuw aneshwar),
(4,Preethi,Agarwal,21,9848022330,Pune),(3,Rajesh,Khanna,22,9848022339,Delhi),
(2,siddarth,Battacharya,22,9848022338,Kolkata),(1,Rajiv,Reddy,21,9848022337,Hyd erabad)})
The COGROUP operator works more or less in the same way as the GROUP operator. The only difference between the two operators is that the group operator is normally used with one relation, while the cogroup operator is used in statements involving two or more relations.
Assume that we have two files namely student_details.txt and employee_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
employee_details.txt
001,Robin,22,newyork
002,BOB,23,Kolkata
003,Maya,23,Tokyo
004,Sara,25,London
005,David,23,Bhuwaneshwar
006,Maggy,22,Chennai
And we have loaded these files into Pig with the relation names student_details and employee_details respectively, as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
grunt> employee_details = LOAD 'hdfs://localhost:9000/pig_data/employee_details.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, city:chararray);
Now, let us group the records/tuples of the relations student_details and employee_details with the key age, as shown below.
grunt> cogroup_data = COGROUP student_details by age, employee_details by age;
Verify the relation cogroup_data using the DUMP operator as shown below.
grunt> Dump cogroup_data;
It will produce the following output, displaying the contents of the relation named cogroup_data as shown below.
(21,{(4,Preethi,Agarwal,21,9848022330,Pune), (1,Rajiv,Reddy,21,9848022337,Hyderabad)},
{ })
(22,{ (3,Rajesh,Khanna,22,9848022339,Delhi), (2,siddarth,Battacharya,22,9848022338,Kolkata) },
{ (6,Maggy,22,Chennai),(1,Robin,22,newyork) })
(23,{(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336 ,Bhuwaneshwar)},
{(5,David,23,Bhuwaneshwar),(3,Maya,23,Tokyo),(2,BOB,23,Kolkata)})
(24,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334, trivendram)},
{ })
(25,{ },
{(4,Sara,25,London)})
The cogroup operator groups the tuples from each relation according to age where each group depicts a particular age value.
For example, if we consider the 1st tuple of the result, it is grouped by age 21. And it contains two bags −
the first bag holds all the tuples from the first relation (student_details in this case) having age 21, and
the second bag contains all the tuples from the second relation (employee_details in this case) having age 21.
In case a relation doesn’t have tuples having the age value 21, it returns an empty bag.
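As a small follow-up sketch (assuming the cogroup_data relation built above; the alias cogroup_counts is only an example), the two inner bags can be consumed with FOREACH, for instance to count how many students and employees share each age. COUNT returns 0 for the empty bags.
grunt> cogroup_counts = FOREACH cogroup_data GENERATE group AS age, COUNT(student_details) AS students, COUNT(employee_details) AS employees;
grunt> Dump cogroup_counts;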
The JOIN operator is used to combine records from two or more relations. While performing a join operation, we declare one (or a group of) tuple(s) from each relation, as keys. When these keys match, the two particular tuples are matched, else the records are dropped. Joins can be of the following types −
Self-join
Inner-join
Outer-join − left join, right join, and full join
This chapter explains with examples how to use the join operator in Pig Latin. Assume that we have two files namely customers.txt and orders.txt in the /pig_data/ directory of HDFS as shown below.
customers.txt
1,Ramesh,32,Ahmedabad,2000.00
2,Khilan,25,Delhi,1500.00
3,kaushik,23,Kota,2000.00
4,Chaitali,25,Mumbai,6500.00
5,Hardik,27,Bhopal,8500.00
6,Komal,22,MP,4500.00
7,Muffy,24,Indore,10000.00
orders.txt
102,2009-10-08 00:00:00,3,3000
100,2009-10-08 00:00:00,3,1500
101,2009-11-20 00:00:00,2,1560
103,2008-05-20 00:00:00,4,2060
And we have loaded these two files into Pig with the relations customers and orders as shown below.
grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, address:chararray, salary:int);
grunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',')
as (oid:int, date:chararray, customer_id:int, amount:int);
Let us now perform various Join operations on these two relations.
Self-join is used to join a table with itself as if the table were two relations, temporarily renaming at least one relation.
Generally, in Apache Pig, to perform self-join, we will load the same data multiple times, under different aliases (names). Therefore let us load the contents of the file customers.txt as two tables as shown below.
grunt> customers1 = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, address:chararray, salary:int);
grunt> customers2 = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, address:chararray, salary:int);
Given below is the syntax of performing self-join operation using the JOIN operator.
grunt> Relation3_name = JOIN Relation1_name BY key, Relation2_name BY key ;
Let us perform self-join operation on the relation customers, by joining the two relations customers1 and customers2 as shown below.
grunt> customers3 = JOIN customers1 BY id, customers2 BY id;
Verify the relation customers3 using the DUMP operator as shown below.
grunt> Dump customers3;
It will produce the following output, displaying the contents of the relation customers.
(1,Ramesh,32,Ahmedabad,2000,1,Ramesh,32,Ahmedabad,2000)
(2,Khilan,25,Delhi,1500,2,Khilan,25,Delhi,1500)
(3,kaushik,23,Kota,2000,3,kaushik,23,Kota,2000)
(4,Chaitali,25,Mumbai,6500,4,Chaitali,25,Mumbai,6500)
(5,Hardik,27,Bhopal,8500,5,Hardik,27,Bhopal,8500)
(6,Komal,22,MP,4500,6,Komal,22,MP,4500)
(7,Muffy,24,Indore,10000,7,Muffy,24,Indore,10000)
Inner Join is used quite frequently; it is also referred to as equijoin. An inner join returns rows when there is a match in both tables.
It creates a new relation by combining column values of two relations (say A and B) based upon the join-predicate. The query compares each row of A with each row of B to find all pairs of rows which satisfy the join-predicate. When the join-predicate is satisfied, the column values for each matched pair of rows of A and B are combined into a result row.
Here is the syntax of performing inner join operation using the JOIN operator.
grunt> result = JOIN relation1 BY columnname, relation2 BY columnname;
Let us perform inner join operation on the two relations customers and orders as shown below.
grunt> coustomer_orders = JOIN customers BY id, orders BY customer_id;
Verify the relation coustomer_orders using the DUMP operator as shown below.
grunt> Dump coustomer_orders;
You will get the following output that will show the contents of the relation named coustomer_orders.
(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)
(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)
(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)
(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)
Outer Join: Unlike inner join, outer join returns all the rows from at least one of the relations. An outer join operation is carried out in three ways −
Left outer join
Right outer join
Full outer join
The left outer Join operation returns all rows from the left table, even if there are no matches in the right relation.
Given below is the syntax of performing left outer join operation using the JOIN operator.
grunt> Relation3_name = JOIN Relation1_name BY id LEFT OUTER, Relation2_name BY customer_id;
Let us perform left outer join operation on the two relations customers and orders as shown below.
grunt> outer_left = JOIN customers BY id LEFT OUTER, orders BY customer_id;
Verify the relation outer_left using the DUMP operator as shown below.
grunt> Dump outer_left;
It will produce the following output, displaying the contents of the relation outer_left.
(1,Ramesh,32,Ahmedabad,2000,,,,)
(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)
(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)
(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)
(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)
(5,Hardik,27,Bhopal,8500,,,,)
(6,Komal,22,MP,4500,,,,)
(7,Muffy,24,Indore,10000,,,,)
The right outer join operation returns all rows from the right table, even if there are no matches in the left table.
Given below is the syntax of performing right outer join operation using the JOIN operator.
grunt> Relation3_name = JOIN Relation1_name BY id RIGHT OUTER, Relation2_name BY customer_id;
Let us perform right outer join operation on the two relations customers and orders as shown below.
grunt> outer_right = JOIN customers BY id RIGHT, orders BY customer_id;
Verify the relation outer_right using the DUMP operator as shown below.
grunt> Dump outer_right;
It will produce the following output, displaying the contents of the relation outer_right.
(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)
(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)
(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)
(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)
The full outer join operation returns rows when there is a match in one of the relations.
Given below is the syntax of performing full outer join using the JOIN operator.
grunt> Relation3_name = JOIN Relation1_name BY id FULL OUTER, Relation2_name BY customer_id;
Let us perform full outer join operation on the two relations customers and orders as shown below.
grunt> outer_full = JOIN customers BY id FULL OUTER, orders BY customer_id;
Verify the relation outer_full using the DUMP operator as shown below.
grunt> Dump outer_full;
It will produce the following output, displaying the contents of the relation outer_full.
(1,Ramesh,32,Ahmedabad,2000,,,,)
(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)
(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)
(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)
(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)
(5,Hardik,27,Bhopal,8500,,,,)
(6,Komal,22,MP,4500,,,,)
(7,Muffy,24,Indore,10000,,,,)
We can perform JOIN operation using multiple keys.
Here is how you can perform a JOIN operation on two tables using multiple keys.
grunt> Relation3_name = JOIN Relation1_name BY (key1, key2), Relation2_name BY (key1, key2);
Assume that we have two files namely employee.txt and employee_contact.txt in the /pig_data/ directory of HDFS as shown below.
employee.txt
001,Rajiv,Reddy,21,programmer,003
002,siddarth,Battacharya,22,programmer,003
003,Rajesh,Khanna,22,programmer,003
004,Preethi,Agarwal,21,programmer,003
005,Trupthi,Mohanthy,23,programmer,003
006,Archana,Mishra,23,programmer,003
007,Komal,Nayak,24,teamlead,002
008,Bharathi,Nambiayar,24,manager,001
employee_contact.txt
001,9848022337,[email protected],Hyderabad,003
002,9848022338,[email protected],Kolkata,003
003,9848022339,[email protected],Delhi,003
004,9848022330,[email protected],Pune,003
005,9848022336,[email protected],Bhuwaneshwar,003
006,9848022335,[email protected],Chennai,003
007,9848022334,[email protected],trivendram,002
008,9848022333,[email protected],Chennai,001
And we have loaded these two files into Pig with relations employee and employee_contact as shown below.
grunt> employee = LOAD 'hdfs://localhost:9000/pig_data/employee.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, designation:chararray, jobid:int);
grunt> employee_contact = LOAD 'hdfs://localhost:9000/pig_data/employee_contact.txt' USING PigStorage(',')
as (id:int, phone:chararray, email:chararray, city:chararray, jobid:int);
Now, let us join the contents of these two relations using the JOIN operator as shown below.
grunt> emp = JOIN employee BY (id,jobid), employee_contact BY (id,jobid);
Verify the relation emp using the DUMP operator as shown below.
grunt> Dump emp;
It will produce the following output, displaying the contents of the relation named emp as shown below.
(1,Rajiv,Reddy,21,programmer,113,1,9848022337,[email protected],Hyderabad,113)
(2,siddarth,Battacharya,22,programmer,113,2,9848022338,[email protected],Kolka ta,113)
(3,Rajesh,Khanna,22,programmer,113,3,9848022339,[email protected],Delhi,113)
(4,Preethi,Agarwal,21,programmer,113,4,9848022330,[email protected],Pune,113)
(5,Trupthi,Mohanthy,23,programmer,113,5,9848022336,[email protected],Bhuwaneshw ar,113)
(6,Archana,Mishra,23,programmer,113,6,9848022335,[email protected],Chennai,113)
(7,Komal,Nayak,24,teamlead,112,7,9848022334,[email protected],trivendram,112)
(8,Bharathi,Nambiayar,24,manager,111,8,9848022333,[email protected],Chennai,111)
The CROSS operator computes the cross-product of two or more relations. This chapter explains with example how to use the cross operator in Pig Latin.
Given below is the syntax of the CROSS operator.
grunt> Relation3_name = CROSS Relation1_name, Relation2_name;
Assume that we have two files namely customers.txt and orders.txt in the /pig_data/ directory of HDFS as shown below.
customers.txt
1,Ramesh,32,Ahmedabad,2000.00
2,Khilan,25,Delhi,1500.00
3,kaushik,23,Kota,2000.00
4,Chaitali,25,Mumbai,6500.00
5,Hardik,27,Bhopal,8500.00
6,Komal,22,MP,4500.00
7,Muffy,24,Indore,10000.00
orders.txt
102,2009-10-08 00:00:00,3,3000
100,2009-10-08 00:00:00,3,1500
101,2009-11-20 00:00:00,2,1560
103,2008-05-20 00:00:00,4,2060
And we have loaded these two files into Pig with the relations customers and orders as shown below.
grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, address:chararray, salary:int);
grunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',')
as (oid:int, date:chararray, customer_id:int, amount:int);
Let us now get the cross-product of these two relations using the cross operator on these two relations as shown below.
grunt> cross_data = CROSS customers, orders;
Verify the relation cross_data using the DUMP operator as shown below.
grunt> Dump cross_data;
It will produce the following output, displaying the contents of the relation cross_data.
(7,Muffy,24,Indore,10000,103,2008-05-20 00:00:00,4,2060)
(7,Muffy,24,Indore,10000,101,2009-11-20 00:00:00,2,1560)
(7,Muffy,24,Indore,10000,100,2009-10-08 00:00:00,3,1500)
(7,Muffy,24,Indore,10000,102,2009-10-08 00:00:00,3,3000)
(6,Komal,22,MP,4500,103,2008-05-20 00:00:00,4,2060)
(6,Komal,22,MP,4500,101,2009-11-20 00:00:00,2,1560)
(6,Komal,22,MP,4500,100,2009-10-08 00:00:00,3,1500)
(6,Komal,22,MP,4500,102,2009-10-08 00:00:00,3,3000)
(5,Hardik,27,Bhopal,8500,103,2008-05-20 00:00:00,4,2060)
(5,Hardik,27,Bhopal,8500,101,2009-11-20 00:00:00,2,1560)
(5,Hardik,27,Bhopal,8500,100,2009-10-08 00:00:00,3,1500)
(5,Hardik,27,Bhopal,8500,102,2009-10-08 00:00:00,3,3000)
(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)
(4,Chaitali,25,Mumbai,6500,101,2009-11-20 00:00:00,2,1560)
(4,Chaitali,25,Mumbai,6500,100,2009-10-08 00:00:00,3,1500)
(4,Chaitali,25,Mumbai,6500,102,2009-10-08 00:00:00,3,3000)
(3,kaushik,23,Kota,2000,103,2008-05-20 00:00:00,4,2060)
(3,kaushik,23,Kota,2000,101,2009-11-20 00:00:00,2,1560)
(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)
(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)
(2,Khilan,25,Delhi,1500,103,2008-05-20 00:00:00,4,2060)
(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)
(2,Khilan,25,Delhi,1500,100,2009-10-08 00:00:00,3,1500)
(2,Khilan,25,Delhi,1500,102,2009-10-08 00:00:00,3,3000)
(1,Ramesh,32,Ahmedabad,2000,103,2008-05-20 00:00:00,4,2060)
(1,Ramesh,32,Ahmedabad,2000,101,2009-11-20 00:00:00,2,1560)
(1,Ramesh,32,Ahmedabad,2000,100,2009-10-08 00:00:00,3,1500)
(1,Ramesh,32,Ahmedabad,2000,102,2009-10-08 00:00:00,3,3000)
The UNION operator of Pig Latin is used to merge the content of two relations. To perform UNION operation on two relations, their columns and domains must be identical.
Given below is the syntax of the UNION operator.
grunt> Relation_name3 = UNION Relation_name1, Relation_name2;
Assume that we have two files namely student_data1.txt and student_data2.txt in the /pig_data/ directory of HDFS as shown below.
Student_data1.txt
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai.
Student_data2.txt
7,Komal,Nayak,9848022334,trivendram.
8,Bharathi,Nambiayar,9848022333,Chennai.
And we have loaded these two files into Pig with the relations student1 and student2 as shown below.
grunt> student1 = LOAD 'hdfs://localhost:9000/pig_data/student_data1.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);
grunt> student2 = LOAD 'hdfs://localhost:9000/pig_data/student_data2.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);
Let us now merge the contents of these two relations using the UNION operator as shown below.
grunt> student = UNION student1, student2;
Verify the relation student using the DUMP operator as shown below.
grunt> Dump student;
It will display the following output, displaying the contents of the relation student.
(1,Rajiv,Reddy,9848022337,Hyderabad)
(2,siddarth,Battacharya,9848022338,Kolkata)
(3,Rajesh,Khanna,9848022339,Delhi)
(4,Preethi,Agarwal,9848022330,Pune)
(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,9848022335,Chennai)
(7,Komal,Nayak,9848022334,trivendram)
(8,Bharathi,Nambiayar,9848022333,Chennai)
The SPLIT operator is used to split a relation into two or more relations.
Given below is the syntax of the SPLIT operator.
grunt> SPLIT Relation1_name INTO Relation2_name IF (condition1), Relation3_name IF (condition2);
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
Let us now split the relation into two, one listing the students of age less than 23, and the other listing the students having age between 22 and 25.
grunt> SPLIT student_details into student_details1 if age<23, student_details2 if (22<age and age<25);
Verify the relations student_details1 and student_details2 using the DUMP operator as shown below.
grunt> Dump student_details1;
grunt> Dump student_details2;
It will produce the following output, displaying the contents of the relations student_details1 and student_details2 respectively.
grunt> Dump student_details1;
(1,Rajiv,Reddy,21,9848022337,Hyderabad)
(2,siddarth,Battacharya,22,9848022338,Kolkata)
(3,Rajesh,Khanna,22,9848022339,Delhi)
(4,Preethi,Agarwal,21,9848022330,Pune)
grunt> Dump student_details2;
(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,23,9848022335,Chennai)
(7,Komal,Nayak,24,9848022334,trivendram)
(8,Bharathi,Nambiayar,24,9848022333,Chennai)
The FILTER operator is used to select the required tuples from a relation based on a condition.
Given below is the syntax of the FILTER operator.
grunt> Relation2_name = FILTER Relation1_name BY (condition);
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
Let us now use the Filter operator to get the details of the students who belong to the city Chennai.
filter_data = FILTER student_details BY city == 'Chennai';
Verify the relation filter_data using the DUMP operator as shown below.
grunt> Dump filter_data;
It will produce the following output, displaying the contents of the relation filter_data as follows.
(6,Archana,Mishra,23,9848022335,Chennai)
(8,Bharathi,Nambiayar,24,9848022333,Chennai)
The DISTINCT operator is used to remove redundant (duplicate) tuples from a relation.
Given below is the syntax of the DISTINCT operator.
grunt> Relation_name2 = DISTINCT Relation_name1;
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai
006,Archana,Mishra,9848022335,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);
Let us now remove the redundant (duplicate) tuples from the relation named student_details using the DISTINCT operator, and store it as another relation named distinct_data as shown below.
grunt> distinct_data = DISTINCT student_details;
Verify the relation distinct_data using the DUMP operator as shown below.
grunt> Dump distinct_data;
It will produce the following output, displaying the contents of the relation distinct_data as follows.
(1,Rajiv,Reddy,9848022337,Hyderabad)
(2,siddarth,Battacharya,9848022338,Kolkata)
(3,Rajesh,Khanna,9848022339,Delhi)
(4,Preethi,Agarwal,9848022330,Pune)
(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,9848022335,Chennai)
The FOREACH operator is used to generate specified data transformations based on the column data.
Given below is the syntax of FOREACH operator.
grunt> Relation_name2 = FOREACH Relation_name1 GENERATE (required data);
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);
Let us now get the id, age, and city values of each student from the relation student_details and store it into another relation named foreach_data using the foreach operator as shown below.
grunt> foreach_data = FOREACH student_details GENERATE id,age,city;
Verify the relation foreach_data using the DUMP operator as shown below.
grunt> Dump foreach_data;
It will produce the following output, displaying the contents of the relation foreach_data.
(1,21,Hyderabad)
(2,22,Kolkata)
(3,22,Delhi)
(4,21,Pune)
(5,23,Bhuwaneshwar)
(6,23,Chennai)
(7,24,trivendram)
(8,24,Chennai)
The ORDER BY operator is used to display the contents of a relation in a sorted order based on one or more fields.
Given below is the syntax of the ORDER BY operator.
grunt> Relation_name2 = ORDER Relation_name1 BY field_name (ASC|DESC);
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);
Let us now sort the relation in a descending order based on the age of the student and store it into another relation named order_by_data using the ORDER BY operator as shown below.
grunt> order_by_data = ORDER student_details BY age DESC;
Verify the relation order_by_data using the DUMP operator as shown below.
grunt> Dump order_by_data;
It will produce the following output, displaying the contents of the relation order_by_data.
(8,Bharathi,Nambiayar,24,9848022333,Chennai)
(7,Komal,Nayak,24,9848022334,trivendram)
(6,Archana,Mishra,23,9848022335,Chennai)
(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)
(3,Rajesh,Khanna,22,9848022339,Delhi)
(2,siddarth,Battacharya,22,9848022338,Kolkata)
(4,Preethi,Agarwal,21,9848022330,Pune)
(1,Rajiv,Reddy,21,9848022337,Hyderabad)
The LIMIT operator is used to get a limited number of tuples from a relation.
Given below is the syntax of the LIMIT operator.
grunt> Result = LIMIT Relation_name required_number_of_tuples;
Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
And we have loaded this file into Pig with the relation name student_details as shown below.
grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);
Now, let us get the first four tuples of the relation student_details and store them into another relation named limit_data using the LIMIT operator as shown below.
grunt> limit_data = LIMIT student_details 4;
Verify the relation limit_data using the DUMP operator as shown below.
grunt> Dump limit_data;
It will produce the following output, displaying the contents of the relation limit_data as follows.
(1,Rajiv,Reddy,21,9848022337,Hyderabad)
(2,siddarth,Battacharya,22,9848022338,Kolkata)
(3,Rajesh,Khanna,22,9848022339,Delhi)
(4,Preethi,Agarwal,21,9848022330,Pune)
Apache Pig provides various built-in functions namely eval, load, store, math, string, bag and tuple functions.
Given below is the list of eval functions provided by Apache Pig.
To compute the average of the numerical values within a bag.
To concatenate the elements of a bag into a string. While concatenating, we can place a delimiter between these values (optional).
To concatenate two or more expressions of same type.
To get the number of elements in a bag. While counting, it ignores the tuples whose first field is null.
It is similar to the COUNT() function, but it includes null values while counting the elements in a bag.
To compare two bags (fields) in a tuple.
To check if a bag or map is empty.
To calculate the highest value for a column (numeric values or chararrays) in a single-column bag.
To get the minimum (lowest) value (numeric or chararray) for a certain column in a single-column bag.
Using the Pig Latin PluckTuple() function, we can define a string Prefix and filter the columns in a relation that begin with the given prefix.
To compute the number of elements based on any Pig data type.
To subtract two bags. It takes two bags as inputs and returns a bag which contains the tuples of the first bag that are not in the second bag.
To get the total of the numeric values of a column in a single-column bag.
To split a string (which contains a group of words) in a single tuple and return a bag which contains the output of the split operation.
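As an illustrative sketch only (assuming the student_details relation used in the earlier chapters; the aliases by_city and city_stats are examples), a few of these eval functions can be combined in a single FOREACH statement after grouping.
grunt> by_city = GROUP student_details BY city;
grunt> city_stats = FOREACH by_city GENERATE group AS city, COUNT(student_details) AS students, MAX(student_details.age) AS max_age, AVG(student_details.age) AS avg_age;
grunt> Dump city_stats;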
The Load and Store functions in Apache Pig are used to determine how the data goes into and comes out of Pig. These functions are used with the load and store operators. Given below is the list of load and store functions available in Pig.
To load and store structured files.
To load unstructured data into Pig.
To load and store data into Pig using machine readable format.
In Pig Latin, we can load and store compressed data.
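For illustration (the file paths below are hypothetical), a load or store function is passed to the LOAD and STORE operators; PigStorage accepts an optional field delimiter, and TextLoader reads each line of unstructured data as a single chararray field.
grunt> tab_data = LOAD 'hdfs://localhost:9000/pig_data/data.tsv' USING PigStorage('\t') as (id:int, name:chararray);   -- hypothetical tab-separated file
grunt> raw_lines = LOAD 'hdfs://localhost:9000/pig_data/log.txt' USING TextLoader() as (line:chararray);   -- hypothetical log file
grunt> STORE tab_data INTO 'hdfs://localhost:9000/pig_output/tab_data_out' USING PigStorage(',');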
Given below is the list of Bag and Tuple functions.
To convert two or more expressions into a bag.
To get the top N tuples of a relation.
To convert one or more expressions into a tuple.
To convert the key-value pairs into a Map.
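A minimal sketch of the tuple, bag, and map constructors (assuming the student_details relation from the earlier chapters; the output aliases are only examples):
grunt> as_tuple = FOREACH student_details GENERATE TOTUPLE(id, firstname) AS id_name;
grunt> as_bag = FOREACH student_details GENERATE TOBAG(firstname, lastname) AS names;
grunt> as_map = FOREACH student_details GENERATE TOMAP('name', firstname, 'city', city) AS details;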
We have the following String functions in Apache Pig.
To verify whether a given string ends with a particular substring.
Accepts two string parameters and verifies whether the first string starts with the second.
Returns a substring from a given string.
To compare two strings ignoring the case.
Returns the first occurrence of a character in a string, searching forward from a start index.
Returns the index of the last occurrence of a character in a string, searching backward from a start index.
Converts the first character in a string to lower case.
Returns a string with the first character converted to upper case.
UPPER(expression) Returns a string converted to upper case.
Converts all characters in a string to lower case.
To replace existing characters in a string with new characters.
To split a string around matches of a given regular expression.
Similar to the STRSPLIT() function, it splits the string by given delimiter and returns the result in a bag.
Returns a copy of a string with leading and trailing whitespaces removed.
Returns a copy of a string with leading whitespaces removed.
Returns a copy of a string with trailing whitespaces removed.
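As an illustrative sketch only (assuming the student_details relation used earlier), several of the string functions can be applied inside a FOREACH statement.
grunt> name_data = FOREACH student_details GENERATE UPPER(firstname) AS upper_name, LOWER(lastname) AS lower_name, SUBSTRING(city, 0, 3) AS city_prefix, TRIM(city) AS trimmed_city;
grunt> Dump name_data;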
Apache Pig provides the following Date and Time functions −
This function returns a date-time object according to the given parameters. The other alternatives for this function are ToDate(isostring), ToDate(userstring, format), and ToDate(userstring, format, timezone).
Returns the date-time object of the current time.
Returns the day of a month from the date-time object.
Returns the hour of a day from the date-time object.
Returns the millisecond of a second from the date-time object.
Returns the minute of an hour from the date-time object.
Returns the month of a year from the date-time object.
Returns the second of a minute from the date-time object.
Returns the week of a year from the date-time object.
Returns the week year from the date-time object.
Returns the year from the date-time object.
Adds the Duration object to the Date-Time object and returns the result.
Subtracts the Duration object from the Date-Time object and returns the result.
Returns the number of days between the two date-time objects.
Returns the number of hours between two date-time objects.
Returns the number of milliseconds between two date-time objects.
Returns the number of minutes between two date-time objects.
Returns the number of months between two date-time objects.
Returns the number of seconds between two date-time objects.
Returns the number of weeks between two date-time objects.
Returns the number of years between two date-time objects.
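A small sketch of the date and time functions; the file path, field names, and date format below are assumptions made only for illustration.
grunt> date_data = LOAD 'hdfs://localhost:9000/pig_data/dates.txt' USING PigStorage(',') as (id:int, joined:chararray);   -- hypothetical file
grunt> parsed = FOREACH date_data GENERATE id, ToDate(joined, 'yyyy-MM-dd') AS joined_date;
grunt> parts = FOREACH parsed GENERATE id, GetYear(joined_date) AS year, GetMonth(joined_date) AS month, GetDay(joined_date) AS day;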
We have the following Math functions in Apache Pig −
To get the absolute value of an expression.
To get the arc cosine of an expression.
To get the arc sine of an expression.
This function is used to get the arc tangent of an expression.
This function is used to get the cube root of an expression.
This function is used to get the value of an expression rounded up to the nearest integer.
This function is used to get the trigonometric cosine of an expression.
This function is used to get the hyperbolic cosine of an expression.
This function is used to get the Euler’s number e raised to the power of x.
To get the value of an expression rounded down to the nearest integer.
To get the natural logarithm (base e) of an expression.
To get the base 10 logarithm of an expression.
To get a pseudo random number (type double) greater than or equal to 0.0 and less than 1.0.
To get the value of an expression rounded to an integer (if the result type is float) or rounded to a long (if the result type is double).
To get the sine of an expression.
To get the hyperbolic sine of an expression.
To get the positive square root of an expression.
To get the trigonometric tangent of an angle.
To get the hyperbolic tangent of an expression.
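A minimal sketch (assuming the customers relation with its salary column loaded earlier in the JOIN chapter; the alias math_data is only an example) showing a few of the math functions.
grunt> math_data = FOREACH customers GENERATE id, ABS(salary) AS abs_salary, SQRT(salary) AS sqrt_salary, ROUND(salary * 1.1) AS raised_salary;
grunt> Dump math_data;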
In addition to the built-in functions, Apache Pig provides extensive support for User Defined Functions (UDF’s). Using these UDF’s, we can define our own functions and use them. The UDF support is provided in six programming languages, namely, Java, Jython, Python, JavaScript, Ruby and Groovy.
For writing UDF’s, complete support is provided in Java and limited support is provided in all the remaining languages. Using Java, you can write UDF’s involving all parts of the processing like data load/store, column transformation, and aggregation. Since Apache Pig has been written in Java, the UDF’s written using Java language work efficiently compared to other languages.
In Apache Pig, we also have a Java repository for UDF’s named Piggybank. Using Piggybank, we can access Java UDF’s written by other users, and contribute our own UDF’s.
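For illustration, a Piggybank UDF is used the same way as any other Java UDF: register the jar and refer to the function by its fully qualified class name. The jar path below is a placeholder, and the class name should be checked against the Piggybank documentation before use.
grunt> REGISTER '/path/to/piggybank.jar';   -- placeholder path
grunt> piggy_upper = FOREACH student_details GENERATE org.apache.pig.piggybank.evaluation.string.UPPER(firstname);   -- class name to be verified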
While writing UDF’s using Java, we can create and use the following three types of functions −
Filter Functions − The filter functions are used as conditions in filter statements. These functions accept a Pig value as input and return a Boolean value.
Eval Functions − The Eval functions are used in FOREACH-GENERATE statements. These functions accept a Pig value as input and return a Pig result.
Algebraic Functions − The Algebraic functions act on inner bags in a FOREACH-GENERATE statement. These functions are used to perform full MapReduce operations on an inner bag.
To write a UDF using Java, we have to integrate the jar file Pig-0.15.0.jar. In this section, we discuss how to write a sample UDF using Eclipse. Before proceeding further, make sure you have installed Eclipse and Maven in your system.
Follow the steps given below to write a UDF function −
Open Eclipse and create a new project (say myproject).
Convert the newly created project into a Maven project.
Copy the following content in the pom.xml. This file contains the Maven dependencies for Apache Pig and Hadoop-core jar files.
<project xmlns = "http://maven.apache.org/POM/4.0.0"
xmlns:xsi = "http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation = "http://maven.apache.org/POM/4.0.0http://maven.apache .org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>Pig_Udf</groupId>
<artifactId>Pig_Udf</artifactId>
<version>0.0.1-SNAPSHOT</version>
<build>
<sourceDirectory>src</sourceDirectory>
<plugins>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>org.apache.pig</groupId>
<artifactId>pig</artifactId>
<version>0.15.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>0.20.2</version>
</dependency>
</dependencies>
</project>
Save the file and refresh it. In the Maven Dependencies section, you can find the downloaded jar files.
Create a new class file with name Sample_Eval and copy the following content in it.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
public class Sample_Eval extends EvalFunc<String> {

   // exec() is called once per input tuple
   public String exec(Tuple input) throws IOException {
      if (input == null || input.size() == 0)
         return null;
      // read the first field and return it in upper case
      String str = (String)input.get(0);
      return str.toUpperCase();
   }
}
While writing UDF’s, it is mandatory to inherit the EvalFunc class and provide an implementation of the exec() function. Within this function, the code required for the UDF is written. In the above example, we have written the code to convert the contents of the given column to uppercase.
After compiling the class without errors, right-click on the Sample_Eval.java file. It gives you a menu. Select export as shown in the following screenshot.
On clicking export, you will get the following window. Click on JAR file.
Proceed further by clicking Next> button. You will get another window where you need to enter the path in the local file system, where you need to store the jar file.
Finally click the Finish button. In the specified folder, a Jar file sample_udf.jar is created. This jar file contains the UDF written in Java.
After writing the UDF and generating the Jar file, follow the steps given below −
After writing UDF (in Java) we have to register the Jar file that contain the UDF using the Register operator. By registering the Jar file, users can intimate the location of the UDF to Apache Pig.
Syntax
Given below is the syntax of the Register operator.
REGISTER path;
Example
As an example let us register the sample_udf.jar created earlier in this chapter.
Start Apache Pig in local mode and register the jar file sample_udf.jar as shown below.
$cd PIG_HOME/bin
$./pig -x local
REGISTER '/$PIG_HOME/sample_udf.jar'
Note − assume the Jar file in the path − /$PIG_HOME/sample_udf.jar
After registering the UDF we can define an alias to it using the Define operator.
Syntax
Given below is the syntax of the Define operator.
DEFINE alias {function | [`command` [input] [output] [ship] [cache] [stderr] ] };
Example
Define the alias for sample_eval as shown below.
DEFINE sample_eval Sample_Eval();
After defining the alias you can use the UDF same as the built-in functions. Suppose there is a file named emp_data in the HDFS /Pig_Data/ directory with the following content.
001,Robin,22,newyork
002,BOB,23,Kolkata
003,Maya,23,Tokyo
004,Sara,25,London
005,David,23,Bhuwaneshwar
006,Maggy,22,Chennai
007,Robert,22,newyork
008,Syam,23,Kolkata
009,Mary,25,Tokyo
010,Saran,25,London
011,Stacy,25,Bhuwaneshwar
012,Kelly,22,Chennai
And assume we have loaded this file into Pig as shown below.
grunt> emp_data = LOAD 'hdfs://localhost:9000/pig_data/emp1.txt' USING PigStorage(',')
as (id:int, name:chararray, age:int, city:chararray);
Let us now convert the names of the employees in to upper case using the UDF sample_eval.
grunt> Upper_case = FOREACH emp_data GENERATE sample_eval(name);
Verify the contents of the relation Upper_case as shown below.
grunt> Dump Upper_case;
(ROBIN)
(BOB)
(MAYA)
(SARA)
(DAVID)
(MAGGY)
(ROBERT)
(SYAM)
(MARY)
(SARAN)
(STACY)
(KELLY)
Here in this chapter, we will see how to run Apache Pig scripts in batch mode.
While writing a script in a file, we can include comments in it as shown below.
We will begin the multi-line comments with '/*', end them with '*/'.
/* These are the multi-line comments
In the pig script */
We will begin the single-line comments with '--'.
--we can write single line comments like this.
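Putting the two comment styles together, a script file (a hypothetical sample_limit.pig) might look like the sketch below.
/* sample_limit.pig
   loads student data and keeps only four records */
student = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
   as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
-- keep only the first four tuples
student_limit = LIMIT student 4;
Dump student_limit;   -- print the result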
While executing Apache Pig statements in batch mode, follow the steps given below.
Write all the required Pig Latin statements in a single file. We can write all the Pig Latin statements and commands in a single file and save it as .pig file.
Execute the Apache Pig script. You can execute the Pig script from the shell (Linux) as shown below.
$ pig -x local Sample_script.pig
You can execute it from the Grunt shell as well using the exec command as shown below.
grunt> exec /sample_script.pig
We can also execute a Pig script that resides in the HDFS. Suppose there is a Pig script with the name Sample_script.pig in the HDFS directory named /pig_data/. We can execute it as shown below.
$ pig -x mapreduce hdfs://localhost:9000/pig_data/Sample_script.pig
Assume we have a file student_details.txt in HDFS with the following content.
student_details.txt
001,Rajiv,Reddy,21,9848022337,Hyderabad
002,siddarth,Battacharya,22,9848022338,Kolkata
003,Rajesh,Khanna,22,9848022339,Delhi
004,Preethi,Agarwal,21,9848022330,Pune
005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar
006,Archana,Mishra,23,9848022335,Chennai
007,Komal,Nayak,24,9848022334,trivendram
008,Bharathi,Nambiayar,24,9848022333,Chennai
We also have a sample script with the name sample_script.pig, in the same HDFS directory. This file contains statements performing operations and transformations on the student relation, as shown below.
student = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')
as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);
student_order = ORDER student BY age DESC;
student_limit = LIMIT student_order 4;
Dump student_limit;
The first statement of the script will load the data in the file named student_details.txt as a relation named student.
The second statement of the script will arrange the tuples of the relation in descending order, based on age, and store it as student_order.
The third statement of the script will store the first 4 tuples of student_order as student_limit.
Finally the fourth statement will dump the content of the relation student_limit.
Let us now execute the sample_script.pig as shown below.
$./pig -x mapreduce hdfs://localhost:9000/pig_data/sample_script.pig
Apache Pig gets executed and gives you the output with the following content.
(7,Komal,Nayak,24,9848022334,trivendram)
(8,Bharathi,Nambiayar,24,9848022333,Chennai)
(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,23,9848022335,Chennai)
2015-10-19 10:31:27,446 [main] INFO org.apache.pig.Main - Pig script completed in 12
minutes, 32 seconds and 751 milliseconds (752751 ms)
},
{
"code": null,
"e": 5363,
"s": 5260,
"text": "Rich set of operators − It provides many operators to perform operations like join, sort, filer, etc."
},
{
"code": null,
"e": 5474,
"s": 5363,
"text": "Ease of programming − Pig Latin is similar to SQL and it is easy to write a Pig script if you are good at SQL."
},
{
"code": null,
"e": 5585,
"s": 5474,
"text": "Ease of programming − Pig Latin is similar to SQL and it is easy to write a Pig script if you are good at SQL."
},
{
"code": null,
"e": 5746,
"s": 5585,
"text": "Optimization opportunities − The tasks in Apache Pig optimize their execution automatically, so the programmers need to focus only on semantics of the language."
},
{
"code": null,
"e": 5907,
"s": 5746,
"text": "Optimization opportunities − The tasks in Apache Pig optimize their execution automatically, so the programmers need to focus only on semantics of the language."
},
{
"code": null,
"e": 6025,
"s": 5907,
"text": "Extensibility − Using the existing operators, users can develop their own functions to read, process, and write data."
},
{
"code": null,
"e": 6143,
"s": 6025,
"text": "Extensibility − Using the existing operators, users can develop their own functions to read, process, and write data."
},
{
"code": null,
"e": 6295,
"s": 6143,
"text": "UDF’s − Pig provides the facility to create User-defined Functions in other programming languages such as Java and invoke or embed them in Pig Scripts."
},
{
"code": null,
"e": 6447,
"s": 6295,
"text": "UDF’s − Pig provides the facility to create User-defined Functions in other programming languages such as Java and invoke or embed them in Pig Scripts."
},
{
"code": null,
"e": 6586,
"s": 6447,
"text": "Handles all kinds of data − Apache Pig analyzes all kinds of data, both structured as well as unstructured. It stores the results in HDFS."
},
{
"code": null,
"e": 6725,
"s": 6586,
"text": "Handles all kinds of data − Apache Pig analyzes all kinds of data, both structured as well as unstructured. It stores the results in HDFS."
},
{
"code": null,
"e": 6798,
"s": 6725,
"text": "Listed below are the major differences between Apache Pig and MapReduce."
},
{
"code": null,
"e": 6865,
"s": 6798,
"text": "Listed below are the major differences between Apache Pig and SQL."
},
{
"code": null,
"e": 6918,
"s": 6865,
"text": "In addition to above differences, Apache Pig Latin −"
},
{
"code": null,
"e": 6949,
"s": 6918,
"text": "Allows splits in the pipeline."
},
{
"code": null,
"e": 7007,
"s": 6949,
"text": "Allows developers to store data anywhere in the pipeline."
},
{
"code": null,
"e": 7033,
"s": 7007,
"text": "Declares execution plans."
},
{
"code": null,
"e": 7109,
"s": 7033,
"text": "Provides operators to perform ETL (Extract, Transform, and Load) functions."
},
{
"code": null,
"e": 7345,
"s": 7109,
"text": "Both Apache Pig and Hive are used to create MapReduce jobs. And in some cases, Hive operates on HDFS in a similar way Apache Pig does. In the following table, we have listed a few significant points that set Apache Pig apart from Hive."
},
{
"code": null,
"e": 7486,
"s": 7345,
"text": "Apache Pig is generally used by data scientists for performing tasks involving ad-hoc processing and quick prototyping. Apache Pig is used −"
},
{
"code": null,
"e": 7533,
"s": 7486,
"text": "To process huge data sources such as web logs."
},
{
"code": null,
"e": 7582,
"s": 7533,
"text": "To perform data processing for search platforms."
},
{
"code": null,
"e": 7620,
"s": 7582,
"text": "To process time sensitive data loads."
},
{
"code": null,
"e": 7924,
"s": 7620,
"text": "In 2006, Apache Pig was developed as a research project at Yahoo, especially to create and execute MapReduce jobs on every dataset. In 2007, Apache Pig was open sourced via Apache incubator. In 2008, the first release of Apache Pig came out. In 2010, Apache Pig graduated as an Apache top-level project."
},
{
"code": null,
"e": 8141,
"s": 7924,
"text": "The language used to analyze data in Hadoop using Pig is known as Pig Latin. It is a highlevel data processing language which provides a rich set of data types and operators to perform various operations on the data."
},
{
"code": null,
"e": 8484,
"s": 8141,
"text": "To perform a particular task Programmers using Pig, programmers need to write a Pig script using the Pig Latin language, and execute them using any of the execution mechanisms (Grunt Shell, UDFs, Embedded). After execution, these scripts will go through a series of transformations applied by the Pig Framework, to produce the desired output."
},
{
"code": null,
"e": 8656,
"s": 8484,
"text": "Internally, Apache Pig converts these scripts into a series of MapReduce jobs, and thus, it makes the programmer’s job easy. The architecture of Apache Pig is shown below."
},
{
"code": null,
"e": 8782,
"s": 8656,
"text": "As shown in the figure, there are various components in the Apache Pig framework. Let us take a look at the major components."
},
{
"code": null,
"e": 9053,
"s": 8782,
"text": "Initially the Pig Scripts are handled by the Parser. It checks the syntax of the script, does type checking, and other miscellaneous checks. The output of the parser will be a DAG (directed acyclic graph), which represents the Pig Latin statements and logical operators."
},
{
"code": null,
"e": 9175,
"s": 9053,
"text": "In the DAG, the logical operators of the script are represented as the nodes and the data flows are represented as edges."
},
{
"code": null,
"e": 9311,
"s": 9175,
"text": "The logical plan (DAG) is passed to the logical optimizer, which carries out the logical optimizations such as projection and pushdown."
},
{
"code": null,
"e": 9393,
"s": 9311,
"text": "The compiler compiles the optimized logical plan into a series of MapReduce jobs."
},
{
"code": null,
"e": 9547,
"s": 9393,
"text": "Finally the MapReduce jobs are submitted to Hadoop in a sorted order. Finally, these MapReduce jobs are executed on Hadoop producing the desired results."
},
{
"code": null,
"e": 9733,
"s": 9547,
"text": "The data model of Pig Latin is fully nested and it allows complex non-atomic datatypes such as map and tuple. Given below is the diagrammatical representation of Pig Latin’s data model."
},
{
"code": null,
"e": 10022,
"s": 9733,
"text": "Any single value in Pig Latin, irrespective of their data, type is known as an Atom. It is stored as string and can be used as string and number. int, long, float, double, chararray, and bytearray are the atomic values of Pig. A piece of data or a simple atomic value is known as a field."
},
{
"code": null,
"e": 10047,
"s": 10022,
"text": "Example − ‘raja’ or ‘30’"
},
{
"code": null,
"e": 10200,
"s": 10047,
"text": "A record that is formed by an ordered set of fields is known as a tuple, the fields can be of any type. A tuple is similar to a row in a table of RDBMS."
},
{
"code": null,
"e": 10221,
"s": 10200,
"text": "Example − (Raja, 30)"
},
{
"code": null,
"e": 10623,
"s": 10221,
"text": "A bag is an unordered set of tuples. In other words, a collection of tuples (non-unique) is known as a bag. Each tuple can have any number of fields (flexible schema). A bag is represented by ‘{}’. It is similar to a table in RDBMS, but unlike a table in RDBMS, it is not necessary that every tuple contain the same number of fields or that the fields in the same position (column) have the same type."
},
{
"code": null,
"e": 10662,
"s": 10623,
"text": "Example − {(Raja, 30), (Mohammad, 45)}"
},
{
"code": null,
"e": 10741,
"s": 10662,
"text": "A bag can be a field in a relation; in that context, it is known as inner bag."
},
{
"code": null,
"e": 10793,
"s": 10741,
"text": "Example − {Raja, 30, {9848022338, [email protected],}}"
},
{
"code": null,
"e": 10960,
"s": 10793,
"text": "A map (or data map) is a set of key-value pairs. The key needs to be of type chararray and should be unique. The value might be of any type. It is represented by ‘[]’"
},
{
"code": null,
"e": 10990,
"s": 10960,
"text": "Example − [name#Raja, age#30]"
},
{
"code": null,
"e": 11138,
"s": 10990,
"text": "A relation is a bag of tuples. The relations in Pig Latin are unordered (there is no guarantee that tuples are processed in any particular order)."
},
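{
"code": null,
"e": null,
"s": null,
"text": "As a minimal illustration (the file name student.txt and its fields are assumed here only for the example), the following LOAD statement creates a relation named student, which is a bag of tuples."
},
{
"code": null,
"e": null,
"s": null,
"text": "student = LOAD 'student.txt' USING PigStorage(',') AS (name:chararray, age:int);"
},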
{
"code": null,
"e": 11228,
"s": 11138,
"text": "This chapter explains the how to download, install, and set up Apache Pig in your system."
},
{
"code": null,
"e": 11449,
"s": 11228,
"text": "It is essential that you have Hadoop and Java installed on your system before you go for Apache Pig. Therefore, prior to installing Apache Pig, install Hadoop and Java by following the steps given in the following link −"
},
{
"code": null,
"e": 11515,
"s": 11449,
"text": "http://www.tutorialspoint.com/hadoop/hadoop_enviornment_setup.htm"
},
{
"code": null,
"e": 11624,
"s": 11515,
"text": "First of all, download the latest version of Apache Pig from the following website − https://pig.apache.org/"
},
{
"code": null,
"e": 11756,
"s": 11624,
"text": "Open the homepage of Apache Pig website. Under the section News, click on the link release page as shown in the following snapshot."
},
{
"code": null,
"e": 12063,
"s": 11756,
"text": "On clicking the specified link, you will be redirected to the Apache Pig Releases page. On this page, under the Download section, you will have two links, namely, Pig 0.8 and later and Pig 0.7 and before. Click on the link Pig 0.8 and later, then you will be redirected to the page having a set of mirrors."
},
{
"code": null,
"e": 12121,
"s": 12063,
"text": "Choose and click any one of these mirrors as shown below."
},
{
"code": null,
"e": 12263,
"s": 12121,
"text": "These mirrors will take you to the Pig Releases page. This page contains various versions of Apache Pig. Click the latest version among them."
},
{
"code": null,
"e": 12485,
"s": 12263,
"text": "Within these folders, you will have the source and binary files of Apache Pig in various distributions. Download the tar files of the source and binary files of Apache Pig 0.15, pig0.15.0-src.tar.gz and pig-0.15.0.tar.gz."
},
{
"code": null,
"e": 12601,
"s": 12485,
"text": "After downloading the Apache Pig software, install it in your Linux environment by following the steps given below."
},
{
"code": null,
"e": 12826,
"s": 12601,
"text": "Create a directory with the name Pig in the same directory where the installation directories of Hadoop, Java, and other software were installed. (In our tutorial, we have created the Pig directory in the user named Hadoop)."
},
{
"code": null,
"e": 12838,
"s": 12826,
"text": "$ mkdir Pig"
},
{
"code": null,
"e": 12887,
"s": 12838,
"text": "Extract the downloaded tar files as shown below."
},
{
"code": null,
"e": 12968,
"s": 12887,
"text": "$ cd Downloads/ \n$ tar zxvf pig-0.15.0-src.tar.gz \n$ tar zxvf pig-0.15.0.tar.gz "
},
{
"code": null,
"e": 13068,
"s": 12968,
"text": "Move the content of pig-0.15.0-src.tar.gz file to the Pig directory created earlier as shown below."
},
{
"code": null,
"e": 13115,
"s": 13068,
"text": "$ mv pig-0.15.0-src.tar.gz/* /home/Hadoop/Pig/"
},
{
"code": null,
"e": 13238,
"s": 13115,
"text": "After installing Apache Pig, we have to configure it. To configure, we need to edit two files − bashrc and pig.properties."
},
{
"code": null,
"e": 13289,
"s": 13238,
"text": "In the .bashrc file, set the following variables −"
},
{
"code": null,
"e": 13346,
"s": 13289,
"text": "PIG_HOME folder to the Apache Pig’s installation folder,"
},
{
"code": null,
"e": 13452,
"s": 13403,
"text": "PATH environment variable to the bin folder, and"
},
{
"code": null,
"e": 13689,
"s": 13501,
"text": "PIG_CLASSPATH environment variable to the etc (configuration) folder of your Hadoop installations (the directory that contains the core-site.xml, hdfs-site.xml and mapred-site.xml files)."
},
{
"code": null,
"e": 13996,
"s": 13877,
"text": "export PIG_HOME = /home/Hadoop/Pig\nexport PATH = $PATH:/home/Hadoop/pig/bin\nexport PIG_CLASSPATH = $HADOOP_HOME/conf\n"
},
{
"code": null,
"e": 14135,
"s": 13996,
"text": "In the conf folder of Pig, we have a file named pig.properties. In the pig.properties file, you can set various parameters as given below."
},
{
"code": null,
"e": 14155,
"s": 14135,
"text": "pig -h properties \n"
},
{
"code": null,
"e": 14196,
"s": 14155,
"text": "The following properties are supported −"
},
{
"code": null,
"e": 16999,
"s": 14196,
"text": "Logging: verbose = true|false; default is false. This property is the same as -v\n switch brief=true|false; default is false. This property is the same \n as -b switch debug=OFF|ERROR|WARN|INFO|DEBUG; default is INFO. \n This property is the same as -d switch aggregate.warning = true|false; default is true. \n If true, prints count of warnings of each type rather than logging each warning.\t\t \n\t\t \nPerformance tuning: pig.cachedbag.memusage=<mem fraction>; default is 0.2 (20% of all memory).\n Note that this memory is shared across all large bags used by the application. \n pig.skewedjoin.reduce.memusagea=<mem fraction>; default is 0.3 (30% of all memory).\n Specifies the fraction of heap available for the reducer to perform the join.\n pig.exec.nocombiner = true|false; default is false.\n Only disable combiner as a temporary workaround for problems. \n opt.multiquery = true|false; multiquery is on by default.\n Only disable multiquery as a temporary workaround for problems.\n opt.fetch=true|false; fetch is on by default.\n Scripts containing Filter, Foreach, Limit, Stream, and Union can be dumped without MR jobs. \n pig.tmpfilecompression = true|false; compression is off by default. \n Determines whether output of intermediate jobs is compressed. \n pig.tmpfilecompression.codec = lzo|gzip; default is gzip.\n Used in conjunction with pig.tmpfilecompression. Defines compression type. \n pig.noSplitCombination = true|false. Split combination is on by default.\n Determines if multiple small files are combined into a single map. \n\t\t\t \n pig.exec.mapPartAgg = true|false. Default is false. \n Determines if partial aggregation is done within map phase, before records are sent to combiner. \n pig.exec.mapPartAgg.minReduction=<min aggregation factor>. Default is 10. \n If the in-map partial aggregation does not reduce the output num records by this factor, it gets disabled.\n\t\t\t \nMiscellaneous: exectype = mapreduce|tez|local; default is mapreduce. This property is the same as -x switch\n pig.additional.jars.uris=<comma seperated list of jars>. Used in place of register command.\n udf.import.list=<comma seperated list of imports>. Used to avoid package names in UDF.\n stop.on.failure = true|false; default is false. Set to true to terminate on the first error. \n pig.datetime.default.tz=<UTC time offset>. e.g. +08:00. Default is the default timezone of the host.\n Determines the timezone used to handle datetime datatype and UDFs.\nAdditionally, any Hadoop property can be specified.\n"
},
{
"code": null,
"e": 17158,
"s": 16999,
"text": "Verify the installation of Apache Pig by typing the version command. If the installation is successful, you will get the version of Apache Pig as shown below."
},
{
"code": null,
"e": 17247,
"s": 17158,
"text": "$ pig –version \n \nApache Pig version 0.15.0 (r1682971) \ncompiled Jun 01 2015, 11:44:35\n"
},
{
"code": null,
"e": 17372,
"s": 17247,
"text": "In the previous chapter, we explained how to install Apache Pig. In this chapter, we will discuss how to execute Apache Pig."
},
{
"code": null,
"e": 17443,
"s": 17372,
"text": "You can run Apache Pig in two modes, namely, Local Mode and HDFS mode."
},
{
"code": null,
"e": 17622,
"s": 17443,
"text": "In this mode, all the files are installed and run from your local host and local file system. There is no need of Hadoop or HDFS. This mode is generally used for testing purpose."
},
{
"code": null,
"e": 17931,
"s": 17622,
"text": "MapReduce mode is where we load or process the data that exists in the Hadoop File System (HDFS) using Apache Pig. In this mode, whenever we execute the Pig Latin statements to process the data, a MapReduce job is invoked in the back-end to perform a particular operation on the data that exists in the HDFS."
},
{
"code": null,
"e": 18038,
"s": 17931,
"text": "Apache Pig scripts can be executed in three ways, namely, interactive mode, batch mode, and embedded mode."
},
{
"code": null,
"e": 18233,
"s": 18038,
"text": "Interactive Mode (Grunt shell) − You can run Apache Pig in interactive mode using the Grunt shell. In this shell, you can enter the Pig Latin statements and get the output (using Dump operator)."
},
{
"code": null,
"e": 18557,
"s": 18428,
"text": "Batch Mode (Script) − You can run Apache Pig in Batch mode by writing the Pig Latin script in a single file with .pig extension."
},
{
"code": null,
"e": 18866,
"s": 18686,
"text": "Embedded Mode (UDF) − Apache Pig provides the provision of defining our own functions (User Defined Functions) in programming languages such as Java, and using them in our script."
},
{
"code": null,
"e": 19149,
"s": 19046,
"text": "You can invoke the Grunt shell in a desired mode (local/MapReduce) using the −x option as shown below."
},
{
"code": null,
"e": 19159,
"s": 19149,
"text": "Command −"
},
{
"code": null,
"e": 19176,
"s": 19159,
"text": "$ ./pig –x local"
},
{
"code": null,
"e": 19186,
"s": 19176,
"text": "Command −"
},
{
"code": null,
"e": 19207,
"s": 19186,
"text": "$ ./pig -x mapreduce"
},
{
"code": null,
"e": 19216,
"s": 19207,
"text": "Output −"
},
{
"code": null,
"e": 19299,
"s": 19225,
"text": "Either of these commands gives you the Grunt shell prompt as shown below."
},
{
"code": null,
"e": 19306,
"s": 19299,
"text": "grunt>"
},
{
"code": null,
"e": 19353,
"s": 19306,
"text": "You can exit the Grunt shell using ‘ctrl + d’."
},
{
"code": null,
"e": 19467,
"s": 19353,
"text": "After invoking the Grunt shell, you can execute a Pig script by directly entering the Pig Latin statements in it."
},
{
"code": null,
"e": 19530,
"s": 19467,
"text": "grunt> customers = LOAD 'customers.txt' USING PigStorage(',');"
},
{
"code": null,
"e": 19705,
"s": 19530,
"text": "You can write an entire Pig Latin script in a file and execute it using the –x command. Let us suppose we have a Pig script in a file named sample_script.pig as shown below."
},
{
"code": null,
"e": 19850,
"s": 19705,
"text": "student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING\n PigStorage(',') as (id:int,name:chararray,city:chararray);\n \nDump student;"
},
{
"code": null,
"e": 19916,
"s": 19850,
"text": "Now, you can execute the script in the above file as shown below."
},
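{
"code": null,
"e": null,
"s": null,
"text": "For instance, the script can be submitted with the pig command, choosing the execution mode with the -x option (the paths shown here are illustrative)."
},
{
"code": null,
"e": null,
"s": null,
"text": "$ pig -x local sample_script.pig\n$ pig -x mapreduce sample_script.pig"
},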
{
"code": null,
"e": 20031,
"s": 19916,
"text": "Note − We will discuss in detail how to run a Pig script in Bach mode and in embedded mode in subsequent chapters."
},
{
"code": null,
"e": 20290,
"s": 20031,
"text": "After invoking the Grunt shell, you can run your Pig scripts in the shell. In addition to that, there are certain useful shell and utility commands provided by the Grunt shell. This chapter explains the shell and utility commands provided by the Grunt shell."
},
{
"code": null,
"e": 20442,
"s": 20290,
"text": "Note − In some portions of this chapter, the commands like Load and Store are used. Refer the respective chapters to get in-detail information on them."
},
{
"code": null,
"e": 20580,
"s": 20442,
"text": "The Grunt shell of Apache Pig is mainly used to write Pig Latin scripts. Prior to that, we can invoke any shell commands using sh and fs."
},
{
"code": null,
"e": 20775,
"s": 20580,
"text": "Using sh command, we can invoke any shell commands from the Grunt shell. Using sh command from the Grunt shell, we cannot execute the commands that are a part of the shell environment (ex − cd)."
},
{
"code": null,
"e": 20782,
"s": 20775,
"text": "Syntax"
},
{
"code": null,
"e": 20823,
"s": 20782,
"text": "Given below is the syntax of sh command."
},
{
"code": null,
"e": 20859,
"s": 20823,
"text": "grunt> sh shell command parameters\n"
},
{
"code": null,
"e": 20867,
"s": 20859,
"text": "Example"
},
{
"code": null,
"e": 21036,
"s": 20867,
"text": "We can invoke the ls command of Linux shell from the Grunt shell using the sh option as shown below. In this example, it lists out the files in the /pig/bin/ directory."
},
{
"code": null,
"e": 21098,
"s": 21036,
"text": "grunt> sh ls\n \npig \npig_1444799121955.log \npig.cmd \npig.py\n"
},
{
"code": null,
"e": 21177,
"s": 21098,
"text": "Using the fs command, we can invoke any FsShell commands from the Grunt shell."
},
{
"code": null,
"e": 21184,
"s": 21177,
"text": "Syntax"
},
{
"code": null,
"e": 21225,
"s": 21184,
"text": "Given below is the syntax of fs command."
},
{
"code": null,
"e": 21267,
"s": 21225,
"text": "grunt> sh File System command parameters\n"
},
{
"code": null,
"e": 21275,
"s": 21267,
"text": "Example"
},
{
"code": null,
"e": 21424,
"s": 21275,
"text": "We can invoke the ls command of HDFS from the Grunt shell using fs command. In the following example, it lists the files in the HDFS root directory."
},
{
"code": null,
"e": 21670,
"s": 21424,
"text": "grunt> fs –ls\n \nFound 3 items\ndrwxrwxrwx - Hadoop supergroup 0 2015-09-08 14:13 Hbase\ndrwxr-xr-x - Hadoop supergroup 0 2015-09-09 14:52 seqgen_data\ndrwxr-xr-x - Hadoop supergroup 0 2015-09-08 11:30 twitter_data\n"
},
{
"code": null,
"e": 21785,
"s": 21670,
"text": "In the same way, we can invoke all the other file system shell commands from the Grunt shell using the fs command."
},
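{
"code": null,
"e": null,
"s": null,
"text": "For instance, a directory can be created in HDFS directly from the Grunt shell as sketched below (the directory name /pig_data is only an example)."
},
{
"code": null,
"e": null,
"s": null,
"text": "grunt> fs -mkdir /pig_data"
},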
{
"code": null,
"e": 22075,
"s": 21785,
"text": "The Grunt shell provides a set of utility commands. These include utility commands such as clear, help, history, quit, and set; and commands such as exec, kill, and run to control Pig from the Grunt shell. Given below is the description of the utility commands provided by the Grunt shell."
},
{
"code": null,
"e": 22141,
"s": 22075,
"text": "The clear command is used to clear the screen of the Grunt shell."
},
{
"code": null,
"e": 22148,
"s": 22141,
"text": "Syntax"
},
{
"code": null,
"e": 22232,
"s": 22148,
"text": "You can clear the screen of the grunt shell using the clear command as shown below."
},
{
"code": null,
"e": 22246,
"s": 22232,
"text": "grunt> clear\n"
},
{
"code": null,
"e": 22315,
"s": 22246,
"text": "The help command gives you a list of Pig commands or Pig properties."
},
{
"code": null,
"e": 22321,
"s": 22315,
"text": "Usage"
},
{
"code": null,
"e": 22395,
"s": 22321,
"text": "You can get a list of Pig commands using the help command as shown below."
},
{
"code": null,
"e": 25163,
"s": 22395,
"text": "grunt> help\n\nCommands: <pig latin statement>; - See the PigLatin manual for details:\nhttp://hadoop.apache.org/pig\n \nFile system commands:fs <fs arguments> - Equivalent to Hadoop dfs command:\nhttp://hadoop.apache.org/common/docs/current/hdfs_shell.html\n\t \nDiagnostic Commands:describe <alias>[::<alias] - Show the schema for the alias.\nInner aliases can be described as A::B.\n explain [-script <pigscript>] [-out <path>] [-brief] [-dot|-xml] \n [-param <param_name>=<pCram_value>]\n [-param_file <file_name>] [<alias>] - \n Show the execution plan to compute the alias or for entire script.\n -script - Explain the entire script.\n -out - Store the output into directory rather than print to stdout.\n -brief - Don't expand nested plans (presenting a smaller graph for overview).\n -dot - Generate the output in .dot format. Default is text format.\n -xml - Generate the output in .xml format. Default is text format.\n -param <param_name - See parameter substitution for details.\n -param_file <file_name> - See parameter substitution for details.\n alias - Alias to explain.\n dump <alias> - Compute the alias and writes the results to stdout.\n\nUtility Commands: exec [-param <param_name>=param_value] [-param_file <file_name>] <script> -\n Execute the script with access to grunt environment including aliases.\n -param <param_name - See parameter substitution for details.\n -param_file <file_name> - See parameter substitution for details.\n script - Script to be executed.\n run [-param <param_name>=param_value] [-param_file <file_name>] <script> -\n Execute the script with access to grunt environment.\n\t\t -param <param_name - See parameter substitution for details. \n -param_file <file_name> - See parameter substitution for details.\n script - Script to be executed.\n sh <shell command> - Invoke a shell command.\n kill <job_id> - Kill the hadoop job specified by the hadoop job id.\n set <key> <value> - Provide execution parameters to Pig. Keys and values are case sensitive.\n The following keys are supported:\n default_parallel - Script-level reduce parallelism. Basic input size heuristics used \n by default.\n debug - Set debug on or off. Default is off.\n job.name - Single-quoted name for jobs. Default is PigLatin:<script name> \n job.priority - Priority for jobs. Values: very_low, low, normal, high, very_high.\n Default is normal stream.skippath - String that contains the path.\n This is used by streaming any hadoop property.\n help - Display this message.\n history [-n] - Display the list statements in cache.\n -n Hide line numbers.\n quit - Quit the grunt shell. \n"
},
{
"code": null,
"e": 25262,
"s": 25163,
"text": "This command displays a list of statements executed / used so far since the Grunt sell is invoked."
},
{
"code": null,
"e": 25268,
"s": 25262,
"text": "Usage"
},
{
"code": null,
"e": 25340,
"s": 25268,
"text": "Assume we have executed three statements since opening the Grunt shell."
},
{
"code": null,
"e": 25618,
"s": 25340,
"text": "grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',');\n \ngrunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',');\n \ngrunt> student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',');\n "
},
{
"code": null,
"e": 25685,
"s": 25618,
"text": "Then, using the history command will produce the following output."
},
{
"code": null,
"e": 25963,
"s": 25685,
"text": "grunt> history\n\ncustomers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(','); \n \norders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',');\n \nstudent = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',');\n \n"
},
{
"code": null,
"e": 26030,
"s": 25963,
"text": "The set command is used to show/assign values to keys used in Pig."
},
{
"code": null,
"e": 26036,
"s": 26030,
"text": "Usage"
},
{
"code": null,
"e": 26098,
"s": 26036,
"text": "Using this command, you can set values to the following keys."
},
{
"code": null,
"e": 26189,
"s": 26098,
"text": "You can set the job priority to a job by passing one of the following values to this key −"
},
{
"code": null,
"e": 26198,
"s": 26189,
"text": "very_low"
},
{
"code": null,
"e": 26202,
"s": 26198,
"text": "low"
},
{
"code": null,
"e": 26209,
"s": 26202,
"text": "normal"
},
{
"code": null,
"e": 26214,
"s": 26209,
"text": "high"
},
{
"code": null,
"e": 26224,
"s": 26214,
"text": "very_high"
},
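{
"code": null,
"e": null,
"s": null,
"text": "A minimal usage sketch is given below; the key job.priority and the value high are taken from the help listing shown earlier."
},
{
"code": null,
"e": null,
"s": null,
"text": "grunt> set job.priority high"
},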
{
"code": null,
"e": 26278,
"s": 26224,
"text": "You can quit from the Grunt shell using this command."
},
{
"code": null,
"e": 26284,
"s": 26278,
"text": "Usage"
},
{
"code": null,
"e": 26326,
"s": 26284,
"text": "Quit from the Grunt shell as shown below."
},
{
"code": null,
"e": 26339,
"s": 26326,
"text": "grunt> quit\n"
},
{
"code": null,
"e": 26439,
"s": 26339,
"text": "Let us now take a look at the commands using which you can control Apache Pig from the Grunt shell."
},
{
"code": null,
"e": 26512,
"s": 26439,
"text": "Using the exec command, we can execute Pig scripts from the Grunt shell."
},
{
"code": null,
"e": 26519,
"s": 26512,
"text": "Syntax"
},
{
"code": null,
"e": 26574,
"s": 26519,
"text": "Given below is the syntax of the utility command exec."
},
{
"code": null,
"e": 26654,
"s": 26574,
"text": "grunt> exec [–param param_name = param_value] [–param_file file_name] [script]\n"
},
{
"code": null,
"e": 26662,
"s": 26654,
"text": "Example"
},
{
"code": null,
"e": 26774,
"s": 26662,
"text": "Let us assume there is a file named student.txt in the /pig_data/ directory of HDFS with the following content."
},
{
"code": null,
"e": 26786,
"s": 26774,
"text": "Student.txt"
},
{
"code": null,
"e": 26845,
"s": 26786,
"text": "001,Rajiv,Hyderabad\n002,siddarth,Kolkata\n003,Rajesh,Delhi\n"
},
{
"code": null,
"e": 26967,
"s": 26845,
"text": "And, assume we have a script file named sample_script.pig in the /pig_data/ directory of HDFS with the following content."
},
{
"code": null,
"e": 26985,
"s": 26967,
"text": "Sample_script.pig"
},
{
"code": null,
"e": 27131,
"s": 26985,
"text": "student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING PigStorage(',') \n as (id:int,name:chararray,city:chararray);\n \nDump student;"
},
{
"code": null,
"e": 27228,
"s": 27131,
"text": "Now, let us execute the above script from the Grunt shell using the exec command as shown below."
},
{
"code": null,
"e": 27260,
"s": 27228,
"text": "grunt> exec /sample_script.pig\n"
},
{
"code": null,
"e": 27267,
"s": 27260,
"text": "Output"
},
{
"code": null,
"e": 27476,
"s": 27267,
"text": "The exec command executes the script in the sample_script.pig. As directed in the script, it loads the student.txt file into Pig and gives you the result of the Dump operator displaying the following content."
},
{
"code": null,
"e": 27536,
"s": 27476,
"text": "(1,Rajiv,Hyderabad)\n(2,siddarth,Kolkata)\n(3,Rajesh,Delhi) \n"
},
{
"code": null,
"e": 27596,
"s": 27536,
"text": "You can kill a job from the Grunt shell using this command."
},
{
"code": null,
"e": 27603,
"s": 27596,
"text": "Syntax"
},
{
"code": null,
"e": 27650,
"s": 27603,
"text": "Given below is the syntax of the kill command."
},
{
"code": null,
"e": 27669,
"s": 27650,
"text": "grunt> kill JobId\n"
},
{
"code": null,
"e": 27677,
"s": 27669,
"text": "Example"
},
{
"code": null,
"e": 27808,
"s": 27677,
"text": "Suppose there is a running Pig job having id Id_0055, you can kill it from the Grunt shell using the kill command, as shown below."
},
{
"code": null,
"e": 27829,
"s": 27808,
"text": "grunt> kill Id_0055\n"
},
{
"code": null,
"e": 27897,
"s": 27829,
"text": "You can run a Pig script from the Grunt shell using the run command"
},
{
"code": null,
"e": 27904,
"s": 27897,
"text": "Syntax"
},
{
"code": null,
"e": 27950,
"s": 27904,
"text": "Given below is the syntax of the run command."
},
{
"code": null,
"e": 28027,
"s": 27950,
"text": "grunt> run [–param param_name = param_value] [–param_file file_name] script\n"
},
{
"code": null,
"e": 28035,
"s": 28027,
"text": "Example"
},
{
"code": null,
"e": 28147,
"s": 28035,
"text": "Let us assume there is a file named student.txt in the /pig_data/ directory of HDFS with the following content."
},
{
"code": null,
"e": 28159,
"s": 28147,
"text": "Student.txt"
},
{
"code": null,
"e": 28218,
"s": 28159,
"text": "001,Rajiv,Hyderabad\n002,siddarth,Kolkata\n003,Rajesh,Delhi\n"
},
{
"code": null,
"e": 28328,
"s": 28218,
"text": "And, assume we have a script file named sample_script.pig in the local filesystem with the following content."
},
{
"code": null,
"e": 28346,
"s": 28328,
"text": "Sample_script.pig"
},
{
"code": null,
"e": 28474,
"s": 28346,
"text": "student = LOAD 'hdfs://localhost:9000/pig_data/student.txt' USING\n PigStorage(',') as (id:int,name:chararray,city:chararray);"
},
{
"code": null,
"e": 28566,
"s": 28474,
"text": "Now, let us run the above script from the Grunt shell using the run command as shown below."
},
{
"code": null,
"e": 28597,
"s": 28566,
"text": "grunt> run /sample_script.pig\n"
},
{
"code": null,
"e": 28674,
"s": 28597,
"text": "You can see the output of the script using the Dump operator as shown below."
},
{
"code": null,
"e": 28747,
"s": 28674,
"text": "grunt> Dump;\n\n(1,Rajiv,Hyderabad)\n(2,siddarth,Kolkata)\n(3,Rajesh,Delhi)\n"
},
{
"code": null,
"e": 28894,
"s": 28747,
"text": "Note − The difference between exec and the run command is that if we use run, the statements from the script are available in the command history."
},
{
"code": null,
"e": 29132,
"s": 28894,
"text": "Pig Latin is the language used to analyze data in Hadoop using Apache Pig. In this chapter, we are going to discuss the basics of Pig Latin such as Pig Latin statements, data types, general and relational operators, and Pig Latin UDF’s."
},
{
"code": null,
"e": 29301,
"s": 29132,
"text": "As discussed in the previous chapters, the data model of Pig is fully nested. A Relation is the outermost structure of the Pig Latin data model. And it is a bag where −"
},
{
"code": null,
"e": 29334,
"s": 29301,
"text": "A bag is a collection of tuples."
},
{
"code": null,
"e": 29371,
"s": 29334,
"text": "A tuple is an ordered set of fields."
},
{
"code": null,
"e": 29399,
"s": 29371,
"text": "A field is a piece of data."
},
{
"code": null,
"e": 29475,
"s": 29399,
"text": "While processing data using Pig Latin, statements are the basic constructs."
},
{
"code": null,
"e": 29551,
"s": 29475,
"text": "These statements work with relations. They include expressions and schemas."
},
{
"code": null,
"e": 29670,
"s": 29627,
"text": "Every statement ends with a semicolon (;)."
},
{
"code": null,
"e": 29807,
"s": 29713,
"text": "We will perform various operations using operators provided by Pig Latin, through statements."
},
{
"code": null,
"e": 30049,
"s": 29901,
"text": "Except LOAD and STORE, while performing all other operations, Pig Latin statements take a relation as input and produce another relation as output."
},
{
"code": null,
"e": 30491,
"s": 30197,
"text": "As soon as you enter a Load statement in the Grunt shell, its semantic checking will be carried out. To see the contents of the schema, you need to use the Dump operator. Only after performing the dump operation, the MapReduce job for loading the data into the file system will be carried out."
},
{
"code": null,
"e": 30855,
"s": 30785,
"text": "Given below is a Pig Latin statement, which loads data to Apache Pig."
},
{
"code": null,
"e": 31015,
"s": 30855,
"text": "grunt> Student_data = LOAD 'student_data.txt' USING PigStorage(',')as \n ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );"
},
{
"code": null,
"e": 31069,
"s": 31015,
"text": "Given below table describes the Pig Latin data types."
},
{
"code": null,
"e": 31105,
"s": 31069,
"text": "Represents a signed 32-bit integer."
},
{
"code": null,
"e": 31117,
"s": 31105,
"text": "Example : 8"
},
{
"code": null,
"e": 31153,
"s": 31117,
"text": "Represents a signed 64-bit integer."
},
{
"code": null,
"e": 31166,
"s": 31153,
"text": "Example : 5L"
},
{
"code": null,
"e": 31209,
"s": 31166,
"text": "Represents a signed 32-bit floating point."
},
{
"code": null,
"e": 31224,
"s": 31209,
"text": "Example : 5.5F"
},
{
"code": null,
"e": 31260,
"s": 31224,
"text": "Represents a 64-bit floating point."
},
{
"code": null,
"e": 31275,
"s": 31260,
"text": "Example : 10.5"
},
{
"code": null,
"e": 31338,
"s": 31275,
"text": "Represents a character array (string) in Unicode UTF-8 format."
},
{
"code": null,
"e": 31366,
"s": 31338,
"text": "Example : ‘tutorials point’"
},
{
"code": null,
"e": 31398,
"s": 31366,
"text": "Represents a Byte array (blob)."
},
{
"code": null,
"e": 31426,
"s": 31398,
"text": "Represents a Boolean value."
},
{
"code": null,
"e": 31449,
"s": 31426,
"text": "Example : true/ false."
},
{
"code": null,
"e": 31473,
"s": 31449,
"text": "Represents a date-time."
},
{
"code": null,
"e": 31513,
"s": 31473,
"text": "Example : 1970-01-01T00:00:00.000+00:00"
},
{
"code": null,
"e": 31543,
"s": 31513,
"text": "Represents a Java BigInteger."
},
{
"code": null,
"e": 31565,
"s": 31543,
"text": "Example : 60708090709"
},
{
"code": null,
"e": 31595,
"s": 31565,
"text": "Represents a Java BigDecimal"
},
{
"code": null,
"e": 31627,
"s": 31595,
"text": "Example : 185.98376256272893883"
},
{
"code": null,
"e": 31664,
"s": 31627,
"text": "A tuple is an ordered set of fields."
},
{
"code": null,
"e": 31685,
"s": 31664,
"text": "Example : (raja, 30)"
},
{
"code": null,
"e": 31718,
"s": 31685,
"text": "A bag is a collection of tuples."
},
{
"code": null,
"e": 31755,
"s": 31718,
"text": "Example : {(raju,30),(Mohhammad,45)}"
},
{
"code": null,
"e": 31790,
"s": 31755,
"text": "A Map is a set of key-value pairs."
},
{
"code": null,
"e": 31827,
"s": 31790,
"text": "Example : [ ‘name’#’Raju’, ‘age’#30]"
},
{
"code": null,
"e": 31936,
"s": 31827,
"text": "Values for all the above data types can be NULL. Apache Pig treats null values in a similar way as SQL does."
},
{
"code": null,
"e": 32111,
"s": 31936,
"text": "A null can be an unknown value or a non-existent value. It is used as a placeholder for optional values. These nulls can occur naturally or can be the result of an operation."
},
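{
"code": null,
"e": null,
"s": null,
"text": "For example, null values can be tested with the is null and is not null operators. The sketch below assumes the student relation with a phone field, as used elsewhere in this tutorial."
},
{
"code": null,
"e": null,
"s": null,
"text": "grunt> valid_phones = FILTER student BY phone is not null;"
},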
{
"code": null,
"e": 32207,
"s": 32111,
"text": "The following table describes the arithmetic operators of Pig Latin. Suppose a = 10 and b = 20."
},
{
"code": null,
"e": 32261,
"s": 32207,
"text": "Addition − Adds values on either side of the operator"
},
{
"code": null,
"e": 32327,
"s": 32261,
"text": "Subtraction − Subtracts right hand operand from left hand operand"
},
{
"code": null,
"e": 32393,
"s": 32327,
"text": "Multiplication − Multiplies values on either side of the operator"
},
{
"code": null,
"e": 32452,
"s": 32393,
"text": "Division − Divides left hand operand by right hand operand"
},
{
"code": null,
"e": 32532,
"s": 32452,
"text": "Modulus − Divides left hand operand by right hand operand and returns remainder"
},
{
"code": null,
"e": 32613,
"s": 32532,
"text": "Bincond − Evaluates the Boolean operators. It has three operands as shown below."
},
{
"code": null,
"e": 32675,
"s": 32613,
"text": "variable x = (expression) ? value1 if true : value2 if false."
},
{
"code": null,
"e": 32697,
"s": 32675,
"text": "b = (a == 1)? 20: 30;"
},
{
"code": null,
"e": 32726,
"s": 32697,
"text": "if a=1 the value of b is 20."
},
{
"code": null,
"e": 32756,
"s": 32726,
"text": "if a!=1 the value of b is 30."
},
{
"code": null,
"e": 32780,
"s": 32756,
"text": "CASE WHEN THEN ELSE END"
},
{
"code": null,
"e": 32847,
"s": 32780,
"text": "Case − The case operator is equivalent to nested bincond operator."
},
{
"code": null,
"e": 32900,
"s": 32847,
"text": "CASE f2 % 2\n WHEN 0 THEN 'even'\n WHEN 1 THEN 'odd'\nEND"
},
{
"code": null,
"e": 32969,
"s": 32900,
"text": "The following table describes the comparison operators of Pig Latin."
},
{
"code": null,
"e": 33073,
"s": 32969,
"text": "Equal − Checks if the values of two operands are equal or not; if yes, then the condition becomes true."
},
{
"code": null,
"e": 33198,
"s": 33073,
"text": "Not Equal − Checks if the values of two operands are equal or not. If the values are not equal, then condition becomes true."
},
{
"code": null,
"e": 33342,
"s": 33198,
"text": "Greater than − Checks if the value of the left operand is greater than the value of the right operand. If yes, then the condition becomes true."
},
{
"code": null,
"e": 33480,
"s": 33342,
"text": "Less than − Checks if the value of the left operand is less than the value of the right operand. If yes, then the condition becomes true."
},
{
"code": null,
"e": 33648,
"s": 33480,
"text": "Greater than or equal to − Checks if the value of the left operand is greater than or equal to the value of the right operand. If yes, then the condition becomes true."
},
{
"code": null,
"e": 33810,
"s": 33648,
"text": "Less than or equal to − Checks if the value of the left operand is less than or equal to the value of the right operand. If yes, then the condition becomes true."
},
{
"code": null,
"e": 33927,
"s": 33810,
"text": "Pattern matching − Checks whether the string in the left-hand side matches with the constant in the right-hand side."
},
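{
"code": null,
"e": null,
"s": null,
"text": "Given below is a minimal sketch of these operators in use, assuming the student relation (with firstname and city fields) loaded earlier in this tutorial."
},
{
"code": null,
"e": null,
"s": null,
"text": "grunt> delhi_students = FILTER student BY city == 'Delhi';\ngrunt> r_names = FILTER student BY firstname matches 'R.*';"
},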
{
"code": null,
"e": 34003,
"s": 33927,
"text": "The following table describes the Type construction operators of Pig Latin."
},
{
"code": null,
"e": 34076,
"s": 34003,
"text": "Tuple constructor operator − This operator is used to construct a tuple."
},
{
"code": null,
"e": 34145,
"s": 34076,
"text": "Bag constructor operator − This operator is used to construct a bag."
},
{
"code": null,
"e": 34216,
"s": 34145,
"text": "Map constructor operator − This operator is used to construct a tuple."
},
{
"code": null,
"e": 34285,
"s": 34216,
"text": "The following table describes the relational operators of Pig Latin."
},
{
"code": null,
"e": 34574,
"s": 34285,
"text": "In general, Apache Pig works on top of Hadoop. It is an analytical tool that analyzes large datasets that exist in the Hadoop File System. To analyze data using Apache Pig, we have to initially load the data into Apache Pig. This chapter explains how to load data to Apache Pig from HDFS."
},
{
"code": null,
"e": 34738,
"s": 34574,
"text": "In MapReduce mode, Pig reads (loads) data from HDFS and stores the results back in HDFS. Therefore, let us start HDFS and create the following sample data in HDFS."
},
{
"code": null,
"e": 34854,
"s": 34738,
"text": "The above dataset contains personal details like id, first name, last name, phone number and city, of six students."
},
{
"code": null,
"e": 34938,
"s": 34854,
"text": "First of all, verify the installation using Hadoop version command, as shown below."
},
{
"code": null,
"e": 34955,
"s": 34938,
"text": "$ hadoop version"
},
{
"code": null,
"e": 35067,
"s": 34955,
"text": "If your system contains Hadoop, and if you have set the PATH variable, then you will get the following output −"
},
{
"code": null,
"e": 35411,
"s": 35067,
"text": "Hadoop 2.6.0 \nSubversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r \ne3496499ecb8d220fba99dc5ed4c99c8f9e33bb1 \nCompiled by jenkins on 2014-11-13T21:10Z \nCompiled with protoc 2.5.0 \nFrom source with checksum 18e43357c8f927c0695f1e9522859d6a \nThis command was run using /home/Hadoop/hadoop/share/hadoop/common/hadoop\ncommon-2.6.0.jar\n"
},
{
"code": null,
"e": 35527,
"s": 35411,
"text": "Browse through the sbin directory of Hadoop and start yarn and Hadoop dfs (distributed file system) as shown below."
},
{
"code": null,
"e": 36235,
"s": 35527,
"text": "cd /$Hadoop_Home/sbin/ \n$ start-dfs.sh \nlocalhost: starting namenode, logging to /home/Hadoop/hadoop/logs/hadoopHadoop-namenode-localhost.localdomain.out \nlocalhost: starting datanode, logging to /home/Hadoop/hadoop/logs/hadoopHadoop-datanode-localhost.localdomain.out \nStarting secondary namenodes [0.0.0.0] \nstarting secondarynamenode, logging to /home/Hadoop/hadoop/logs/hadoop-Hadoopsecondarynamenode-localhost.localdomain.out\n \n$ start-yarn.sh \nstarting yarn daemons \nstarting resourcemanager, logging to /home/Hadoop/hadoop/logs/yarn-Hadoopresourcemanager-localhost.localdomain.out \nlocalhost: starting nodemanager, logging to /home/Hadoop/hadoop/logs/yarnHadoop-nodemanager-localhost.localdomain.out\n"
},
{
"code": null,
"e": 36393,
"s": 36235,
"text": "In Hadoop DFS, you can create directories using the command mkdir. Create a new directory in HDFS with the name Pig_Data in the required path as shown below."
},
{
"code": null,
"e": 36467,
"s": 36393,
"text": "$cd /$Hadoop_Home/bin/ \n$ hdfs dfs -mkdir hdfs://localhost:9000/Pig_Data "
},
{
"code": null,
"e": 36627,
"s": 36467,
"text": "The input file of Pig contains each tuple/record in individual lines. And the entities of the record are separated by a delimiter (In our example we used “,”)."
},
{
"code": null,
"e": 36723,
"s": 36627,
"text": "In the local file system, create an input file student_data.txt containing data as shown below."
},
{
"code": null,
"e": 36960,
"s": 36723,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 37093,
"s": 36960,
"text": "Now, move the file from the local file system to HDFS using put command as shown below. (You can use copyFromLocal command as well.)"
},
{
"code": null,
"e": 37206,
"s": 37093,
"text": "$ cd $HADOOP_HOME/bin \n$ hdfs dfs -put /home/Hadoop/Pig/Pig_Data/student_data.txt dfs://localhost:9000/pig_data/"
},
{
"code": null,
"e": 37307,
"s": 37206,
"text": "You can use the cat command to verify whether the file has been moved into the HDFS, as shown below."
},
{
"code": null,
"e": 37393,
"s": 37307,
"text": "$ cd $HADOOP_HOME/bin\n$ hdfs dfs -cat hdfs://localhost:9000/pig_data/student_data.txt"
},
{
"code": null,
"e": 37445,
"s": 37393,
"text": "You can see the content of the file as shown below."
},
{
"code": null,
"e": 37832,
"s": 37445,
"text": "15/10/01 12:16:55 WARN util.NativeCodeLoader: Unable to load native-hadoop\nlibrary for your platform... using builtin-java classes where applicable\n \n001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai\n"
},
{
"code": null,
"e": 37935,
"s": 37832,
"text": "You can load data into Apache Pig from the file system (HDFS/ Local) using LOAD operator of Pig Latin."
},
{
"code": null,
"e": 38222,
"s": 37935,
"text": "The load statement consists of two parts divided by the “=” operator. On the left-hand side, we need to mention the name of the relation where we want to store the data, and on the right-hand side, we have to define how we store the data. Given below is the syntax of the Load operator."
},
{
"code": null,
"e": 38288,
"s": 38222,
"text": "Relation_name = LOAD 'Input file path' USING function as schema;\n"
},
{
"code": null,
"e": 38295,
"s": 38288,
"text": "Where,"
},
{
"code": null,
"e": 38379,
"s": 38295,
"text": "relation_name − We have to mention the relation in which we want to store the data."
},
{
"code": null,
"e": 38565,
"s": 38463,
"text": "Input file path − We have to mention the HDFS directory where the file is stored. (In MapReduce mode)"
},
{
"code": null,
"e": 38811,
"s": 38667,
"text": "function − We have to choose a function from the set of load functions provided by Apache Pig (BinStorage, JsonLoader, PigStorage, TextLoader)."
},
{
"code": null,
"e": 39053,
"s": 38955,
"text": "Schema − We have to define the schema of the data. We can define the required schema as follows −"
},
{
"code": null,
"e": 39217,
"s": 39151,
"text": "(column1 : data type, column2 : data type, column3 : data type);\n"
},
{
"code": null,
"e": 39345,
"s": 39217,
"text": "Note − We load the data without specifying the schema. In that case, the columns will be addressed as $01, $02, etc... (check)."
},
{
"code": null,
"e": 39463,
"s": 39345,
"text": "As an example, let us load the data in student_data.txt in Pig under the schema named Student using the LOAD command."
},
{
"code": null,
"e": 39562,
"s": 39463,
"text": "First of all, open the Linux terminal. Start the Pig Grunt shell in MapReduce mode as shown below."
},
{
"code": null,
"e": 39581,
"s": 39562,
"text": "$ Pig –x mapreduce"
},
{
"code": null,
"e": 39631,
"s": 39581,
"text": "It will start the Pig Grunt shell as shown below."
},
{
"code": null,
"e": 40397,
"s": 39631,
"text": "15/10/01 12:33:37 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL\n15/10/01 12:33:37 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE\n15/10/01 12:33:37 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType\n\n2015-10-01 12:33:38,080 [main] INFO org.apache.pig.Main - Apache Pig version 0.15.0 (r1682971) compiled Jun 01 2015, 11:44:35\n2015-10-01 12:33:38,080 [main] INFO org.apache.pig.Main - Logging error messages to: /home/Hadoop/pig_1443683018078.log\n2015-10-01 12:33:38,242 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/Hadoop/.pigbootup not found\n \n2015-10-01 12:33:39,630 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost:9000\n \ngrunt>\n"
},
{
"code": null,
"e": 40522,
"s": 40397,
"text": "Now load the data from the file student_data.txt into Pig by executing the following Pig Latin statement in the Grunt shell."
},
{
"code": null,
"e": 40716,
"s": 40522,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' \n USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, \n city:chararray );"
},
{
"code": null,
"e": 40769,
"s": 40716,
"text": "Following is the description of the above statement."
},
{
"code": null,
"e": 40821,
"s": 40769,
"text": "We have stored the data using the following schema."
},
{
"code": null,
"e": 41039,
"s": 40821,
"text": "Note − The load statement will simply load the data into the specified relation in Pig. To verify the execution of the Load statement, you have to use the Diagnostic Operators which are discussed in the next chapters."
},
{
"code": null,
"e": 41263,
"s": 41039,
"text": "In the previous chapter, we learnt how to load data into Apache Pig. You can store the loaded data in the file system using the store operator. This chapter explains how to store data in Apache Pig using the Store operator."
},
{
"code": null,
"e": 41313,
"s": 41263,
"text": "Given below is the syntax of the Store statement."
},
{
"code": null,
"e": 41385,
"s": 41313,
"text": "STORE Relation_name INTO ' required_directory_path ' [USING function];\n"
},
{
"code": null,
"e": 41460,
"s": 41385,
"text": "Assume we have a file student_data.txt in HDFS with the following content."
},
{
"code": null,
"e": 41697,
"s": 41460,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 41781,
"s": 41697,
"text": "And we have read it into a relation student using the LOAD operator as shown below."
},
{
"code": null,
"e": 41975,
"s": 41781,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' \n USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, \n city:chararray );"
},
{
"code": null,
"e": 42059,
"s": 41975,
"text": "Now, let us store the relation in the HDFS directory “/pig_Output/” as shown below."
},
{
"code": null,
"e": 42147,
"s": 42059,
"text": "grunt> STORE student INTO ' hdfs://localhost:9000/pig_Output/ ' USING PigStorage (',');"
},
{
"code": null,
"e": 42301,
"s": 42147,
"text": "After executing the store statement, you will get the following output. A directory is created with the specified name and the data will be stored in it."
},
{
"code": null,
"e": 43714,
"s": 42301,
"text": "2015-10-05 13:05:05,429 [main] INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.\nMapReduceLau ncher - 100% complete\n2015-10-05 13:05:05,429 [main] INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - \nScript Statistics:\n \nHadoopVersion PigVersion UserId StartedAt FinishedAt Features \n2.6.0 0.15.0 Hadoop 2015-10-0 13:03:03 2015-10-05 13:05:05 UNKNOWN \nSuccess! \nJob Stats (time in seconds): \nJobId Maps Reduces MaxMapTime MinMapTime AvgMapTime MedianMapTime \njob_14459_06 1 0 n/a n/a n/a n/a\nMaxReduceTime MinReduceTime AvgReduceTime MedianReducetime Alias Feature \n 0 0 0 0 student MAP_ONLY \nOutPut folder\nhdfs://localhost:9000/pig_Output/ \n \nInput(s): Successfully read 0 records from: \"hdfs://localhost:9000/pig_data/student_data.txt\" \nOutput(s): Successfully stored 0 records in: \"hdfs://localhost:9000/pig_Output\" \nCounters:\nTotal records written : 0\nTotal bytes written : 0\nSpillable Memory Manager spill count : 0 \nTotal bags proactively spilled: 0\nTotal records proactively spilled: 0\n \nJob DAG: job_1443519499159_0006\n \n2015-10-05 13:06:06,192 [main] INFO org.apache.pig.backend.hadoop.executionengine\n.mapReduceLayer.MapReduceLau ncher - Success!\n"
},
{
"code": null,
"e": 43761,
"s": 43714,
"text": "You can verify the stored data as shown below."
},
{
"code": null,
"e": 43865,
"s": 43761,
"text": "First of all, list out the files in the directory named pig_output using the ls command as shown below."
},
{
"code": null,
"e": 44135,
"s": 43865,
"text": "hdfs dfs -ls 'hdfs://localhost:9000/pig_Output/'\nFound 2 items\nrw-r--r- 1 Hadoop supergroup 0 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/_SUCCESS\nrw-r--r- 1 Hadoop supergroup 224 2015-10-05 13:03 hdfs://localhost:9000/pig_Output/part-m-00000\n"
},
{
"code": null,
"e": 44216,
"s": 44135,
"text": "You can observe that two files were created after executing the store statement."
},
{
"code": null,
"e": 44300,
"s": 44216,
"text": "Using cat command, list the contents of the file named part-m-00000 as shown below."
},
{
"code": null,
"e": 44590,
"s": 44300,
"text": "$ hdfs dfs -cat 'hdfs://localhost:9000/pig_Output/part-m-00000' \n1,Rajiv,Reddy,9848022337,Hyderabad\n2,siddarth,Battacharya,9848022338,Kolkata\n3,Rajesh,Khanna,9848022339,Delhi\n4,Preethi,Agarwal,9848022330,Pune\n5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n6,Archana,Mishra,9848022335,Chennai \n"
},
{
"code": null,
"e": 44833,
"s": 44590,
"text": "The load statement will simply load the data into the specified relation in Apache Pig. To verify the execution of the Load statement, you have to use the Diagnostic Operators. Pig Latin provides four different types of diagnostic operators −"
},
{
"code": null,
"e": 44847,
"s": 44833,
"text": "Dump operator"
},
{
"code": null,
"e": 44865,
"s": 44847,
"text": "Describe operator"
},
{
"code": null,
"e": 44886,
"s": 44865,
"text": "Explanation operator"
},
{
"code": null,
"e": 44908,
"s": 44886,
"text": "Illustration operator"
},
{
"code": null,
"e": 44974,
"s": 44908,
"text": "In this chapter, we will discuss the Dump operators of Pig Latin."
},
{
"code": null,
"e": 45115,
"s": 44974,
"text": "The Dump operator is used to run the Pig Latin statements and display the results on the screen. It is generally used for debugging Purpose."
},
{
"code": null,
"e": 45163,
"s": 45115,
"text": "Given below is the syntax of the Dump operator."
},
{
"code": null,
"e": 45190,
"s": 45163,
"text": "grunt> Dump Relation_Name\n"
},
{
"code": null,
"e": 45265,
"s": 45190,
"text": "Assume we have a file student_data.txt in HDFS with the following content."
},
{
"code": null,
"e": 45502,
"s": 45265,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 45586,
"s": 45502,
"text": "And we have read it into a relation student using the LOAD operator as shown below."
},
{
"code": null,
"e": 45780,
"s": 45586,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' \n USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, \n city:chararray );"
},
{
"code": null,
"e": 45867,
"s": 45780,
"text": "Now, let us print the contents of the relation using the Dump operator as shown below."
},
{
"code": null,
"e": 45888,
"s": 45867,
"text": "grunt> Dump student\n"
},
{
"code": null,
"e": 46028,
"s": 45888,
"text": "Once you execute the above Pig Latin statement, it will start a MapReduce job to read data from HDFS. It will produce the following output."
},
{
"code": null,
"e": 48141,
"s": 46028,
"text": "2015-10-01 15:05:27,642 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - \n100% complete\n2015-10-01 15:05:27,652 [main]\nINFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics: \nHadoopVersion PigVersion UserId StartedAt FinishedAt Features \n2.6.0 0.15.0 Hadoop 2015-10-01 15:03:11 2015-10-01 05:27 UNKNOWN\n \nSuccess! \nJob Stats (time in seconds):\n \nJobId job_14459_0004\nMaps 1 \nReduces 0 \nMaxMapTime n/a \nMinMapTime n/a\nAvgMapTime n/a \nMedianMapTime n/a\nMaxReduceTime 0\nMinReduceTime 0 \nAvgReduceTime 0\nMedianReducetime 0\nAlias student \nFeature MAP_ONLY \nOutputs hdfs://localhost:9000/tmp/temp580182027/tmp757878456,\n\nInput(s): Successfully read 0 records from: \"hdfs://localhost:9000/pig_data/\nstudent_data.txt\"\n \nOutput(s): Successfully stored 0 records in: \"hdfs://localhost:9000/tmp/temp580182027/\ntmp757878456\" \n\nCounters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager \nspill count : 0Total bags proactively spilled: 0 Total records proactively spilled: 0 \n\nJob DAG: job_1443519499159_0004\n \n2015-10-01 15:06:28,403 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLau ncher - Success!\n2015-10-01 15:06:28,441 [main] INFO org.apache.pig.data.SchemaTupleBackend - \nKey [pig.schematuple] was not set... will not generate code.\n2015-10-01 15:06:28,485 [main]\nINFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths \nto process : 1\n2015-10-01 15:06:28,485 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths\nto process : 1\n\n(1,Rajiv,Reddy,9848022337,Hyderabad)\n(2,siddarth,Battacharya,9848022338,Kolkata)\n(3,Rajesh,Khanna,9848022339,Delhi)\n(4,Preethi,Agarwal,9848022330,Pune)\n(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)\n(6,Archana,Mishra,9848022335,Chennai)\n"
},
{
"code": null,
"e": 48205,
"s": 48141,
"text": "The describe operator is used to view the schema of a relation."
},
{
"code": null,
"e": 48257,
"s": 48205,
"text": "The syntax of the describe operator is as follows −"
},
{
"code": null,
"e": 48288,
"s": 48257,
"text": "grunt> Describe Relation_name\n"
},
{
"code": null,
"e": 48363,
"s": 48288,
"text": "Assume we have a file student_data.txt in HDFS with the following content."
},
{
"code": null,
"e": 48600,
"s": 48363,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 48684,
"s": 48600,
"text": "And we have read it into a relation student using the LOAD operator as shown below."
},
{
"code": null,
"e": 48870,
"s": 48684,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );"
},
{
"code": null,
"e": 48956,
"s": 48870,
"text": "Now, let us describe the relation named student and verify the schema as shown below."
},
{
"code": null,
"e": 48982,
"s": 48956,
"text": "grunt> describe student;\n"
},
{
"code": null,
"e": 49068,
"s": 48982,
"text": "Once you execute the above Pig Latin statement, it will produce the following output."
},
{
"code": null,
"e": 49171,
"s": 49068,
"text": "grunt> student: { id: int,firstname: chararray,lastname: chararray,phone: chararray,city: chararray }\n"
},
{
"code": null,
"e": 49279,
"s": 49171,
"text": "The explain operator is used to display the logical, physical, and MapReduce execution plans of a relation."
},
{
"code": null,
"e": 49330,
"s": 49279,
"text": "Given below is the syntax of the explain operator."
},
{
"code": null,
"e": 49361,
"s": 49330,
"text": "grunt> explain Relation_name;\n"
},
{
"code": null,
"e": 49436,
"s": 49361,
"text": "Assume we have a file student_data.txt in HDFS with the following content."
},
{
"code": null,
"e": 49673,
"s": 49436,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 49757,
"s": 49673,
"text": "And we have read it into a relation student using the LOAD operator as shown below."
},
{
"code": null,
"e": 49943,
"s": 49757,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );"
},
{
"code": null,
"e": 50033,
"s": 49943,
"text": "Now, let us explain the relation named student using the explain operator as shown below."
},
{
"code": null,
"e": 50058,
"s": 50033,
"text": "grunt> explain student;\n"
},
{
"code": null,
"e": 50096,
"s": 50058,
"text": "It will produce the following output."
},
{
"code": null,
"e": 54866,
"s": 50096,
"text": "$ explain student;\n\n2015-10-05 11:32:43,660 [main]\n2015-10-05 11:32:43,660 [main] INFO org.apache.pig.newplan.logical.optimizer\n.LogicalPlanOptimizer -\n{RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator,\nGroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, \nMergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer,\nPushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]} \n#-----------------------------------------------\n# New Logical Plan: \n#-----------------------------------------------\nstudent: (Name: LOStore Schema:\nid#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#\n35:chararray)\n| \n|---student: (Name: LOForEach Schema:\nid#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#\n35:chararray)\n | |\n | (Name: LOGenerate[false,false,false,false,false] Schema:\nid#31:int,firstname#32:chararray,lastname#33:chararray,phone#34:chararray,city#\n35:chararray)ColumnPrune:InputUids=[34, 35, 32, 33,\n31]ColumnPrune:OutputUids=[34, 35, 32, 33, 31]\n | | | \n | | (Name: Cast Type: int Uid: 31) \n | | | | | |---id:(Name: Project Type: bytearray Uid: 31 Input: 0 Column: (*))\n | | | \n | | (Name: Cast Type: chararray Uid: 32)\n | | | \n | | |---firstname:(Name: Project Type: bytearray Uid: 32 Input: 1\nColumn: (*))\n | | |\n | | (Name: Cast Type: chararray Uid: 33)\n | | |\n | | |---lastname:(Name: Project Type: bytearray Uid: 33 Input: 2\n\t Column: (*))\n | | | \n | | (Name: Cast Type: chararray Uid: 34)\n | | | \n | | |---phone:(Name: Project Type: bytearray Uid: 34 Input: 3 Column:\n(*))\n | | | \n | | (Name: Cast Type: chararray Uid: 35)\n | | | \n | | |---city:(Name: Project Type: bytearray Uid: 35 Input: 4 Column:\n(*))\n | | \n | |---(Name: LOInnerLoad[0] Schema: id#31:bytearray)\n | | \n | |---(Name: LOInnerLoad[1] Schema: firstname#32:bytearray)\n | |\n | |---(Name: LOInnerLoad[2] Schema: lastname#33:bytearray)\n | |\n | |---(Name: LOInnerLoad[3] Schema: phone#34:bytearray)\n | | \n | |---(Name: LOInnerLoad[4] Schema: city#35:bytearray)\n |\n |---student: (Name: LOLoad Schema: \nid#31:bytearray,firstname#32:bytearray,lastname#33:bytearray,phone#34:bytearray\n,city#35:bytearray)RequiredFields:null \n#-----------------------------------------------\n# Physical Plan: #-----------------------------------------------\nstudent: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-36\n| \n|---student: New For Each(false,false,false,false,false)[bag] - scope-35\n | |\n | Cast[int] - scope-21\n | |\n | |---Project[bytearray][0] - scope-20\n | | \n | Cast[chararray] - scope-24\n | |\n | |---Project[bytearray][1] - scope-23\n | | \n | Cast[chararray] - scope-27\n | | \n | |---Project[bytearray][2] - scope-26 \n | | \n | Cast[chararray] - scope-30 \n | | \n | |---Project[bytearray][3] - scope-29\n | |\n | Cast[chararray] - scope-33\n | | \n | |---Project[bytearray][4] - scope-32\n | \n |---student: Load(hdfs://localhost:9000/pig_data/student_data.txt:PigStorage(',')) - scope19\n2015-10-05 11:32:43,682 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - \nFile concatenation threshold: 100 optimistic? 
false\n2015-10-05 11:32:43,684 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOp timizer - \nMR plan size before optimization: 1 2015-10-05 11:32:43,685 [main]\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.\nMultiQueryOp timizer - MR plan size after optimization: 1 \n#--------------------------------------------------\n# Map Reduce Plan \n#--------------------------------------------------\nMapReduce node scope-37\nMap Plan\nstudent: Store(fakefile:org.apache.pig.builtin.PigStorage) - scope-36\n|\n|---student: New For Each(false,false,false,false,false)[bag] - scope-35\n | |\n | Cast[int] - scope-21 \n | |\n | |---Project[bytearray][0] - scope-20\n | |\n | Cast[chararray] - scope-24\n | |\n | |---Project[bytearray][1] - scope-23\n | |\n | Cast[chararray] - scope-27\n | | \n | |---Project[bytearray][2] - scope-26 \n | | \n | Cast[chararray] - scope-30 \n | | \n | |---Project[bytearray][3] - scope-29 \n | | \n | Cast[chararray] - scope-33\n | | \n | |---Project[bytearray][4] - scope-32 \n | \n |---student:\nLoad(hdfs://localhost:9000/pig_data/student_data.txt:PigStorage(',')) - scope\n19-------- Global sort: false\n ---------------- \n"
},
{
"code": null,
"e": 54956,
"s": 54866,
"text": "The illustrate operator gives you the step-by-step execution of a sequence of statements."
},
{
"code": null,
"e": 55010,
"s": 54956,
"text": "Given below is the syntax of the illustrate operator."
},
{
"code": null,
"e": 55044,
"s": 55010,
"text": "grunt> illustrate Relation_name;\n"
},
{
"code": null,
"e": 55119,
"s": 55044,
"text": "Assume we have a file student_data.txt in HDFS with the following content."
},
{
"code": null,
"e": 55358,
"s": 55119,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata \n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune \n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 55442,
"s": 55358,
"text": "And we have read it into a relation student using the LOAD operator as shown below."
},
{
"code": null,
"e": 55628,
"s": 55442,
"text": "grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')\n as ( id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray );"
},
{
"code": null,
"e": 55694,
"s": 55628,
"text": "Now, let us illustrate the relation named student as shown below."
},
{
"code": null,
"e": 55722,
"s": 55694,
"text": "grunt> illustrate student;\n"
},
{
"code": null,
"e": 55791,
"s": 55722,
"text": "On executing the above statement, you will get the following output."
},
{
"code": null,
"e": 56472,
"s": 55791,
"text": "grunt> illustrate student;\n\nINFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$M ap - Aliases\nbeing processed per job phase (AliasName[line,offset]): M: student[1,10] C: R:\n---------------------------------------------------------------------------------------------\n|student | id:int | firstname:chararray | lastname:chararray | phone:chararray | city:chararray |\n--------------------------------------------------------------------------------------------- \n| | 002 | siddarth | Battacharya | 9848022338 | Kolkata |\n---------------------------------------------------------------------------------------------\n"
},
{
"code": null,
"e": 56585,
"s": 56472,
"text": "The GROUP operator is used to group the data in one or more relations. It collects the data having the same key."
},
{
"code": null,
"e": 56634,
"s": 56585,
"text": "Given below is the syntax of the group operator."
},
{
"code": null,
"e": 56683,
"s": 56634,
"text": "grunt> Group_data = GROUP Relation_name BY age;\n"
},
{
"code": null,
"e": 56785,
"s": 56683,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 56805,
"s": 56785,
"text": "student_details.txt"
},
{
"code": null,
"e": 57145,
"s": 56805,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi\n004,Preethi,Agarwal,21,9848022330,Pune\n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar\n006,Archana,Mishra,23,9848022335,Chennai\n007,Komal,Nayak,24,9848022334,trivendram\n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 57245,
"s": 57145,
"text": "And we have loaded this file into Apache Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 57449,
"s": 57245,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 57525,
"s": 57449,
"text": "Now, let us group the records/tuples in the relation by age as shown below."
},
{
"code": null,
"e": 57576,
"s": 57525,
"text": "grunt> group_data = GROUP student_details by age;\n"
},
{
"code": null,
"e": 57647,
"s": 57576,
"text": "Verify the relation group_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 57672,
"s": 57647,
"text": "grunt> Dump group_data;\n"
},
{
"code": null,
"e": 57835,
"s": 57672,
"text": "Then you will get output displaying the contents of the relation named group_data as shown below. Here you can observe that the resulting schema has two columns −"
},
{
"code": null,
"e": 57886,
"s": 57835,
"text": "One is age, by which we have grouped the relation."
},
{
"code": null,
"e": 57937,
"s": 57886,
"text": "One is age, by which we have grouped the relation."
},
{
"code": null,
"e": 58034,
"s": 57937,
"text": "The other is a bag, which contains the group of tuples, student records with the respective age."
},
{
"code": null,
"e": 58131,
"s": 58034,
"text": "The other is a bag, which contains the group of tuples, student records with the respective age."
},
{
"code": null,
"e": 58503,
"s": 58131,
"text": "(21,{(4,Preethi,Agarwal,21,9848022330,Pune),(1,Rajiv,Reddy,21,9848022337,Hydera bad)})\n(22,{(3,Rajesh,Khanna,22,9848022339,Delhi),(2,siddarth,Battacharya,22,984802233 8,Kolkata)})\n(23,{(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336 ,Bhuwaneshwar)})\n(24,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334, trivendram)})\n"
},
{
"code": null,
"e": 58606,
"s": 58503,
"text": "You can see the schema of the table after grouping the data using the describe command as shown below."
},
{
"code": null,
"e": 58791,
"s": 58606,
"text": "grunt> Describe group_data;\n \ngroup_data: {group: int,student_details: {(id: int,firstname: chararray,\n lastname: chararray,age: int,phone: chararray,city: chararray)}}\n"
},
{
"code": null,
"e": 58903,
"s": 58791,
"text": "In the same way, you can get the sample illustration of the schema using the illustrate command as shown below."
},
{
"code": null,
"e": 58928,
"s": 58903,
"text": "$ Illustrate group_data;"
},
{
"code": null,
"e": 58967,
"s": 58928,
"text": "It will produce the following output −"
},
{
"code": null,
"e": 59642,
"s": 58967,
"text": "------------------------------------------------------------------------------------------------- \n|group_data| group:int | student_details:bag{:tuple(id:int,firstname:chararray,lastname:chararray,age:int,phone:chararray,city:chararray)}|\n------------------------------------------------------------------------------------------------- \n| | 21 | { 4, Preethi, Agarwal, 21, 9848022330, Pune), (1, Rajiv, Reddy, 21, 9848022337, Hyderabad)}| \n| | 2 | {(2,siddarth,Battacharya,22,9848022338,Kolkata),(003,Rajesh,Khanna,22,9848022339,Delhi)}| \n-------------------------------------------------------------------------------------------------\n"
},
{
"code": null,
"e": 59700,
"s": 59642,
"text": "Let us group the relation by age and city as shown below."
},
{
"code": null,
"e": 59763,
"s": 59700,
"text": "grunt> group_multiple = GROUP student_details by (age, city);\n"
},
{
"code": null,
"e": 59867,
"s": 59763,
"text": "You can verify the content of the relation named group_multiple using the Dump operator as shown below."
},
{
"code": null,
"e": 60379,
"s": 59867,
"text": "grunt> Dump group_multiple; \n \n((21,Pune),{(4,Preethi,Agarwal,21,9848022330,Pune)})\n((21,Hyderabad),{(1,Rajiv,Reddy,21,9848022337,Hyderabad)})\n((22,Delhi),{(3,Rajesh,Khanna,22,9848022339,Delhi)})\n((22,Kolkata),{(2,siddarth,Battacharya,22,9848022338,Kolkata)})\n((23,Chennai),{(6,Archana,Mishra,23,9848022335,Chennai)})\n((23,Bhuwaneshwar),{(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)})\n((24,Chennai),{(8,Bharathi,Nambiayar,24,9848022333,Chennai)})\n(24,trivendram),{(7,Komal,Nayak,24,9848022334,trivendram)})\n"
},
{
"code": null,
"e": 60439,
"s": 60379,
"text": "You can group a relation by all the columns as shown below."
},
{
"code": null,
"e": 60486,
"s": 60439,
"text": "grunt> group_all = GROUP student_details All;\n"
},
{
"code": null,
"e": 60552,
"s": 60486,
"text": "Now, verify the content of the relation group_all as shown below."
},
{
"code": null,
"e": 60937,
"s": 60552,
"text": "grunt> Dump group_all; \n \n(all,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334 ,trivendram), \n(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336,Bhuw aneshwar), \n(4,Preethi,Agarwal,21,9848022330,Pune),(3,Rajesh,Khanna,22,9848022339,Delhi), \n(2,siddarth,Battacharya,22,9848022338,Kolkata),(1,Rajiv,Reddy,21,9848022337,Hyd erabad)})\n"
},
{
"code": null,
"e": 61207,
"s": 60937,
"text": "The COGROUP operator works more or less in the same way as the GROUP operator. The only difference between the two operators is that the group operator is normally used with one relation, while the cogroup operator is used in statements involving two or more relations."
},
{
"code": null,
"e": 61338,
"s": 61207,
"text": "Assume that we have two files namely student_details.txt and employee_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 61358,
"s": 61338,
"text": "student_details.txt"
},
{
"code": null,
"e": 61698,
"s": 61358,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi\n004,Preethi,Agarwal,21,9848022330,Pune\n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar\n006,Archana,Mishra,23,9848022335,Chennai\n007,Komal,Nayak,24,9848022334,trivendram\n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 61719,
"s": 61698,
"text": "employee_details.txt"
},
{
"code": null,
"e": 61849,
"s": 61719,
"text": "001,Robin,22,newyork \n002,BOB,23,Kolkata \n003,Maya,23,Tokyo \n004,Sara,25,London \n005,David,23,Bhuwaneshwar \n006,Maggy,22,Chennai\n"
},
{
"code": null,
"e": 61980,
"s": 61849,
"text": "And we have loaded these files into Pig with the relation names student_details and employee_details respectively, as shown below."
},
{
"code": null,
"e": 62352,
"s": 61980,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray); \n \ngrunt> employee_details = LOAD 'hdfs://localhost:9000/pig_data/employee_details.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, city:chararray);"
},
{
"code": null,
"e": 62477,
"s": 62352,
"text": "Now, let us group the records/tuples of the relations student_details and employee_details with the key age, as shown below."
},
{
"code": null,
"e": 62556,
"s": 62477,
"text": "grunt> cogroup_data = COGROUP student_details by age, employee_details by age;"
},
{
"code": null,
"e": 62629,
"s": 62556,
"text": "Verify the relation cogroup_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 62655,
"s": 62629,
"text": "grunt> Dump cogroup_data;"
},
{
"code": null,
"e": 62768,
"s": 62655,
"text": "It will produce the following output, displaying the contents of the relation named cogroup_data as shown below."
},
{
"code": null,
"e": 63329,
"s": 62768,
"text": "(21,{(4,Preethi,Agarwal,21,9848022330,Pune), (1,Rajiv,Reddy,21,9848022337,Hyderabad)}, \n { }) \n(22,{ (3,Rajesh,Khanna,22,9848022339,Delhi), (2,siddarth,Battacharya,22,9848022338,Kolkata) }, \n { (6,Maggy,22,Chennai),(1,Robin,22,newyork) }) \n(23,{(6,Archana,Mishra,23,9848022335,Chennai),(5,Trupthi,Mohanthy,23,9848022336 ,Bhuwaneshwar)}, \n {(5,David,23,Bhuwaneshwar),(3,Maya,23,Tokyo),(2,BOB,23,Kolkata)}) \n(24,{(8,Bharathi,Nambiayar,24,9848022333,Chennai),(7,Komal,Nayak,24,9848022334, trivendram)}, \n { }) \n(25,{ }, \n {(4,Sara,25,London)})\n"
},
{
"code": null,
"e": 63453,
"s": 63329,
"text": "The cogroup operator groups the tuples from each relation according to age where each group depicts a particular age value."
},
{
"code": null,
"e": 63562,
"s": 63453,
"text": "For example, if we consider the 1st tuple of the result, it is grouped by age 21. And it contains two bags −"
},
{
"code": null,
"e": 63671,
"s": 63562,
"text": "the first bag holds all the tuples from the first relation (student_details in this case) having age 21, and"
},
{
"code": null,
"e": 63892,
"s": 63780,
"text": "the second bag contains all the tuples from the second relation (employee_details in this case) having age 21. "
},
{
"code": null,
"e": 64093,
"s": 64004,
"text": "In case a relation doesn’t have tuples having the age value 21, it returns an empty bag."
},
{
"code": null,
"e": 64400,
"s": 64093,
"text": "The JOIN operator is used to combine records from two or more relations. While performing a join operation, we declare one (or a group of) tuple(s) from each relation, as keys. When these keys match, the two particular tuples are matched, else the records are dropped. Joins can be of the following types −"
},
{
"code": null,
"e": 64410,
"s": 64400,
"text": "Self-join"
},
{
"code": null,
"e": 64421,
"s": 64410,
"text": "Inner-join"
},
{
"code": null,
"e": 64471,
"s": 64421,
"text": "Outer-join − left join, right join, and full join"
},
{
"code": null,
"e": 64668,
"s": 64471,
"text": "This chapter explains with examples how to use the join operator in Pig Latin. Assume that we have two files namely customers.txt and orders.txt in the /pig_data/ directory of HDFS as shown below."
},
{
"code": null,
"e": 64682,
"s": 64668,
"text": "customers.txt"
},
{
"code": null,
"e": 64871,
"s": 64682,
"text": "1,Ramesh,32,Ahmedabad,2000.00\n2,Khilan,25,Delhi,1500.00\n3,kaushik,23,Kota,2000.00\n4,Chaitali,25,Mumbai,6500.00 \n5,Hardik,27,Bhopal,8500.00\n6,Komal,22,MP,4500.00\n7,Muffy,24,Indore,10000.00\n"
},
{
"code": null,
"e": 64882,
"s": 64871,
"text": "orders.txt"
},
{
"code": null,
"e": 65007,
"s": 64882,
"text": "102,2009-10-08 00:00:00,3,3000\n100,2009-10-08 00:00:00,3,1500\n101,2009-11-20 00:00:00,2,1560\n103,2008-05-20 00:00:00,4,2060\n"
},
{
"code": null,
"e": 65107,
"s": 65007,
"text": "And we have loaded these two files into Pig with the relations customers and orders as shown below."
},
{
"code": null,
"e": 65424,
"s": 65107,
"text": "grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, address:chararray, salary:int);\n \ngrunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',')\n as (oid:int, date:chararray, customer_id:int, amount:int);"
},
{
"code": null,
"e": 65491,
"s": 65424,
"text": "Let us now perform various Join operations on these two relations."
},
{
"code": null,
"e": 65617,
"s": 65491,
"text": "Self-join is used to join a table with itself as if the table were two relations, temporarily renaming at least one relation."
},
{
"code": null,
"e": 65832,
"s": 65617,
"text": "Generally, in Apache Pig, to perform self-join, we will load the same data multiple times, under different aliases (names). Therefore let us load the contents of the file customers.txt as two tables as shown below."
},
{
"code": null,
"e": 66168,
"s": 65832,
"text": "grunt> customers1 = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, address:chararray, salary:int);\n \ngrunt> customers2 = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, address:chararray, salary:int); "
},
{
"code": null,
"e": 66253,
"s": 66168,
"text": "Given below is the syntax of performing self-join operation using the JOIN operator."
},
{
"code": null,
"e": 66330,
"s": 66253,
"text": "grunt> Relation3_name = JOIN Relation1_name BY key, Relation2_name BY key ;\n"
},
{
"code": null,
"e": 66463,
"s": 66330,
"text": "Let us perform self-join operation on the relation customers, by joining the two relations customers1 and customers2 as shown below."
},
{
"code": null,
"e": 66524,
"s": 66463,
"text": "grunt> customers3 = JOIN customers1 BY id, customers2 BY id;"
},
{
"code": null,
"e": 66595,
"s": 66524,
"text": "Verify the relation customers3 using the DUMP operator as shown below."
},
{
"code": null,
"e": 66619,
"s": 66595,
"text": "grunt> Dump customers3;"
},
{
"code": null,
"e": 66708,
"s": 66619,
"text": "It will produce the following output, displaying the contents of the relation customers."
},
{
"code": null,
"e": 67055,
"s": 66708,
"text": "(1,Ramesh,32,Ahmedabad,2000,1,Ramesh,32,Ahmedabad,2000)\n(2,Khilan,25,Delhi,1500,2,Khilan,25,Delhi,1500)\n(3,kaushik,23,Kota,2000,3,kaushik,23,Kota,2000)\n(4,Chaitali,25,Mumbai,6500,4,Chaitali,25,Mumbai,6500)\n(5,Hardik,27,Bhopal,8500,5,Hardik,27,Bhopal,8500)\n(6,Komal,22,MP,4500,6,Komal,22,MP,4500)\n(7,Muffy,24,Indore,10000,7,Muffy,24,Indore,10000)\n"
},
{
"code": null,
"e": 67193,
"s": 67055,
"text": "Inner Join is used quite frequently; it is also referred to as equijoin. An inner join returns rows when there is a match in both tables."
},
{
"code": null,
"e": 67549,
"s": 67193,
"text": "It creates a new relation by combining column values of two relations (say A and B) based upon the join-predicate. The query compares each row of A with each row of B to find all pairs of rows which satisfy the join-predicate. When the join-predicate is satisfied, the column values for each matched pair of rows of A and B are combined into a result row."
},
{
"code": null,
"e": 67628,
"s": 67549,
"text": "Here is the syntax of performing inner join operation using the JOIN operator."
},
{
"code": null,
"e": 67700,
"s": 67628,
"text": "grunt> result = JOIN relation1 BY columnname, relation2 BY columnname;\n"
},
{
"code": null,
"e": 67794,
"s": 67700,
"text": "Let us perform inner join operation on the two relations customers and orders as shown below."
},
{
"code": null,
"e": 67865,
"s": 67794,
"text": "grunt> coustomer_orders = JOIN customers BY id, orders BY customer_id;"
},
{
"code": null,
"e": 67942,
"s": 67865,
"text": "Verify the relation coustomer_orders using the DUMP operator as shown below."
},
{
"code": null,
"e": 67972,
"s": 67942,
"text": "grunt> Dump coustomer_orders;"
},
{
"code": null,
"e": 68069,
"s": 67972,
"text": "You will get the following output that will the contents of the relation named coustomer_orders."
},
{
"code": null,
"e": 68297,
"s": 68069,
"text": "(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)\n(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)\n(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)\n(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)\n"
},
{
"code": null,
"e": 68304,
"s": 68297,
"text": "Note −"
},
{
"code": null,
"e": 68458,
"s": 68304,
"text": "Outer Join: Unlike inner join, outer join returns all the rows from at least one of the relations. An outer join operation is carried out in three ways −"
},
{
"code": null,
"e": 68474,
"s": 68458,
"text": "Left outer join"
},
{
"code": null,
"e": 68491,
"s": 68474,
"text": "Right outer join"
},
{
"code": null,
"e": 68507,
"s": 68491,
"text": "Full outer join"
},
{
"code": null,
"e": 68627,
"s": 68507,
"text": "The left outer Join operation returns all rows from the left table, even if there are no matches in the right relation."
},
{
"code": null,
"e": 68718,
"s": 68627,
"text": "Given below is the syntax of performing left outer join operation using the JOIN operator."
},
{
"code": null,
"e": 68812,
"s": 68718,
"text": "grunt> Relation3_name = JOIN Relation1_name BY id LEFT OUTER, Relation2_name BY customer_id;\n"
},
{
"code": null,
"e": 68911,
"s": 68812,
"text": "Let us perform left outer join operation on the two relations customers and orders as shown below."
},
{
"code": null,
"e": 68987,
"s": 68911,
"text": "grunt> outer_left = JOIN customers BY id LEFT OUTER, orders BY customer_id;"
},
{
"code": null,
"e": 69058,
"s": 68987,
"text": "Verify the relation outer_left using the DUMP operator as shown below."
},
{
"code": null,
"e": 69082,
"s": 69058,
"text": "grunt> Dump outer_left;"
},
{
"code": null,
"e": 69172,
"s": 69082,
"text": "It will produce the following output, displaying the contents of the relation outer_left."
},
{
"code": null,
"e": 69519,
"s": 69172,
"text": "(1,Ramesh,32,Ahmedabad,2000,,,,)\n(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)\n(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)\n(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)\n(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)\n(5,Hardik,27,Bhopal,8500,,,,)\n(6,Komal,22,MP,4500,,,,)\n(7,Muffy,24,Indore,10000,,,,) \n"
},
{
"code": null,
"e": 69637,
"s": 69519,
"text": "The right outer join operation returns all rows from the right table, even if there are no matches in the left table."
},
{
"code": null,
"e": 69729,
"s": 69637,
"text": "Given below is the syntax of performing right outer join operation using the JOIN operator."
},
{
"code": null,
"e": 69802,
"s": 69729,
"text": "grunt> outer_right = JOIN customers BY id RIGHT, orders BY customer_id;\n"
},
{
"code": null,
"e": 69902,
"s": 69802,
"text": "Let us perform right outer join operation on the two relations customers and orders as shown below."
},
{
"code": null,
"e": 69974,
"s": 69902,
"text": "grunt> outer_right = JOIN customers BY id RIGHT, orders BY customer_id;"
},
{
"code": null,
"e": 70046,
"s": 69974,
"text": "Verify the relation outer_right using the DUMP operator as shown below."
},
{
"code": null,
"e": 70070,
"s": 70046,
"text": "grunt> Dump outer_right"
},
{
"code": null,
"e": 70161,
"s": 70070,
"text": "It will produce the following output, displaying the contents of the relation outer_right."
},
{
"code": null,
"e": 70389,
"s": 70161,
"text": "(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)\n(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)\n(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)\n(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)\n"
},
{
"code": null,
"e": 70479,
"s": 70389,
"text": "The full outer join operation returns rows when there is a match in one of the relations."
},
{
"code": null,
"e": 70560,
"s": 70479,
"text": "Given below is the syntax of performing full outer join using the JOIN operator."
},
{
"code": null,
"e": 70637,
"s": 70560,
"text": "grunt> outer_full = JOIN customers BY id FULL OUTER, orders BY customer_id;\n"
},
{
"code": null,
"e": 70736,
"s": 70637,
"text": "Let us perform full outer join operation on the two relations customers and orders as shown below."
},
{
"code": null,
"e": 70812,
"s": 70736,
"text": "grunt> outer_full = JOIN customers BY id FULL OUTER, orders BY customer_id;"
},
{
"code": null,
"e": 70883,
"s": 70812,
"text": "Verify the relation outer_full using the DUMP operator as shown below."
},
{
"code": null,
"e": 70907,
"s": 70883,
"text": "grun> Dump outer_full; "
},
{
"code": null,
"e": 70997,
"s": 70907,
"text": "It will produce the following output, displaying the contents of the relation outer_full."
},
{
"code": null,
"e": 71343,
"s": 70997,
"text": "(1,Ramesh,32,Ahmedabad,2000,,,,)\n(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560)\n(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500)\n(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000)\n(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060)\n(5,Hardik,27,Bhopal,8500,,,,)\n(6,Komal,22,MP,4500,,,,)\n(7,Muffy,24,Indore,10000,,,,)\n"
},
{
"code": null,
"e": 71394,
"s": 71343,
"text": "We can perform JOIN operation using multiple keys."
},
{
"code": null,
"e": 71474,
"s": 71394,
"text": "Here is how you can perform a JOIN operation on two tables using multiple keys."
},
{
"code": null,
"e": 71568,
"s": 71474,
"text": "grunt> Relation3_name = JOIN Relation2_name BY (key1, key2), Relation3_name BY (key1, key2);\n"
},
{
"code": null,
"e": 71695,
"s": 71568,
"text": "Assume that we have two files namely employee.txt and employee_contact.txt in the /pig_data/ directory of HDFS as shown below."
},
{
"code": null,
"e": 71708,
"s": 71695,
"text": "employee.txt"
},
{
"code": null,
"e": 72006,
"s": 71708,
"text": "001,Rajiv,Reddy,21,programmer,003\n002,siddarth,Battacharya,22,programmer,003\n003,Rajesh,Khanna,22,programmer,003\n004,Preethi,Agarwal,21,programmer,003\n005,Trupthi,Mohanthy,23,programmer,003\n006,Archana,Mishra,23,programmer,003\n007,Komal,Nayak,24,teamlead,002\n008,Bharathi,Nambiayar,24,manager,001\n"
},
{
"code": null,
"e": 72027,
"s": 72006,
"text": "employee_contact.txt"
},
{
"code": null,
"e": 72390,
"s": 72027,
"text": "001,9848022337,[email protected],Hyderabad,003\n002,9848022338,[email protected],Kolkata,003\n003,9848022339,[email protected],Delhi,003\n004,9848022330,[email protected],Pune,003\n005,9848022336,[email protected],Bhuwaneshwar,003\n006,9848022335,[email protected],Chennai,003\n007,9848022334,[email protected],trivendram,002\n008,9848022333,[email protected],Chennai,001\n"
},
{
"code": null,
"e": 72495,
"s": 72390,
"text": "And we have loaded these two files into Pig with relations employee and employee_contact as shown below."
},
{
"code": null,
"e": 72874,
"s": 72495,
"text": "grunt> employee = LOAD 'hdfs://localhost:9000/pig_data/employee.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, age:int, designation:chararray, jobid:int);\n \ngrunt> employee_contact = LOAD 'hdfs://localhost:9000/pig_data/employee_contact.txt' USING PigStorage(',') \n as (id:int, phone:chararray, email:chararray, city:chararray, jobid:int);"
},
{
"code": null,
"e": 72967,
"s": 72874,
"text": "Now, let us join the contents of these two relations using the JOIN operator as shown below."
},
{
"code": null,
"e": 73041,
"s": 72967,
"text": "grunt> emp = JOIN employee BY (id,jobid), employee_contact BY (id,jobid);"
},
{
"code": null,
"e": 73105,
"s": 73041,
"text": "Verify the relation emp using the DUMP operator as shown below."
},
{
"code": null,
"e": 73123,
"s": 73105,
"text": "grunt> Dump emp; "
},
{
"code": null,
"e": 73227,
"s": 73123,
"text": "It will produce the following output, displaying the contents of the relation named emp as shown below."
},
{
"code": null,
"e": 73885,
"s": 73227,
"text": "(1,Rajiv,Reddy,21,programmer,113,1,9848022337,[email protected],Hyderabad,113)\n(2,siddarth,Battacharya,22,programmer,113,2,9848022338,[email protected],Kolka ta,113) \n(3,Rajesh,Khanna,22,programmer,113,3,9848022339,[email protected],Delhi,113) \n(4,Preethi,Agarwal,21,programmer,113,4,9848022330,[email protected],Pune,113) \n(5,Trupthi,Mohanthy,23,programmer,113,5,9848022336,[email protected],Bhuwaneshw ar,113) \n(6,Archana,Mishra,23,programmer,113,6,9848022335,[email protected],Chennai,113) \n(7,Komal,Nayak,24,teamlead,112,7,9848022334,[email protected],trivendram,112) \n(8,Bharathi,Nambiayar,24,manager,111,8,9848022333,[email protected],Chennai,111)\n"
},
{
"code": null,
"e": 74036,
"s": 73885,
"text": "The CROSS operator computes the cross-product of two or more relations. This chapter explains with example how to use the cross operator in Pig Latin."
},
{
"code": null,
"e": 74085,
"s": 74036,
"text": "Given below is the syntax of the CROSS operator."
},
{
"code": null,
"e": 74148,
"s": 74085,
"text": "grunt> Relation3_name = CROSS Relation1_name, Relation2_name;\n"
},
{
"code": null,
"e": 74266,
"s": 74148,
"text": "Assume that we have two files namely customers.txt and orders.txt in the /pig_data/ directory of HDFS as shown below."
},
{
"code": null,
"e": 74280,
"s": 74266,
"text": "customers.txt"
},
{
"code": null,
"e": 74468,
"s": 74280,
"text": "1,Ramesh,32,Ahmedabad,2000.00\n2,Khilan,25,Delhi,1500.00\n3,kaushik,23,Kota,2000.00\n4,Chaitali,25,Mumbai,6500.00\n5,Hardik,27,Bhopal,8500.00\n6,Komal,22,MP,4500.00\n7,Muffy,24,Indore,10000.00\n"
},
{
"code": null,
"e": 74479,
"s": 74468,
"text": "orders.txt"
},
{
"code": null,
"e": 74604,
"s": 74479,
"text": "102,2009-10-08 00:00:00,3,3000\n100,2009-10-08 00:00:00,3,1500\n101,2009-11-20 00:00:00,2,1560\n103,2008-05-20 00:00:00,4,2060\n"
},
{
"code": null,
"e": 74704,
"s": 74604,
"text": "And we have loaded these two files into Pig with the relations customers and orders as shown below."
},
{
"code": null,
"e": 75021,
"s": 74704,
"text": "grunt> customers = LOAD 'hdfs://localhost:9000/pig_data/customers.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, address:chararray, salary:int);\n \ngrunt> orders = LOAD 'hdfs://localhost:9000/pig_data/orders.txt' USING PigStorage(',')\n as (oid:int, date:chararray, customer_id:int, amount:int);"
},
{
"code": null,
"e": 75141,
"s": 75021,
"text": "Let us now get the cross-product of these two relations using the cross operator on these two relations as shown below."
},
{
"code": null,
"e": 75186,
"s": 75141,
"text": "grunt> cross_data = CROSS customers, orders;"
},
{
"code": null,
"e": 75257,
"s": 75186,
"text": "Verify the relation cross_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 75281,
"s": 75257,
"text": "grunt> Dump cross_data;"
},
{
"code": null,
"e": 75371,
"s": 75281,
"text": "It will produce the following output, displaying the contents of the relation cross_data."
},
{
"code": null,
"e": 77423,
"s": 75371,
"text": "(7,Muffy,24,Indore,10000,103,2008-05-20 00:00:00,4,2060) \n(7,Muffy,24,Indore,10000,101,2009-11-20 00:00:00,2,1560) \n(7,Muffy,24,Indore,10000,100,2009-10-08 00:00:00,3,1500) \n(7,Muffy,24,Indore,10000,102,2009-10-08 00:00:00,3,3000) \n(6,Komal,22,MP,4500,103,2008-05-20 00:00:00,4,2060) \n(6,Komal,22,MP,4500,101,2009-11-20 00:00:00,2,1560) \n(6,Komal,22,MP,4500,100,2009-10-08 00:00:00,3,1500) \n(6,Komal,22,MP,4500,102,2009-10-08 00:00:00,3,3000) \n(5,Hardik,27,Bhopal,8500,103,2008-05-20 00:00:00,4,2060) \n(5,Hardik,27,Bhopal,8500,101,2009-11-20 00:00:00,2,1560) \n(5,Hardik,27,Bhopal,8500,100,2009-10-08 00:00:00,3,1500) \n(5,Hardik,27,Bhopal,8500,102,2009-10-08 00:00:00,3,3000) \n(4,Chaitali,25,Mumbai,6500,103,2008-05-20 00:00:00,4,2060) \n(4,Chaitali,25,Mumbai,6500,101,2009-20 00:00:00,4,2060) \n(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560) \n(2,Khilan,25,Delhi,1500,100,2009-10-08 00:00:00,3,1500) \n(2,Khilan,25,Delhi,1500,102,2009-10-08 00:00:00,3,3000) \n(1,Ramesh,32,Ahmedabad,2000,103,2008-05-20 00:00:00,4,2060) \n(1,Ramesh,32,Ahmedabad,2000,101,2009-11-20 00:00:00,2,1560) \n(1,Ramesh,32,Ahmedabad,2000,100,2009-10-08 00:00:00,3,1500) \n(1,Ramesh,32,Ahmedabad,2000,102,2009-10-08 00:00:00,3,3000)-11-20 00:00:00,2,1560) \n(4,Chaitali,25,Mumbai,6500,100,2009-10-08 00:00:00,3,1500) \n(4,Chaitali,25,Mumbai,6500,102,2009-10-08 00:00:00,3,3000) \n(3,kaushik,23,Kota,2000,103,2008-05-20 00:00:00,4,2060) \n(3,kaushik,23,Kota,2000,101,2009-11-20 00:00:00,2,1560) \n(3,kaushik,23,Kota,2000,100,2009-10-08 00:00:00,3,1500) \n(3,kaushik,23,Kota,2000,102,2009-10-08 00:00:00,3,3000) \n(2,Khilan,25,Delhi,1500,103,2008-05-20 00:00:00,4,2060) \n(2,Khilan,25,Delhi,1500,101,2009-11-20 00:00:00,2,1560) \n(2,Khilan,25,Delhi,1500,100,2009-10-08 00:00:00,3,1500)\n(2,Khilan,25,Delhi,1500,102,2009-10-08 00:00:00,3,3000) \n(1,Ramesh,32,Ahmedabad,2000,103,2008-05-20 00:00:00,4,2060) \n(1,Ramesh,32,Ahmedabad,2000,101,2009-11-20 00:00:00,2,1560) \n(1,Ramesh,32,Ahmedabad,2000,100,2009-10-08 00:00:00,3,1500) \n(1,Ramesh,32,Ahmedabad,2000,102,2009-10-08 00:00:00,3,3000) \n"
},
{
"code": null,
"e": 77592,
"s": 77423,
"text": "The UNION operator of Pig Latin is used to merge the content of two relations. To perform UNION operation on two relations, their columns and domains must be identical."
},
{
"code": null,
"e": 77641,
"s": 77592,
"text": "Given below is the syntax of the UNION operator."
},
{
"code": null,
"e": 77704,
"s": 77641,
"text": "grunt> Relation_name3 = UNION Relation_name1, Relation_name2;\n"
},
{
"code": null,
"e": 77833,
"s": 77704,
"text": "Assume that we have two files namely student_data1.txt and student_data2.txt in the /pig_data/ directory of HDFS as shown below."
},
{
"code": null,
"e": 77851,
"s": 77833,
"text": "Student_data1.txt"
},
{
"code": null,
"e": 78088,
"s": 77851,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata\n003,Rajesh,Khanna,9848022339,Delhi\n004,Preethi,Agarwal,9848022330,Pune\n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai.\n"
},
{
"code": null,
"e": 78106,
"s": 78088,
"text": "Student_data2.txt"
},
{
"code": null,
"e": 78185,
"s": 78106,
"text": "7,Komal,Nayak,9848022334,trivendram.\n8,Bharathi,Nambiayar,9848022333,Chennai.\n"
},
{
"code": null,
"e": 78286,
"s": 78185,
"text": "And we have loaded these two files into Pig with the relations student1 and student2 as shown below."
},
{
"code": null,
"e": 78663,
"s": 78286,
"text": "grunt> student1 = LOAD 'hdfs://localhost:9000/pig_data/student_data1.txt' USING PigStorage(',') \n as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray); \n \ngrunt> student2 = LOAD 'hdfs://localhost:9000/pig_data/student_data2.txt' USING PigStorage(',') \n as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 78757,
"s": 78663,
"text": "Let us now merge the contents of these two relations using the UNION operator as shown below."
},
{
"code": null,
"e": 78800,
"s": 78757,
"text": "grunt> student = UNION student1, student2;"
},
{
"code": null,
"e": 78868,
"s": 78800,
"text": "Verify the relation student using the DUMP operator as shown below."
},
{
"code": null,
"e": 78890,
"s": 78868,
"text": "grunt> Dump student; "
},
{
"code": null,
"e": 78977,
"s": 78890,
"text": "It will display the following output, displaying the contents of the relation student."
},
{
"code": null,
"e": 79296,
"s": 78977,
"text": "(1,Rajiv,Reddy,9848022337,Hyderabad) (2,siddarth,Battacharya,9848022338,Kolkata)\n(3,Rajesh,Khanna,9848022339,Delhi)\n(4,Preethi,Agarwal,9848022330,Pune) \n(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)\n(6,Archana,Mishra,9848022335,Chennai) \n(7,Komal,Nayak,9848022334,trivendram) \n(8,Bharathi,Nambiayar,9848022333,Chennai)\n"
},
{
"code": null,
"e": 79371,
"s": 79296,
"text": "The SPLIT operator is used to split a relation into two or more relations."
},
{
"code": null,
"e": 79420,
"s": 79371,
"text": "Given below is the syntax of the SPLIT operator."
},
{
"code": null,
"e": 79515,
"s": 79420,
"text": "grunt> SPLIT Relation1_name INTO Relation2_name IF (condition1), Relation2_name (condition2),\n"
},
{
"code": null,
"e": 79617,
"s": 79515,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 79637,
"s": 79617,
"text": "student_details.txt"
},
{
"code": null,
"e": 79982,
"s": 79637,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 80075,
"s": 79982,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 80273,
"s": 80075,
"text": "student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray); "
},
{
"code": null,
"e": 80430,
"s": 80273,
"text": "Let us now split the relation into two, one listing the employees of age less than 23, and the other listing the employees having the age between 22 and 25."
},
{
"code": null,
"e": 80526,
"s": 80430,
"text": "SPLIT student_details into student_details1 if age<23, student_details2 if (22<age and age>25);"
},
{
"code": null,
"e": 80625,
"s": 80526,
"text": "Verify the relations student_details1 and student_details2 using the DUMP operator as shown below."
},
{
"code": null,
"e": 80689,
"s": 80625,
"text": "grunt> Dump student_details1; \n\ngrunt> Dump student_details2; "
},
{
"code": null,
"e": 80820,
"s": 80689,
"text": "It will produce the following output, displaying the contents of the relations student_details1 and student_details2 respectively."
},
{
"code": null,
"e": 81230,
"s": 80820,
"text": "grunt> Dump student_details1; \n(1,Rajiv,Reddy,21,9848022337,Hyderabad) \n(2,siddarth,Battacharya,22,9848022338,Kolkata)\n(3,Rajesh,Khanna,22,9848022339,Delhi) \n(4,Preethi,Agarwal,21,9848022330,Pune)\n \ngrunt> Dump student_details2; \n(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar) \n(6,Archana,Mishra,23,9848022335,Chennai) \n(7,Komal,Nayak,24,9848022334,trivendram) \n(8,Bharathi,Nambiayar,24,9848022333,Chennai)\n"
},
{
"code": null,
"e": 81326,
"s": 81230,
"text": "The FILTER operator is used to select the required tuples from a relation based on a condition."
},
{
"code": null,
"e": 81376,
"s": 81326,
"text": "Given below is the syntax of the FILTER operator."
},
{
"code": null,
"e": 81439,
"s": 81376,
"text": "grunt> Relation2_name = FILTER Relation1_name BY (condition);\n"
},
{
"code": null,
"e": 81541,
"s": 81439,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 81561,
"s": 81541,
"text": "student_details.txt"
},
{
"code": null,
"e": 81906,
"s": 81561,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 81999,
"s": 81906,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 82203,
"s": 81999,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 82305,
"s": 82203,
"text": "Let us now use the Filter operator to get the details of the students who belong to the city Chennai."
},
{
"code": null,
"e": 82364,
"s": 82305,
"text": "filter_data = FILTER student_details BY city == 'Chennai';"
},
{
"code": null,
"e": 82436,
"s": 82364,
"text": "Verify the relation filter_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 82461,
"s": 82436,
"text": "grunt> Dump filter_data;"
},
{
"code": null,
"e": 82563,
"s": 82461,
"text": "It will produce the following output, displaying the contents of the relation filter_data as follows."
},
{
"code": null,
"e": 82650,
"s": 82563,
"text": "(6,Archana,Mishra,23,9848022335,Chennai)\n(8,Bharathi,Nambiayar,24,9848022333,Chennai)\n"
},
{
"code": null,
"e": 82736,
"s": 82650,
"text": "The DISTINCT operator is used to remove redundant (duplicate) tuples from a relation."
},
{
"code": null,
"e": 82788,
"s": 82736,
"text": "Given below is the syntax of the DISTINCT operator."
},
{
"code": null,
"e": 82837,
"s": 82788,
"text": "grunt> Relation_name2 = DISTINCT Relatin_name1;\n"
},
{
"code": null,
"e": 82939,
"s": 82837,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 82959,
"s": 82939,
"text": "student_details.txt"
},
{
"code": null,
"e": 83318,
"s": 82959,
"text": "001,Rajiv,Reddy,9848022337,Hyderabad\n002,siddarth,Battacharya,9848022338,Kolkata \n002,siddarth,Battacharya,9848022338,Kolkata \n003,Rajesh,Khanna,9848022339,Delhi \n003,Rajesh,Khanna,9848022339,Delhi \n004,Preethi,Agarwal,9848022330,Pune \n005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar\n006,Archana,Mishra,9848022335,Chennai \n006,Archana,Mishra,9848022335,Chennai\n"
},
{
"code": null,
"e": 83411,
"s": 83318,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 83607,
"s": 83411,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',') \n as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 83796,
"s": 83607,
"text": "Let us now remove the redundant (duplicate) tuples from the relation named student_details using the DISTINCT operator, and store it as another relation named distinct_data as shown below."
},
{
"code": null,
"e": 83846,
"s": 83796,
"text": "grunt> distinct_data = DISTINCT student_details;\n"
},
{
"code": null,
"e": 83920,
"s": 83846,
"text": "Verify the relation distinct_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 83947,
"s": 83920,
"text": "grunt> Dump distinct_data;"
},
{
"code": null,
"e": 84051,
"s": 83947,
"text": "It will produce the following output, displaying the contents of the relation distinct_data as follows."
},
{
"code": null,
"e": 84290,
"s": 84051,
"text": "(1,Rajiv,Reddy,9848022337,Hyderabad)\n(2,siddarth,Battacharya,9848022338,Kolkata) \n(3,Rajesh,Khanna,9848022339,Delhi) \n(4,Preethi,Agarwal,9848022330,Pune) \n(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)\n(6,Archana,Mishra,9848022335,Chennai)\n"
},
{
"code": null,
"e": 84388,
"s": 84290,
"text": "The FOREACH operator is used to generate specified data transformations based on the column data."
},
{
"code": null,
"e": 84435,
"s": 84388,
"text": "Given below is the syntax of FOREACH operator."
},
{
"code": null,
"e": 84508,
"s": 84435,
"text": "grunt> Relation_name2 = FOREACH Relatin_name1 GENERATE (required data);\n"
},
{
"code": null,
"e": 84610,
"s": 84508,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 84630,
"s": 84610,
"text": "student_details.txt"
},
{
"code": null,
"e": 84975,
"s": 84630,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 85068,
"s": 84975,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 85271,
"s": 85068,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 85462,
"s": 85271,
"text": "Let us now get the id, age, and city values of each student from the relation student_details and store it into another relation named foreach_data using the foreach operator as shown below."
},
{
"code": null,
"e": 85530,
"s": 85462,
"text": "grunt> foreach_data = FOREACH student_details GENERATE id,age,city;"
},
{
"code": null,
"e": 85603,
"s": 85530,
"text": "Verify the relation foreach_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 85629,
"s": 85603,
"text": "grunt> Dump foreach_data;"
},
{
"code": null,
"e": 85721,
"s": 85629,
"text": "It will produce the following output, displaying the contents of the relation foreach_data."
},
{
"code": null,
"e": 85850,
"s": 85721,
"text": "(1,21,Hyderabad)\n(2,22,Kolkata)\n(3,22,Delhi)\n(4,21,Pune) \n(5,23,Bhuwaneshwar)\n(6,23,Chennai) \n(7,24,trivendram)\n(8,24,Chennai) \n"
},
{
"code": null,
"e": 85965,
"s": 85850,
"text": "The ORDER BY operator is used to display the contents of a relation in a sorted order based on one or more fields."
},
{
"code": null,
"e": 86017,
"s": 85965,
"text": "Given below is the syntax of the ORDER BY operator."
},
{
"code": null,
"e": 86077,
"s": 86017,
"text": "grunt> Relation_name2 = ORDER Relatin_name1 BY (ASC|DESC);\n"
},
{
"code": null,
"e": 86179,
"s": 86077,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 86199,
"s": 86179,
"text": "student_details.txt"
},
{
"code": null,
"e": 86544,
"s": 86199,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 86637,
"s": 86544,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 86840,
"s": 86637,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 87022,
"s": 86840,
"text": "Let us now sort the relation in a descending order based on the age of the student and store it into another relation named order_by_data using the ORDER BY operator as shown below."
},
{
"code": null,
"e": 87080,
"s": 87022,
"text": "grunt> order_by_data = ORDER student_details BY age DESC;"
},
{
"code": null,
"e": 87154,
"s": 87080,
"text": "Verify the relation order_by_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 87182,
"s": 87154,
"text": "grunt> Dump order_by_data; "
},
{
"code": null,
"e": 87275,
"s": 87182,
"text": "It will produce the following output, displaying the contents of the relation order_by_data."
},
{
"code": null,
"e": 87618,
"s": 87275,
"text": "(8,Bharathi,Nambiayar,24,9848022333,Chennai)\n(7,Komal,Nayak,24,9848022334,trivendram)\n(6,Archana,Mishra,23,9848022335,Chennai) \n(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar)\n(3,Rajesh,Khanna,22,9848022339,Delhi) \n(2,siddarth,Battacharya,22,9848022338,Kolkata)\n(4,Preethi,Agarwal,21,9848022330,Pune) \n(1,Rajiv,Reddy,21,9848022337,Hyderabad)\n"
},
{
"code": null,
"e": 87696,
"s": 87618,
"text": "The LIMIT operator is used to get a limited number of tuples from a relation."
},
{
"code": null,
"e": 87745,
"s": 87696,
"text": "Given below is the syntax of the LIMIT operator."
},
{
"code": null,
"e": 87809,
"s": 87745,
"text": "grunt> Result = LIMIT Relation_name required number of tuples;\n"
},
{
"code": null,
"e": 87911,
"s": 87809,
"text": "Assume that we have a file named student_details.txt in the HDFS directory /pig_data/ as shown below."
},
{
"code": null,
"e": 87931,
"s": 87911,
"text": "student_details.txt"
},
{
"code": null,
"e": 88276,
"s": 87931,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad\n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 88369,
"s": 88276,
"text": "And we have loaded this file into Pig with the relation name student_details as shown below."
},
{
"code": null,
"e": 88572,
"s": 88369,
"text": "grunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray,age:int, phone:chararray, city:chararray);"
},
{
"code": null,
"e": 88749,
"s": 88572,
"text": "Now, let’s sort the relation in descending order based on the age of the student and store it into another relation named limit_data using the ORDER BY operator as shown below."
},
{
"code": null,
"e": 88795,
"s": 88749,
"text": "grunt> limit_data = LIMIT student_details 4; "
},
{
"code": null,
"e": 88866,
"s": 88795,
"text": "Verify the relation limit_data using the DUMP operator as shown below."
},
{
"code": null,
"e": 88891,
"s": 88866,
"text": "grunt> Dump limit_data; "
},
{
"code": null,
"e": 88992,
"s": 88891,
"text": "It will produce the following output, displaying the contents of the relation limit_data as follows."
},
{
"code": null,
"e": 89161,
"s": 88992,
"text": "(1,Rajiv,Reddy,21,9848022337,Hyderabad) \n(2,siddarth,Battacharya,22,9848022338,Kolkata) \n(3,Rajesh,Khanna,22,9848022339,Delhi) \n(4,Preethi,Agarwal,21,9848022330,Pune) \n"
},
{
"code": null,
"e": 89273,
"s": 89161,
"text": "Apache Pig provides various built-in functions namely eval, load, store, math, string, bag and tuple functions."
},
{
"code": null,
"e": 89339,
"s": 89273,
"text": "Given below is the list of eval functions provided by Apache Pig."
},
{
"code": null,
"e": 89400,
"s": 89339,
"text": "To compute the average of the numerical values within a bag."
},
{
"code": null,
"e": 89531,
"s": 89400,
"text": "To concatenate the elements of a bag into a string. While concatenating, we can place a delimiter between these values (optional)."
},
{
"code": null,
"e": 89584,
"s": 89531,
"text": "To concatenate two or more expressions of same type."
},
{
"code": null,
"e": 89670,
"s": 89584,
"text": "To get the number of elements in a bag, while counting the number of tuples in a bag."
},
{
"code": null,
"e": 89760,
"s": 89670,
"text": "It is similar to the COUNT() function. It is used to get the number of elements in a bag."
},
{
"code": null,
"e": 89801,
"s": 89760,
"text": "To compare two bags (fields) in a tuple."
},
{
"code": null,
"e": 89836,
"s": 89801,
"text": "To check if a bag or map is empty."
},
{
"code": null,
"e": 89935,
"s": 89836,
"text": "To calculate the highest value for a column (numeric values or chararrays) in a single-column bag."
},
{
"code": null,
"e": 90037,
"s": 89935,
"text": "To get the minimum (lowest) value (numeric or chararray) for a certain column in a single-column bag."
},
{
"code": null,
"e": 90181,
"s": 90037,
"text": "Using the Pig Latin PluckTuple() function, we can define a string Prefix and filter the columns in a relation that begin with the given prefix."
},
{
"code": null,
"e": 90243,
"s": 90181,
"text": "To compute the number of elements based on any Pig data type."
},
{
"code": null,
"e": 90386,
"s": 90243,
"text": "To subtract two bags. It takes two bags as inputs and returns a bag which contains the tuples of the first bag that are not in the second bag."
},
{
"code": null,
"e": 90461,
"s": 90386,
"text": "To get the total of the numeric values of a column in a single-column bag."
},
{
"code": null,
"e": 90598,
"s": 90461,
"text": "To split a string (which contains a group of words) in a single tuple and return a bag which contains the output of the split operation."
},
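{
"code": null,
"e": null,
"s": null,
"text": "The descriptions above correspond to Pig's built-in eval functions such as AVG(), BagToString(), CONCAT(), COUNT(), MAX(), MIN(), SUM() and TOKENIZE(). The sketch below is an added illustration, not part of the original tutorial: it applies a few of these functions to the student_details relation and schema reused from the earlier examples."
},
{
"code": null,
"e": null,
"s": null,
"text": "-- Illustrative sketch (added, not from the original tutorial): a few eval functions\n-- applied to the student_details relation used in the earlier examples.\ngrunt> student_details = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n   as (id:int, firstname:chararray, lastname:chararray, age:int, phone:chararray, city:chararray);\n\n-- AVG, MIN, MAX, SUM and COUNT operate on bags, so the relation is grouped first.\ngrunt> group_all = GROUP student_details ALL;\ngrunt> age_stats = FOREACH group_all GENERATE COUNT(student_details), AVG(student_details.age),\n   MIN(student_details.age), MAX(student_details.age), SUM(student_details.age);\ngrunt> Dump age_stats;\n\n-- TOKENIZE splits a chararray into a bag of words.\ngrunt> words = FOREACH student_details GENERATE TOKENIZE(firstname);\n"
},
{
"code": null,
"e": null,
"s": null,
"text": "On the sample data, Dump age_stats; should print a single tuple holding the count, average, minimum, maximum and sum of the age column."
},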
{
"code": null,
"e": 90832,
"s": 90598,
"text": "The Load and Store functions in Apache Pig are used to determine how the data goes ad comes out of Pig. These functions are used with the load and store operators. Given below is the list of load and store functions available in Pig."
},
{
"code": null,
"e": 90868,
"s": 90832,
"text": "To load and store structured files."
},
{
"code": null,
"e": 90904,
"s": 90868,
"text": "To load unstructured data into Pig."
},
{
"code": null,
"e": 90967,
"s": 90904,
"text": "To load and store data into Pig using machine readable format."
},
{
"code": null,
"e": 91020,
"s": 90967,
"text": "In Pig Latin, we can load and store compressed data."
},
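{
"code": null,
"e": null,
"s": null,
"text": "As an added, hedged sketch (not from the original tutorial), the statements below show PigStorage() with an explicit delimiter, TextLoader() for unstructured lines, and the STORE operator writing a relation back to HDFS. The input path is the one used earlier in this tutorial; the output directory is an assumed example path."
},
{
"code": null,
"e": null,
"s": null,
"text": "-- Illustrative sketch of the load and store functions listed above.\n-- PigStorage() parses delimited records; TextLoader() reads each line as one chararray.\ngrunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING PigStorage(',')\n   as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);\n\ngrunt> raw_lines = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt' USING TextLoader() AS (line:chararray);\n\n-- Write the structured relation back to HDFS with a pipe delimiter\n-- (the output directory is an assumed example path).\ngrunt> STORE student INTO 'hdfs://localhost:9000/pig_output/student_pipe' USING PigStorage('|');\n"
},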
{
"code": null,
"e": 91072,
"s": 91020,
"text": "Given below is the list of Bag and Tuple functions."
},
{
"code": null,
"e": 91119,
"s": 91072,
"text": "To convert two or more expressions into a bag."
},
{
"code": null,
"e": 91158,
"s": 91119,
"text": "To get the top N tuples of a relation."
},
{
"code": null,
"e": 91207,
"s": 91158,
"text": "To convert one or more expressions into a tuple."
},
{
"code": null,
"e": 91250,
"s": 91207,
"text": "To convert the key-value pairs into a Map."
},
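{
"code": null,
"e": null,
"s": null,
"text": "The sketch below illustrates TOTUPLE(), TOMAP() and TOBAG(); it assumes a student_details relation like the one used elsewhere in this chapter and is only indicative."
},
{
"code": null,
"e": null,
"s": null,
"text": "-- Convert expressions into a tuple, a map and a bag respectively\ntotuple_data = FOREACH student_details GENERATE TOTUPLE(id, firstname, age);\ntomap_data   = FOREACH student_details GENERATE TOMAP(firstname, age);\ntobag_data   = FOREACH student_details GENERATE TOBAG(id, firstname, city);\n\nDump totuple_data;"
},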
{
"code": null,
"e": 91304,
"s": 91250,
"text": "We have the following String functions in Apache Pig."
},
{
"code": null,
"e": 91371,
"s": 91304,
"text": "To verify whether a given string ends with a particular substring."
},
{
"code": null,
"e": 91463,
"s": 91371,
"text": "Accepts two string parameters and verifies whether the first string starts with the second."
},
{
"code": null,
"e": 91504,
"s": 91463,
"text": "Returns a substring from a given string."
},
{
"code": null,
"e": 91545,
"s": 91504,
"text": "To compare two stings ignoring the case."
},
{
"code": null,
"e": 91640,
"s": 91545,
"text": "Returns the first occurrence of a character in a string, searching forward from a start index."
},
{
"code": null,
"e": 91748,
"s": 91640,
"text": "Returns the index of the last occurrence of a character in a string, searching backward from a start index."
},
{
"code": null,
"e": 91804,
"s": 91748,
"text": "Converts the first character in a string to lower case."
},
{
"code": null,
"e": 91871,
"s": 91804,
"text": "Returns a string with the first character converted to upper case."
},
{
"code": null,
"e": 91931,
"s": 91871,
"text": "UPPER(expression) Returns a string converted to upper case."
},
{
"code": null,
"e": 91982,
"s": 91931,
"text": "Converts all characters in a string to lower case."
},
{
"code": null,
"e": 92046,
"s": 91982,
"text": "To replace existing characters in a string with new characters."
},
{
"code": null,
"e": 92110,
"s": 92046,
"text": "To split a string around matches of a given regular expression."
},
{
"code": null,
"e": 92219,
"s": 92110,
"text": "Similar to the STRSPLIT() function, it splits the string by given delimiter and returns the result in a bag."
},
{
"code": null,
"e": 92293,
"s": 92219,
"text": "Returns a copy of a string with leading and trailing whitespaces removed."
},
{
"code": null,
"e": 92354,
"s": 92293,
"text": "Returns a copy of a string with leading whitespaces removed."
},
{
"code": null,
"e": 92416,
"s": 92354,
"text": "Returns a copy of a string with trailing whitespaces removed."
},
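{
"code": null,
"e": null,
"s": null,
"text": "A brief, assumed sketch of some of the string functions listed above, again using the student_details relation described elsewhere in this chapter."
},
{
"code": null,
"e": null,
"s": null,
"text": "-- UPPER()/LOWER() change case, SUBSTRING() extracts part of a string, TRIM() removes surrounding spaces\nname_case = FOREACH student_details GENERATE UPPER(firstname), LOWER(lastname);\nname_part = FOREACH student_details GENERATE SUBSTRING(firstname, 0, 2), TRIM(city);\n\nDump name_case;"
},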
{
"code": null,
"e": 92476,
"s": 92416,
"text": "Apache Pig provides the following Date and Time functions −"
},
{
"code": null,
"e": 92680,
"s": 92476,
"text": "This function returns a date-time object according to the given parameters. The other alternative for this function are ToDate(iosstring), ToDate(userstring, format), ToDate(userstring, format, timezone)"
},
{
"code": null,
"e": 92730,
"s": 92680,
"text": "returns the date-time object of the current time."
},
{
"code": null,
"e": 92784,
"s": 92730,
"text": "Returns the day of a month from the date-time object."
},
{
"code": null,
"e": 92837,
"s": 92784,
"text": "Returns the hour of a day from the date-time object."
},
{
"code": null,
"e": 92900,
"s": 92837,
"text": "Returns the millisecond of a second from the date-time object."
},
{
"code": null,
"e": 92957,
"s": 92900,
"text": "Returns the minute of an hour from the date-time object."
},
{
"code": null,
"e": 93012,
"s": 92957,
"text": "Returns the month of a year from the date-time object."
},
{
"code": null,
"e": 93070,
"s": 93012,
"text": "Returns the second of a minute from the date-time object."
},
{
"code": null,
"e": 93124,
"s": 93070,
"text": "Returns the week of a year from the date-time object."
},
{
"code": null,
"e": 93173,
"s": 93124,
"text": "Returns the week year from the date-time object."
},
{
"code": null,
"e": 93217,
"s": 93173,
"text": "Returns the year from the date-time object."
},
{
"code": null,
"e": 93290,
"s": 93217,
"text": "Returns the result of a date-time object along with the duration object."
},
{
"code": null,
"e": 93370,
"s": 93290,
"text": "Subtracts the Duration object from the Date-Time object and returns the result."
},
{
"code": null,
"e": 93432,
"s": 93370,
"text": "Returns the number of days between the two date-time objects."
},
{
"code": null,
"e": 93491,
"s": 93432,
"text": "Returns the number of hours between two date-time objects."
},
{
"code": null,
"e": 93557,
"s": 93491,
"text": "Returns the number of milliseconds between two date-time objects."
},
{
"code": null,
"e": 93618,
"s": 93557,
"text": "Returns the number of minutes between two date-time objects."
},
{
"code": null,
"e": 93678,
"s": 93618,
"text": "Returns the number of months between two date-time objects."
},
{
"code": null,
"e": 93739,
"s": 93678,
"text": "Returns the number of seconds between two date-time objects."
},
{
"code": null,
"e": 93798,
"s": 93739,
"text": "Returns the number of weeks between two date-time objects."
},
{
"code": null,
"e": 93857,
"s": 93798,
"text": "Returns the number of years between two date-time objects."
},
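{
"code": null,
"e": null,
"s": null,
"text": "The following sketch of the date-time functions assumes a hypothetical file date_data.txt holding an id and a date string in yyyy/MM/dd format; the file and field names are not part of the original examples."
},
{
"code": null,
"e": null,
"s": null,
"text": "date_data = LOAD 'hdfs://localhost:9000/pig_data/date_data.txt' USING PigStorage(',')\n   as (id:int, date_str:chararray);\n\n-- ToDate() builds a date-time object; GetYear()/GetMonth() extract fields from it\ntodate_data = FOREACH date_data GENERATE ToDate(date_str,'yyyy/MM/dd') AS converted_date;\ngetyear_data = FOREACH todate_data GENERATE GetYear(converted_date), GetMonth(converted_date);\n\nDump getyear_data;"
},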
{
"code": null,
"e": 93910,
"s": 93857,
"text": "We have the following Math functions in Apache Pig −"
},
{
"code": null,
"e": 93954,
"s": 93910,
"text": "To get the absolute value of an expression."
},
{
"code": null,
"e": 93994,
"s": 93954,
"text": "To get the arc cosine of an expression."
},
{
"code": null,
"e": 94032,
"s": 93994,
"text": "To get the arc sine of an expression."
},
{
"code": null,
"e": 94095,
"s": 94032,
"text": "This function is used to get the arc tangent of an expression."
},
{
"code": null,
"e": 94156,
"s": 94095,
"text": "This function is used to get the cube root of an expression."
},
{
"code": null,
"e": 94247,
"s": 94156,
"text": "This function is used to get the value of an expression rounded up to the nearest integer."
},
{
"code": null,
"e": 94319,
"s": 94247,
"text": "This function is used to get the trigonometric cosine of an expression."
},
{
"code": null,
"e": 94388,
"s": 94319,
"text": "This function is used to get the hyperbolic cosine of an expression."
},
{
"code": null,
"e": 94464,
"s": 94388,
"text": "This function is used to get the Euler’s number e raised to the power of x."
},
{
"code": null,
"e": 94535,
"s": 94464,
"text": "To get the value of an expression rounded down to the nearest integer."
},
{
"code": null,
"e": 94591,
"s": 94535,
"text": "To get the natural logarithm (base e) of an expression."
},
{
"code": null,
"e": 94638,
"s": 94591,
"text": "To get the base 10 logarithm of an expression."
},
{
"code": null,
"e": 94730,
"s": 94638,
"text": "To get a pseudo random number (type double) greater than or equal to 0.0 and less than 1.0."
},
{
"code": null,
"e": 94869,
"s": 94730,
"text": "To get the value of an expression rounded to an integer (if the result type is float) or rounded to a long (if the result type is double)."
},
{
"code": null,
"e": 94903,
"s": 94869,
"text": "To get the sine of an expression."
},
{
"code": null,
"e": 94948,
"s": 94903,
"text": "To get the hyperbolic sine of an expression."
},
{
"code": null,
"e": 94998,
"s": 94948,
"text": "To get the positive square root of an expression."
},
{
"code": null,
"e": 95044,
"s": 94998,
"text": "To get the trigonometric tangent of an angle."
},
{
"code": null,
"e": 95092,
"s": 95044,
"text": "To get the hyperbolic tangent of an expression."
},
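{
"code": null,
"e": null,
"s": null,
"text": "A minimal, assumed sketch of a few math functions, using a hypothetical single-column file math_data.txt of double values."
},
{
"code": null,
"e": null,
"s": null,
"text": "math_data = LOAD 'hdfs://localhost:9000/pig_data/math_data.txt' USING PigStorage(',') as (num:double);\n\n-- ABS(), CEIL(), FLOOR(), ROUND() and SQRT() are applied per value\nabs_data   = FOREACH math_data GENERATE ABS(num);\nround_data = FOREACH math_data GENERATE CEIL(num), FLOOR(num), ROUND(num);\nsqrt_data  = FOREACH math_data GENERATE SQRT(num);\n\nDump round_data;"
},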
{
"code": null,
"e": 95387,
"s": 95092,
"text": "In addition to the built-in functions, Apache Pig provides extensive support for User Defined Functions (UDF’s). Using these UDF’s, we can define our own functions and use them. The UDF support is provided in six programming languages, namely, Java, Jython, Python, JavaScript, Ruby and Groovy."
},
{
"code": null,
"e": 95766,
"s": 95387,
"text": "For writing UDF’s, complete support is provided in Java and limited support is provided in all the remaining languages. Using Java, you can write UDF’s involving all parts of the processing like data load/store, column transformation, and aggregation. Since Apache Pig has been written in Java, the UDF’s written using Java language work efficiently compared to other languages."
},
{
"code": null,
"e": 95935,
"s": 95766,
"text": "In Apache Pig, we also have a Java repository for UDF’s named Piggybank. Using Piggybank, we can access Java UDF’s written by other users, and contribute our own UDF’s."
},
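{
"code": null,
"e": null,
"s": null,
"text": "As an indicative sketch (the jar path and the relation used are assumptions), a Piggybank UDF can be registered and aliased as shown below; org.apache.pig.piggybank.evaluation.string.UPPER is one of the string UDFs shipped with Piggybank."
},
{
"code": null,
"e": null,
"s": null,
"text": "-- Register the Piggybank jar (path shown here is only an assumption)\nREGISTER '/path/to/piggybank.jar';\n\n-- Alias a Piggybank UDF and use it like a built-in function\nDEFINE PB_UPPER org.apache.pig.piggybank.evaluation.string.UPPER();\nupper_names = FOREACH student_details GENERATE PB_UPPER(firstname);"
},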
{
"code": null,
"e": 96030,
"s": 95935,
"text": "While writing UDF’s using Java, we can create and use the following three types of functions −"
},
{
"code": null,
"e": 96187,
"s": 96030,
"text": "Filter Functions − The filter functions are used as conditions in filter statements. These functions accept a Pig value as input and return a Boolean value."
},
{
"code": null,
"e": 96490,
"s": 96344,
"text": "Eval Functions − The Eval functions are used in FOREACH-GENERATE statements. These functions accept a Pig value as input and return a Pig result."
},
{
"code": null,
"e": 96811,
"s": 96636,
"text": "Algebraic Functions − The Algebraic functions act on inner bags in a FOREACHGENERATE statement. These functions are used to perform full MapReduce operations on an inner bag."
},
{
"code": null,
"e": 97222,
"s": 96986,
"text": "To write a UDF using Java, we have to integrate the jar file Pig-0.15.0.jar. In this section, we discuss how to write a sample UDF using Eclipse. Before proceeding further, make sure you have installed Eclipse and Maven in your system."
},
{
"code": null,
"e": 97277,
"s": 97222,
"text": "Follow the steps given below to write a UDF function −"
},
{
"code": null,
"e": 97332,
"s": 97277,
"text": "Open Eclipse and create a new project (say myproject)."
},
{
"code": null,
"e": 97443,
"s": 97387,
"text": "Convert the newly created project into a Maven project."
},
{
"code": null,
"e": 97626,
"s": 97499,
"text": "Copy the following content in the pom.xml. This file contains the Maven dependencies for Apache Pig and Hadoop-core jar files."
},
{
"code": null,
"e": 99026,
"s": 97753,
"text": "<project xmlns = \"http://maven.apache.org/POM/4.0.0\"\n xmlns:xsi = \"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation = \"http://maven.apache.org/POM/4.0.0http://maven.apache .org/xsd/maven-4.0.0.xsd\"> \n\t\n <modelVersion>4.0.0</modelVersion> \n <groupId>Pig_Udf</groupId> \n <artifactId>Pig_Udf</artifactId> \n <version>0.0.1-SNAPSHOT</version>\n\t\n <build> \n <sourceDirectory>src</sourceDirectory> \n <plugins> \n <plugin> \n <artifactId>maven-compiler-plugin</artifactId> \n <version>3.3</version> \n <configuration> \n <source>1.7</source> \n <target>1.7</target> \n </configuration> \n </plugin> \n </plugins> \n </build>\n\t\n <dependencies> \n\t\n <dependency> \n <groupId>org.apache.pig</groupId> \n <artifactId>pig</artifactId> \n <version>0.15.0</version> \n </dependency> \n\t\t\n <dependency> \n <groupId>org.apache.hadoop</groupId> \n <artifactId>hadoop-core</artifactId> \n <version>0.20.2</version> \n </dependency> \n \n </dependencies> \n\t\n</project>"
},
{
"code": null,
"e": 99130,
"s": 99026,
"text": "Save the file and refresh it. In the Maven Dependencies section, you can find the downloaded jar files."
},
{
"code": null,
"e": 99318,
"s": 99234,
"text": "Create a new class file with name Sample_Eval and copy the following content in it."
},
{
"code": null,
"e": 99876,
"s": 99402,
"text": "import java.io.IOException; \nimport org.apache.pig.EvalFunc; \nimport org.apache.pig.data.Tuple; \n \nimport java.io.IOException; \nimport org.apache.pig.EvalFunc; \nimport org.apache.pig.data.Tuple;\n\npublic class Sample_Eval extends EvalFunc<String>{ \n\n public String exec(Tuple input) throws IOException { \n if (input == null || input.size() == 0) \n return null; \n String str = (String)input.get(0); \n return str.toUpperCase(); \n } \n}"
},
{
"code": null,
"e": 100158,
"s": 99876,
"text": "While writing UDF’s, it is mandatory to inherit the EvalFunc class and provide implementation to exec() function. Within this function, the code required for the UDF is written. In the above example, we have return the code to convert the contents of the given column to uppercase."
},
{
"code": null,
"e": 100315,
"s": 100158,
"text": "After compiling the class without errors, right-click on the Sample_Eval.java file. It gives you a menu. Select export as shown in the following screenshot."
},
{
"code": null,
"e": 100546,
"s": 100472,
"text": "On clicking export, you will get the following window. Click on JAR file."
},
{
"code": null,
"e": 100787,
"s": 100620,
"text": "Proceed further by clicking Next> button. You will get another window where you need to enter the path in the local file system, where you need to store the jar file."
},
{
"code": null,
"e": 101098,
"s": 100954,
"text": "Finally click the Finish button. In the specified folder, a Jar file sample_udf.jar is created. This jar file contains the UDF written in Java."
},
{
"code": null,
"e": 101324,
"s": 101242,
"text": "After writing the UDF and generating the Jar file, follow the steps given below −"
},
{
"code": null,
"e": 101522,
"s": 101324,
"text": "After writing UDF (in Java) we have to register the Jar file that contain the UDF using the Register operator. By registering the Jar file, users can intimate the location of the UDF to Apache Pig."
},
{
"code": null,
"e": 101529,
"s": 101522,
"text": "Syntax"
},
{
"code": null,
"e": 101581,
"s": 101529,
"text": "Given below is the syntax of the Register operator."
},
{
"code": null,
"e": 101598,
"s": 101581,
"text": "REGISTER path; \n"
},
{
"code": null,
"e": 101606,
"s": 101598,
"text": "Example"
},
{
"code": null,
"e": 101688,
"s": 101606,
"text": "As an example let us register the sample_udf.jar created earlier in this chapter."
},
{
"code": null,
"e": 101776,
"s": 101688,
"text": "Start Apache Pig in local mode and register the jar file sample_udf.jar as shown below."
},
{
"code": null,
"e": 101850,
"s": 101776,
"text": "$cd PIG_HOME/bin \n$./pig –x local \n\nREGISTER '/$PIG_HOME/sample_udf.jar'\n"
},
{
"code": null,
"e": 101917,
"s": 101850,
"text": "Note − assume the Jar file in the path − /$PIG_HOME/sample_udf.jar"
},
{
"code": null,
"e": 101999,
"s": 101917,
"text": "After registering the UDF we can define an alias to it using the Define operator."
},
{
"code": null,
"e": 102006,
"s": 101999,
"text": "Syntax"
},
{
"code": null,
"e": 102056,
"s": 102006,
"text": "Given below is the syntax of the Define operator."
},
{
"code": null,
"e": 102140,
"s": 102056,
"text": "DEFINE alias {function | [`command` [input] [output] [ship] [cache] [stderr] ] }; \n"
},
{
"code": null,
"e": 102148,
"s": 102140,
"text": "Example"
},
{
"code": null,
"e": 102197,
"s": 102148,
"text": "Define the alias for sample_eval as shown below."
},
{
"code": null,
"e": 102232,
"s": 102197,
"text": "DEFINE sample_eval sample_eval();\n"
},
{
"code": null,
"e": 102409,
"s": 102232,
"text": "After defining the alias you can use the UDF same as the built-in functions. Suppose there is a file named emp_data in the HDFS /Pig_Data/ directory with the following content."
},
{
"code": null,
"e": 102665,
"s": 102409,
"text": "001,Robin,22,newyork\n002,BOB,23,Kolkata\n003,Maya,23,Tokyo\n004,Sara,25,London \n005,David,23,Bhuwaneshwar \n006,Maggy,22,Chennai\n007,Robert,22,newyork\n008,Syam,23,Kolkata\n009,Mary,25,Tokyo\n010,Saran,25,London \n011,Stacy,25,Bhuwaneshwar \n012,Kelly,22,Chennai\n"
},
{
"code": null,
"e": 102726,
"s": 102665,
"text": "And assume we have loaded this file into Pig as shown below."
},
{
"code": null,
"e": 102870,
"s": 102726,
"text": "grunt> emp_data = LOAD 'hdfs://localhost:9000/pig_data/emp1.txt' USING PigStorage(',')\n as (id:int, name:chararray, age:int, city:chararray);"
},
{
"code": null,
"e": 102960,
"s": 102870,
"text": "Let us now convert the names of the employees in to upper case using the UDF sample_eval."
},
{
"code": null,
"e": 103025,
"s": 102960,
"text": "grunt> Upper_case = FOREACH emp_data GENERATE sample_eval(name);"
},
{
"code": null,
"e": 103088,
"s": 103025,
"text": "Verify the contents of the relation Upper_case as shown below."
},
{
"code": null,
"e": 103207,
"s": 103088,
"text": "grunt> Dump Upper_case;\n \n(ROBIN)\n(BOB)\n(MAYA)\n(SARA)\n(DAVID)\n(MAGGY)\n(ROBERT)\n(SYAM)\n(MARY)\n(SARAN)\n(STACY)\n(KELLY)\n"
},
{
"code": null,
"e": 103290,
"s": 103207,
"text": "Here in this chapter, we will see how how to run Apache Pig scripts in batch mode."
},
{
"code": null,
"e": 103370,
"s": 103290,
"text": "While writing a script in a file, we can include comments in it as shown below."
},
{
"code": null,
"e": 103439,
"s": 103370,
"text": "We will begin the multi-line comments with '/*', end them with '*/'."
},
{
"code": null,
"e": 103502,
"s": 103439,
"text": "/* These are the multi-line comments \n In the pig script */ \n"
},
{
"code": null,
"e": 103552,
"s": 103502,
"text": "We will begin the single-line comments with '--'."
},
{
"code": null,
"e": 103600,
"s": 103552,
"text": "--we can write single line comments like this.\n"
},
{
"code": null,
"e": 103683,
"s": 103600,
"text": "While executing Apache Pig statements in batch mode, follow the steps given below."
},
{
"code": null,
"e": 103843,
"s": 103683,
"text": "Write all the required Pig Latin statements in a single file. We can write all the Pig Latin statements and commands in a single file and save it as .pig file."
},
{
"code": null,
"e": 103944,
"s": 103843,
"text": "Execute the Apache Pig script. You can execute the Pig script from the shell (Linux) as shown below."
},
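{
"code": null,
"e": null,
"s": null,
"text": "A sketch of the shell invocation is given below; the script name Sample_script.pig is an assumed local file, and you can choose -x local for local mode or -x mapreduce for MapReduce mode."
},
{
"code": null,
"e": null,
"s": null,
"text": "$ pig -x local Sample_script.pig\n\n$ pig -x mapreduce Sample_script.pig\n"
},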
{
"code": null,
"e": 104031,
"s": 103944,
"text": "You can execute it from the Grunt shell as well using the exec command as shown below."
},
{
"code": null,
"e": 104062,
"s": 104031,
"text": "grunt> exec /sample_script.pig"
},
{
"code": null,
"e": 104257,
"s": 104062,
"text": "We can also execute a Pig script that resides in the HDFS. Suppose there is a Pig script with the name Sample_script.pig in the HDFS directory named /pig_data/. We can execute it as shown below."
},
{
"code": null,
"e": 104326,
"s": 104257,
"text": "$ pig -x mapreduce hdfs://localhost:9000/pig_data/Sample_script.pig "
},
{
"code": null,
"e": 104404,
"s": 104326,
"text": "Assume we have a file student_details.txt in HDFS with the following content."
},
{
"code": null,
"e": 104424,
"s": 104404,
"text": "student_details.txt"
},
{
"code": null,
"e": 104770,
"s": 104424,
"text": "001,Rajiv,Reddy,21,9848022337,Hyderabad \n002,siddarth,Battacharya,22,9848022338,Kolkata\n003,Rajesh,Khanna,22,9848022339,Delhi \n004,Preethi,Agarwal,21,9848022330,Pune \n005,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar \n006,Archana,Mishra,23,9848022335,Chennai \n007,Komal,Nayak,24,9848022334,trivendram \n008,Bharathi,Nambiayar,24,9848022333,Chennai\n"
},
{
"code": null,
"e": 104973,
"s": 104770,
"text": "We also have a sample script with the name sample_script.pig, in the same HDFS directory. This file contains statements performing operations and transformations on the student relation, as shown below."
},
{
"code": null,
"e": 105263,
"s": 104973,
"text": "student = LOAD 'hdfs://localhost:9000/pig_data/student_details.txt' USING PigStorage(',')\n as (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);\n\t\nstudent_order = ORDER student BY age DESC;\n \nstudent_limit = LIMIT student_order 4;\n \nDump student_limit;"
},
{
"code": null,
"e": 105383,
"s": 105263,
"text": "The first statement of the script will load the data in the file named student_details.txt as a relation named student."
},
{
"code": null,
"e": 105644,
"s": 105503,
"text": "The second statement of the script will arrange the tuples of the relation in descending order, based on age, and store it as student_order."
},
{
"code": null,
"e": 105884,
"s": 105785,
"text": "The third statement of the script will store the first 4 tuples of student_order as student_limit."
},
{
"code": null,
"e": 106065,
"s": 105983,
"text": "Finally the fourth statement will dump the content of the relation student_limit."
},
{
"code": null,
"e": 106204,
"s": 106147,
"text": "Let us now execute the sample_script.pig as shown below."
},
{
"code": null,
"e": 106273,
"s": 106204,
"text": "$./pig -x mapreduce hdfs://localhost:9000/pig_data/sample_script.pig"
},
{
"code": null,
"e": 106351,
"s": 106273,
"text": "Apache Pig gets executed and gives you the output with the following content."
},
{
"code": null,
"e": 106668,
"s": 106351,
"text": "(7,Komal,Nayak,24,9848022334,trivendram)\n(8,Bharathi,Nambiayar,24,9848022333,Chennai) \n(5,Trupthi,Mohanthy,23,9848022336,Bhuwaneshwar) \n(6,Archana,Mishra,23,9848022335,Chennai)\n2015-10-19 10:31:27,446 [main] INFO org.apache.pig.Main - Pig script completed in 12\nminutes, 32 seconds and 751 milliseconds (752751 ms)\n"
}
]